Chapter 2. Block Storage and Volumes
The Block Storage service (openstack-cinder) manages the administration, security, scheduling, and overall management of all volumes. Volumes are used as the primary form of persistent storage for Compute instances.
2.1. Back Ends
By default, the Block Storage service uses an LVM back end as a repository for volumes. While this back end is suitable for test environments, we advise that you deploy a more robust back end for an Enterprise environment.
When deploying Red Hat OpenStack Platform for such an environment, we recommend using the director. Doing so helps ensure the proper configuration of each service, including the Block Storage service (and, by extension, its back end). The director also has several integrated back end configurations.
Red Hat OpenStack Platform supports Red Hat Ceph and NFS as Block Storage back ends. For instructions on how to deploy Ceph with OpenStack, see Red Hat Ceph Storage for the Overcloud.
For instructions on how to set up NFS storage in the overcloud, see Configuring NFS Storage (from the Advanced Overcloud Customization guide).
Third-Party Storage Providers
You can also configure the Block Storage service to use supported third-party storage appliances. The director includes the necessary components for easily deploying the following:
- Dell EqualLogic
- Dell Storage Center
- NetApp (for supported appliances)
Note: Fujitsu Eternus is also supported as a back end, but is not yet integrated into the director.
For a complete list of supported back end appliances and drivers, see Component, Plug-In, and Driver Support in RHEL OpenStack Platform.
2.2. Block Storage Service Administration
The following procedures explain how to configure the Block Storage service to suit your needs. All of these procedures require administrator privileges.
2.2.1. Group Volume Settings with Volume Types
OpenStack allows you to create volume types, which allow you to apply the type’s associated settings. You can apply these settings during volume creation (Section 2.3.1, “Create a Volume”) or even afterwards (Section 2.3.10, “Changing a Volume’s Type (Volume Re-typing)”). For example, you can associate:
- Whether or not a volume is encrypted (Section 2.2.5.1, “Configure Volume Type Encryption”)
- Which back end a volume should use (Section 2.3.2, “Specify Back End for Volume Creation” and Section 2.4.2.2, “Migrate Between Back Ends”)
- Quality-of-Service (QoS) Specs
Settings are associated with volume types using key-value pairs called Extra Specs. When you specify a volume type during volume creation, the Block Storage scheduler applies these key/value pairs as settings. You can associate multiple key/value pairs to the same volume type.
Volume types allow you to provide different users with storage tiers. By associating specific performance, resilience, and other settings as key/value pairs to a volume type, you can map tier-specific settings to different volume types. You can then apply tier settings when creating a volume by specifying the corresponding volume type.
Available and supported Extra Specs vary per volume driver. Consult your volume driver’s documentation for a list of valid Extra Specs.
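For example, a minimal command-line sketch of this workflow might look like the following; the volume type name (gold) and the key/value pair are hypothetical, and the valid keys depend on your back end driver:
# cinder type-create gold
# cinder type-key gold set volume_backend_name=lvm
# cinder extra-specs-list
The last command lists all volume types together with their currently associated Extra Specs, which is a quick way to verify the assignment.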
2.2.1.1. List a Host Driver’s Capabilities
Available and supported Extra Specs vary per back end driver. Consult the driver’s documentation for a list of valid Extra Specs.
Alternatively, you can query the Block Storage host directly to determine which well-defined standard Extra Specs are supported by its driver. Start by logging in (through the command line) to the node hosting the Block Storage service. Then:
# cinder service-list
This command will return a list containing the host of each Block Storage service (cinder-backup, cinder-scheduler, and cinder-volume). For example:
+------------------+---------------------------+------+---------
| Binary | Host | Zone | Status ...
+------------------+---------------------------+------+---------
| cinder-backup | localhost.localdomain | nova | enabled ...
| cinder-scheduler | localhost.localdomain | nova | enabled ...
| cinder-volume | localhost.localdomain@lvm | nova | enabled ...
+------------------+---------------------------+------+---------
To display the driver capabilities (and, in turn, determine the supported Extra Specs) of a Block Storage service, run:
# cinder get-capabilities VOLSVCHOST
Where VOLSVCHOST is the complete name of the cinder-volume's host. For example:
# cinder get-capabilities localhost.localdomain@lvm
+---------------------+---------------------------------------------+
| Volume stats        | Value                                       |
+---------------------+---------------------------------------------+
| description         | None                                        |
| display_name        | None                                        |
| driver_version      | 3.0.0                                       |
| namespace           | OS::Storage::Capabilities::localhost.loc... |
| pool_name           | None                                        |
| storage_protocol    | iSCSI                                       |
| vendor_name         | Open Source                                 |
| visibility          | None                                        |
| volume_backend_name | lvm                                         |
+---------------------+---------------------------------------------+
+--------------------+----------------------------------------------+
| Backend properties | Value                                        |
+--------------------+----------------------------------------------+
| compression        | {u'type': u'boolean', u'description'...      |
| qos                | {u'type': u'boolean', u'des ...              |
| replication        | {u'type': u'boolean', u'description'...      |
| thin_provisioning  | {u'type': u'boolean', u'description': u'S... |
+--------------------+----------------------------------------------+
The Backend properties column shows a list of Extra Spec Keys that you can set, while the Value column provides information on valid corresponding values.
2.2.1.2. Create and Configure a Volume Type
- As an admin user in the dashboard, select Admin > Volumes > Volume Types.
- Click Create Volume Type.
- Enter the volume type name in the Name field.
- Click Create Volume Type. The new type appears in the Volume Types table.
- Select the volume type’s View Extra Specs action.
- Click Create, and specify the Key and Value. The key/value pair must be valid; otherwise, specifying the volume type during volume creation will result in an error.
- Click Create. The associated setting (key/value pair) now appears in the Extra Specs table.
By default, all volume types are accessible to all OpenStack tenants. If you need to create volume types with restricted access, you will need to do so through the CLI. For instructions, see Section 2.2.1.5, “Create and Configure Private Volume Types”.
You can also associate a QOS Spec to the volume type. For details, refer to Section 2.2.4.2, “Associate a QOS Spec with a Volume Type”.
2.2.1.3. Edit a Volume Type
- As an admin user in the dashboard, select Admin > Volumes > Volume Types.
- In the Volume Types table, select the volume type’s View Extra Specs action.
On the Extra Specs table of this page, you can:
- Add a new setting to the volume type. To do this, click Create, and specify the key/value pair of the new setting you want to associate to the volume type.
- Edit an existing setting associated with the volume type. To do this, select the setting’s Edit action.
- Delete existing settings associated with the volume type. To do this, select the extra specs' check box and click Delete Extra Specs in this and the next dialog screen.
2.2.1.4. Delete a Volume Type
To delete a volume type, select its corresponding check boxes from the Volume Types table and click Delete Volume Types.
2.2.1.5. Create and Configure Private Volume Types
By default, all volume types are visible to all tenants. You can override this during volume type creation and set it to private. To do so, you will need to set the type’s Is_Public flag to False.
Private volume types are useful for restricting access to certain volume settings. Typically, these are settings that should only be usable by specific tenants; examples include new back ends or ultra-high performance configurations that are being tested.
To create a private volume type, run:
# cinder --os-volume-api-version 2 type-create --is-public false VTYPE
Replace VTYPE with the name of the private volume type.
By default, private volume types are only accessible to their creators. However, admin users can find and view private volume types using the following command:
# cinder --os-volume-api-version 2 type-list --all
This command will list both public and private volume types, and will also include the name and ID of each one. You will need the volume type’s ID to provide access to it.
Access to a private volume type is granted at the tenant level. To grant a tenant access to a private volume type, run:
# cinder --os-volume-api-version 2 type-access-add --volume-type VTYPEID --project-id TENANTID
Where:
- VTYPEID is the ID of the private volume type.
- TENANTID is the ID of the project/tenant you are granting access to VTYPEID.
To view which tenants have access to a private volume type, run:
# cinder --os-volume-api-version 2 type-access-list --volume-type VTYPE
To remove a tenant from the access list of a private volume type, run:
# cinder --os-volume-api-version 2 type-access-remove --volume-type VTYPE --project-id TENANTID
By default, only users with administrative privileges can create, view, or configure access for private volume types.
2.2.2. Create and Configure an Internal Tenant for the Block Storage Service
Some Block Storage features (for example, the Image-Volume cache) require the configuration of an internal tenant. The Block Storage service uses this tenant/project to manage block storage items that do not necessarily need to be exposed to normal users. Examples of such items are images cached for frequent volume cloning or temporary copies of volumes being migrated.
To configure an internal project, first create a generic project and user, both named cinder-internal. To do so, log in to the Controller node and run:
# openstack project create --enable --description "Block Storage Internal Tenant" cinder-internal
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Block Storage Internal Tenant    |
| enabled     | True                             |
| id          | cb91e1fe446a45628bb2b139d7dccaef |
| name        | cinder-internal                  |
+-------------+----------------------------------+
# openstack user create --project cinder-internal cinder-internal
+------------+----------------------------------+
| Property   | Value                            |
+------------+----------------------------------+
| email      | None                             |
| enabled    | True                             |
| id         | 84e9672c64f041d6bfa7a930f558d946 |
| name       | cinder-internal                  |
| project_id | cb91e1fe446a45628bb2b139d7dccaef |
| username   | cinder-internal                  |
+------------+----------------------------------+
Note that creating the project and user will display their respective IDs. Configure the Block Storage service to use both project and user as the internal project through their IDs. To do so, run the following on each Block Storage node:
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_project_id TENANTID
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_user_id USERID
Replace TENANTID and USERID with the respective IDs of the cinder-internal project and user, which you created earlier. For example, using the IDs supplied above:
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_project_id cb91e1fe446a45628bb2b139d7dccaef
# crudini --set /etc/cinder/cinder.conf DEFAULT cinder_internal_tenant_user_id 84e9672c64f041d6bfa7a930f558d946
This procedure involves configuring a service outside of the director. As such, you may need to repeat it the next time you re-deploy or update the overcloud.
2.2.3. Configure and Enable the Image-Volume Cache
The Block Storage service features an optional Image-Volume cache which can be used when creating volumes from images. This cache is designed to improve the speed of volume creation from frequently-used images. For information on how to create volumes from images, see Section 2.3.1, “Create a Volume”.
When enabled, the Image-Volume cache stores a copy of an image the first time a volume is created from it. This stored image is cached locally to the Block Storage back end to help improve performance the next time the image is used to create a volume. The Image-Volume cache’s limit can be set to a size (in GB), number of images, or both.
The Image-Volume cache is supported by several back ends. If you are using a third-party back end, refer to its documentation for information on Image-Volume cache support.
The Image-Volume cache requires that an internal tenant be configured for the Block Storage service. For instructions, see Section 2.2.2, “Create and Configure an Internal Tenant for the Block Storage Service”.
To enable and configure the Image-Volume cache on a back end (BACKEND), run the following commands:
# crudini --set /etc/cinder/cinder.conf BACKEND image_volume_cache_enabled True
Replace BACKEND with the name of the target back end (specifically, its volume_backend_name value).
By default, the Image-Volume cache size is only limited by the back end. To configure a maximum size (MAXSIZE, in GB):
# crudini --set /etc/cinder/cinder.conf BACKEND image_volume_cache_max_size_gb MAXSIZE
Alternatively, you can also set a maximum number of images (MAXNUMBER). To do so:
# crudini --set /etc/cinder/cinder.conf BACKEND image_volume_cache_max_count MAXNUMBER
The Block Storage service database uses a time stamp to track when each cached image was last used to create a volume. If either or both MAXSIZE and MAXNUMBER are set, the Block Storage service will delete cached images as needed to make way for new ones. Cached images with the oldest time stamp are deleted first whenever the Image-Volume cache limits are reached.
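For example, to enable the cache on a hypothetical back end named lvm and cap it at 10 GB or 3 cached images (whichever limit is reached first), the commands might look like this:
# crudini --set /etc/cinder/cinder.conf lvm image_volume_cache_enabled True
# crudini --set /etc/cinder/cinder.conf lvm image_volume_cache_max_size_gb 10
# crudini --set /etc/cinder/cinder.conf lvm image_volume_cache_max_count 3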
After configuring the Image-Volume cache, restart the Block Storage service. To do so, log in to any Controller node as the heat-admin user and run:
# pcs resource restart openstack-cinder-volume
This procedure involves configuring a service outside of the director. As such, you may need to repeat it the next time you re-deploy or update the overcloud.
2.2.4. Use Quality-of-Service Specifications
You can map multiple performance settings to a single Quality-of-Service specification (QOS Specs). Doing so allows you to provide performance tiers for different user types.
Performance settings are mapped as key/value pairs to QOS Specs, similar to the way volume settings are associated to a volume type. However, QOS Specs are different from volume types in the following respects:
- QOS Specs are used to apply performance settings, which include limiting read/write operations to disks. Available and supported performance settings vary per storage driver. To determine which QOS Specs are supported by your back end, consult the documentation of your back end device’s volume driver.
- Volume types are directly applied to volumes, whereas QOS Specs are not. Rather, QOS Specs are associated to volume types. During volume creation, specifying a volume type also applies the performance settings mapped to the volume type’s associated QOS Specs.
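As a rough command-line sketch of the same workflow, the following creates a QOS Spec, associates it with a volume type, and verifies the result; the spec name, the key/value pairs, and the IDs are hypothetical, and valid keys depend on your driver:
# cinder qos-create high-iops consumer=front-end read_iops_sec=2000 write_iops_sec=1000
# cinder qos-associate QOSSPECID VOLUMETYPEID
# cinder qos-list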
2.2.4.1. Create and Configure a QOS Spec
As an administrator, you can create and configure a QOS Spec through the QOS Specs table. You can associate more than one key/value pair to the same QOS Spec.
- As an admin user in the dashboard, select Admin > Volumes > Volume Types.
- On the QOS Specs table, click Create QOS Spec.
- Enter a name for the QOS Spec.
In the Consumer field, specify where the QOS policy should be enforced:
Table 2.1. Consumer Types
- back-end: QOS policy will be applied to the Block Storage back end.
- front-end: QOS policy will be applied to Compute.
- both: QOS policy will be applied to both Block Storage and Compute.
- Click Create. The new QOS Spec should now appear in the QOS Specs table.
- In the QOS Specs table, select the new spec’s Manage Specs action.
- Click Create, and specify the Key and Value. The key/value pair must be valid; otherwise, specifying a volume type associated with this QOS Spec during volume creation will fail.
- Click Create. The associated setting (key/value pair) now appears in the Key-Value Pairs table.
2.2.4.2. Associate a QOS Spec with a Volume Type
As an administrator, you can associate a QOS Spec to an existing volume type using the Volume Types table.
- As an administrator in the dashboard, select Admin > Volumes > Volume Types.
- In the Volume Types table, select the type’s Manage QOS Spec Association action.
- Select a QOS Spec from the QOS Spec to be associated list.
- Click Associate. The selected QOS Spec now appears in the Associated QOS Spec column of the edited volume type.
2.2.4.3. Disassociate a QOS Spec from a Volume Type
- As an administrator in the dashboard, select Admin > Volumes > Volume Types.
- In the Volume Types table, select the type’s Manage QOS Spec Association action.
- Select None from the QOS Spec to be associated list.
- Click Associate. The selected QOS Spec is no longer in the Associated QOS Spec column of the edited volume type.
2.2.5. Configure Volume Encryption
Volume encryption helps provide basic data protection in case the volume back end is either compromised or stolen outright. Both the Compute and Block Storage services are integrated to allow instances to access and use encrypted volumes.
At present, volume encryption is only supported on volumes backed by block devices. Encryption of network-attached volumes (such as RBD) or file-based volumes (such as NFS) is still unsupported.
Volume encryption is applied through volume type. See Section 2.2.5.1, “Configure Volume Type Encryption” for information on encrypted volume types.
2.2.5.1. Configure Volume Type Encryption
To create encrypted volumes, you first need an encrypted volume type. Encrypting a volume type involves setting what provider class, cipher, and key size it should use:
- As an admin user in the dashboard, select Admin > Volumes > Volume Types.
- In the Actions column of the volume type to be encrypted, select Create Encryption. Doing so will launch the Create Volume Type Encryption wizard.
From there, configure the Provider, Control Location, Cipher, and Key Size settings of the volume type’s encryption. The Description column describes each setting.
Important: At present, the only supported Provider is LuksEncryptor, while the only supported Cipher is aes-xts-plain64.
- Click Create Volume Type Encryption.
Once you have an encrypted volume type, you can invoke it to automatically create encrypted volumes. For more information on creating a volume type, see Section 2.2.1.2, “Create and Configure a Volume Type”. Specifically, select the encrypted volume type from the Type drop-down list in the Create Volume window (see Section 2.3.1, “Create a Volume”).
You can also re-configure the encryption settings of an encrypted volume type. To do so, select Update Encryption from the Actions column of the volume type. Doing so will launch the Update Volume Type Encryption wizard.
In Project > Compute > Volumes, the Encrypted column in the Volumes table indicates whether the volume is encrypted. If it is, you can click Yes in that column to view the encryption settings.
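If you prefer the command line, a roughly equivalent sketch is shown below; the volume type name is a placeholder, and the exact provider string accepted by your release may differ:
# cinder encryption-type-create --cipher aes-xts-plain64 --key_size 256 --control_location front-end VOLUME_TYPE LuksEncryptor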
2.2.6. Configure How Volumes are Allocated to Multiple Back Ends
If the Block Storage service is configured to use multiple back ends, you can use configured volume types to specify where a volume should be created. For details, see Section 2.3.2, “Specify Back End for Volume Creation”.
The Block Storage service will automatically choose a back end if you do not specify one during volume creation. Block Storage sets the first defined back end as a default; this back end will be used until it runs out of space. At that point, Block Storage will set the second defined back end as a default, and so on.
If this is not suitable for your needs, you can use the filter scheduler to control how Block Storage should select back ends. This scheduler can use different filters to triage suitable back ends, such as:
- AvailabilityZoneFilter: Filters out all back ends that do not meet the availability zone requirements of the requested volume.
- CapacityFilter: Selects only back ends with enough space to accommodate the volume.
- CapabilitiesFilter: Selects only back ends that can support any specified settings in the volume.
- InstanceLocality: Configures clusters to use volumes local to the same node (when the OpenStack Data Processing service is enabled).
To configure the filter scheduler, add an environment file to your deployment containing:
parameter_defaults:
  ControllerExtraConfig: # 1
    cinder::config::cinder_config:
      DEFAULT/scheduler_default_filters:
        value: 'AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocality'
1. You can also add the ControllerExtraConfig: hook and its nested sections to the parameter_defaults: section of an existing environment file.
Alternatively, you can also manually configure the filter scheduler. To do so:
- Log in as heat-admin to the node hosting the Block Storage service, and enable the FilterScheduler scheduler driver:
$ sudo crudini --set /etc/cinder/cinder.conf DEFAULT scheduler_driver cinder.scheduler.filter_scheduler.FilterScheduler
- Set which filters should be active:
$ sudo crudini --set /etc/cinder/cinder.conf DEFAULT scheduler_default_filters AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
- Configure how the scheduler should select a suitable back end. If you want the scheduler:
To always choose the back end with the most available free space, run:
$ sudo crudini --set /etc/cinder/cinder.conf DEFAULT scheduler_default_weighers AllocatedCapacityWeigher
$ sudo crudini --set /etc/cinder/cinder.conf DEFAULT allocated_capacity_weight_multiplier -1.0
To choose randomly among all suitable back ends, run:
$ crudini --set /etc/cinder/cinder.conf DEFAULT scheduler_default_weighers ChanceWeigher
- Restart the Block Storage scheduler to apply your settings. To do so, log in to any Controller node as the heat-admin user and run:
$ pcs resource restart openstack-cinder-scheduler
This procedure involves configuring a service outside of the director. As such, you may need to repeat it the next time you re-deploy or update the overcloud.
2.2.7. Configure and Use Consistency Groups
The Block Storage service allows you to set consistency groups. With this, you can group multiple volumes together as a single entity. This, in turn, allows you to perform operations on multiple volumes at once, rather than individually. Specifically, this release allows you to use consistency groups to create snapshots for multiple volumes simultaneously. By extension, this will also allow you to restore or clone those volumes simultaneously.
A volume may be a member of multiple consistency groups. However, you cannot delete, retype, or migrate volumes once you add them to a consistency group.
As of this release, consistency groups are only supported by the drivers of the following storage back ends:
- EMC VMAX
- EMC VNX
- EMC ScaleIO
- EMC XtremIO
- HP 3PAR StoreServ
- IBM DS8000
- IBM Storwize/SVC
- IBM XIV
- NetApp Data ONTAP
- NetApp E-Series
- NetApp SolidFire
2.2.7.1. Set Up Consistency Groups
By default, the Block Storage security policy disables the consistency group APIs. You must enable them before using the feature. To do so, edit the related consistency group entries in /etc/cinder/policy.json of the node hosting the Block Storage API service (namely, openstack-cinder-api). The entries appear as follows:
"consistencygroup:create" : "group:nobody", "consistencygroup:delete": "group:nobody", "consistencygroup:update": "group:nobody", "consistencygroup:get": "group:nobody", "consistencygroup:get_all": "group:nobody", "consistencygroup:create_cgsnapshot" : "group:nobody", "consistencygroup:delete_cgsnapshot": "group:nobody", "consistencygroup:get_cgsnapshot": "group:nobody", "consistencygroup:get_all_cgsnapshots": "group:nobody",
For increased security, set the permissions of both the consistency group API and the volume type management API to be identical. The volume type management API is set to "rule:admin_or_owner" by default (in the same /etc/cinder/policy.json file):
"volume_extension:types_manage": "rule:admin_or_owner",
So, to enable the consistency group APIs as recommended, edit their entries as follows:
"consistencygroup:create" : "rule:admin_api", "consistencygroup:delete": "rule:admin_api", "consistencygroup:update": "rule:admin_api", "consistencygroup:get": "rule:admin_api", "consistencygroup:get_all": "rule:admin_api", "consistencygroup:create_cgsnapshot" : "rule:admin_api", "consistencygroup:delete_cgsnapshot": "rule:admin_api", "consistencygroup:get_cgsnapshot": "rule:admin_api", "consistencygroup:get_all_cgsnapshots": "rule:admin_api",
You can also make the consistency groups feature available to all users. To do so, set the API policy entries to allow users to create, use, and manage their own consistency groups, using rule:admin_or_owner:
"consistencygroup:create" : "rule:admin_or_owner", "consistencygroup:delete": "rule:admin_or_owner", "consistencygroup:update": "rule:admin_or_owner", "consistencygroup:get": "rule:admin_or_owner", "consistencygroup:get_all": "rule:admin_or_owner", "consistencygroup:create_cgsnapshot" : "rule:admin_or_owner", "consistencygroup:delete_cgsnapshot": "rule:admin_or_owner", "consistencygroup:get_cgsnapshot": "rule:admin_or_owner", "consistencygroup:get_all_cgsnapshots": "rule:admin_or_owner",
After enabling the consistency group APIs, restart the Block Storage API service. To do so, log in to any Controller node as the heat-admin user and run:
# pcs resource restart openstack-cinder-api
This procedure involves configuring a service outside of the director. As such, you may need to repeat it the next time you re-deploy or update the overcloud.
2.2.7.2. Create and Manage Consistency Groups
After enabling the consistency groups API, you can then start creating consistency groups. To do so:
- As an admin user in the dashboard, select Project > Compute > Volumes > Volume Consistency Groups.
- Click Create Consistency Group.
- In the Consistency Group Information tab of the wizard, enter a name and description for your consistency group. Then, specify its Availability Zone.
- You can also add volume types to your consistency group. When you create volumes within the consistency group, the Block Storage service will apply compatible settings from those volume types. To add a volume type, click its + button from the All available volume types list.
- Click Create Consistency Group. It should now appear in the Volume Consistency Groups table.
You can change the name or description of a consistency group by selecting Edit Consistency Group from its Action column.
In addition, you can also add or remove volumes from a consistency group directly. To do so:
- As an admin user in the dashboard, select Project > Compute > Volumes > Volume Consistency Groups.
- Find the consistency group you want to configure. In the Actions column of that consistency group, select Manage Volumes. Doing so will launch the Add/Remove Consistency Group Volumes wizard.
- To add a volume to the consistency group, click its + button from the All available volumes list.
- To remove a volume from the consistency group, click its - button from the Selected volumes list.
- Click Edit Consistency Group.
2.2.7.3. Create and Manage Consistency Group Snapshots
After adding volumes to a consistency group, you can now create snapshots from it. Before doing so, log in as an admin user from the command line on the node hosting openstack-cinder-api and run:
# export OS_VOLUME_API_VERSION=2
Doing so will configure the client to use version 2 of openstack-cinder-api.
To list all available consistency groups (along with their respective IDs, which you will need later):
# cinder consisgroup-list
To create snapshots using the consistency group, run:
# cinder cgsnapshot-create --name CGSNAPNAME --description "DESCRIPTION" CGNAMEID
Where:
- CGSNAPNAME is the name of the snapshot (optional).
- DESCRIPTION is a description of the snapshot (optional).
- CGNAMEID is the name or ID of the consistency group.
To display a list of all available consistency group snapshots, run:
# cinder cgsnapshot-list
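For example, a worked sequence using hypothetical group and snapshot names might look like this:
# cinder consisgroup-list
# cinder cgsnapshot-create --name db-cg-snap --description "Nightly snapshot" db-cg
# cinder cgsnapshot-list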
2.2.7.4. Clone Consistency Groups
Consistency groups can also be used to create a whole batch of pre-configured volumes simultaneously. You can do this by cloning an existing consistency group or restoring a consistency group snapshot. Both processes use the same command.
To clone an existing consistency group:
# cinder consisgroup-create-from-src --source-cg CGNAMEID --name CGNAME --description "DESCRIPTION"
Where:
- CGNAMEID is the name or ID of the consistency group you want to clone.
- CGNAME is the name of your consistency group (optional).
- DESCRIPTION is a description of your consistency group (optional).
To create a consistency group from a consistency group snapshot:
# cinder consisgroup-create-from-src --cgsnapshot CGSNAPNAME --name CGNAME --description "DESCRIPTION"
Replace CGSNAPNAME with the name or ID of the snapshot you are using to create the consistency group.
2.2.8. Backup Administration
The following sections discuss how to customize the Block Storage service’s volume backup settings.
2.2.8.1. View and Modify a Tenant’s Backup Quota
Normally, you can use the dashboard to modify tenant storage quotas, for example, the number of volumes, volume storage, snapshots, or other operational limits that a tenant can have. However, the functionality to modify backup quotas with the dashboard is not yet available.
You must use the command-line interface to modify backup quotas with the cinder quota-update command.
To view the storage quotas of a specific tenant (TENANT_ID), run:
# cinder quota-show TENANT_ID
To update the maximum number of backups (MAXNUM) that can be created in a specific tenant, run:
# cinder quota-update --backups MAXNUM TENANT_ID
To update the maximum total size of all backups (MAXGB) within a specific tenant, run:
# cinder quota-update --backup-gigabytes MAXGB TENANT_ID
To view the storage quota usage of a specific tenant, run:
# cinder quota-usage TENANT_ID
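For example, assuming a hypothetical tenant ID, the following limits that tenant to 10 backups totalling at most 500 GB, then displays the resulting usage:
# cinder quota-update --backups 10 TENANT_ID
# cinder quota-update --backup-gigabytes 500 TENANT_ID
# cinder quota-usage TENANT_ID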
2.2.8.2. Enable Volume Backup Management Through the Dashboard
You can now create, view, delete, and restore volume backups through the dashboard. To perform any of these functions, go to the Project > Compute > Volumes > Volume Backups tab.
However, the Volume Backups tab is not enabled by default. To enable it, configure the dashboard accordingly:
- Open /etc/openstack-dashboard/local_settings. Search for the following setting:
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}
Change this setting to:
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}
- Restart the dashboard by restarting the httpd service:
# systemctl restart httpd.service
This procedure involves configuring a service outside of the director. As such, you may need to repeat it the next time you re-deploy or update the overcloud.
2.2.8.3. Set an NFS Share as a Backup Repository
By default, the Block Storage service uses the Object Storage service as a repository for backups. You can configure the Block Storage service to use an existing NFS share as a backup repository instead. To do so:
- Log in to the node hosting the backup service (openstack-cinder-backup) as a user with administrative privileges.
Configure the Block Storage service to use the NFS backup driver (cinder.backup.drivers.nfs):
# crudini --set /etc/cinder/cinder.conf DEFAULT backup_driver cinder.backup.drivers.nfs
Set the details of the NFS share that you want to use as a backup repository:
# crudini --set /etc/cinder/cinder.conf DEFAULT backup_share NFSHOST:PATH
Where:
- NFSHOST is the IP address or hostname of the NFS server.
- PATH is the absolute path of the NFS share on NFSHOST.
If you want to set any optional mount settings for the NFS share, run:
# crudini --set /etc/cinder/cinder.conf DEFAULT backup_mount_options NFSMOUNTOPTS
Where NFSMOUNTOPTS is a comma-separated list of NFS mount options (for example, rw,sync). For more information on supported mount options, see the man pages for nfs and mount.
Restart the Block Storage backup service to apply your changes:
# systemctl restart openstack-cinder-backup.service
This procedure involves configuring a service outside of the director. As such, you may need to repeat it the next time you re-deploy or update the overcloud.
2.2.8.3.1. Set a Different Backup File Size
The backup service limits backup file sizes to a configured maximum backup file size. If you are backing up a volume that exceeds this size, the resulting backup is split into multiple chunks. The default backup file size is 1.8 GB.
To set a different backup file size, run:
# crudini --set /etc/cinder/cinder.conf DEFAULT backup_file_size SIZE
Replace SIZE with the file size you want, in bytes. Restart the Block Storage backup service to apply your changes:
# systemctl restart openstack-cinder-backup.service
This procedure involves configuring a service outside of the director. As such, you may need to repeat it the next time you re-deploy or update the overcloud.
2.3. Basic Volume Usage and Configuration
The following procedures describe how to perform basic end-user volume management. These procedures do not require administrative privileges.
2.3.1. Create a Volume
- In the dashboard, select Project > Compute > Volumes.
Click Create Volume, and edit the following fields:
- Volume name: Name of the volume.
- Description: Optional, short description of the volume.
- Type: Optional volume type (see Section 2.2.1, “Group Volume Settings with Volume Types”). If you have multiple Block Storage back ends, you can use this to select a specific back end. See Section 2.3.2, “Specify Back End for Volume Creation” for details.
- Size (GB): Volume size (in gigabytes).
- Availability Zone: Availability zones (logical server groups), along with host aggregates, are a common method for segregating resources within OpenStack. Availability zones are defined during installation. For more information on availability zones and host aggregates, see Manage Host Aggregates.
Specify a Volume Source:
- No source, empty volume: The volume will be empty, and will not contain a file system or partition table.
- Snapshot: Use an existing snapshot as a volume source. If you select this option, a new Use snapshot as a source list appears; you can then choose a snapshot from the list. For more information about volume snapshots, refer to Section 2.3.8, “Create, Use, or Delete Volume Snapshots”.
- Image: Use an existing image as a volume source. If you select this option, a new Use image as a source list appears; you can then choose an image from the list.
- Volume: Use an existing volume as a volume source. If you select this option, a new Use volume as a source list appears; you can then choose a volume from the list.
- Click Create Volume. After the volume is created, its name appears in the Volumes table.
You can also change the volume’s type later on. For details, see Section 2.3.10, “Changing a Volume’s Type (Volume Re-typing)”.
2.3.2. Specify Back End for Volume Creation
Whenever multiple Block Storage back ends are configured, you will also need to create a volume type for each back end. You can then use the type to specify which back end should be used for a created volume. For more information about volume types, see Section 2.2.1, “Group Volume Settings with Volume Types”.
To specify a back end when creating a volume, select its corresponding volume type from the Type drop-down list (see Section 2.3.1, “Create a Volume”).
If you do not specify a back end during volume creation, the Block Storage service will automatically choose one for you. By default, the service will choose the back end with the most available free space. You can also configure the Block Storage service to choose randomly among all available back ends instead; for more information, see Section 2.2.6, “Configure How Volumes are Allocated to Multiple Back Ends”.
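You can also specify the back end from the command line. For example, assuming a hypothetical volume type named ceph_tier that maps to the desired back end, the following creates a 10 GB volume on it:
# cinder create --volume-type ceph_tier 10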
2.3.3. Edit a Volume’s Name or Description
- In the dashboard, select Project > Compute > Volumes.
- Select the volume’s Edit Volume button.
- Edit the volume name or description as required.
- Click Edit Volume to save your changes.
To create an encrypted volume, you must first have a volume type configured specifically for volume encryption. In addition, both Compute and Block Storage services must be configured to use the same static key. For information on how to set up the requirements for volume encryption, refer to Section 2.2.5, “Configure Volume Encryption”.
2.3.4. Delete a Volume
- In the dashboard, select Project > Compute > Volumes.
- In the Volumes table, select the volume to delete.
- Click Delete Volumes.
A volume cannot be deleted if it has existing snapshots. For instructions on how to delete snapshots, see Section 2.3.8, “Create, Use, or Delete Volume Snapshots”.
2.3.5. Attach and Detach a Volume to an Instance
Instances can use a volume for persistent storage. A volume can only be attached to one instance at a time. For more information on instances, see Manage Instances in the Instances and Images Guide available at Red Hat OpenStack Platform.
2.3.5.1. Attach a Volume to an Instance
- In the dashboard, select Project > Compute > Volumes.
- Select the volume’s Edit Attachments action. If the volume is not attached to an instance, the Attach To Instance drop-down list is visible.
- From the Attach To Instance list, select the instance to which you wish to attach the volume.
- Click Attach Volume.
2.3.5.2. Detach a Volume From an Instance
- In the dashboard, select Project > Compute > Volumes.
- Select the volume’s Manage Attachments action. If the volume is attached to an instance, the instance’s name is displayed in the Attachments table.
- Click Detach Volume in this and the next dialog screen.
2.3.6. Set a Volume to Read-Only
You can give multiple users shared access to a single volume without allowing them to edit its contents. To do so, set the volume to read-only using the following command:
Replace VOLUME with the ID of the target volume.
To set a read-only volume back to read-write, run:
# cinder readonly-mode-update VOLUME false
2.3.7. Change a Volume’s Owner
To change a volume’s owner, you will have to perform a volume transfer. A volume transfer is initiated by the volume’s owner, and the volume’s change in ownership is complete after the transfer is accepted by the volume’s new owner.
2.3.7.1. Transfer a Volume from the Command Line
- Log in as the volume’s current owner.
List the available volumes:
# cinder list
Initiate the volume transfer:
# cinder transfer-create VOLUME
Where VOLUME is the name or ID of the volume you wish to transfer. For example:
+------------+--------------------------------------+
| Property   | Value                                |
+------------+--------------------------------------+
| auth_key   | f03bf51ce7ead189                     |
| created_at | 2014-12-08T03:46:31.884066           |
| id         | 3f5dc551-c675-4205-a13a-d30f88527490 |
| name       | None                                 |
| volume_id  | bcf7d015-4843-464c-880d-7376851ca728 |
+------------+--------------------------------------+
The cinder transfer-create command clears the ownership of the volume and creates an id and auth_key for the transfer. These values can be given to, and used by, another user to accept the transfer and become the new owner of the volume.
The new user can now claim ownership of the volume. To do so, the user should first log in from the command line and run:
# cinder transfer-accept TRANSFERID TRANSFERKEY
Where TRANSFERID and TRANSFERKEY are the id and auth_key values returned by the cinder transfer-create command, respectively. For example:
# cinder transfer-accept 3f5dc551-c675-4205-a13a-d30f88527490 f03bf51ce7ead189
You can view all available volume transfers using:
# cinder transfer-list
2.3.7.2. Transfer a Volume Using the Dashboard
Create a volume transfer from the dashboard
- As the volume owner in the dashboard, select Projects > Volumes.
- In the Actions column of the volume to transfer, select Create Transfer.
- In the Create Transfer dialog box, enter a name for the transfer and click Create Volume Transfer.
The volume transfer is created, and in the Volume Transfer screen you can capture the transfer ID and the authorization key to send to the recipient project.
Note: The authorization key is available only in the Volume Transfer screen. If you lose the authorization key, you must cancel the transfer and create another transfer to generate a new authorization key.
- Close the Volume Transfer screen to return to the volume list.
The volume status changes to awaiting-transfer until the recipient project accepts the transfer.
Accept a volume transfer from the dashboard
- As the recipient project owner in the dashboard, select Projects > Volumes.
- Click Accept Transfer.
- In the Accept Volume Transfer dialog box, enter the transfer ID and the authorization key that you received from the volume owner and click Accept Volume Transfer.
The volume now appears in the volume list for the active project.
2.3.8. Create, Use, or Delete Volume Snapshots
You can preserve a volume’s state at a specific point in time by creating a volume snapshot. You can then use the snapshot to clone new volumes.
Volume backups are different from snapshots. Backups preserve the data contained in the volume, whereas snapshots preserve the state of a volume at a specific point in time. In addition, you cannot delete a volume if it has existing snapshots. Volume backups are used to prevent data loss, whereas snapshots are used to facilitate cloning.
For this reason, snapshot back ends are typically co-located with volume back ends in order to minimize latency during cloning. By contrast, a backup repository is usually located in a different location (for example, a different node, physical storage, or even geographical location) in a typical enterprise deployment. This is to protect the backup repository from any damage that might occur to the volume back end.
For more information about volume backups, refer to Section 2.4.1, “Back Up and Restore a Volume”
To create a volume snapshot:
- In the dashboard, select Project > Compute > Volumes.
- Select the target volume’s Create Snapshot action.
- Provide a Snapshot Name for the snapshot and click Create a Volume Snapshot. The Volume Snapshots tab displays all snapshots.
You can clone new volumes from a snapshot once it appears in the Volume Snapshots table. To do so, select the snapshot’s Create Volume action. For more information about volume creation, see Section 2.3.1, “Create a Volume”.
To delete a snapshot, select its Delete Volume Snapshot action.
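Snapshots can also be managed from the command line; the following sketch uses placeholder volume and snapshot IDs:
# cinder snapshot-create VOLUME
# cinder snapshot-list
# cinder create --snapshot-id SNAPSHOTID SIZE
# cinder snapshot-delete SNAPSHOTID
Here, cinder create --snapshot-id clones a new volume of SIZE GB from the snapshot.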
If your OpenStack deployment uses a Red Hat Ceph back end, see Section 2.3.8.1, “Protected and Unprotected Snapshots in a Red Hat Ceph Back End” for more information on snapshot security and troubleshooting.
2.3.8.1. Protected and Unprotected Snapshots in a Red Hat Ceph Back End
When using Red Hat Ceph as a back end for your OpenStack deployment, you can set a snapshot to protected in the back end. Attempting to delete protected snapshots through OpenStack (as in, through the dashboard or the cinder snapshot-delete command) will fail.
When this occurs, set the snapshot to unprotected in the Red Hat Ceph back end first. Afterwards, you should be able to delete the snapshot through OpenStack as normal.
For related instructions, see Protecting a Snapshot and Unprotecting a Snapshot.
2.3.9. Upload a Volume to the Image Service
You can upload an existing volume as an image to the Image service directly. To do so:
- In the dashboard, select Project > Compute > Volumes.
- Select the target volume’s Upload to Image action.
- Provide an Image Name for the volume and select a Disk Format from the list.
- Click Upload. The QEMU disk image utility uploads a new image of the chosen format using the name you provided.
To view the uploaded image, select Project > Compute > Images. The new image appears in the Images table. For information on how to use and configure images, see Manage Images in the Instances and Images Guide available at Red Hat OpenStack Platform.
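The equivalent operation from the command line is sketched below; the volume and image names are placeholders:
# cinder upload-to-image --disk-format qcow2 VOLUME IMAGE_NAME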
2.3.10. Changing a Volume’s Type (Volume Re-typing)
Volume re-typing is the process of applying a volume type (and, in turn, its settings) to an already existing volume. For more information about volume types, see Section 2.2.1, “Group Volume Settings with Volume Types”.
A volume can be re-typed whether or not it has an existing volume type. In either case, a re-type will only be successful if the Extra Specs of the volume type can be applied to the volume. Volume re-typing is useful for applying pre-defined settings or storage attributes to an existing volume, such as when you want to:
- Migrate the volume to a different back end (Section 2.4.2.2, “Migrate Between Back Ends”).
- Change the volume’s storage class/tier.
Users with no administrative privileges can only re-type volumes they own. To perform a volume re-type:
- In the dashboard, select Project > Compute > Volumes.
- In the Actions column of the volume to be migrated, select Change Volume Type.
In the Change Volume Type dialog, select the target volume type defining the new back end from the Type drop-down list.
Note: If you are migrating the volume to another back end, select On Demand from the Migration Policy drop-down list. For more information, see Section 2.4.2.2, “Migrate Between Back Ends”.
- Click Change Volume Type to start the migration.
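From the command line, the same re-type (with an on-demand migration policy) can be sketched as follows, using placeholder volume and type names:
# cinder retype --migration-policy on-demand VOLUME NEW_VOLUME_TYPE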
2.4. Advanced Volume Configuration
The following procedures describe how to perform advanced volume management procedures.
2.4.1. Back Up and Restore a Volume
A volume backup is a persistent copy of a volume’s contents. Volume backups are typically created as object stores, and are managed through the Object Storage service by default. You can, however, set up a different repository for your backups; OpenStack supports Red Hat Ceph and NFS as alternative back ends for backups.
When creating a volume backup, all of the backup’s metadata is stored in the Block Storage service’s database. The cinder utility uses this metadata when restoring a volume from the backup. As such, when recovering from a catastrophic database loss, you must restore the Block Storage service’s database first before restoring any volumes from backups. This also presumes that the Block Storage service database is being restored with all the original volume backup metadata intact.
If you wish to configure only a subset of volume backups to survive a catastrophic database loss, you can also export the backup’s metadata. In doing so, you can then re-import the metadata to the Block Storage database later on, and restore the volume backup as normal.
Volume backups are different from snapshots. Backups preserve the data contained in the volume, whereas snapshots preserve the state of a volume at a specific point in time. In addition, you cannot delete a volume if it has existing snapshots. Volume backups are used to prevent data loss, whereas snapshots are used to facilitate cloning.
For this reason, snapshot back ends are typically co-located with volume back ends in order to minimize latency during cloning. By contrast, a backup repository is usually located in a different location (for example, a different node, physical storage, or even geographical location) in a typical enterprise deployment. This is to protect the backup repository from any damage that might occur to the volume back end.
For more information about volume snapshots, refer to Section 2.3.8, “Create, Use, or Delete Volume Snapshots”.
2.4.1.1. Create a Full Volume Backup
To back up a volume, use the cinder backup-create command. By default, this command will create a full backup of the volume. If the volume has existing backups, you can choose to create an incremental backup instead (see Section 2.4.1.2, “Create an Incremental Volume Backup” for details).
You can create backups of volumes you have access to. This means that users with administrative privileges can back up any volume, regardless of owner. For more information, see Section 2.4.1.1.1, “Create a Volume Backup as an Admin”.
- View the ID or Display Name of the volume you wish to back up:
# cinder list
- Back up the volume:
# cinder backup-create VOLUME
Replace VOLUME with the ID or Display Name of the volume you want to back up. For example:
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| id        | e9d15fc7-eeae-4ca4-aa72-d52536dc551d |
| name      | None                                 |
| volume_id | 5f75430a-abff-4cc7-b74e-f808234fa6c5 |
+-----------+--------------------------------------+
Note: The volume_id of the resulting backup is identical to the ID of the source volume.
- Verify that the volume backup creation is complete:
# cinder backup-list
The volume backup creation is complete when the Status of the backup entry is available.
At this point, you can also export and store the volume backup’s metadata. This allows you to restore the volume backup, even if the Block Storage database suffers a catastrophic loss. To do so, run:
# cinder --os-volume-api-version 2 backup-export BACKUPID
Where BACKUPID is the ID or name of the volume backup. For example,
+----------------+------------------------------------------+
| Property       | Value                                    |
+----------------+------------------------------------------+
| backup_service | cinder.backup.drivers.swift              |
| backup_url     | eyJzdGF0dXMiOiAiYXZhaWxhYmxlIiwgIm9iam...|
|                | ...4NS02ZmY4MzBhZWYwNWUiLCAic2l6ZSI6IDF9 |
+----------------+------------------------------------------+
The volume backup metadata consists of the backup_service and backup_url values.
2.4.1.1.1. Create a Volume Backup as an Admin
Users with administrative privileges (such as the default admin account) can back up any volume managed by OpenStack. When an admin backs up a volume owned by a non-admin user, the backup is hidden from the volume owner by default.
As an admin, you can still back up a volume and make the backup available to a specific tenant. To do so, run:
# cinder --os-auth-url KEYSTONEURL --os-tenant-name TENANTNAME --os-username USERNAME --os-password PASSWD backup-create VOLUME
Where:
- TENANTNAME is the name of the tenant where you want to make the backup available.
- USERNAME and PASSWD are the username/password credentials of a user within TENANTNAME.
- VOLUME is the name or ID of the volume you want to back up.
- KEYSTONEURL is the URL endpoint of the Identity service (typically http://IP:5000/v2, where IP is the IP address of the Identity service host).
When performing this operation, the resulting backup’s size will count against the quota of TENANTNAME rather than the admin’s tenant.
2.4.1.2. Create an Incremental Volume Backup
By default, the cinder backup-create command will create a full backup of a volume. However, if the volume has existing backups, you can choose to create an incremental backup.
An incremental backup captures any changes to the volume since the last backup (full or incremental). Performing numerous, regular, full back ups of a volume can become resource-intensive as the volume’s size increases over time. In this regard, incremental backups allow you to capture periodic changes to volumes while minimizing resource usage.
To create an incremental volume backup, use the --incremental option. As in:
# cinder backup-create VOLUME --incremental
Replace VOLUME with the ID or Display Name of the volume you want to back up. Incremental backups are fully supported on NFS and Object Storage backup repositories.
You cannot delete a full backup if it already has an incremental backup. In addition, if a full backup has multiple incremental backups, you can only delete the latest one.
When using Red Hat Ceph Storage as a back end for both Block Storage (cinder) volumes and backups, any attempt to perform an incremental backup will result in a full backup instead, without any warning. This is a known issue (BZ#1463059).
2.4.1.3. Restore a Volume After a Block Storage Database Loss
Typically, a Block Storage database loss prevents you from restoring a volume backup. This is because the Block Storage database contains metadata required by the volume backup service (openstack-cinder-backup). This metadata consists of backup_service and backup_url values, which you can export after creating the volume backup (as shown in Section 2.4.1.1, “Create a Full Volume Backup”).
If you exported and stored this metadata, then you can import it to a new Block Storage database (thereby allowing you to restore the volume backup).
As a user with administrative privileges, run:
# cinder --os-volume-api-version 2 backup-import backup_service backup_url
Where backup_service and backup_url are from the metadata you exported. For example, using the exported metadata from Section 2.4.1.1, “Create a Full Volume Backup”:
# cinder --os-volume-api-version 2 backup-import cinder.backup.drivers.swift eyJzdGF0dXMi...c2l6ZSI6IDF9
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| id       | 77951e2f-4aff-4365-8c64-f833802eaa43 |
| name     | None                                 |
+----------+--------------------------------------+
- After the metadata is imported into the Block Storage service database, you can restore the volume as normal (see Section 2.4.1.4, “Restore a Volume from a Backup”).
2.4.1.4. Restore a Volume from a Backup
- Find the ID of the volume backup you wish to use:
# cinder backup-list
The Volume ID should match the ID of the volume you wish to restore.
- Restore the volume backup:
# cinder backup-restore BACKUP_ID
Where BACKUP_ID is the ID of the volume backup you wish to use.
If you no longer need the backup, delete it:
# cinder backup-delete BACKUP_ID
2.4.2. Migrate a Volume
The Block Storage service allows you to migrate volumes between hosts or back ends. Volume migration has some limitations:
- The volume cannot be in-use (attached to an instance) or have snapshots.
- The target of the in-use volume migration requires iSCSI block-backed devices and cannot use non-block devices, such as Ceph RADOS Block Device (RBD).
- Migrations between volumes on different back ends (and thus drivers) are not optimized.
The speed of any migration depends upon your host setup and configuration. With driver-assisted migration, the data movement takes place at the storage backplane instead of inside of the OpenStack Block Storage service. Otherwise, data is copied from one host to another through the Block Storage service.
2.4.2.1. Migrate Between Hosts
When migrating a volume between hosts, both hosts must reside on the same back end. Use the dashboard to perform the migration:
- In the dashboard, select Admin > Volumes.
- In the Actions column of the volume to be migrated, select Migrate Volume.
In the Migrate Volume dialog, select the target host from the Destination Host drop-down list.
Note: If you wish to bypass any driver optimizations for the host migration, select the Force Host Copy checkbox.
- Click Migrate to start the migration.
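A command-line sketch of the same host migration is shown below; the destination host string is an example, and you should use a value reported by cinder service-list:
# cinder migrate VOLUME DESTINATION_HOST
Add --force-host-copy True to bypass driver optimizations, as with the Force Host Copy checkbox.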
2.4.2.2. Migrate Between Back Ends
Migrating a volume between back ends, on the other hand, involves volume re-typing. This means that in order to migrate to a new back end:
- The new back end must be specified as an Extra Spec in a separate target volume type.
- All other Extra Specs defined in the target volume type must be compatible with the volume’s original volume type.
See Section 2.2.1, “Group Volume Settings with Volume Types” and Section 2.3.2, “Specify Back End for Volume Creation” for more details.
When defining the back end as an Extra Spec, use volume_backend_name as the Key. Its corresponding value will be the back end’s name, as defined in the Block Storage configuration file (/etc/cinder/cinder.conf). In this file, each back end is defined in its own section, and its corresponding name is set in the volume_backend_name parameter.
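For illustration, a hypothetical cinder.conf fragment defining two LVM back ends might look like this (section names, back end names, and the driver shown are examples only):
[DEFAULT]
enabled_backends=lvm1,lvm2
[lvm1]
# first back end; the name below is what volume_backend_name Extra Specs refer to
volume_backend_name=fast_lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
[lvm2]
# second back end
volume_backend_name=bulk_lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
A hypothetical volume type named bulk_tier could then be pointed at the second back end with:
# cinder type-key bulk_tier set volume_backend_name=bulk_lvm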
Once you have a back end defined in a target volume type, you can migrate a volume to that back end through re-typing. This involves re-applying the target volume type to a volume, thereby applying the new back end settings. See Section 2.3.10, “Changing a Volume’s Type (Volume Re-typing)” for instructions.
To do so:
- In the dashboard, select Project > Compute > Volumes.
- In the Actions column of the volume to be migrated, select Change Volume Type.
- In the Change Volume Type dialog, select the target volume type defining the new back end from the Type drop-down list.
- Select On Demand from the Migration Policy drop-down list.
- Click Change Volume Type to start the migration.