Chapter 2. Configuring the Block Storage service (cinder)


The Block Storage service (cinder) provides the administration, security, scheduling, and overall management of all volumes. Volumes are used as the primary form of persistent storage for Compute instances.

For more information about volume backups, see the Block Storage Backup Guide.

Important

You must install host bus adapters (HBAs) on all Controller nodes and Compute nodes in any deployment that uses the Block Storage service and a Fibre Channel (FC) back end.

Block Storage is configured using the Block Storage REST API.

Note

Ensure that you are using Block Storage REST API version 3 because Block Storage no longer supports version 2. The default overcloud deployment does this for you by setting the environment variable OS_VOLUME_API_VERSION=3.0.

The Block Storage REST API preserves backward compatibility by using microversions to add enhancements. The cinder CLI uses the REST API version of 3.0, unless you specify a specific microversion. For instance, to specify the 3.17 microversion for a cinder command, add the --os-volume-api-version 3.17 argument.
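
For example, the following command uses the 3.17 microversion to include cluster information in the list of Block Storage services:

$ cinder --os-volume-api-version 3.17 service-list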

Note

The openstack CLI can only use the Block Storage REST API version of 3.0 because it does not support these microversions.

2.1. Block Storage service back ends

Red Hat OpenStack Platform (RHOSP) is deployed using director. Doing so helps ensure the correct configuration of each service, including the Block Storage service (cinder) and, by extension, its back end. Director also has several integrated back-end configurations.

RHOSP supports Red Hat Ceph Storage and NFS as Block Storage service back ends. By default, the Block Storage service uses an LVM back end as a repository for volumes. While this back end is suitable for test environments, LVM is not supported in production environments.

For instructions on how to deploy Red Hat Ceph Storage with RHOSP, see Deploying Red Hat Ceph Storage and OpenStack Platform together with director.

You can also configure the Block Storage service to use supported third-party storage appliances. Director includes the necessary components for deploying different back-end solutions.

For a complete list of supported Block Storage service back-end appliances and drivers, see Cinder in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform. All third-party back-end appliances and drivers have additional deployment guides. Review the appropriate deployment guide to determine if a back-end appliance or driver requires a plugin.

If you configure Block Storage to use multiple back ends, you must create a volume type for each back end. If you do not specify a back end when you create a volume, the Block Storage scheduler uses filters to select a suitable back end. For more information, see Configuring the default Block Storage scheduler filters.
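
For example, the following commands sketch the creation of one volume type for each of two back ends. The type names and back-end names are illustrative; replace them with the volume_backend_name values defined in your back-end configuration:

$ cinder type-create fast
$ cinder type-key fast set volume_backend_name=backend1
$ cinder type-create slow
$ cinder type-key slow set volume_backend_name=backend2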

2.2. Active-active Block Storage for high availability

In active-passive mode, if the Block Storage service fails in a hyperconverged deployment, the failed node must be fenced, which is undesirable because node fencing can trigger unnecessary storage rebalancing. Edge sites do not deploy Pacemaker, although Pacemaker is still present at the control site. Instead, edge sites deploy the Block Storage service in an active-active configuration to support highly available hyperconverged deployments.

Active-active deployments improve scaling and performance, and reduce response time by balancing workloads across all available nodes. Deploying the Block Storage service in an active-active configuration creates a highly available environment that maintains the management layer during partial network outages and single- or multi-node hardware failures. Active-active deployments allow a cluster to continue providing Block Storage services during a node outage.

Active-active deployments do not, however, enable workflows to resume automatically. If a service stops, individual operations running on the failed node will also fail during the outage. In this situation, confirm that the service is down and initiate a cleanup of resources that had in-flight operations.

2.2.1. Enabling active-active Block Storage

The cinder-volume-active-active.yaml file enables you to deploy the Block Storage service in an active-active configuration. This file ensures director uses the non-Pacemaker cinder-volume heat template and adds the etcd service to the deployment as a distributed lock manager (DLM).

The cinder-volume-active-active.yaml file also defines the active-active cluster name by assigning a value to the CinderVolumeCluster parameter. CinderVolumeCluster is a global Block Storage parameter. Therefore, you cannot include clustered (active-active) and non-clustered back ends in the same deployment.

Important

Currently, active-active configuration for Block Storage works only with Ceph RADOS Block Device (RBD) back ends. If you plan to use multiple back ends, all back ends must support the active-active configuration. If a back end that does not support the active-active configuration is included in the deployment, that back end will not be available for storage. In an active-active deployment, you risk data loss if you save data on a back end that does not support the active-active configuration.

Prerequisites

  • The undercloud is installed. For more information, see Installing director in Director Installation and Usage.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. To enable active-active Block Storage service volumes, add this environment file to the stack with your other environment files and deploy the overcloud:

    /usr/share/openstack-tripleo-heat-templates/environments/cinder-volume-active-active.yaml
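
    For example, a deployment command might look like the following sketch, where the list of other environment files is illustrative:

    $ openstack overcloud deploy --templates \
      -e <other_environment_files> \
      -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-volume-active-active.yaml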

2.2.2. Maintenance commands for active-active Block Storage configurations

After deploying an active-active Block Storage configuration, you can use the following commands to manage the clusters and their services.

Note

These commands need a Block Storage (cinder) REST API microversion of 3.17 or later.

  • To see detailed information about all the services, such as binary, host, zone, status, state, cluster, disabled reason, and cluster name:

    $ cinder --os-volume-api-version 3.17 service-list

  • To see detailed information about all the clusters, such as name, binary, state, and status:

    $ cinder --os-volume-api-version 3.17 cluster-list

    NOTE: When deployed by director for the Ceph back end, the default cluster name is tripleo@tripleo_ceph.

  • To see detailed information about a specific clustered service:

    $ cinder --os-volume-api-version 3.17 cluster-show <cluster_name>

  • To enable a clustered service:

    $ cinder --os-volume-api-version 3.17 cluster-enable <cluster_name>

  • To disable a clustered service:

    $ cinder --os-volume-api-version 3.17 cluster-disable <cluster_name>

2.2.3. Volume manage and unmanage

The unmanage and manage mechanisms facilitate moving volumes from one service using version X to another service using version X+1. Both services remain running during this process.

In Block Storage (cinder) REST API microversion 3.17 or later, you can list the volumes and snapshots that can be managed in Block Storage clusters. To see these lists, use the --cluster argument with cinder manageable-list or cinder snapshot-manageable-list.

In Block Storage REST API microversion 3.16 and later, you can use the optional --cluster argument of the cinder manage command to add unmanaged volumes to a Block Storage cluster.
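
For example, the following commands list the manageable volumes and snapshots in a cluster, assuming the default director-deployed Ceph cluster name tripleo@tripleo_ceph:

$ cinder --os-volume-api-version 3.17 manageable-list --cluster tripleo@tripleo_ceph
$ cinder --os-volume-api-version 3.17 snapshot-manageable-list --cluster tripleo@tripleo_ceph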

2.2.4. Volume migration on a clustered service

With Block Storage (cinder) REST API microversion 3.16 and later, the cinder migrate and cinder-manage commands use the --cluster argument to define the destination for active-active deployments.

When you migrate a volume on a Block Storage clustered service, use the optional --cluster argument and omit the host positional argument, because these arguments are mutually exclusive.
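
For example, the following sketch migrates a volume to a clustered destination, again assuming the default Ceph cluster name:

$ cinder --os-volume-api-version 3.16 migrate <volume_id> --cluster tripleo@tripleo_ceph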

2.2.5. Initiating Block Storage service maintenance

All Block Storage volume services perform their own maintenance when they start.

In an environment with multiple volume services grouped in a cluster, you can clean up services that are not currently running.

The work-cleanup command triggers server cleanups. The command returns:

  • A list of the services that the command can clean.
  • A list of the services that the command cannot clean because they are not currently running in the cluster.

Prerequisites

  • You must be a project administrator to initiate Block Storage service maintenance.
  • Block Storage (cinder) REST API microversion 3.24 or later.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Run the following command to verify whether all of the services for a cluster are running:

    $ cinder cluster-list --detailed

    Alternatively, run the cinder cluster-show command to check a specific cluster.

  3. If any services are not running, run the following command to identify those specific services:

    $ cinder service-list
  4. Run the following command to trigger the server cleanup:

    $ cinder --os-volume-api-version 3.24 work-cleanup [--cluster <cluster-name>] [--host <hostname>] [--binary <binary>] [--is-up <True|true|False|false>] [--disabled <True|true|False|false>] [--resource-id <resource-id>] [--resource-type <Volume|Snapshot>]
    Note

    Filters, such as --cluster, --host, and --binary, define what the command cleans. You can filter on cluster name, host name, type of service, and resource type, including a specific resource. If you do not apply filtering, the command attempts to clean everything that can be cleaned.

    The following example filters by cluster name:

    $ cinder --os-volume-api-version 3.24 work-cleanup --cluster tripleo@tripleo_ceph

2.3. Group volume configuration with volume types

With Red Hat OpenStack Platform you can create volume types so that you can apply associated settings to each volume type. You can assign the required volume type before and after you create a volume. For more information, see Creating Block Storage volumes and Block Storage volume retyping.

Settings are associated with volume types using key-value pairs called Extra Specs. When you specify a volume type during volume creation, the Block Storage scheduler applies these key-value pairs as settings. You can associate multiple key-value pairs to the same volume type.

You can create volume types to provide different levels of performance for your cloud users:

  • Add specific performance, resilience, and other Extra Specs as key-value pairs to each volume type.
  • Associate different lists of QoS performance limits, or QoS specifications, with your volume types.

When your users create their volumes, they can select the appropriate volume type that fulfills their performance requirements.

If you create a volume and do not specify a volume type, then Block Storage uses the default volume type. You can use the Block Storage (cinder) configuration file to define the general default volume type that applies to all your projects (tenants). But if your deployment uses project-specific volume types, ensure that you define default volume types for each project. In this case, Block Storage uses the project-specific volume type instead of the general default volume type. For more information, see Defining a project-specific default volume type.

2.3.1. Listing back-end driver properties

The properties associated with volume types use key-value pairs called Extra Specs. Each back-end driver supports its own set of Extra Specs. For more information about which Extra Specs a driver supports, see the back-end driver documentation.

Alternatively, you can query the Block Storage host directly to list the well-defined standard Extra Specs of its back-end driver.

Prerequisites

  • You must be a project administrator to query the Block Storage host directly.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Determine the host of cinder-volume:

    $ cinder service-list

    This command returns a list that contains the host of each Block Storage service (cinder-backup, cinder-scheduler, and cinder-volume). For example:

    +------------------+---------------------------+------+---------
    |      Binary      |            Host           | Zone |  Status ...
    +------------------+---------------------------+------+---------
    |  cinder-backup   |   localhost.localdomain   | nova | enabled ...
    | cinder-scheduler |   localhost.localdomain   | nova | enabled ...
    |  cinder-volume   |  localhost.localdomain@lvm  | nova | enabled ...
    +------------------+---------------------------+------+---------
  3. Display the driver capabilities to determine the supported Extra Specs of a Block Storage service:

    $ cinder get-capabilities <volsvchost>
    • Replace <volsvchost> with the host of cinder-volume. For example:

      $ cinder get-capabilities localhost.localdomain@lvm
          +---------------------+-----------------------------------------+
          |     Volume stats    |                        Value            |
          +---------------------+-----------------------------------------+
          |     description     |                         None            |
          |     display_name    |                         None            |
          |    driver_version   |                        3.0.0            |
          |      namespace      | OS::Storage::Capabilities::localhost.loc...
          |      pool_name      |                         None            |
          |   storage_protocol  |                        iSCSI            |
          |     vendor_name     |                     Open Source         |
          |      visibility     |                         None            |
          | volume_backend_name |                         lvm             |
          +---------------------+-----------------------------------------+
          +--------------------+------------------------------------------+
          | Backend properties |                        Value             |
          +--------------------+------------------------------------------+
          |    compression     |      {u'type': u'boolean', u'description'...
          |        qos         |              {u'type': u'boolean', u'des ...
          |    replication     |      {u'type': u'boolean', u'description'...
          | thin_provisioning  | {u'type': u'boolean', u'description': u'S...
          +--------------------+------------------------------------------+

      The Backend properties column shows a list of Extra Spec Keys that you can set, while the Value column provides information on valid corresponding values.
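
      For example, if a back end reports the thin_provisioning property, you can set it as an Extra Spec on a volume type by using the capabilities scope. This is a sketch; the volume type name is illustrative:

      $ cinder type-key <volume_type> set capabilities:thin_provisioning='<is> True'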

2.3.2. Creating and configuring a volume type

You can create volume types so that you can apply associated settings to each volume type. For instance, you can create volume types to provide different levels of performance for your cloud users.

When your users create their volumes, they can select the appropriate volume type that fulfills their performance requirements.

Prerequisites

  • You must be a project administrator to create and configure volume types.
  • Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Introduction to the OpenStack Dashboard.

Procedure

  1. Log into the dashboard as an admin user.
  2. Select Admin > Volumes > Volume Types.
  3. Click Create Volume Type.
  4. Enter the volume type name in the Name field.
  5. Click Create Volume Type. The new type appears in the Volume Types table.
  6. Select the volume type’s View Extra Specs action.
  7. Click Create and specify the Key and Value. The key-value pair must be valid; otherwise, specifying the volume type during volume creation will result in an error.
  8. Click Create. The associated setting (key-value pair) now appears in the Extra Specs table.

By default, all volume types are accessible to all OpenStack projects. If you need to create volume types with restricted access, you will need to do so through the CLI. For instructions, see Creating and configuring private volume types.

2.3.3. Editing a volume type

Edit a volume type in the dashboard to modify the Extra Specs configuration of the volume type. You can also delete a volume type.

Prerequisites

  • You must be a project administrator to edit or delete volume types.
  • Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Introduction to the OpenStack Dashboard.

Procedure

  1. Log into the dashboard as an admin user.
  2. Select Admin > Volumes > Volume Types.
  3. In the Volume Types table, select the volume type’s View Extra Specs action.
  4. On the Extra Specs table of this page, you can:

    • Add a new setting to the volume type. To do this, click Create and specify the key/value pair of the new setting you want to associate to the volume type.
    • Edit an existing setting associated with the volume type by selecting the setting’s Edit action.
    • Delete existing settings associated with the volume type by selecting the check boxes of the extra specs and clicking Delete Extra Specs, in both this and the next dialog screen.

To delete a volume type, select its corresponding check boxes from the Volume Types table and click Delete Volume Types.

2.3.4. Creating and configuring private volume types

By default, all volume types are available to all projects (tenants). You can create a restricted volume type by marking it private. To do so, set the is-public flag of the volume type to false, because the default value for this flag is true.

Private volume types are useful for restricting access to volumes with certain attributes. Typically, these are settings that should only be usable by specific projects, for example, new back ends or ultra-high performance configurations that are being tested.

Prerequisites

  • You must be a project administrator to create, view, or configure access for private volume types.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Create a new cinder volume type and set the is-public flag to false:

    $ cinder type-create --is-public false <type_name>
    • Replace <type_name> with the name that you want to call this new private volume type.

By default, private volume types are only accessible to their creators. However, admin users can find and view private volume types by using the following command:

$ cinder type-list

This command lists the name and ID of both public and private volume types. You need the ID of the volume type to provide access to it.

Access to a private volume type is granted at the project level. You therefore need to know the ID of the required project. If you do not know this tenant ID but you know the name of a user in the project, run:

$ openstack user show <user_name>
  • Replace <user_name> with the name of a user in the required project to display a list of the user details, including the tenantId of the project to which this user is associated.

Note

If you are unsure of this user name, the openstack user list command lists the name and ID of all the configured users.

To grant a project access to a private volume type, run:

$ cinder type-access-add --volume-type <type_id> --project-id <tenant_id>
  • Replace <type_id> with the ID of the required private volume type.
  • Replace <tenant_id> with the required tenant ID.

To view which projects have access to a private volume type, run:

$ cinder type-access-list --volume-type <type_id>

To remove a project from the access list of a private volume type, run:

$ cinder type-access-remove --volume-type <type_id> --project-id <tenant_id>

2.3.5. Defining a project-specific default volume type

Optional: For complex deployments, project administrators can define a default volume type for each project (tenant).

If you create a volume and do not specify a volume type, then Block Storage uses the default volume type.

You can use the default_volume_type option of the Block Storage (cinder) configuration file cinder.conf to define the general default volume type that applies to all your projects.
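
For example, a minimal sketch of this option in cinder.conf, where the volume type name is illustrative:

[DEFAULT]
default_volume_type = mydefaulttype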

But if your Red Hat OpenStack Platform (RHOSP) deployment uses project-specific volume types, ensure that you define default volume types for each project. In this case, Block Storage uses the project-specific volume type instead of the general default volume type. The following RHOSP deployment examples need project-specific default volume types:

  • A distributed RHOSP deployment spanning many availability zones (AZs). Each AZ is in its own project and has its own volume types.
  • A RHOSP deployment for three different departments of a company. Each department is in its own project and has its own specialized volume type.

Prerequisites

  • At least one volume type in each project that will be the project-specific default volume type. For more information, see Creating and configuring a volume type.
  • Block Storage REST API microversion 3.62 or later.
  • You must be a project administrator to define, clear, or list default volume types for your projects.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Define, clear, or list the default volume type for a project:

    Note

    You must replace <project_id> in these commands with the ID of the required project. To find the ID and name of each tenant, run the openstack project list command.

    • To define the default volume type for a project:

      $ cinder --os-volume-api-version 3.62 default-type-set <volume_type> <project_id>
      • Replace <volume_type> with the name or ID of the required volume type. You can run the cinder type-list command to list the name and ID of all the volume types.
    • To clear the default volume type for a project:

      $ cinder --os-volume-api-version 3.62 default-type-unset <project_id>
    • To list the default volume type for a project:

      $ cinder --os-volume-api-version 3.62 default-type-list --project <project_id>

2.4. Creating and configuring an internal project for the Block Storage service (cinder)

Some Block Storage features (for example, the Image-Volume cache) require the configuration of an internal tenant. The Block Storage service uses this tenant/project to manage block storage items that do not necessarily need to be exposed to normal users. Examples of such items are images cached for frequent volume cloning or temporary copies of volumes being migrated.

Procedure

  1. To configure an internal project, first create a generic project and user, both named cinder-internal. To do so, log in to the Controller node and run:
$ openstack project create --enable --description "Block Storage Internal Project" cinder-internal
    +-------------+----------------------------------+
    |   Property  |              Value               |
    +-------------+----------------------------------+
    | description |  Block Storage Internal Tenant   |
    |   enabled   |               True               |
    |      id     | cb91e1fe446a45628bb2b139d7dccaef |
    |     name    |         cinder-internal          |
    +-------------+----------------------------------+
$ openstack user create --project cinder-internal cinder-internal
    +----------+----------------------------------+
    | Property |              Value               |
    +----------+----------------------------------+
    |  email   |               None               |
    | enabled  |               True               |
    |    id    | 84e9672c64f041d6bfa7a930f558d946 |
    |   name   |         cinder-internal          |
    |project_id| cb91e1fe446a45628bb2b139d7dccaef |
    | username |         cinder-internal          |
    +----------+----------------------------------+
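
If you need to retrieve these IDs later, for example, when you configure the Image-Volume cache, you can display them with the following commands:

$ openstack project show cinder-internal
$ openstack user show cinder-internal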

2.5. Configuring the image-volume cache

The Block Storage service features an optional Image-Volume cache that you can use when you create volumes from images. This cache is designed to improve the speed of volume creation from frequently used images. For information about how to create volumes from images, see Creating Block Storage volumes.

When enabled, the Image-Volume cache stores a copy of an image the first time a volume is created from it. This stored image is cached locally to the Block Storage back end to help improve performance the next time the image is used to create a volume. The limit of the Image-Volume cache can be set to a size (in GB), number of images, or both.

The Image-Volume cache is supported by several back ends. If you are using a third-party back end, refer to its documentation for information on Image-Volume cache support.

Prerequisites

  • An internal project and user named cinder-internal. For more information, see Creating and configuring an internal project for the Block Storage service (cinder).
  • The undercloud is installed. For more information, see Installing director in Director Installation and Usage.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. To enable and configure the Image-Volume cache on a back end, you must add the following values to an ExtraConfig section of an environment file included in your overcloud deployment command:

    parameter_defaults:
      ExtraConfig:
        cinder::config::cinder_config:
          DEFAULT/cinder_internal_tenant_project_id:
            value: TENANTID
          DEFAULT/cinder_internal_tenant_user_id:
            value: USERID
          BACKEND/image_volume_cache_enabled:
            value: True
          BACKEND/image_volume_cache_max_size_gb:
            value: MAXSIZE
          BACKEND/image_volume_cache_max_count:
            value: MAXNUMBER

    • Replace TENANTID with the ID of the cinder-internal project.
    • Replace USERID with the ID of the cinder-internal user.
    • Replace BACKEND with the name of the target back end (specifically, its volume_backend_name value).
    • Replace MAXSIZE with the required cache size in GB. By default, the Image-Volume cache size is only limited by the back end.
    • Replace MAXNUMBER with the maximum number of cached images.

    The Block Storage service database uses a time stamp to track when each cached image was last used to create a volume. If either or both MAXSIZE and MAXNUMBER are set, the Block Storage service deletes cached images as needed to make way for new ones. Cached images with the oldest time stamp are deleted first whenever the Image-Volume cache limits are reached.

  4. Save the updates to your environment file.
  5. Add your environment file to the stack with your other environment files and deploy the overcloud.

2.6. Block Storage service (cinder) Quality of Service specifications

You can apply performance limits to the volumes that your cloud users create by creating Quality of Service (QoS) specifications and associating them with volume types. For example, volumes that use higher performance QoS specifications can provide your users with more IOPS, or users can assign lighter workloads to volumes that use lower performance QoS specifications to conserve resources.

Note

You must be a project administrator to create, configure, associate, and disassociate QoS specifications.

When you create a QoS specification you must choose the required consumer. The consumer determines where you want to apply the QoS limits and determines which QoS property keys are available to define the QoS limits. For more information about the available consumers, see Consumers of QoS specifications.

You can create volume performance limits by setting the required QoS property keys to your deployment-specific values. For more information on the QoS property keys provided by the Block Storage service (cinder), see Block Storage QoS property keys.

To create a QoS specification and associate it with a volume type, complete the following tasks:

  1. Create and configure the QoS specification.
  2. Associate the QoS specification with a volume type.

You can create, configure, and associate a QoS specification to a volume type by using the Dashboard, or by using the CLI.

2.6.1. Consumers of QoS specifications

When you create a QoS specification you must choose the required consumer. The consumer determines where you want to apply the QoS limits and determines which QoS property keys are available to define the QoS limits. The Block Storage service (cinder) supports the following consumers of QoS specifications:

  • front-end: The Compute service (nova) applies the QoS limits when the volume is attached to an instance. The Compute service supports all the QoS property keys provided by the Block Storage service.
  • back-end: The back-end driver of the associated volume type applies the QoS limits. Each back-end driver supports its own set of QoS property keys. For more information about which QoS property keys the driver supports, see the back-end driver documentation.

    Use the back-end consumer in cases where the front-end consumer is not supported, for example, when you attach volumes to bare metal nodes through the Bare Metal Provisioning service (ironic).

  • both: Both consumers apply the QoS limits, where possible. This consumer type therefore supports the following QoS property keys:

    • When a volume is attached to an instance, you can use every QoS property key that both the Compute service and the back-end driver support.
    • When the volume is not attached to an instance, then you can only use the QoS property keys that the back-end driver supports.

2.6.2. Block Storage QoS property keys

The Block Storage service provides you with QoS property keys so that you can limit the performance of the volumes that your cloud users create. These limits use the following two industry standard measurements of storage volume performance:

  • Input/output operations per second (IOPS)
  • Data transfer rate, measured in bytes per second

The consumer of the QoS specification determines which QoS property keys are supported. For more information, see Consumers of QoS specifications.

Block Storage cannot perform error checking of QoS property keys, because some QoS property keys are defined externally by back-end drivers. Therefore, Block Storage ignores any invalid or unsupported QoS property key.

Important

Ensure that you spell the QoS property keys correctly. The volume performance limits that contain incorrectly spelled property keys are ignored.

For both the IOPS and data transfer rate measurements, you can configure the following performance limits:

Fixed limits
Typically, fixed limits should define the average usage of the volume performance measurement.
Burst limits

Typically, burst limits should define periods of intense activity of the volume performance measurement. A burst limit makes allowance for an increased rate of activity for a specific time, while keeping the fixed limits low for average usage.

Note

The burst limits all use a burst length of 1 second.

Total limits

Specify a global limit for both the read and write operations of the required performance limit, by using the total_* QoS property key.

Note

Instead of using a total limit you can apply separate limits to the read and write operations or choose to limit only the read or write operations.

Read limits

Specify a limit that only applies to the read operations of the required performance limit, by using the read_* QoS property key.

Note

This limit is ignored when you specify a total limit.

Write limits

Specify a limit that only applies to the write operations of the required performance limit, by using the write_* QoS property key.

Note

This limit is ignored when you specify a total limit.

You can use the following Block Storage QoS property keys to create volume performance limits for your deployment:

Note

The default value for all QoS property keys is 0, which means that the limit is unrestricted.

Table 2.1. Block Storage QoS property keys

  • Fixed IOPS
    Measurement unit: IOPS
    QoS property keys: total_iops_sec, read_iops_sec, write_iops_sec

  • Fixed IOPS calculated by the size of the volume. For more information about the usage restrictions of these limits, see IOPS limits that scale according to volume size.
    Measurement unit: IOPS per GB
    QoS property keys: total_iops_sec_per_gb, read_iops_sec_per_gb, write_iops_sec_per_gb

  • Burst IOPS
    Measurement unit: IOPS
    QoS property keys: total_iops_sec_max, read_iops_sec_max, write_iops_sec_max

  • Fixed data transfer rate
    Measurement unit: Bytes per second
    QoS property keys: total_bytes_sec, read_bytes_sec, write_bytes_sec

  • Burst data transfer rate
    Measurement unit: Bytes per second
    QoS property keys: total_bytes_sec_max, read_bytes_sec_max, write_bytes_sec_max

  • Size of an IO request when calculating IOPS limits. For more information, see Set the IO request size for IOPS limits.
    Measurement unit: Bytes
    QoS property key: size_iops_sec

2.6.2.1. Set the IO request size for IOPS limits

If you implement IOPS volume performance limits, you should also specify the typical IO request size to prevent users from circumventing these limits. If you do not, users can circumvent the limits by submitting several large IO requests instead of many smaller ones.

Use the size_iops_sec QoS property key to specify the maximum size, in bytes, of a typical IO request. The Block Storage service uses this size to calculate the proportional number of typical IO requests for each IO request that is submitted, for example:

size_iops_sec=4096

  • An 8 KB request is counted as two requests.
  • A 6 KB request is counted as one and a half requests.
  • Any request less than 4 KB is counted as one request.

The Block Storage service only uses this IO request size limit when calculating IOPS limits.

Note

The default value of size_iops_sec is 0, which ignores the size of IO requests when applying IOPS limits.
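
For example, a minimal sketch that combines an IOPS limit with the IO request size key, assuming a QoS specification named myqoslimits already exists (see Creating and configuring a QoS specification with the CLI):

$ openstack volume qos set \
 --property total_iops_sec=5000 \
 --property size_iops_sec=4096 \
 myqoslimits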

2.6.2.2. IOPS limits that scale according to volume size

You can create IOPS volume performance limits that are determined by the capacity of the volumes that your users create. These Quality of Service (QoS) limits scale with the size of the provisioned volumes. For example, if the volume type has an IOPS limit of 500 per GB of volume size for read operations, then a provisioned 3 GB volume of this volume type would have a read IOPS limit of 1500.

Important

The size of the volume is determined when the volume is attached to an instance. Therefore, if the size of the volume is changed while it is attached to an instance, these limits are only recalculated for the new volume size when the volume is detached and then reattached to an instance.

You can use the following QoS property keys, specified in IOPS per GB, to create scalable volume performance limits:

  • total_iops_sec_per_gb: Specify a global IOPS limit per GB of volume size for both the read and write operations.

    Note

    Instead of using a total limit you can apply separate limits to the read and write operations or choose to limit only the read or write operations.

  • read_iops_sec_per_gb: Specify an IOPS limit per GB of volume size that only applies to the read operations.

    Note

    This limit is ignored when you specify a total limit.

  • write_iops_sec_per_gb: Specify an IOPS limit per GB of volume size that only applies to the write operations.

    Note

    This limit is ignored when you specify a total limit.

Important

The consumer of the QoS specification containing these QoS limits can either be front-end or both, but not back-end. For more information, see Consumers of QoS specifications.
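
For example, a sketch that creates a front-end QoS specification with the scaled read limit from the earlier example; the specification name is illustrative:

$ openstack volume qos create --consumer front-end scaled-iops
$ openstack volume qos set --property read_iops_sec_per_gb=500 scaled-iops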

2.6.3. Creating and configuring a QoS specification with the Dashboard

A Quality of Service (QoS) specification is a list of volume performance QoS limits. You create each QoS limit by setting a QoS property key to your deployment-specific value. To apply the QoS performance limits to a volume, you must associate the QoS specification with the required volume type.

Prerequisites

  • You must be a project administrator to create, configure, associate, and disassociate QoS specifications.
  • Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Introduction to the OpenStack Dashboard.

Procedure

  1. Log into the dashboard as an admin user.
  2. Select Admin > Volumes > Volume Types.
  3. On the QoS Specs table, click Create QoS Spec.
  4. Enter a name for the QoS Spec.
  5. In the Consumer field, choose the consumer of this QoS specification. For more information, see Consumers of QoS specifications.
  6. Click Create. The new QoS specification is displayed in the QoS Specs table.
  7. In the QoS Specs table, select the Manage Specs action of your new QoS specification to open the Specs window, where you add the QoS performance limits.
  8. Click Create in the Specs window to open the Create Extra Specs window.
  9. Specify the QoS property key for a QoS performance limit in the Key field, and set the performance limit value in the Value field. For more information on the available property keys, see Block Storage QoS property keys.

    Important

    Ensure that you spell the QoS property keys correctly. The volume performance limits that contain incorrectly spelled property keys are ignored.

  10. Click Create to add the QoS limit to your QoS specification.
  11. Repeat steps 7 to 10 for each QoS limit that you want to add to your QoS specification.

2.6.4. Creating and configuring a QoS specification with the CLI

A Quality of Service (QoS) specification is a list of volume performance QoS limits. You create each QoS limit by setting a QoS property key to your deployment-specific value. To apply the QoS performance limits to a volume, you must associate the QoS specification with the required volume type.

Prerequisites

  • You must be a project administrator to create, configure, associate, and disassociate QoS specifications.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Create the QoS specification:

    $ openstack volume qos create [--consumer <qos_spec_consumer>] <qos_spec_name>
    • Optional: Replace <qos_spec_consumer> with the required consumer of this QoS specification. If not specified, the consumer defaults to both. For more information, see Consumers of QoS specifications.
    • Replace <qos_spec_name> with the name of your QoS specification.
  3. Add the performance limits to the QoS specification by specifying a separate --property <key>=<value> argument for each QoS limit that you want to add:

    $ openstack volume qos set --property <key>=<value> <qos_spec_name>
    • Replace <key> with the QoS property key of the required performance constraint. For more information, see Block Storage QoS property keys.

      Important

      Ensure that you spell the QoS property keys correctly. The volume performance limits that contain incorrectly spelled property keys are ignored.

    • Replace <value> with your deployment-specific limit for this performance constraint, in the measurement unit required by the QoS property key.
    • Replace <qos_spec_name> with the name or ID of your QoS specification.

      Example:

      $ openstack volume qos set \
       --property read_iops_sec=5000 \
       --property write_iops_sec=7000 \
       myqoslimits
  4. Review the QoS specification:

    $ openstack volume qos list
    +--------------------------------------+---------+-----------+--------------+-----------------------------------------------------+
    | ID                                   | Name    | Consumer  | Associations | Properties                                          |
    +--------------------------------------+---------+-----------+--------------+-----------------------------------------------------+
    | 204c6ba2-c67c-4ac8-918a-03f101811235 | myqoslimits | front-end |              | read_iops_sec='5000', write_iops_sec='7000' |
    +--------------------------------------+---------+-----------+--------------+-----------------------------------------------------+

    This command provides a table of the configuration details of all the configured QoS specifications.

2.6.5. Associating a QoS specification with a volume type by using the Dashboard

You must associate a Quality of Service (QoS) specification with an existing volume type to apply the QoS limits to volumes.

Important

If a volume is already attached to an instance, then the QoS limits are only applied to this volume when the volume is detached and then reattached to this instance.

Prerequisites

  • You must be a project administrator to create, configure, associate, and disassociate QoS specifications.
  • Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Introduction to the OpenStack Dashboard.

Procedure

  1. Log into the dashboard as an admin user.
  2. Select Admin > Volumes > Volume Types.
  3. In the Volume Types table, select the Manage QoS Spec Association action of the required volume type.
  4. Select the required QoS specification from the QoS Spec to be associated list.
  5. Click Associate. The QoS specification is added to the Associated QoS Spec column of the edited volume type.

2.6.6. Associating a QoS specification with a volume type by using the CLI

You must associate a Quality of Service (QoS) specification with an existing volume type to apply the QoS limits to volumes.

Important

If a volume is already attached to an instance, then the QoS limits are only applied to this volume when the volume is detached and then reattached to this instance.

Prerequisites

  • You must be a project administrator to create, configure, associate, and disassociate QoS specifications.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Associate the required QoS specification with the required volume type:

    $ openstack volume qos associate <qos_spec_name> <volume_type>
    • Replace <qos_spec_name> with the name or ID of the QoS specification. You can run the openstack volume qos list command to list the name and ID of all the QoS specifications.
    • Replace <volume_type> with the name or ID of the volume type. You can run the cinder type-list command to list the name and ID of all the volume types.
  3. Verify that the QoS specification has been associated:

    $ openstack volume qos list

    The Associations column of the output table shows which volume types are associated with this QoS specification.

2.6.7. Disassociating a QoS specification from a volume type with the Dashboard

You can disassociate a Quality of Service (QoS) specification from a volume type when you no longer want the QoS limits to be applied to volumes of that volume type.

Important

If a volume is already attached to an instance, then the QoS limits are only removed from this volume when the volume is detached and then reattached to this instance.

Prerequisites

  • You must be a project administrator to create, configure, associate, and disassociate QoS specifications.
  • Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Introduction to the OpenStack Dashboard.

Procedure

  1. Log into the dashboard as an admin user.
  2. Select Admin > Volumes > Volume Types.
  3. In the Volume Types table, select the Manage QoS Spec Association action of the required volume type.
  4. Select None from the QoS Spec to be associated list.
  5. Click Associate.

    The QoS specification should be removed from the Associated QoS Spec column of the edited volume type.

2.6.8. Disassociating a QoS specification from volume types with the CLI

You can disassociate a Quality of Service (QoS) specification from a volume type when you no longer want the QoS limits to be applied to volumes of that volume type.

Important

If a volume is already attached to an instance, then the QoS limits are only removed from this volume when the volume is detached and then reattached to this instance.

Prerequisites

  • You must be a project administrator to create, configure, associate, and disassociate QoS specifications.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Disassociate the volume types associated with the QoS specification. You can either disassociate a specific volume type, or all volume types when more than one volume type is associated with the same QoS specification:

    • To disassociate a specific volume type associated with the QoS specification:

      $ openstack volume qos disassociate <qos_spec_name> --volume-type <volume_type>
      • Replace <qos_spec_name> with the name or ID of the QoS specification. You can run the openstack volume qos list command to list the name and ID of all the QoS specifications.
      • Replace <volume_type> with the name or ID of the volume type associated with this QoS specification. You can run the cinder type-list command to list the name and ID of all the volume types.
    • To disassociate all volume types associated with the QoS specification:

      $ openstack volume qos disassociate <qos_spec_name> --all
  3. Verify that the QoS specification has been disassociated:

    $ openstack volume qos list

    The Associations column of this QoS specification should either not specify the volume type or be empty.

2.7. Block Storage service (cinder) volume encryption

Volume encryption helps provide basic data protection in case the volume back end is either compromised or stolen. Both the Compute and Block Storage services are integrated to allow instances to read and use encrypted volumes. You must deploy the OpenStack Key Manager (barbican) to take advantage of volume encryption.

Important
  • Volume encryption is not supported on file-based volumes (such as NFS).
  • Retyping an unencrypted volume to an encrypted volume of the same size is not supported, because encrypted volumes require additional space to store encryption data. For more information about encrypting unencrypted volumes, see Encrypting unencrypted volumes.

Volume encryption is applied through volume types. For information about encrypted volume types, see Configuring Block Storage service volume encryption with the Dashboard or Configuring Block Storage service volume encryption with the CLI.

For more information about using the OpenStack Key Manager (barbican) to manage your Block Storage (cinder) encryption keys, see Encrypting Block Storage (cinder) volumes.

2.7.1. Configuring Block Storage service volume encryption with the Dashboard

To create encrypted volumes, you first need an encrypted volume type. Encrypting a volume type involves setting the provider class, cipher, and key size that it uses. You can also re-configure the encryption settings of an encrypted volume type.

You can invoke encrypted volume types to automatically create encrypted volumes.

Prerequisites

  • You must be a project administrator to create encrypted volumes.
  • Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Introduction to the OpenStack Dashboard.

Procedure

  1. Log into the dashboard as an admin user.
  2. Select Admin > Volumes > Volume Types.
  3. In the Actions column of the volume type to be encrypted, select Create Encryption to launch the Create Volume Type Encryption wizard.
  4. From there, configure the Provider, Control Location, Cipher, and Key Size settings of the volume type’s encryption. The Description column describes each setting.

    Important

    The values listed below are the only supported options for Provider, Cipher, and Key Size.

    1. Enter luks for Provider.
    2. Enter aes-xts-plain64 for Cipher.
    3. Enter 256 for Key Size.
  5. Click Create Volume Type Encryption.

You can also re-configure the encryption settings of an encrypted volume type.

  1. Select Update Encryption from the Actions column of the volume type to launch the Update Volume Type Encryption wizard.
  2. In Project > Compute > Volumes, check the Encrypted column in the Volumes table to determine whether the volume is encrypted.
  3. If the volume is encrypted, click Yes in that column to view the encryption settings.

2.7.2. Configuring Block Storage service volume encryption with the CLI

To create encrypted volumes, you first need an encrypted volume type. Encrypting a volume type involves setting the provider class, cipher, and key size that it uses.

Prerequisites

  • You must be a project administrator to create encrypted volumes.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Create a volume type:

    $ cinder type-create myEncType
  3. Configure the cipher, key size, control location, and provider settings:

    $ cinder encryption-type-create --cipher aes-xts-plain64 --key-size 256 --control-location front-end myEncType luks
  4. Create an encrypted volume:

    $ cinder --debug create 1 --volume-type myEncType --name myEncVol

2.7.3. Automatic deletion of volume image encryption key

The Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a 1:1 relationship between an encryption key and a stored image.

Encryption key deletion prevents unlimited resource consumption of the Key Management service. The Block Storage, Key Management, and Image services automatically manage the key for an encrypted volume, including the deletion of the key.

The Block Storage service automatically adds two properties to a volume image:

  • cinder_encryption_key_id - The identifier of the encryption key that the Key Management service stores for a specific image.
  • cinder_encryption_key_deletion_policy - The policy that tells the Image service whether to instruct the Key Management service to delete the key associated with this image.
Important

The values of these properties are automatically assigned. To avoid unintentional data loss, do not adjust these values.

When you create a volume image, the Block Storage service sets the cinder_encryption_key_deletion_policy property to on_image_deletion. When you delete a volume image, the Image service deletes the corresponding encryption key if the cinder_encryption_key_deletion_policy equals on_image_deletion.

Important

Red Hat does not recommend manual manipulation of the cinder_encryption_key_id or cinder_encryption_key_deletion_policy properties. If you use the encryption key that is identified by the value of cinder_encryption_key_id for any other purpose, you risk data loss.
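
If you want to verify these properties on a volume image without modifying them, you can inspect the image, for example:

$ openstack image show <image> -c properties

Replace <image> with the name or ID of the volume image.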

2.8. Deploying availability zones for Block Storage volume back ends

An availability zone is a provider-specific method of grouping cloud instances and services. Director uses CinderXXXAvailabilityZone parameters (where XXX is associated with a specific back end) to configure different availability zones for Block Storage volume back ends.

Prerequisites

  • The undercloud is installed. For more information, see Installing director in Director Installation and Usage.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Add the following parameters to the environment file to create two availability zones:

    parameter_defaults:
     CinderXXXAvailabilityZone: zone1
     CinderYYYAvailabilityZone: zone2
    • Replace XXX and YYY with supported back-end values, such as:

      CinderISCSIAvailabilityZone
      CinderNfsAvailabilityZone
      CinderRbdAvailabilityZone
      Note

      Search the /usr/share/openstack-tripleo-heat-templates/deployment/cinder/ directory for the heat template associated with your back end for the correct back end value.

      The following example deploys two back ends where rbd is zone 1 and iSCSI is zone 2:

      parameter_defaults:
       CinderRbdAvailabilityZone: zone1
       CinderISCSIAvailabilityZone: zone2
  4. Save the updates to your environment file.
  5. Add your environment file to the stack with your other environment files and deploy the overcloud.

2.9. Block Storage service (cinder) consistency groups

You can use the Block Storage (cinder) service to create consistency groups, which group multiple volumes together as a single entity. This means that you can perform operations on multiple volumes at the same time instead of individually. You can use consistency groups to create snapshots for multiple volumes simultaneously. This also means that you can restore or clone those volumes simultaneously.

A volume can be a member of only one consistency group. You cannot delete, retype, or migrate volumes after you add them to a consistency group.
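
For example, after the consistency groups API is enabled (see Configuring Block Storage service consistency groups), the following cinder CLI sketch creates a consistency group, adds a volume, and takes a group snapshot. The group name is illustrative:

$ cinder consisgroup-create --name mygroup <volume_type>
$ cinder consisgroup-update mygroup --add-volumes <volume_id>
$ cinder cgsnapshot-create mygroup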

2.9.1. Configuring Block Storage service consistency groups

By default, the Block Storage security policy disables the consistency groups APIs. You must enable them before you use the feature. The related consistency group entries in the /etc/cinder/policy.json file of the node that hosts the Block Storage API service (openstack-cinder-api) list the default settings:

"consistencygroup:create" : "group:nobody",
​"consistencygroup:delete": "group:nobody",
​"consistencygroup:update": "group:nobody",
​"consistencygroup:get": "group:nobody",
​"consistencygroup:get_all": "group:nobody",
​"consistencygroup:create_cgsnapshot" : "group:nobody",
​"consistencygroup:delete_cgsnapshot": "group:nobody",
​"consistencygroup:get_cgsnapshot": "group:nobody",
​"consistencygroup:get_all_cgsnapshots": "group:nobody",

You must change these settings in an environment file and then deploy them to the overcloud by using the openstack overcloud deploy command. Do not edit the JSON file directly because the changes are overwritten next time the overcloud is deployed.

Prerequisites

  • The undercloud is installed. For more information, see Installing director in Director Installation and Usage.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Edit an environment file and add a new entry to the parameter_defaults section. This ensures that the entries are updated in the containers and are retained whenever the environment is re-deployed by director with the openstack overcloud deploy command.
  4. Add a new section to an environment file that uses CinderApiPolicies to set the consistency group settings. The equivalent parameter_defaults section with the default settings from the JSON file appears as follows:

    parameter_defaults:
      CinderApiPolicies: { \
         cinder-consistencygroup_create: { key: 'consistencygroup:create', value: 'group:nobody' }, \
         cinder-consistencygroup_delete: { key: 'consistencygroup:delete', value: 'group:nobody' },  \
         cinder-consistencygroup_update: { key: 'consistencygroup:update', value: 'group:nobody' }, \
         cinder-consistencygroup_get: { key: 'consistencygroup:get', value: 'group:nobody' }, \
         cinder-consistencygroup_get_all: { key: 'consistencygroup:get_all', value: 'group:nobody' }, \
         cinder-consistencygroup_create_cgsnapshot: { key: 'consistencygroup:create_cgsnapshot', value: 'group:nobody' }, \
         cinder-consistencygroup_delete_cgsnapshot: { key: 'consistencygroup:delete_cgsnapshot', value: 'group:nobody' }, \
         cinder-consistencygroup_get_cgsnapshot: { key: 'consistencygroup:get_cgsnapshot', value: 'group:nobody' }, \
         cinder-consistencygroup_get_all_cgsnapshots: { key: 'consistencygroup:get_all_cgsnapshots', value: 'group:nobody' }, \
     }
  5. The value group:nobody determines that no group can use this feature, so it is effectively disabled. To enable it, change the group to another value.
  6. For increased security, set the permissions for both the consistency group API and the volume type management API to be identical. The volume type management API is set to "rule:admin_or_owner" by default in the same /etc/cinder/policy.json file:

    "volume_extension:types_manage": "rule:admin_or_owner",
  7. To make the consistency groups feature available to all users, set the API policy entries to allow users to create, use, and manage their own consistency groups. To do so, use rule:admin_or_owner:

    CinderApiPolicies: { \
         cinder-consistencygroup_create: { key: 'consistencygroup:create', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_delete: { key: 'consistencygroup:delete', value: 'rule:admin_or_owner' },  \
         cinder-consistencygroup_update: { key: 'consistencygroup:update', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_get: { key: 'consistencygroup:get', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_get_all: { key: 'consistencygroup:get_all', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_create_cgsnapshot: { key: 'consistencygroup:create_cgsnapshot', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_delete_cgsnapshot: { key: 'consistencygroup:delete_cgsnapshot', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_get_cgsnapshot: { key: 'consistencygroup:get_cgsnapshot', value: 'rule:admin_or_owner' }, \
         cinder-consistencygroup_get_all_cgsnapshots: { key: 'consistencygroup:get_all_cgsnapshots', value: 'rule:admin_or_owner' }, \
     }
  8. Save the updates to your environment file.
  9. Add your environment file to the stack with your other environment files and deploy the overcloud.
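For example, if you saved the policy settings in an environment file named cinder-cg-policies.yaml (a hypothetical name), the deployment command might look like the following sketch, where <other_environment_files> stands for the environment files that are specific to your deployment:

    $ openstack overcloud deploy --templates \
      -e <other_environment_files> \
      -e /home/stack/cinder-cg-policies.yaml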

2.9.2. Creating Block Storage consistency groups with the Dashboard

After you enable the consistency groups API, you can start creating consistency groups.

Prerequisites

  • You must be a project administrator or a volume owner to create consistency groups.
  • Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Introduction to the OpenStack Dashboard.

Procedure

  1. Log in to the dashboard as an admin user or a volume owner.
  2. Select Project > Compute > Volumes > Volume Consistency Groups.
  3. Click Create Consistency Group.
  4. In the Consistency Group Information tab of the wizard, enter a name and description for your consistency group. Then, specify its Availability Zone.
  5. Optional: Add volume types to your consistency group. When you create volumes within the consistency group, the Block Storage service applies compatible settings from those volume types. To add a volume type, click its + button in the All available volume types list.
  6. Click Create Consistency Group. The new consistency group appears in the Volume Consistency Groups table.
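You can also create consistency groups from the command line with the cinder client. The following is a minimal sketch that assumes a volume type named vtype1 already exists; replace <cg_name> and <description> with a name and description for the consistency group:

    $ cinder consisgroup-create --name <cg_name> --description "<description>" vtype1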

2.9.3. Managing Block Storage service consistency groups with the Dashboard

You can manage consistency groups for Block Storage volumes in the dashboard.

Prerequisites

  • You must be a project administrator to manage consistency groups.
  • Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Introduction to the OpenStack Dashboard.

Procedure

  1. Log in to the dashboard as an admin user.
  2. Select Project > Compute > Volumes > Volume Consistency Groups.
  3. Optional: You can change the name or description of a consistency group by selecting Edit Consistency Group from its Action column.
  4. To add or remove volumes from a consistency group directly, find the consistency group you want to configure. In the Actions column of that consistency group, select Manage Volumes. This launches the Add/Remove Consistency Group Volumes wizard.

    1. To add a volume to the consistency group, click its + button from the All available volumes list.
    2. To remove a volume from the consistency group, click its - button from the Selected volumes list.
  5. Click Edit Consistency Group.
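You can make the same changes from the command line with the consisgroup-update command. A sketch, where the consistency group name and the volume IDs are placeholders:

    $ cinder consisgroup-update <cg_name_or_id> --add-volumes <volume_uuid_1>,<volume_uuid_2>
    $ cinder consisgroup-update <cg_name_or_id> --remove-volumes <volume_uuid_3>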

2.9.4. Creating and managing consistency group snapshots for the Block Storage service

After you add volumes to a consistency group, you can create snapshots from it.

Prerequisites

  • You must be a project administrator to create and manage consistency group snapshots.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. List all available consistency groups and their respective IDs:

    $ cinder consisgroup-list
  3. Create snapshots using the consistency group:

    $ cinder cgsnapshot-create [--name <cgsnapname>] [--description "<description>"] <cgnameid>
    • Replace <cgsnapname> with the name of the snapshot.
    • Replace <description> with a description of the snapshot.
    • Replace <cgnameid> with the name or ID of the consistency group.
  4. Display a list of all available consistency group snapshots:

    $ cinder cgsnapshot-list
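For example, the following commands, which use the hypothetical names cg1 and cgsnap1, create a snapshot of a consistency group and then verify the result:

    $ cinder cgsnapshot-create --name cgsnap1 --description "pre-upgrade snapshot" cg1
    $ cinder cgsnapshot-list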

2.9.5. Cloning Block Storage service consistency groups

You can use consistency groups to create a whole batch of pre-configured volumes simultaneously, either by cloning an existing consistency group or by restoring a consistency group snapshot. Both processes use the same command.

Prerequisites

  • You must be a project administrator to clone consistency groups and restore consistency group snapshots.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. To clone an existing consistency group:

    $ cinder consisgroup-create-from-src --source-cg <cgnameid> [--name <cgname>] [--description "<description>"]
    • Replace <cgnameid> with the name or ID of the consistency group you want to clone.
    • Replace <cgname> with the name of your consistency group.
    • Replace <description> with a description of your consistency group.
  3. To create a consistency group from a consistency group snapshot:

    $ cinder consisgroup-create-from-src --cgsnapshot <cgsnapname> [--name <cgname>] [--description "<description>"]
    • Replace <cgsnapname> with the name or ID of the snapshot you are using to create the consistency group.
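For example, the following sketch, which reuses the hypothetical names cg1 and cgsnap1 from the previous section, clones an existing consistency group and restores a consistency group from a snapshot:

    $ cinder consisgroup-create-from-src --source-cg cg1 --name cg1-clone
    $ cinder consisgroup-create-from-src --cgsnapshot cgsnap1 --name cg1-restored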

2.10. Configuring the default Block Storage scheduler filters

If the volume back end is not specified during volume creation, then the Block Storage scheduler uses filters to select suitable back ends. Ensure that you configure the following default filters:

AvailabilityZoneFilter
Filters out all back ends that do not meet the availability zone requirements of the requested volume.
CapacityFilter
Selects only back ends with enough space to accommodate the volume.
CapabilitiesFilter
Selects only back ends that can support any specified settings in the volume.
InstanceLocality
Configures clusters to use volumes local to the same node.

Prerequisites

  • The undercloud is installed. For more information, see Installing director in Director Installation and Usage.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Add an environment file to your overcloud deployment command that contains the following parameters:

    parameter_defaults:
      ControllerExtraConfig: # 1
        cinder::config::cinder_config:
          DEFAULT/scheduler_default_filters:
            value: 'AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocality'
    1
    You can also add the ControllerExtraConfig: hook and its nested sections to the parameter_defaults: section of an existing environment file.
  4. Save the updates to your environment file.
  5. Add your environment file to the stack with your other environment files and deploy the overcloud.
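After you deploy the overcloud, this configuration sets the scheduler filters in the cinder.conf file of the Block Storage service. The expected result is equivalent to the following sketch:

    [DEFAULT]
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocality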

2.11. Enabling LVM2 filtering on overcloud nodes

If you use LVM2 (Logical Volume Management) volumes with certain Block Storage service (cinder) back ends, the volumes that you create inside Red Hat OpenStack Platform (RHOSP) guests might become visible on the overcloud nodes that host cinder-volume or nova-compute containers. In this case, the LVM2 tools on the host scan the LVM2 volumes that the OpenStack guest creates, which can result in one or more of the following problems on Compute or Controller nodes:

  • LVM appears to see volume groups from guests
  • LVM reports duplicate volume group names
  • Volume detachments fail because LVM is accessing the storage
  • Guests fail to boot due to problems with LVM
  • LVM on the guest machine is in a partial state because it reports a disk as missing that actually exists
  • Block Storage service (cinder) actions fail on devices that have LVM
  • Block Storage service (cinder) snapshots fail to remove correctly
  • Errors during live migration: /etc/multipath.conf does not exist

To prevent this erroneous scanning, and to segregate guest LVM2 volumes from the host node, you can enable and configure a filter with the LVMFilterEnabled heat parameter when you deploy or update the overcloud. This filter is computed from the list of physical devices that host active LVM2 volumes. You can also allow and deny block devices explicitly with the LVMFilterAllowlist and LVMFilterDenylist parameters. You can apply this filtering globally, to specific node roles, or to specific devices.

Prerequisites

  • The undercloud is installed. For more information, see Installing director in Director Installation and Usage.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Add an environment file to your overcloud deployment command that contains the following parameter:

    parameter_defaults:
      LVMFilterEnabled: true

    You can further customize the implementation of the LVM2 filter. For example, to enable filtering only on Compute nodes, use the following configuration:

    parameter_defaults:
      ComputeParameters:
        LVMFilterEnabled: true

    These parameters also support regular expressions. To enable filtering only on Compute nodes, and to ignore all devices that start with /dev/sd, use the following configuration:

    parameter_defaults:
      ComputeParameters:
        LVMFilterEnabled: true
        LVMFilterDenylist:
          - /dev/sd.*
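    You can also explicitly allow devices with the LVMFilterAllowlist parameter. The following sketch uses a hypothetical device pattern to allow devices that start with /dev/vda on Compute nodes:

    parameter_defaults:
      ComputeParameters:
        LVMFilterEnabled: true
        LVMFilterAllowlist:
          - /dev/vda.*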
  4. Save the updates to your environment file.
  5. Add your environment file to the stack with your other environment files and deploy the overcloud.

2.12. Multipath configuration

Use multipath to configure multiple I/O paths between server nodes and storage arrays into a single device to create redundancy and improve performance.

2.12.1. Using director to configure multipath

You can configure multipath on a Red Hat OpenStack Platform (RHOSP) overcloud deployment for greater bandwidth and networking resiliency.

Important

When you configure multipath on an existing deployment, the new workloads are multipath aware. If you have any pre-existing workloads, you must shelve and unshelve the instances to enable multipath on these instances.

Prerequisites

  • The undercloud is installed. For more information, see Installing director in Director Installation and Usage.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Use an overrides environment file or create a new one, for example multipath_overrides.yaml. Add and set the following parameter:

    parameter_defaults:
      ExtraConfig:
        cinder::config::cinder_config:
          backend_defaults/use_multipath_for_image_xfer:
            value: true
    Note

    The default settings will generate a basic multipath configuration that works for most environments. However, check with your storage vendor for recommendations, because some vendors have optimized configurations that are specific to their hardware. For more information about multipath, see Configuring device mapper multipath.

  4. Optional: If you have a multipath configuration file for your overcloud deployment, then you can use the MultipathdCustomConfigFile parameter to specify the location of this file:

    parameter_defaults:
      MultipathdCustomConfigFile: <config_file_directory>/<config_file_name>

    In the following example, /home/stack is the directory of the multipath configuration file and multipath.conf is the name of this file:

    parameter_defaults:
      MultipathdCustomConfigFile: /home/stack/multipath.conf
    Note

    Other TripleO multipath parameters override any corresponding value in the local custom configuration file. For example, if MultipathdEnableUserFriendlyNames is False, the files on the overcloud nodes are updated to match, even if the setting is enabled in the local custom file.

    For more information about multipath parameters, see Multipath heat template parameters.

  5. Save the updates to your overrides environment file.
  6. Add your overrides environment file to the stack with your other environment files, such as:

    /usr/share/openstack-tripleo-heat-templates/environments/multipathd.yaml
  7. Deploy the overcloud.
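For example, a deployment command might look like the following sketch, where <other_environment_files> stands for the environment files that are specific to your deployment:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/multipathd.yaml \
      -e /home/stack/multipath_overrides.yaml \
      -e <other_environment_files>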

2.12.1.1. Multipath heat template parameters

Use the following heat template parameters to enable and customize multipath.

MultipathdEnable
Defines whether to enable the multipath daemon. This parameter defaults to True through the configuration contained in the multipathd.yaml file.
Default value: True

MultipathdEnableUserFriendlyNames
Defines whether to enable the assignment of a user-friendly name to each path.
Default value: False

MultipathdEnableFindMultipaths
Defines whether to automatically create a multipath device for each path.
Default value: True

MultipathdSkipKpartx
Defines whether to skip automatically creating partitions on the device.
Default value: True

MultipathdCustomConfigFile
Includes a local, custom multipath configuration file on the overcloud nodes. By default, a minimal multipath.conf file is installed.
NOTE: Other TripleO multipath parameters override any corresponding value in any local, custom configuration file that you add. For example, if MultipathdEnableUserFriendlyNames is False, the files on the overcloud nodes are updated to match, even if the setting is enabled in your local, custom file.
Default value: none
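As an illustration only, a minimal custom multipath configuration file might contain settings such as the following; these values are assumptions for the sketch, and you should follow the recommendations of your storage vendor:

    # /home/stack/multipath.conf -- hypothetical minimal custom configuration
    defaults {
        user_friendly_names no
        find_multipaths yes
    }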


2.12.2. Verifying multipath configuration

You can verify multipath configuration on new or existing overcloud deployments.

Procedure

  1. Create an instance.
  2. Attach a non-encrypted volume to the instance.
  3. Get the name of the Compute node that contains the instance:

    $ nova show <instance> | grep OS-EXT-SRV-ATTR:host

    Replace <instance> with the name of the instance that you created.

  4. Retrieve the virsh name of the instance:

    $ nova show <instance> | grep instance_name
  5. Get the IP address of the Compute node:

    $ . stackrc
    $ metalsmith list | grep <compute_name>

    Replace <compute_name> with the Compute node name from the output of the nova show <instance> command. The output displays the matching rows from a six-column table.

    Find the row in which <compute_name> is in the fourth column. The IP address of <compute_name> is in the last column of this row.

    In the following example, the IP address of compute-0 is 192.168.24.15 because compute-0 is in the fourth column of the second row:

    $ . stackrc
    $ metalsmith list | grep compute-0
    | 3b1bf72e-c425-494c-9717-d0b89bb66580 | compute-0    | 95b21d3e-36be-470d-ba5c-70d5dcd6d0b3 | compute-1    | ACTIVE | ctlplane=192.168.24.49 |
    | 72a24883-25f9-435c-bf71-a20e66be172d | compute-1    | a59f79f7-006e-4f38-a9ad-8164da47d58e | compute-0    | ACTIVE | ctlplane=192.168.24.15 |
  6. SSH into the Compute node that runs the instance:

    $ ssh tripleo-admin@<compute_node_ip>

    Replace <compute_node_ip> with the IP address of the Compute node.

  7. Log in to the container that runs virsh:

    $ podman exec -it nova_libvirt /bin/bash
  8. Enter the following command on a Compute node instance to verify that it is using multipath in the cinder volume host location:

    virsh domblklist <virsh_instance_name> | grep /dev/dm

    Replace <virsh_instance_name> with the output of the nova show <instance> | grep instance_name command.
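    For example, hypothetical output for an instance whose volume attachment uses multipath might look like the following, where the device source begins with /dev/dm-:

    $ virsh domblklist instance-00000001 | grep /dev/dm
     vdb        /dev/dm-4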

    If the device source does not begin with /dev/dm-, the connection does not use multipath and you must refresh the connection information with the nova shelve and nova unshelve commands:

    $ nova shelve <instance>
    $ nova unshelve <instance>
    Note

    If you have more than one type of back end, you must verify the instances and volumes on all back ends, because connection info that each back end returns might vary.
