
2.2. Volume drivers


To use different volume drivers for the cinder-volume service, use the parameters described in these sections.
To set a volume driver, use the volume_driver flag. The default is:
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

2.2.1. Ceph RADOS Block Device (RBD)

If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.
Ceph is a massively scalable, open source, distributed storage system. It is comprised of an object store, block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Due to its open-source nature, you can install and use this portable storage platform in public or private clouds.

Figure 2.1. Ceph architecture

RADOS

Ceph is based on RADOS: Reliable Autonomic Distributed Object Store. RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:
  • Object Storage Device (OSD) Daemon. The storage daemon for the RADOS service, which interacts with the OSD (physical or logical storage unit for your data).
    You must run this daemon on each server in your cluster. For each OSD, you can have an associated hard disk drive. For performance purposes, pool your hard disk drives with RAID arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
  • Meta-Data Server (MDS). Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
  • Monitor (MON). A lightweight daemon that handles all communications with external applications and clients. It also provides a consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you must run at least three ceph-mon daemons on separate servers.
Ceph developers recommend Btrfs as the file system for storage; XFS is an excellent alternative and might be a better choice for production environments. The ext4 file system is also compatible but does not exploit the power of Ceph.
Note
If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).
For more information about usable file systems, see ceph.com/ceph-storage/file-system/.

Ways to store, use, and expose data

To store and access your data, you can use the following storage systems:
  • RADOS. Use as an object store; this is the default storage mechanism.
  • RBD. Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image.
  • CephFS. Use as a POSIX-compliant file system.
Ceph exposes RADOS; you can access it through the following interfaces:
  • RADOS Gateway. OpenStack Object Storage and Amazon-S3 compatible RESTful interface (see RADOS_Gateway).
  • librados, and its related C/C++ bindings.
  • RBD and QEMU-RBD. Linux kernel and QEMU block devices that stripe data across multiple objects.

Driver options

The following table contains the configuration options supported by the Ceph RADOS Block Device driver.
Deprecation notice
The volume_tmp_dir option has been deprecated and replaced by image_conversion_dir.
Table 2.1. Description of Ceph storage configuration options
Configuration option = Default value Description
[DEFAULT]
rados_connect_timeout = -1 (IntOpt) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used.
rbd_ceph_conf = (StrOpt) Path to the ceph configuration file
rbd_flatten_volume_from_snapshot = False (BoolOpt) Flatten volumes created from snapshots to remove dependency from volume to snapshot
rbd_max_clone_depth = 5 (IntOpt) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning.
rbd_pool = rbd (StrOpt) The RADOS pool where rbd volumes are stored
rbd_secret_uuid = None (StrOpt) The libvirt uuid of the secret for the rbd_user volumes
rbd_store_chunk_size = 4 (IntOpt) Volumes will be chunked into objects of this size (in megabytes).
rbd_user = None (StrOpt) The RADOS client name for accessing rbd volumes - only set when using cephx authentication
volume_tmp_dir = None (StrOpt) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated, please use image_conversion_dir instead.
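As an illustration, the following is a minimal back-end sketch combining these options in /etc/cinder/cinder.conf. The driver class path cinder.volume.drivers.rbd.RBDDriver, the pool name volumes, the client name cinder, and the secret UUID placeholder are assumptions; adapt them to your Ceph deployment:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
#UUID of the libvirt secret that holds the cephx key (placeholder)
rbd_secret_uuid = RBD_SECRET_UUID
rbd_flatten_volume_from_snapshot = False
rbd_max_clone_depth = 5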

2.2.2. Dell EqualLogic volume driver

The Dell EqualLogic volume driver interacts with configured EqualLogic arrays and supports various operations.

Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Clone a volume.
The OpenStack Block Storage service supports:
  • Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools, and multiple pools on a single array.
The Dell EqualLogic volume driver's ability to access the EqualLogic Group is dependent upon the generic block storage driver's SSH settings in the /etc/cinder/cinder.conf file (see Section 2.4, “Block Storage sample configuration files” for reference).
Table 2.2. Description of Dell EqualLogic volume driver configuration options
Configuration option = Default value Description
[DEFAULT]
eqlx_chap_login = admin (StrOpt) Existing CHAP account name. Note that this option is deprecated in favour of "chap_username" as specified in cinder/volume/driver.py and will be removed in the next release.
eqlx_chap_password = password (StrOpt) Password for the specified CHAP account name. Note that this option is deprecated in favour of "chap_password" as specified in cinder/volume/driver.py and will be removed in the next release.
eqlx_cli_max_retries = 5 (IntOpt) Maximum retry count for reconnection. Default is 5.
eqlx_cli_timeout = 30 (IntOpt) Timeout for the Group Manager cli command execution. Default is 30.
eqlx_group_name = group-0 (StrOpt) Group name to use for creating volumes. Defaults to "group-0".
eqlx_pool = default (StrOpt) Pool in which volumes will be created. Defaults to "default".
eqlx_use_chap = False (BoolOpt) Use CHAP authentication for targets. Note that this option is deprecated in favour of "use_chap_auth" as specified in cinder/volume/driver.py and will be removed in the next release.
The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:

Example 2.1. Default (single-instance) configuration

[DEFAULT]
#Required settings

volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

#Optional settings

san_thin_provision = true|false
eqlx_use_chap = true|false
eqlx_chap_login = EQLX_UNAME
eqlx_chap_password = EQLX_PW
eqlx_cli_timeout = 30
eqlx_cli_max_retries = 5
san_ssh_port = 22
ssh_conn_timeout = 30
san_private_key = SAN_KEY_PATH
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5
In this example, replace the following variables accordingly:
IP_EQLX
The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value.
SAN_UNAME
The user name to log in to the Group manager via SSH at the san_ip. Default user name is grpadmin.
SAN_PW
The corresponding password of SAN_UNAME. Not used when san_private_key is set. Default password is password.
EQLX_GROUP
The group to be used for a pool where the Block Storage service will create volumes and snapshots. Default group is group-0.
EQLX_POOL
The pool where the Block Storage service will create volumes and snapshots. Default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
EQLX_UNAME
The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. Default account name is chapadmin.
EQLX_PW
The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so you must set this password manually.
SAN_KEY_PATH (optional)
The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set. There is no default value.
In addition, enable thin provisioning for SAN volumes using the default san_thin_provision = true setting.

Example 2.2. Multi back-end Dell EqualLogic configuration

The following example shows the typical configuration for a Block Storage service that uses two Dell EqualLogic back ends:
enabled_backends = backend1,backend2
san_ssh_port = 22
ssh_conn_timeout = 30
san_thin_provision = true

[backend1]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend1
san_ip = IP_EQLX1
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

[backend2]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend2
san_ip = IP_EQLX2
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL
In this example:
  • Thin provisioning for SAN volumes is enabled (san_thin_provision = true). This is recommended when setting up Dell EqualLogic back ends.
  • Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
  • The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
  • The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout in seconds for CLI commands over SSH.
  • IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Groups of backend1 and backend2 through SSH, respectively.

2.2.3. Dell Storage Center iSCSI drivers

The Dell Storage Center volume driver interacts with configured Storage Center arrays.
The Dell Storage Center driver manages Storage Center arrays through Enterprise Manager. Enterprise Manager connection settings and Storage Center options are defined in the cinder.conf file.
Prerequisite: Dell Enterprise Manager 2015 R1 or later must be used.

Supported operations

The Dell Storage Center volume driver provides the following Cinder volume operations:
  • Create, delete, attach (map), and detach (unmap) volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.

Extra spec options

Volume type extra specs can be used to select different Storage Profiles.
Storage Profiles control how Storage Center manages volume data. For a given volume, the selected Storage Profile dictates which disk tier accepts initial writes, as well as how data progression moves data between tiers to balance performance and cost. Predefined Storage Profiles are the most effective way to manage data in Storage Center.
By default, if no Storage Profile is specified in the volume extra specs, the default Storage Profile for the user account configured for the Block Storage driver is used. To use a Storage Profile other than the default, set the extra spec key storagetype:storageprofile to the name of the desired Storage Profile on the Storage Center.
For ease of use from the command line, spaces in Storage Profile names are ignored. As an example, here is how to define two volume types using the High Priority and Low Priority Storage Profiles:
$ cinder type-create "GoldVolumeType"
$ cinder type-key "GoldVolumeType" set storagetype:storageprofile=highpriority
$ cinder type-create "BronzeVolumeType"
$ cinder type-key "BronzeVolumeType" set storagetype:storageprofile=lowpriority

Driver options

The following table contains the configuration options specific to the Dell Storage Center volume driver.
Table 2.3. Description of Dell Storage Center volume driver configuration options
Configuration option = Default value Description
[DEFAULT]
dell_sc_api_port = 3033 (IntOpt) Dell API port
dell_sc_server_folder = openstack (StrOpt) Name of the server folder to use on the Storage Center
dell_sc_ssn = 64702 (IntOpt) Storage Center System Serial Number
dell_sc_volume_folder = openstack (StrOpt) Name of the volume folder to use on the Storage Center

iSCSI configuration

The following snippet displays the required and optional settings for a single iSCSI back end:

Example 2.3. Sample iSCSI Configuration

default_volume_type = delliscsitype
enabled_backends = delliscsi

[delliscsi]
# Required settings
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
san_ip = IP_SC
san_login = SAN_UNAME
san_password = SAN_PW
iscsi_ip_address = ISCSI_IP
dell_sc_ssn = SERIAL

# Optional settings
dell_sc_api_port = API_PORT
dell_sc_server_folder = SERVFOLDER
dell_sc_volume_folder = VOLFOLDER
# The iSCSI IP port
iscsi_port = ISCSI_PORT
Where:
IP_SC
The IP address used to reach the Dell Enterprise Manager. This field has no default value.
SAN_UNAME
The user name to log in to the Dell Enterprise Manager at the IP_SC. Default user name is Admin.
SAN_PW
The corresponding password of SAN_UNAME. Default password is password.
ISCSI_IP
The IP address that the iSCSI daemon is listening on. In this case, ISCSI_IP is the IP address of the Dell Storage Center iSCSI.
SERIAL
The Dell Storage Center serial number to use. Default is 64702.
API_PORT
The Dell Enterprise Manager API port. Default is 3033.
SERVFOLDER
The Server folder in Dell Storage Center where the new server definitions are placed.
VOLFOLDER
The volume folder in Dell Storage Center where the new volumes are created.
ISCSI_PORT
The corresponding port of the Dell Storage Center array. This parameter is optional, and defaults to 3036.
Each back end's name is defined in its section header (in this case, delliscsi). To enable a back end, add its name to the enabled_backends setting. In the case of multiple back ends, enable them by adding their respective names to enabled_backends as a comma-delimited list.
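For example, a second, hypothetical back end named delliscsi2 could be enabled alongside the first by extending the sample configuration above (IP_SC2, ISCSI_IP2, and SERIAL2 are placeholders analogous to the ones already described):
enabled_backends = delliscsi, delliscsi2

[delliscsi2]
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
san_ip = IP_SC2
san_login = SAN_UNAME
san_password = SAN_PW
iscsi_ip_address = ISCSI_IP2
dell_sc_ssn = SERIAL2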

2.2.4. EMC VMAX iSCSI and FC drivers

The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays under OpenStack Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.
The drivers perform volume operations by communicating with the back-end VMAX storage. They use a CIM client in Python called PyWBEM to perform CIM operations over HTTP.
The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back-end for VMAX storage operations.
The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports the VMAX storage system.

2.2.4.1. System requirements

EMC SMI-S Provider V4.6.2.8 and higher is required. You can download SMI-S from EMC's support web site (login is required). See the EMC SMI-S Provider release notes for installation instructions.
EMC storage VMAX Family is supported.

2.2.4.2. Supported operations

VMAX drivers support these operations:
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Create a volume from a snapshot.
VMAX drivers also support the following features:
  • FAST automated storage tiering policy.
  • Dynamic masking view creation.
  • Striped volume creation.

2.2.4.3. Set up the VMAX drivers

Procedure 2.1. To set up the EMC VMAX drivers

  1. Install the python-pywbem package for your distribution. See Section 2.2.4.3.1, “Install the python-pywbem package”.
  2. Download SMI-S from PowerLink and install it. Add your VMAX arrays to SMI-S.
    For information, see Section 2.2.4.3.2, “Set up SMI-S” and the SMI-S release notes.
  3. Configure connectivity. For FC driver, see Section 2.2.4.3.5, “FC Zoning with VMAX”. For iSCSI driver, see Section 2.2.4.3.6, “iSCSI with VMAX”.
2.2.4.3.1. Install the python-pywbem package
# yum install pywbem
2.2.4.3.2. Set up SMI-S
You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions.
Note
You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes.
SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe.
Use addsys in TestSmiProvider.exe to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.
2.2.4.3.3. cinder.conf configuration file
Make the following changes in /etc/cinder/cinder.conf.
Add the following entries, where 10.10.61.45 is the IP address of the VMAX iSCSI target:
enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC
[CONF_GROUP_ISCSI]
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name=ISCSI_backend
[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name=FC_backend
In this example, two backend configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.
Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:
$ cinder type-create VMAX_ISCSI
$ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
$ cinder type-create VMAX_FC
$ cinder type-key VMAX_FC set volume_backend_name=FC_backend
By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.
Restart the cinder-volume service.
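For example, on a Red Hat-based node, this can be done with the same command used elsewhere in this chapter:
$ openstack-service restart cinder-volume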
2.2.4.3.4. cinder_emc_config_CONF_GROUP_ISCSI.xml configuration file
Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.
Add the following lines to the XML file:
<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
    <EcomServerIp>1.1.1.1</EcomServerIp>
    <EcomServerPort>00</EcomServerPort>
    <EcomUserName>user1</EcomUserName>
    <EcomPassword>password1</EcomPassword>
    <PortGroups>
      <PortGroup>OS-PORTGROUP1-PG</PortGroup>
      <PortGroup>OS-PORTGROUP2-PG</PortGroup>
    </PortGroups>
   <Array>111111111111</Array>
   <Pool>FC_GOLD1</Pool>
   <FastPolicy>GOLD1</FastPolicy>
</EMC>
Where:
  • EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server which is packaged with SMI-S.
  • EcomUserName and EcomPassword are credentials for the ECOM server.
  • PortGroups supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC).
  • The Array tag holds the unique VMAX array serial number.
  • The Pool tag holds the unique pool name within a given array. For backends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For backends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.
  • The FastPolicy tag conveys the name of the FAST Policy to be used. By including this tag, volumes managed by this backend are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
2.2.4.3.5. FC Zoning with VMAX
Zone Manager is recommended when using the VMAX FC driver, especially for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns.
2.2.4.3.6. iSCSI with VMAX
  • Make sure the iscsi-initiator-utils package is installed on the host (use apt-get, zypper, or yum, depending on Linux flavor).
  • Verify that the host is able to ping the VMAX iSCSI target ports.

2.2.4.4. VMAX masking view and group naming info

Masking view names
Masking views are dynamically created by the VMAX FC and iSCSI drivers using the following naming conventions:
OS-[shortHostName][poolName]-I-MV (for Masking Views using iSCSI)
OS-[shortHostName][poolName]-F-MV (for Masking Views using FC)
Initiator group names
For each host that is attached to VMAX volumes using the drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. At each new attach volume operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the Initiator Group as required. Names are of the following format:
OS-[shortHostName]-I-IG (for iSCSI initiators)
OS-[shortHostName]-F-IG (for Fibre Channel initiators)
Note
Hosts attaching to VMAX storage managed by the OpenStack environment cannot also be attached to storage on the same VMAX not being managed by OpenStack. This is due to limitations on VMAX Initiator Group membership.
FA port groups
VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file.
Storage group names
As volumes are attached to a host, they are either added to an existing storage group (if it exists) or a new storage group is created and the volume is then added. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). Names are of the following format:
OS-[shortHostName][poolName]-I-SG (attached over iSCSI)
OS-[shortHostName][poolName]-F-SG (attached over Fibre Channel)
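As a purely illustrative example, for a hypothetical compute host with the short hostname myhost1 attaching volumes from the pool FC_GOLD1 named in the XML file above, the driver would generate names such as:
OS-myhost1FC_GOLD1-F-MV (masking view, FC)
OS-myhost1-F-IG (initiator group, FC)
OS-myhost1FC_GOLD1-F-SG (storage group, FC)
The corresponding iSCSI names use -I- in place of -F-.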

2.2.4.5. Concatenated or striped volumes

In order to support later expansion of created volumes, the VMAX Block Storage drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes in order to optimize I/O performance.
Below is an example of how to create striped volumes. First, create a volume type. Then define the extra spec storagetype:stripecount for the volume type, representing the number of meta members in the striped volume. In the example below, each volume created under the GoldStriped volume type is striped and made up of 4 meta members.
$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4

2.2.5. EMC VNX direct driver

The EMC VNX direct driver (consisting of EMCCLIISCSIDriver and EMCCLIFCDriver) supports both the iSCSI and FC protocols. EMCCLIISCSIDriver (the VNX iSCSI direct driver) and EMCCLIFCDriver (the VNX FC direct driver) are based on the ISCSIDriver and FCDriver defined in Block Storage, respectively.
EMCCLIISCSIDriver and EMCCLIFCDriver perform volume operations by executing the Navisphere CLI (NaviSecCLI), which is a command-line interface used for management, diagnostics, and reporting functions for VNX.

2.2.5.1. Supported OpenStack release

EMC VNX direct driver supports the Kilo release.

2.2.5.2. System requirements

  • VNX Operational Environment for Block version 5.32 or higher.
  • VNX Snapshot and Thin Provisioning license should be activated for VNX.
  • Navisphere CLI v7.32 or higher is installed along with the driver.

2.2.5.3. Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Retype a volume.
  • Get volume statistics.
  • Create and delete consistency groups.
  • Create, list, and delete consistency group snapshots.
  • Modify consistency groups.

2.2.5.4. Preparation

This section contains instructions to prepare the Block Storage nodes to use the EMC VNX direct driver. You install the Navisphere CLI, install the driver, ensure you have correct zoning configurations, and register the driver.
2.2.5.4.1. Install NaviSecCLI
Navisphere CLI needs to be installed on all Block Storage nodes within an OpenStack deployment.
  • Navisphere CLI for Linux is available at Downloads for VNX2 Series or Downloads for VNX1 Series.
  • After installation, set the security level of Navisphere CLI to low:
    $ /opt/Navisphere/bin/naviseccli security -certificate -setLevel low
2.2.5.4.2. Install Block Storage driver
Both EMCCLIISCSIDriver and EMCCLIFCDriver are provided in the installer package:
  • emc_vnx_cli.py
  • emc_cli_fc.py (for EMCCLIFCDriver)
  • emc_cli_iscsi.py (for EMCCLIISCSIDriver)
Copy the files above to the cinder/volume/drivers/emc/ directory of the OpenStack node(s) where cinder-volume is running.
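For example, assuming the Cinder code is installed under /usr/lib/python2.7/site-packages (the exact path depends on your distribution), the copy could look like this:
# cp emc_vnx_cli.py emc_cli_fc.py emc_cli_iscsi.py /usr/lib/python2.7/site-packages/cinder/volume/drivers/emc/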
2.2.5.4.3. FC zoning with VNX (EMCCLIFCDriver only)
A storage administrator must enable FC SAN auto zoning between all OpenStack nodes and VNX if FC SAN auto zoning is not enabled.
2.2.5.4.4. Register with VNX
Register the compute nodes with VNX to access the storage in VNX or enable initiator auto registration.
To perform "Copy Image to Volume" and "Copy Volume to Image" operations, the nodes running the cinder-volume service (Block Storage nodes) must also be registered with the VNX.
The steps below are for a compute node. Follow the same steps for the Block Storage nodes. These steps can be skipped if initiator auto registration is enabled.
Note
When the driver notices that there is no existing storage group that has the host name as the storage group name, it will create the storage group and then add the compute nodes' or Block Storage nodes' registered initiators into the storage group.
If the driver notices that the storage group already exists, it will assume that the registered initiators have also been put into it and skip the operations above for better performance.
It is recommended that the storage administrator does not create the storage group manually and instead relies on the driver for the preparation. If the storage administrator needs to create the storage group manually for some special requirements, the correct registered initiators should be put into the storage group as well (otherwise the following volume attaching operations will fail).
2.2.5.4.4.1. EMCCLIFCDriver
Steps for EMCCLIFCDriver:
  1. Assume 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of a FC initiator port name of the compute node whose hostname and IP are myhost1 and 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
    1. Login to Unisphere, go to FNM0000000000->Hosts->Initiators.
    2. Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
    3. Click the Register button, select CLARiiON/VNX and enter the hostname (which is the output of the linux command hostname) and IP address:
      • Hostname : myhost1
      • IP : 10.10.61.1
      • Click Register
    4. Then host 10.10.61.1 will appear under Hosts->Host List as well.
  2. Register the wwn with more ports if needed.
2.2.5.4.4.2. EMCCLIISCSIDriver
Steps for EMCCLIISCSIDriver:
  1. On the compute node with IP address 10.10.61.1 and hostname myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):
    1. Start the iSCSI initiator service on the node
      # /etc/init.d/open-iscsi start
    2. Discover the iSCSI target portals on VNX
      # iscsiadm -m discovery -t st -p 10.10.61.35
    3. Enter /etc/iscsi
      # cd /etc/iscsi
    4. Find out the iqn of the node
      # more initiatorname.iscsi
  2. Login to VNX from the compute node using the target corresponding to the SPA port:
    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
  3. Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:
    1. Login to Unisphere, go to FNM0000000000->Hosts->Initiators .
    2. Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
    3. Click the Register button, select CLARiiON/VNX and enter the hostname (which is the output of the linux command hostname) and IP address:
      • Hostname : myhost1
      • IP : 10.10.61.1
      • Click Register
    4. Then host 10.10.61.1 will appear under Hosts->Host List as well.
  4. Logout iSCSI on the node:
    # iscsiadm -m node -u
  5. Login to VNX from the compute node using the target corresponding to the SPB port:
    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
  6. In Unisphere register the initiator with the SPB port.
  7. Logout iSCSI on the node:
    # iscsiadm -m node -u
  8. Register the iqn with more ports if needed.

2.2.5.5. Backend configuration

Make the following changes in the /etc/cinder/cinder.conf file:
storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
san_secondary_ip = 10.10.72.42
#VNX user name
#san_login = username
#VNX user password
#san_password = password
#VNX user type. Valid values are: global (default), local and ldap.
#storage_vnx_authentication_type = ldap
#Directory path of the VNX security file. Make sure the security file is generated first.
#VNX credentials are not necessary when using security file.
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
#timeout in minutes
default_timeout = 10
#If deploying EMCCLIISCSIDriver:
#volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
#"node1hostname" and "node2hostname" shoule be the full hostnames of the nodes(Try command 'hostname').
#This option is for EMCCLIISCSIDriver only.
iscsi_initiators = {"node1hostname":["10.0.0.1", "10.0.0.2"],"node2hostname":["10.0.0.3"]}

[database]
max_pool_size = 20
max_overflow = 30
  • where san_ip is one of the SP IP addresses of the VNX array and san_secondary_ip is the other SP IP address of the VNX array. san_secondary_ip is an optional field that serves a high availability (HA) design: if one SP is down, the other SP can be connected automatically. san_ip is a mandatory field, which provides the main connection.
  • where Pool_01_SAS is the pool from which the user wants to create volumes. The pools can be created using Unisphere for VNX. Refer to Section 2.2.5.18, “Multiple pools support” for how to manage multiple pools.
  • where storage_vnx_security_file_dir is the directory path of the VNX security file. Make sure the security file is generated following the steps in Section 2.2.5.6, “Authentication”.
  • where iscsi_initiators is a dictionary of IP addresses of the iSCSI initiator ports on all OpenStack nodes that want to connect to the VNX via iSCSI. If this option is configured, the driver leverages this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal is chosen in a relatively random way.
  • Restart cinder-volume service to make the configuration change take effect.

2.2.5.6. Authentication

VNX credentials are necessary when the driver connects to the VNX system. Credentials in global, local, and ldap scopes are supported. There are two approaches to providing the credentials.
The recommended approach is to use a Navisphere CLI security file, which avoids placing plain-text credentials in the configuration file. The following instructions describe how to do this.
  1. Find out the Linux user ID of the /usr/bin/cinder-volume processes. The following steps assume that the /usr/bin/cinder-volume service runs under the cinder account.
  2. Switch to the root account.
  3. Change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash in /etc/passwd (This temporary change is to make step 4 work).
  4. Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the switch -secfilepath is used to specify the location in which to save the security file (assuming it is saved to the directory /etc/secfile/array1).
    # su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath /etc/secfile/array1'
    Save the security file to different locations for different arrays, unless the same credentials are shared between all arrays managed by the host. Otherwise, the credentials in the security file will be overwritten. If -secfilepath is not specified in the command above, the security file is saved to the default location, which is the home directory of the executing user.
  5. Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd.
  6. Remove the credentials options san_login, san_password and storage_vnx_authentication_type from cinder.conf (normally it is /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path supplied with switch -secfilepath in step 4. Omit this option if -secfilepath is not used in step 4.
    #Directory path that contains the VNX security file. Generate the security file first
    storage_vnx_security_file_dir = /etc/secfile/array1
  7. Restart cinder-volume service to make the change take effect.
Alternatively, the credentials can be specified in /etc/cinder/cinder.conf through the three options below:
#VNX user name
san_login = username
#VNX user password
san_password = password
#VNX user type. Valid values are: global, local and ldap. global is the default value
storage_vnx_authentication_type = ldap

2.2.5.7. Restriction of deployment

Deploying the driver on a compute node is not recommended if cinder upload-to-image --force True is used against an in-use volume, because cinder upload-to-image --force True terminates the VM instance's data access to the volume.

2.2.5.8. Restriction of volume extension

VNX does not support extending a thick volume that has a snapshot. If the user tries to extend such a volume, the volume's status changes to error_extending.

2.2.5.9. Restriction of iSCSI attachment

The driver caches the iSCSI ports information. If the iSCSI port configurations are changed, the administrator should restart the cinder-volume service or wait 5 minutes before any volume attachment operation. Otherwise, the attachment may fail because the old iSCSI port configurations were used.

2.2.5.10. Provisioning type (thin, thick, deduplicated or compressed)

The user can specify the extra spec key storagetype:provisioning in a volume type to set the provisioning type of a volume. The provisioning type can be thick, thin, deduplicated, or compressed.
  • The thick provisioning type means the volume is fully provisioned.
  • The thin provisioning type means the volume is virtually provisioned.
  • The deduplicated provisioning type means the volume is virtually provisioned and deduplication is enabled on it. The administrator must configure the system-level deduplication settings on the VNX. To create a deduplicated volume, the VNX deduplication license must be activated on the VNX first, and the extra spec key deduplication_support=True must be used so that the Block Storage scheduler finds a volume back end that manages a VNX with the deduplication license activated.
  • The compressed provisioning type means the volume is virtually provisioned and compression is enabled on it. The administrator must configure the system-level compression settings on the VNX. To create a compressed volume, the VNX compression license must be activated on the VNX first, and the extra spec key compression_support=True must be specified so that the Block Storage scheduler finds a volume back end that manages a VNX with the compression license activated. VNX does not support creating a snapshot of a compressed volume; if the user tries to do so, the operation fails and OpenStack shows the new snapshot in the error state.
Here is an example of how to create volume types with different provisioning types. First create a volume type and set the provisioning type in its extra specs, then create a volume with this volume type:
$ cinder type-create "ThickVolume"
$ cinder type-create "ThinVolume"
$ cinder type-create "DeduplicatedVolume"
$ cinder type-create "CompressedVolume"
$ cinder type-key "ThickVolume" set storagetype:provisioning=thick
$ cinder type-key "ThinVolume" set storagetype:provisioning=thin
$ cinder type-key "DeduplicatedVolume" set storagetype:provisioning=deduplicated deduplication_support=True
$ cinder type-key "CompressedVolume" set storagetype:provisioning=compressed compression_support=True
In the example above, four volume types are created: ThickVolume, ThinVolume, DeduplicatedVolume, and CompressedVolume. For ThickVolume, storagetype:provisioning is set to thick, and similarly for the other volume types. If storagetype:provisioning is not specified or has an invalid value, the default value thick is used.
The volume type name, such as ThickVolume, is user-defined and can be any name. The extra spec key storagetype:provisioning must be exactly as listed here, and its value must be thick, thin, deduplicated, or compressed. During volume creation, if the driver finds storagetype:provisioning in the extra specs of the volume type, it creates the volume with that provisioning type. Otherwise, the volume is thick by default.
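For example, a 10 GB volume (a hypothetical size) that uses one of the volume types defined above can then be created with:
$ cinder create --volume-type ThickVolume --display-name thick-vol-01 10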

2.2.5.11. Fully automated storage tiering support

VNX supports Fully Automated Storage Tiering (FAST), which requires the FAST license activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume and use the extra spec key fast_support=True to let the Block Storage scheduler find a volume back end which manages a VNX with the FAST license activated. Here are the five supported values for the extra spec key storagetype:tiering:
  • StartHighThenAuto (Default option)
  • Auto
  • HighestAvailable
  • LowestAvailable
  • NoMovement
A tiering policy cannot be set for a deduplicated volume. The user can check the storage pool properties on the VNX to determine the tiering policy of a deduplicated volume.
Here is an example of how to create a volume type with a tiering policy:
$ cinder type-create "AutoTieringVolume"
$ cinder type-key "AutoTieringVolume" set storagetype:tiering=Auto fast_support=True
$ cinder type-create "ThinVolumeOnLowestAvaibleTier"
$ cinder type-key "CompressedVolumeOnLowestAvaibleTier" set storagetype:provisioning=thin storagetype:tiering=Auto fast_support=True

2.2.5.12. FAST Cache support

VNX has FAST Cache feature which requires the FAST Cache license activated on the VNX. The OpenStack administrator can use the extra spec key fast_cache_enabled to choose whether to create a volume on the volume back end which manages a pool with FAST Cache enabled. The value of the extra spec key fast_cache_enabled is either True or False. When creating a volume, if the key fast_cache_enabled is set to True in the volume type, the volume will be created by a back end which manages a pool with FAST Cache enabled.
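Following the same pattern as the other extra spec examples in this section, a hypothetical volume type that requests FAST Cache could be defined as follows:
$ cinder type-create "FastCacheVolume"
$ cinder type-key "FastCacheVolume" set fast_cache_enabled=True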

2.2.5.13. Storage group automatic deletion

For volume attaching, the driver maintains a storage group on the VNX for each compute node hosting the VM instances that are going to consume VNX Block Storage (using the compute node's hostname as the storage group's name). All the volumes attached to the VM instances in a compute node are put into the corresponding storage group. If destroy_empty_storage_group=True, the driver removes the empty storage group when its last volume is detached. For data safety, it is not recommended to set destroy_empty_storage_group=True unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required for operation synchronization for this behavior.

2.2.5.14. EMC storage-assisted volume migration

The EMC VNX direct driver supports storage-assisted volume migration. When the user starts a migration with cinder migrate --force-host-copy False volume_id host or cinder migrate volume_id host, cinder tries to leverage the VNX's native volume migration functionality.
In the following scenarios, VNX native volume migration will not be triggered:
  • Volume migration between back ends with different storage protocols, for example, FC and iSCSI.
  • Volume is being migrated across arrays.

2.2.5.15. Initiator auto registration

If initiator_auto_registration=True, the driver will automatically register iSCSI initiators with all working iSCSI target ports on the VNX array during volume attaching (The driver will skip those initiators that have already been registered).
If the user wants to register the initiators with only some specific ports on the VNX, this functionality should be disabled.

2.2.5.16. Initiator auto deregistration

Enabling storage group automatic deletion is the precondition of this functionality. If initiator_auto_deregistration=True is set, the driver will deregister all the iSCSI initiators of the host after its storage group is deleted.
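A minimal back-end section sketch that combines both options (storage group automatic deletion is the stated precondition) might look like this:
destroy_empty_storage_group = True
initiator_auto_deregistration = True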

2.2.5.17. Read-only volumes

OpenStack supports read-only volumes. The following command can be used to set a volume to read-only.
$ cinder readonly-mode-update volume True
After a volume is marked as read-only, the driver will forward the information when a hypervisor is attaching the volume and the hypervisor will have an implementation-specific way to make sure the volume is not written.

2.2.5.18. Multiple pools support

The user can configure a storage pool for a Block Storage back end (referred to as a pool-based back end), so that the back end uses only that storage pool.
If storage_vnx_pool_name is not given in the configuration file, the Block Storage back end uses all the pools on the VNX array, and the scheduler chooses the pool in which to place the volume based on the pools' capacities and capabilities. This kind of Block Storage back end is referred to as an array-based back end.
Here is an example configuration of an array-based back end:
san_ip = 10.10.72.41
#Directory path that contains the VNX security file. Make sure the security file is generated first
storage_vnx_security_file_dir = /etc/secfile/array1
storage_vnx_authentication_type = global
naviseccli_path = /opt/Navisphere/bin/naviseccli
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
destroy_empty_storage_group = False
volume_backend_name = vnx_41
In this configuration, if the user wants to create a volume on a certain storage pool, a volume type with an extra spec that specifies the storage pool should be created first; then the user can use this volume type to create the volume.
Here is an example of creating the volume type:
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41

2.2.5.19. Volume number threshold

In VNX, there is a limit on the maximum number of pool volumes that can be created in the system. When the limit is reached, no more pool volumes can be created even if there is enough remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but reaches the limit, the back end will fail to create the corresponding volume.
The default value of the option check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end checks the limit and reports 0 free capacity to the scheduler if the limit is reached, so the scheduler can skip pool-based back ends that have reached the pool volume limit.
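For example, the check can be enabled by adding the option to the back-end configuration section:
check_max_pool_luns_threshold = True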

2.2.5.20. FC SAN auto zoning

The EMC direct driver supports FC SAN auto zoning when ZoneManager is configured. Set zoning_mode to fabric in the back-end configuration section to enable this feature. For ZoneManager configuration, refer to Section 2.6, “Fibre Channel Zone Manager”.
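For example, a back-end section such as [backendA] from the multi-backend example below would enable the feature with:
zoning_mode = fabric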

2.2.5.21. Multi-backend configuration

[DEFAULT]

enabled_backends = backendA, backendB

[backendA]

storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
#Directory path that contains the VNX security file. Make sure the security file is generated first.
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in Minutes
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True

[backendB]
storage_vnx_pool_name = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in Minutes
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True

[database]

max_pool_size = 20
max_overflow = 30
For more details on multi-backend, see the OpenStack Cloud Administration Guide.

2.2.5.22. Force delete volumes in storage groups

Some available volumes may remain in storage groups on the VNX array due to OpenStack timeout issues, but the VNX array does not allow the user to delete volumes that are still in storage groups. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete available volumes in this situation.
When force_delete_lun_in_storagegroup=True is set in the back-end section, the driver moves the volumes out of the storage groups and then deletes them if the user tries to delete volumes that remain in storage groups on the VNX array.
The default value of force_delete_lun_in_storagegroup is False.
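For example, add the following to the back-end section to enable this behavior:
force_delete_lun_in_storagegroup = True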

2.2.6. EMC XtremIO Block Storage driver configuration

The high performance XtremIO All Flash Array (AFA) offers Block Storage services to OpenStack. Using the driver, OpenStack Block Storage hosts can connect to an XtremIO Storage cluster.
This section explains how to configure and connect an OpenStack block storage host to an XtremIO storage cluster.

2.2.6.1. Support matrix

  • Xtremapp: Version 3.0 and 4.0

2.2.6.2. Supported operations

  • Create, delete, clone, attach, and detach volumes
  • Create and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Manage and unmanage a volume
  • Get volume statistics

2.2.6.3. XtremIO Block Storage driver configuration

Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in the case of a single back end, or under a separate section in the case of multiple back ends (for example, [XTREMIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
For a configuration example, refer to the configuration example.
2.2.6.3.1. XtremIO driver name
Configure the driver name by adding the following parameter:
  • For iSCSI volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOIscsiDriver
  • For Fibre Channel volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
2.2.6.3.2. XtremIO management server (XMS) IP
To retrieve the management IP, use the show-xms CLI command.
Configure the management IP by adding the following parameter: san_ip = XMS Management IP
2.2.6.3.3. XtremIO cluster name
In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In such setups, the administrator is required to specify the cluster name (in addition to the XMS IP). Each cluster must be defined as a separate back end.
To retrieve the Cluster Name, run the show-clusters CLI command.
Configure the cluster name by adding the following parameter: xtremio_cluster_name = Cluster-Name
Note
When a single cluster is managed in XtremIO version 4.0, the cluster name is not required.
2.2.6.3.4. XtremIO user credentials
OpenStack Block Storage requires an XtremIO XMS user with administrative privileges. XtremIO recommends creating a dedicated OpenStack user account that holds an administrative user role.
Refer to the XtremIO User Guide for details on user account management.
Create an XMS account using either the XMS GUI or the add-user-account CLI command.
Configure the user credentials by adding the following parameters:
san_login = XMS username
san_password = XMS username password

2.2.6.4. Multiple back ends

Configuring multiple storage back ends enables you to create several back-end storage solutions that serve the same OpenStack Compute resources.
When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.

2.2.6.5. Setting thin provisioning and multipathing parameters

To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:
  • Thin Provisioning
    All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter.
    The use_cow_images parameter in the nova.conf file should be set to False as follows:
    use_cow_images = false
  • Multipathing
    The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:
    use_multipath_for_image_xfer = true

2.2.6.6. Restarting OpenStack Block Storage

Save the cinder.conf file and restart cinder by running the following command:
$ openstack-service restart cinder-volume

2.2.6.7. Configuring CHAP

The XtremIO Block Storage driver supports CHAP initiator authentication. If CHAP initiator authentication is required, set the CHAP Authentication mode to initiator.
To set the CHAP initiator mode using CLI, run the following CLI command:
$ modify-chap chap-authentication-mode=initiator
The CHAP initiator mode can also be set via the XMS GUI.
Refer to the XtremIO User Guide for details on CHAP configuration via the GUI and CLI.
The CHAP initiator authentication credentials (username and password) are generated automatically by the Block Storage driver. Therefore, there is no need to configure the initial CHAP credentials manually in XMS.

2.2.6.8. Configuration example

cinder.conf example file
You can update the cinder.conf file by editing the necessary parameters as follows:
[DEFAULT]
enabled_backends = XtremIO

[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA

2.2.7. GlusterFS driver

GlusterFS is an open-source scalable distributed file system that is able to grow to petabytes and beyond in size. More information can be found on Gluster's homepage.
This driver enables the use of GlusterFS in a similar fashion as NFS. It supports basic volume operations, including snapshot/clone.
Note
You must use a Linux kernel of version 3.4 or greater (or version 2.6.32 or greater in Red Hat Enterprise Linux/CentOS 6.3+) when working with Gluster-based volumes. See Bug 1177103 for more information.
To use Block Storage with GlusterFS, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
The following table contains the configuration options supported by the GlusterFS driver.
Table 2.4. Description of GlusterFS storage configuration options
Configuration option = Default value Description
[DEFAULT]
glusterfs_mount_point_base = $state_path/mnt (StrOpt) Base dir containing mount points for gluster shares.
glusterfs_qcow2_volumes = False (BoolOpt) Create volumes as QCOW2 files rather than raw files.
glusterfs_shares_config = /etc/cinder/glusterfs_shares (StrOpt) File with the list of available gluster shares
glusterfs_sparsed_volumes = True (BoolOpt) Create volumes as sparsed files which take no space. If set to False, volume is created as a regular file. In such case volume creation takes a lot of time.
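As an illustration, a minimal setup (with a hypothetical Gluster server at 192.168.1.100 exporting a volume named cinder-volumes) consists of a shares file and the matching cinder.conf options:
#Contents of /etc/cinder/glusterfs_shares (one share per line)
192.168.1.100:/cinder-volumes

#Corresponding options in /etc/cinder/cinder.conf
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_sparsed_volumes = True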

2.2.8. HDS HNAS iSCSI and NFS driver

This OpenStack Block Storage volume driver provides iSCSI and NFS support for Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080 and 4100.

2.2.8.1. Supported operations

The NFS and iSCSI drivers support these operations:
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.

2.2.8.2. HNAS storage requirements

Before using iSCSI and NFS services, use the HNAS configuration and management GUI (SMU) or SSC CLI to create storage pool(s), file system(s), and assign an EVS. Make sure that the file systems used are not created as replication targets. Additionally:
For NFS:
Create NFS exports, choose a path for them (it must be different from "/") and set the Show snapshots option to hide and disable access.
Also, configure the option norootsquash as "* (rw, norootsquash)", so that the cinder services can change the permissions of their volumes.
In order to use the hardware accelerated features of HNAS NFS, we recommend setting max-nfs-version to 3. Refer to the HNAS command line reference to see how to configure this option.
For iSCSI:
You need to set an iSCSI domain.

2.2.8.3. Block storage host requirements

The HNAS driver is supported on Red Hat. The following packages must be installed:
  1. nfs-utils for Red Hat
  2. If you are not using SSH, you need the HDS SSC package (hds-ssc-v1.0-1) to communicate with the HNAS array using the SSC command. This utility is distributed as an RPM package with the hardware (on physical media), or it can be copied manually from the SMU to the Block Storage host.

2.2.8.4. Package installation

If you are installing the driver from an RPM or DEB package, follow the steps below:
  1. Install SSC:
    In Red Hat:
    # rpm -i hds-ssc-v1.0-1.rpm
  2. Install the dependencies:
    In Red Hat:
    # yum install nfs-utils nfs-utils-lib
  3. Configure the driver as described in the "Driver Configuration" section.
  4. Restart all cinder services (volume, scheduler and backup).

2.2.8.5. Driver configuration

The HDS driver supports the concept of differentiated services (also referred to as quality of service) by mapping volume types to services provided through HNAS.
HNAS supports a variety of storage options and file system capabilities, which are selected through the definition of volume types and the use of multiple back ends. The driver maps up to four volume types into separate exports or file systems, and can support any number when using multiple back ends.
The configuration for the driver is read from an XML-formatted file (one per back end), which you need to create and whose path you set in the cinder.conf configuration file. Below is the configuration needed in the cinder.conf configuration file [1]:
[DEFAULT]
enabled_backends = hnas_iscsi1, hnas_nfs1
For HNAS iSCSI driver create this section:
[hnas_iscsi1]
volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-ISCSI
For HNAS NFS driver create this section:
[hnas_nfs1]
volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-NFS
The XML file has the following format:
<?xml version = "1.0" encoding = "UTF-8" ?>
  <config>
    <mgmt_ip0>172.24.44.15</mgmt_ip0>
    <hnas_cmd>ssc</hnas_cmd>
    <chap_enabled>False</chap_enabled>
    <ssh_enabled>False</ssh_enabled>
    <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0>
    <username>supervisor</username>
    <password>supervisor</password>
    <svc_0>
      <volume_type>default</volume_type>
      <iscsi_ip>172.24.44.20</iscsi_ip>
      <hdp>fs01-husvm</hdp>
    </svc_0>
    <svc_1>
      <volume_type>platinum</volume_type>
      <iscsi_ip>172.24.44.20</iscsi_ip>
      <hdp>fs01-platinum</hdp>
    </svc_1>
  </config>

2.2.8.6. HNAS volume driver XML configuration options

An OpenStack Block Storage node using HNAS drivers can have up to four services. Each service is defined by a svc_n tag (svc_0, svc_1, svc_2, or svc_3 [2], for example). These are the configuration options available for each service label:
Table 2.5. Configuration options for service labels
Option (Type, Default): Description
volume_type (Required, default: default): When a create_volume call with a certain volume type is made, the volume type is matched against this tag. In each configuration file you must define the default volume type in the service labels; if no volume type is specified, the default is used. Other labels are case sensitive and must match exactly. If no configured volume type matches the incoming requested type, an error occurs during volume creation.
iscsi_ip (Required only for iSCSI): An iSCSI IP address dedicated to the service.
hdp (Required): For the iSCSI driver, the virtual file system label associated with the service. For the NFS driver, the path to the volume (<ip_address>:/<path>) associated with the service. Additionally, this entry must be added to the file used to list available NFS shares. This file is located, by default, at /etc/cinder/nfs_shares, or you can specify its location with the nfs_shares_config option in the cinder.conf configuration file (see the example after this table).
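For example, if an NFS service defines <hdp>172.24.44.20:/nfs/fs01</hdp> (an illustrative export), the same entry must appear in the share file:
172.24.44.20:/nfs/fs01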
These are the configuration options available to the config section of the XML config file:
Table 2.6. Configuration options
Option (Type, Default): Description
mgmt_ip0 (Required): Management Port 0 IP address. Should be the IP address of the "Admin" EVS.
hnas_cmd (Optional, default: ssc): Command used to communicate with the HNAS array.
chap_enabled (Optional, iSCSI only, default: True): Boolean tag used to enable the CHAP authentication protocol.
username (Required, default: supervisor): User name; always required on HNAS.
password (Required, default: supervisor): Password; always required on HNAS.
svc_0, svc_1, svc_2, svc_3 (Optional; at least one label has to be defined): Service labels: these four predefined names identify four different sets of configuration options. Each can specify an HDP and a unique volume type.
cluster_admin_ip0 (Optional if ssh_enabled is True): The address of the HNAS cluster admin.
ssh_enabled (Optional, default: False): Enables SSH authentication between the Block Storage host and the SMU.
ssh_private_key (Required if ssh_enabled is True, default: False): Path to the SSH private key used to authenticate to the HNAS SMU. The public key must be uploaded to the HNAS SMU using ssh-register-public-key (an SSH subcommand). Note that copying the public key to HNAS using ssh-copy-id does not work properly, because the SMU periodically wipes out those keys.

2.2.8.7. Service labels

The HNAS driver supports differentiated types of service using service labels. It is possible to create up to four types, for example gold, platinum, silver, and ssd.
After creating the services in the XML configuration file, you must configure one volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the <volume_type> section of that service. If this is not set, OpenStack Block Storage schedules the volume creation to the pool with the largest available free space or according to other criteria configured in volume filters.
$ cinder type-create 'default'
$ cinder type-key 'default' set service_label='default'
$ cinder type-create 'platinum'
$ cinder type-key 'platinum' set service_label='platinum'

2.2.8.8. Multi-back-end configuration

If you use multiple back ends and intend to enable the creation of a volume in a specific back end, you must configure volume types to set the volume_backend_name option to the appropriate back end. Then, create volume_type configurations with the same volume_backend_name:
$ cinder type-create 'iscsi'
$ cinder type-key 'iscsi' set volume_backend_name='HNAS-ISCSI'
$ cinder type-create 'nfs'
$ cinder type-key 'nfs' set volume_backend_name='HNAS-NFS'
You can deploy multiple OpenStack HNAS driver instances, each controlling a separate HNAS array. Each service (svc_0, svc_1, svc_2, svc_3) on the instances needs to have a volume_type and service_label metadata associated with it. If no metadata is associated with a pool, the OpenStack Block Storage filtering algorithm selects the pool with the largest available free space.
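A volume can then be created on a specific back end by requesting the corresponding volume type, for example:
$ cinder create --volume-type nfs --display-name hnas-vol01 1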

2.2.8.9. SSH configuration

Instead of using SSC on the Block Storage host and storing its credentials in the XML configuration file, the HNAS driver supports SSH authentication. To configure it:
  1. If you do not already have an SSH key pair, create one on the Block Storage host (leave the passphrase empty):
    $ mkdir -p /opt/hds/ssh
    $ ssh-keygen -f /opt/hds/ssh/hnaskey
  2. Change the owner of the key to cinder (or to the user that the volume service runs as):
    # chown -R cinder.cinder /opt/hds/ssh
  3. Create the directory "ssh_keys" in the SMU server:
    $ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
  4. Copy the public key to the "ssh_keys" directory:
    $ scp /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
  5. Access the SMU server:
    $ ssh [manager|supervisor]@<smu-ip>
  6. Run the command to register the SSH keys:
    $ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
  7. Check the communication with HNAS in the Block Storage host:
    $ ssh -i /opt/hds/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
<cluster_admin_ip0> is "localhost" for single node deployments. This should return a list of available file systems on HNAS.

2.2.8.10. Editing the XML configuration file

  1. Set the "username".
  2. Enable SSH by adding the line "<ssh_enabled>True</ssh_enabled>" under the "<config>" section.
  3. Set the private key path: "<ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>" under the "<config>" section (see the example snippet after this list).
  4. If the HNAS is in a multi-cluster configuration, set "<cluster_admin_ip0>" to the cluster node admin IP. In a single-node HNAS, leave it empty.
  5. Restart the cinder service.
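Assuming the key generated in the previous section, the relevant part of the XML file would then look similar to the following; the cluster admin IP applies only to multi-cluster configurations, and the remaining elements are unchanged:
  <config>
    <ssh_enabled>True</ssh_enabled>
    <ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>
    <cluster_admin_ip0>10.1.1.1</cluster_admin_ip0>
    ...
  </config>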

2.2.8.11. Additional notes

  • The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.
  • After changing the configuration on the storage, the OpenStack Block Storage driver must be restarted.
  • Due to an HNAS limitation, the HNAS iSCSI driver allows only 32 volumes per target.
  • On Red Hat, if the system is configured to use SELinux, you need to set "virt_use_nfs = on" for the NFS driver to work properly (see the command after this list).
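On a Red Hat system with SELinux enabled, this boolean can typically be set persistently with:
# setsebool -P virt_use_nfs on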

2.2.9. HDS HUS iSCSI driver

This Block Storage volume driver provides iSCSI support for HUS (Hitachi Unified Storage) arrays such as HUS-110, HUS-130, and HUS-150.

2.2.9.1. System requirements

Use the HDS hus-cmd command to communicate with an HUS array. You can download this utility package from the HDS support site (https://hdssupport.hds.com/).
Platform: Ubuntu 12.04 LTS or newer.

2.2.9.2. Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.

2.2.9.3. Configuration

The HDS driver supports the concept of differentiated services, where a volume type can be associated with the fine-tuned performance characteristics of an HDP, the dynamic pool where volumes are created [3]. For instance, an HDP can consist of fast SSDs to provide speed. An HDP can also provide a certain level of reliability based on factors such as its RAID level characteristics. The HDS driver maps each volume type to the volume_type option in its configuration file.
Configuration is read from an XML-format file. Examples are shown for single and multi back-end cases.
Note
  • Configuration is read from an XML file. This example shows the configuration for single back-end and for multi-back-end cases.
  • It is not recommended to manage an HUS array simultaneously from multiple OpenStack Block Storage instances or servers. [4]
Table 2.7. Description of HDS HUS iSCSI driver configuration options
Configuration option = Default value Description
[DEFAULT]
hds_cinder_config_file = /opt/hds/hus/cinder_hus_conf.xml (StrOpt) The configuration file for the Cinder HDS driver for HUS
HUS setup
Before using iSCSI services, use the HUS UI to create an iSCSI domain for each EVS providing iSCSI services.
Single back-end
In a single back-end deployment, only one OpenStack Block Storage instance runs on the OpenStack Block Storage server and controls one HUS array. This deployment requires these configuration files:
  1. Set the hds_cinder_config_file option in the /etc/cinder/cinder.conf file to use the HDS volume driver. This option points to a configuration file.[5]
    volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
    hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml
  2. Configure hds_cinder_config_file at the location specified previously. For example, /opt/hds/hus/cinder_hds_conf.xml:
    <?xml version="1.0" encoding="UTF-8" ?>
    <config>
        <mgmt_ip0>172.17.44.16</mgmt_ip0>
        <mgmt_ip1>172.17.44.17</mgmt_ip1>
        <hus_cmd>hus-cmd</hus_cmd>
        <username>system</username>
        <password>manager</password>
        <svc_0>
            <volume_type>default</volume_type>
            <iscsi_ip>172.17.39.132</iscsi_ip>
            <hdp>9</hdp>
        </svc_0>
        <snapshot>
            <hdp>13</hdp>
        </snapshot>
        <lun_start>
            3000
        </lun_start>
        <lun_end>
            4000
        </lun_end>
    </config>
Multi back-end
In a multi back-end deployment, more than one OpenStack Block Storage instance runs on the same server. In this example, two HUS arrays are used, possibly providing different storage performance:
  1. Configure /etc/cinder/cinder.conf: the hus1 and hus2 configuration blocks are created. Set the hds_cinder_config_file option to point to a unique configuration file for each block. Set the volume_driver option for each back end to cinder.volume.drivers.hds.hds.HUSDriver:
    enabled_backends=hus1,hus2
    
    [hus1]
    volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
    hds_cinder_config_file = /opt/hds/hus/cinder_hus1_conf.xml
    volume_backend_name=hus-1
    
    [hus2]
    volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
    hds_cinder_config_file = /opt/hds/hus/cinder_hus2_conf.xml
    volume_backend_name=hus-2
  2. Configure /opt/hds/hus/cinder_hus1_conf.xml:
    <?xml version="1.0" encoding="UTF-8" ?>
    <config>
        <mgmt_ip0>172.17.44.16</mgmt_ip0>
        <mgmt_ip1>172.17.44.17</mgmt_ip1>
        <hus_cmd>hus-cmd</hus_cmd>
        <username>system</username>
        <password>manager</password>
        <svc_0>
            <volume_type>regular</volume_type>
            <iscsi_ip>172.17.39.132</iscsi_ip>
            <hdp>9</hdp>
        </svc_0>
        <snapshot>
            <hdp>13</hdp>
        </snapshot>
        <lun_start>
            3000
        </lun_start>
        <lun_end>
            4000
        </lun_end>
    </config>
  3. Configure the /opt/hds/hus/cinder_hus2_conf.xml file:
    <?xml version="1.0" encoding="UTF-8" ?>
    <config>
        <mgmt_ip0>172.17.44.20</mgmt_ip0>
        <mgmt_ip1>172.17.44.21</mgmt_ip1>
        <hus_cmd>hus-cmd</hus_cmd>
        <username>system</username>
        <password>manager</password>
        <svc_0>
            <volume_type>platinum</volume_type>
            <iscsi_ip>172.17.30.130</iscsi_ip>
            <hdp>2</hdp>
        </svc_0>
        <snapshot>
            <hdp>3</hdp>
        </snapshot>
        <lun_start>
            2000
        </lun_start>
        <lun_end>
            3000
        </lun_end>
    </config>
Type extra specs: volume_backend and volume type
If you use volume types, you must configure them in the configuration file and set the volume_backend_name option to the appropriate back-end. In the previous multi back-end example, the platinum volume type is served by hus-2, and the regular volume type is served by hus-1.
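If these volume types do not exist yet, create them first, for example:
$ cinder type-create regular
$ cinder type-create platinum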
$ cinder type-key regular set volume_backend_name=hus-1
$ cinder type-key platinum set volume_backend_name=hus-2
Non differentiated deployment of HUS arrays
You can deploy multiple OpenStack Block Storage instances that each control a separate HUS array. Each instance has no volume type associated with it. The OpenStack Block Storage filtering algorithm selects the HUS array with the largest available free space. In each configuration file, you must define the default volume_type in the service labels.

2.2.9.4. HDS iSCSI volume driver configuration options

These details apply to the XML-format configuration file that is read by the HDS volume driver. These differentiated service labels are predefined: svc_0, svc_1, svc_2, and svc_3 [6]. Each respective service label associates with these parameters and tags:
  1. volume-types: A create_volume call with a certain volume type is matched up with this tag. default is special in that any service associated with this type is used to create a volume when no other labels match. Other labels are case sensitive and should match exactly. If no configured volume_types match the incoming requested type, an error occurs in volume creation.
  2. HDP, the pool ID associated with the service.
  3. An iSCSI port dedicated to the service.
Typically an OpenStack Block Storage volume instance has only one such service label. For example, any of svc_0, svc_1, svc_2, or svc_3 can be associated with it. But any mix of these service labels can be used in the same instance [7].
Table 2.8. Configuration options
Option (Type, Default): Description
mgmt_ip0 (Required): Management Port 0 IP address.
mgmt_ip1 (Required): Management Port 1 IP address.
hus_cmd (Optional): The command used to communicate with the HUS array. If it is not set, the default value is hus-cmd.
username (Optional): User name; required only if secure mode is used.
password (Optional): Password; required only if secure mode is used.
svc_0, svc_1, svc_2, svc_3 (Optional; at least one label has to be defined): Service labels: these four predefined names identify four different sets of configuration options. Each can specify an iSCSI port address, HDP, and a unique volume type.
snapshot (Required): A service label which specifies the configuration used for snapshots, such as the HDP.
volume_type (Required): Tag used to match the volume type. The default type matches any volume_type, and is also used when no type is specified. Any other volume_type is selected only if it matches exactly during create_volume.
iscsi_ip (Required): iSCSI port IP address where the volume attaches for this volume type.
hdp (Required): HDP, the pool number where the volume or snapshot should be created.
lun_start (Optional, default: 0): LUN allocation starts at this number.
lun_end (Optional, default: 4096): LUN allocation goes up to, but does not include, this number.

2.2.10. IBM Storwize family and SVC volume driver

The volume management driver for Storwize family and SAN Volume Controller (SVC) provides OpenStack Compute instances with access to IBM Storwize family or SVC storage systems.

2.2.10.1. Configure the Storwize family and SVC system

Network configuration
The Storwize family or SVC system must be configured for iSCSI, Fibre Channel, or both.
If using iSCSI, each Storwize family or SVC node should have at least one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP address associated with the volume's preferred node (if available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address of the system. The driver obtains the iSCSI IP address directly from the storage system; you do not need to provide these iSCSI IP addresses directly to the driver.
Note
If using iSCSI, ensure that the compute nodes have iSCSI network access to the Storwize family or SVC system.
Note
OpenStack Compute (Nova) supports iSCSI multipath as of the Grizzly release. Once this is configured on the Compute host (outside the scope of this documentation), multipath is enabled.
If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port configured. If the storwize_svc_multipath_enabled flag is set to True in the Cinder configuration file, the driver uses all available WWPNs to attach the volume to the instance (details about the configuration flags appear in the next section). If the flag is not set, the driver uses the WWPN associated with the volume's preferred node (if available), otherwise it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the driver.
Note
If using FC, ensure that the compute nodes have FC connectivity to the Storwize family or SVC system.
iSCSI CHAP authentication
If using iSCSI for data access and the storwize_svc_iscsi_chap_enabled is set to True, the driver will associate randomly-generated CHAP secrets with all hosts on the Storwize family system. OpenStack compute nodes use these secrets when creating iSCSI connections.
Note
CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.
Note
Not all OpenStack Compute drivers support CHAP authentication. Please check compatibility before using.
Note
CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.
Configure storage pools
Each instance of the IBM Storwize/SVC driver allocates all volumes in a single pool. The pool should be created in advance and be provided to the driver using the storwize_svc_volpool_name configuration flag. Details about the configuration flags and how to provide the flags to the driver appear in the next section.
Configure user authentication for the driver
The driver requires access to the Storwize family or SVC system management interface, and communicates with it using SSH. Provide the driver with the Storwize family or SVC management IP using the san_ip flag, and the management port using the san_ssh_port flag. By default, the port value is 22 (SSH).
Note
Make sure the compute node running the cinder-volume management driver has SSH network access to the storage system.
To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver has two authentication methods: password-based authentication and SSH key pair authentication. The user should have an Administrator role. It is suggested to create a new user for the management driver. Please consult with your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner.
Note
When creating a new user on the Storwize or SVC system, make sure the user belongs to the Administrator group or to another group that has an Administrator role.
If using password authentication, assign a password to the user on the Storwize or SVC system. The driver configuration flags for the user and password are san_login and san_password, respectively.
If you are using the SSH key pair authentication, create SSH private and public keys using the instructions below or by any other method. Associate the public key with the user by uploading the public key: select the "choose file" option in the Storwize family or SVC management GUI under "SSH public key". Alternatively, you may associate the SSH public key using the command line interface; details can be found in the Storwize and SVC documentation. The private key should be provided to the driver using the san_private_key configuration flag.
Create an SSH key pair with OpenSSH
You can create an SSH key pair using OpenSSH, by running:
$ ssh-keygen -t rsa
The command prompts for a file to save the key pair. For example, if you select 'key' as the filename, two files are created: key and key.pub. The key file holds the private SSH key and key.pub holds the public SSH key.
The command also prompts for a passphrase, which should be left empty.
The private key file should be provided to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command line interface.
Note
Ensure that Cinder has read permissions on the private key file.

2.2.10.2. Configure the Storwize family and SVC driver

Enable the Storwize family and SVC driver
Set the volume driver to the Storwize family and SVC driver by setting the volume_driver option in cinder.conf as follows:
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
Storwize family and SVC driver options in cinder.conf
The following options specify default values for all volumes. Some of them can be overridden using volume types, which are described below.
Table 2.9. List of configuration flags for Storwize storage and SVC driver
Flag name (Type, Default): Description
san_ip (Required): Management IP or host name
san_ssh_port (Optional, default: 22): Management port
san_login (Required): Management login username
san_password (Required [a]): Management login password
san_private_key (Required [a]): Management login SSH private key
storwize_svc_volpool_name (Required): Default pool name for volumes
storwize_svc_vol_rsize (Optional, default: 2): Initial physical allocation (percentage) [b]
storwize_svc_vol_warning (Optional, default: 0 (disabled)): Space allocation warning threshold (percentage) [b]
storwize_svc_vol_autoexpand (Optional, default: True): Enable or disable volume auto expand [c]
storwize_svc_vol_grainsize (Optional, default: 256): Volume grain size [b] in KB
storwize_svc_vol_compression (Optional, default: False): Enable or disable Real-time Compression [d]
storwize_svc_vol_easytier (Optional, default: True): Enable or disable Easy Tier [e]
storwize_svc_vol_iogrp (Optional, default: 0): The I/O group in which to allocate vdisks
storwize_svc_flashcopy_timeout (Optional, default: 120): FlashCopy timeout threshold [f] (seconds)
storwize_svc_connection_protocol (Optional, default: iSCSI): Connection protocol to use (currently supports 'iSCSI' or 'FC')
storwize_svc_iscsi_chap_enabled (Optional, default: True): Configure CHAP authentication for iSCSI connections
storwize_svc_multipath_enabled (Optional, default: False): Enable multipath for FC connections [g]
storwize_svc_multihost_enabled (Optional, default: True): Enable mapping vdisks to multiple hosts [h]
[a] The authentication requires either a password (san_password) or SSH private key (san_private_key). One must be specified. If both are specified, the driver uses only the SSH private key.
[b] The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes, or if set to -1, the driver creates full allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
[c] Defines whether thin-provisioned volumes can be auto expanded by the storage system, a value of True means that auto expansion is enabled, a value of False disables auto expansion. Details about this option can be found in the –autoexpand flag of the Storwize family and SVC command line interface mkvdisk command.
[d] Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
[e] Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
[f] The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
[g] Multipath for iSCSI connections requires no storage-side configuration and is enabled if the compute host has multipath configured.
[h] This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
Table 2.10. Description of IBM Storwize driver configuration options
Configuration option = Default value Description
[DEFAULT]
storwize_svc_allow_tenant_qos = False (BoolOpt) Allow tenants to specify QOS on create
storwize_svc_connection_protocol = iSCSI (StrOpt) Connection protocol (iSCSI/FC)
storwize_svc_flashcopy_timeout = 120 (IntOpt) Maximum number of seconds to wait for FlashCopy to be prepared. Maximum value is 600 seconds (10 minutes)
storwize_svc_iscsi_chap_enabled = True (BoolOpt) Configure CHAP authentication for iSCSI connections (Default: Enabled)
storwize_svc_multihostmap_enabled = True (BoolOpt) Allows vdisk to multi host mapping
storwize_svc_multipath_enabled = False (BoolOpt) Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
storwize_svc_npiv_compatibility_mode = False (BoolOpt) Indicate whether svc driver is compatible for NPIV setup. If it is compatible, it will allow no wwpns being returned on get_conn_fc_wwpns during initialize_connection
storwize_svc_stretched_cluster_partner = None (StrOpt) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: "pool2"
storwize_svc_vol_autoexpand = True (BoolOpt) Storage system autoexpand parameter for volumes (True/False)
storwize_svc_vol_compression = False (BoolOpt) Storage system compression option for volumes
storwize_svc_vol_easytier = True (BoolOpt) Enable Easy Tier for volumes
storwize_svc_vol_grainsize = 256 (IntOpt) Storage system grain size parameter for volumes (32/64/128/256)
storwize_svc_vol_iogrp = 0 (IntOpt) The I/O group in which to allocate volumes
storwize_svc_vol_rsize = 2 (IntOpt) Storage system space-efficiency parameter for volumes (percentage)
storwize_svc_vol_warning = 0 (IntOpt) Storage system threshold for volume capacity warnings (percentage)
storwize_svc_volpool_name = volpool (StrOpt) Storage system storage pool for volumes
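As an illustrative sketch only, a back-end stanza combining these flags might look like the following; the IP address, credentials, and pool name are placeholders:
[storwize-iscsi]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
san_ip = 192.168.1.50
san_login = openstack
san_password = PASSWORD
storwize_svc_volpool_name = volpool
storwize_svc_connection_protocol = iSCSI
volume_backend_name = storwize-iscsi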
Placement with volume types
The IBM Storwize/SVC driver exposes capabilities that can be added to the extra specs of volume types, and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with capabilities: to indicate that the scheduler should use them. The following extra specs are supported:
  • capabilities:volume_backend_name - Specify a specific back end where the volume should be created. The back-end name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in lssystem, an underscore, and the name of the pool (mdisk group). For example:
    capabilities:volume_backend_name=myV7000_openstackpool
  • capabilities:compression_support - Specify a back-end according to compression support. A value of True should be used to request a back-end that supports compression, and a value of False will request a back-end that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying True does not enable compression; it only requests that the volume be placed on a back-end that supports compression. Example syntax:
    capabilities:compression_support='<is> True'
  • capabilities:easytier_support - Similar semantics as the compression_support key, but for specifying according to support of the Easy Tier feature. Example syntax:
    capabilities:easytier_support='<is> True'
  • capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are iSCSI and FC. This extra specs value is used for both placement and setting the protocol used for this volume. In the example syntax, note <in> is used as opposed to <is> used in the previous examples.
    capabilities:storage_protocol='<in> FC'
Configure per-volume creation options
Volume types can also be used to pass options to the IBM Storwize/SVC driver, which override the default values set in the configuration file. Contrary to the previous examples, where the "capabilities" scope was used to pass parameters to the Cinder scheduler, options are passed to the IBM Storwize/SVC driver with the "drivers" scope.
The following extra specs keys are supported by the IBM Storwize/SVC driver:
  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • multipath
  • iogrp
These keys have the same semantics as their counterparts in the configuration file. They are set similarly; for example, rsize=2 or compression=False.
Example: Volume types
In the following example, we create a volume type to specify a controller that supports iSCSI and compression, to use iSCSI when attaching the volume, and to enable compression:
$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True
We can then create a 50GB volume using this type:
$ cinder create --display-name "compressed volume" --volume-type compressed 50
Volume types can be used, for example, to provide users with different
  • performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
  • resiliency levels (such as, allocating volumes in pools with different RAID levels)
  • features (such as, enabling/disabling Real-time Compression)
QOS
The Storwize driver provides QOS support for storage volumes by controlling the I/O amount. QOS is enabled by editing the /etc/cinder/cinder.conf file and setting the storwize_svc_allow_tenant_qos option to True.
There are three ways to set the Storwize IOThrottling parameter for storage volumes (see the example after this list):
  • Add the qos:IOThrottling key into a QOS specification and associate it with a volume type.
  • Add the qos:IOThrottling key into an extra specification with a volume type.
  • Add the qos:IOThrottling key to the storage volume metadata.
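As an illustration of the first approach, a QOS specification can be created and associated with a volume type as follows; the specification name and throttling value are placeholders:
$ cinder qos-create storwize-iops qos:IOThrottling=500
$ cinder qos-associate <qos_specs_id> <volume_type_id>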
Note
If you are changing a volume type with QOS to a new volume type without QOS, the QOS configuration settings will be removed.
Migrate volumes
In the context of OpenStack Block Storage's volume migration feature, the IBM Storwize/SVC driver enables the storage's virtualization technology. When migrating a volume from one pool to another, the volume will appear in the destination pool almost immediately, while the storage moves the data in the background.
Note
To enable this feature, both pools involved in a given volume migration must have the same values for extent_size. If the pools have different values for extent_size, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous.
Extend volumes
The IBM Storwize/SVC driver allows for extending a volume's size, but only for volumes without snapshots.
Snapshots and clones
Snapshots are implemented using FlashCopy with no background copy (space-efficient). Volume clones (volumes created from existing volumes) are implemented with FlashCopy, but with background copy enabled. This means that volume clones are independent, full copies. While this background copy is taking place, attempting to delete or extend the source volume will result in that operation waiting for the copy to complete.
Volume retype
The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:
  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • iogrp
Note
When you change the rsize, grainsize or compression properties, volume copies are asynchronously synchronized on the array.
Note
To change the iogrp property, IBM Storwize/SVC firmware version 6.4.0 or later is required.

2.2.11. IBM XIV and DS8000 volume driver

The IBM Storage Driver for OpenStack is a Block Storage driver that supports IBM XIV and IBM DS8000 storage systems over Fibre Channel and iSCSI.
Set the following in your cinder.conf, and use the following options to configure it.
volume_driver = cinder.volume.drivers.xiv_ds8k.XIVDS8KDriver
Table 2.11. Description of IBM XIV and DS8000 volume driver configuration options
Configuration option = Default value Description
[DEFAULT]
san_clustername = (StrOpt) Cluster name to use for creating volumes
san_ip = (StrOpt) IP address of SAN controller
san_login = admin (StrOpt) Username for SAN controller
san_password = (StrOpt) Password for SAN controller
xiv_chap = disabled (StrOpt) CHAP authentication mode, effective only for iscsi (disabled|enabled)
xiv_ds8k_connection_type = iscsi (StrOpt) Connection type to the IBM Storage Array
xiv_ds8k_proxy = xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy (StrOpt) Proxy driver that connects to the IBM Storage Array
Note
For full documentation refer to IBM's online documentation available at http://pic.dhe.ibm.com/infocenter/strhosts/ic/topic/com.ibm.help.strghosts.doc/nova-homepage.html.
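For illustration only, a cinder.conf configuration using the options in the table above might resemble the following; addresses and credentials are placeholders:
volume_driver = cinder.volume.drivers.xiv_ds8k.XIVDS8KDriver
xiv_ds8k_proxy = xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy
xiv_ds8k_connection_type = iscsi
san_ip = 192.168.1.100
san_login = admin
san_password = PASSWORD
san_clustername = cluster_pool
xiv_chap = disabled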

2.2.12. LVM

The default volume back-end uses local volumes managed by LVM.
This driver supports different transport protocols to attach volumes, currently iSCSI and iSER.
Note
The Block Storage iSCSI LVM driver has significant performance issues. In production environments, with high I/O activity, there are many potential issues which could affect performance or data integrity.
Red Hat strongly recommends using a certified Block Storage plug-in provider for storage in a production environment. The software iSCSI LVM driver should be used only in, and is only supported for, single-node evaluations and proof-of-concept environments.
Set the following in your cinder.conf configuration file, and use the following options to configure for iSCSI transport:
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_protocol = iscsi
Use the following options to configure for the iSER transport:
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    iscsi_protocol = iser
Table 2.12. Description of LVM configuration options
Configuration option = Default value Description
[DEFAULT]
lvm_conf_file = /etc/cinder/lvm.conf (StrOpt) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify 'None' to not use a conf file even if one exists).
lvm_mirrors = 0 (IntOpt) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space
lvm_type = default (StrOpt) Type of LVM volumes to deploy
volume_group = cinder-volumes (StrOpt) Name for the VG that will contain exported volumes
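The driver expects the volume group named by volume_group to exist already. For example, it could be created on a spare disk; the device name below is illustrative:
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb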

2.2.13. NetApp unified driver

The NetApp unified driver is a block storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on different NetApp technologies such as clustered Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems like iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.
Note
With the Juno release of OpenStack, OpenStack Block Storage has introduced the concept of "storage pools", in which a single OpenStack Block Storage back end may present one or more logical storage resource pools from which OpenStack Block Storage will select as a storage location when provisioning volumes.
In releases prior to Juno, the NetApp unified driver contained some "scheduling" logic that determined which NetApp storage container (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for E-Series) that a new OpenStack Block Storage volume would be placed into.
With the introduction of pools, all scheduling logic is performed completely within the OpenStack Block Storage scheduler, as each NetApp storage container is directly exposed to the OpenStack Block Storage scheduler as a storage pool; whereas previously, the NetApp unified driver presented an aggregated view to the scheduler and made a final placement decision as to which NetApp storage container the OpenStack Block Storage volume would be provisioned into.

2.2.13.1. NetApp clustered Data ONTAP storage family

The NetApp clustered Data ONTAP storage family represents a configuration group which provides OpenStack compute instances access to clustered Data ONTAP storage systems. At present it can be configured in OpenStack Block Storage to work with iSCSI and NFS storage protocols.
The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
Table 2.13. Description of NetApp cDOT iSCSI driver configuration options
Configuration option = Default value Description
[DEFAULT]
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
Note
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
2.2.13.1.2. NetApp NFS configuration for clustered Data ONTAP
The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol.
The NFS configuration for clustered Data ONTAP is a direct interface from OpenStack Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configure the volume driver, storage family, and storage protocol to NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
Table 2.14. Description of NetApp cDOT NFS driver configuration options
Configuration option = Default value Description
[DEFAULT]
expiry_thres_minutes = 720 (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_copyoffload_tool_path = None (StrOpt) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
thres_avl_size_perc_start = 20 (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Table 2.19, “Description of NFS storage configuration options”.
Note
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
NetApp NFS Copy Offload client
A feature was added in the Icehouse release of the NetApp unified driver that enables Image Service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image Service are configured to use the NetApp NFS Copy Offload client, a controller-side copy will be attempted before reverting to downloading the image from the Image Service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services. This is due to the copy operation being performed completely within the storage cluster.
The NetApp NFS Copy Offload client can be used in either of the following scenarios:
  • The Image Service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image Service. Both FlexVols must be located within the same cluster.
  • The source image from the Image Service has already been cached in an NFS image cache within a Block Storage backend. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.
To use this feature, you must configure the Image Service as follows (a combined example follows this list):
  • Set the default_store configuration option to file.
  • Set the filesystem_store_datadir configuration option to the path to the Image Service NFS export.
  • Set the show_image_direct_url configuration option to True.
  • Set the show_multiple_locations configuration option to True.
    Important
    If configured without the proper policy settings, a non-admin user of the Image Service can replace active image data (that is, switch out a current image without other users knowing). See the OSSN announcement (recommended actions) for configuration information: https://wiki.openstack.org/wiki/OSSN/OSSN-0065
  • Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image Service, similar to:
    {
        "share_location": "nfs://192.168.0.1/myGlanceExport",
        "mount_point": "/var/lib/glance/images",
        "type": "nfs"
    }
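Taken together, the Image Service settings above might look like the following in glance-api.conf; the paths are illustrative, and depending on the Image Service release the store options may live in a [glance_store] section instead of [DEFAULT]:
[DEFAULT]
default_store = file
filesystem_store_datadir = /var/lib/glance/images
show_image_direct_url = True
show_multiple_locations = True
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json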
To use this feature, you must configure the Block Storage service as follows (see the example after this list):
  • Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp Copy Offload binary.
  • Set the glance_api_version configuration option to 2.
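For example (the binary path is illustrative):
netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64
glance_api_version = 2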
Important
This feature requires that:
  • The storage system must have Data ONTAP v8.2 or greater installed.
  • The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
  • To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
Tip
To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, please visit the Utility Toolchest page at the NetApp Support portal (login is required).
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
Extra specs enable vendors to specify extra filter criteria that the Block Storage scheduler uses when it determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with OpenStack Block Storage volume types to ensure that OpenStack Block Storage volumes are created on storage back ends that have certain properties. For example, when you configure QoS, mirroring, or compression for a storage back end.
Extra specs are associated with OpenStack Block Storage volume types, so that when users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements. For example, the back ends have the available space or extra specs. You can use the specs in the following table when you define OpenStack Block Storage volume types by using the cinder type-key command (see the example after the table).
Table 2.15. Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP
Extra spec Type Description
netapp_raid_type String Limit the candidate volume list based on one of the following raid types: raid4, raid_dp.
netapp_disk_type String Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp:qos_policy_group[a] String Specify the name of a QoS policy group, which defines measurable Service Level Objectives, to apply to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object within Data ONTAP is defined before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume.
netapp_mirrored Boolean Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored[b] Boolean Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup Boolean Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup[b] Boolean Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression Boolean Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression[b] Boolean Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned Boolean Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned[b] Boolean Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
[a] Please note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
[b] In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
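For example, using the specs in Table 2.15, a volume type that requests mirrored, SSD-backed storage could be defined as follows; the type name is illustrative:
$ cinder type-create netapp-gold
$ cinder type-key netapp-gold set netapp_mirrored=true netapp_disk_type=SSD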

2.2.13.2. NetApp Data ONTAP operating in 7-Mode storage family

The NetApp Data ONTAP operating in 7-Mode storage family represents a configuration group that provides OpenStack compute instances access to 7-Mode storage systems. At present, it can be configured in OpenStack Block Storage to work with the iSCSI and NFS storage protocols.
The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for provisioning and managing the SAN block storage entity, that is, a LUN that can be accessed using the iSCSI protocol.
The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack to a Data ONTAP operating in 7-Mode storage system and does not require additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
Table 2.16. Description of NetApp 7-Mode iSCSI driver configuration options
Configuration option = Default value Description
[DEFAULT]
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the cinder.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.
netapp_volume_list = None (StrOpt) This option is only utilized when the storage protocol is configured to use iSCSI or FC. This option is used to restrict provisioning to the specified controller volumes. Specify the value of this option to be a comma separated list of NetApp controller volume names to be used for provisioning.
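For example, to restrict iSCSI provisioning to two specific controller volumes (the volume names below are placeholders), you could add the following to the back end's configuration:
netapp_volume_list = cindervol1,cindervol2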
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to a Data ONTAP operating in 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by that storage system. The volumes can then be accessed using the NFS protocol.
The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack Block Storage to the Data ONTAP operating in 7-Mode instance and, as such, does not require any additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
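The file referenced by nfs_shares_config lists the NFS exports provided by the 7-Mode system, one per line; the address and export paths below are placeholders:
# cat /etc/cinder/nfs_shares
192.168.1.10:/vol/cinder_nfs_vol1
192.168.1.10:/vol/cinder_nfs_vol2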
Table 2.17. Description of NetApp 7-Mode NFS driver configuration options
Configuration option = Default value Description
[DEFAULT]
expiry_thres_minutes = 720 (IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the cinder.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (StrOpt) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.
thres_avl_size_perc_start = 20 (IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. For a description of these, see Table 2.19, “Description of NFS storage configuration options”.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

2.2.13.3. NetApp E-Series storage family

The NetApp E-Series storage family represents a configuration group that provides OpenStack compute instances access to E-Series storage systems. At present, it can be configured in OpenStack Block Storage to work with the iSCSI storage protocol.
2.2.13.3.1. NetApp iSCSI configuration for E-Series
The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems for provisioning and managing the SAN block storage entity; that is, a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for E-Series is an interface from OpenStack Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers.
Multipath through DM-MP (device-mapper multipath) is required when using the OpenStack Block Storage driver for E-Series. For OpenStack Block Storage and OpenStack Compute to take advantage of multiple paths, the following configuration options must be set correctly (see the sketch after this list):
  • The use_multipath_for_image_xfer option should be set to True in the cinder.conf file within the driver-specific stanza (for example, [myDriver]).
  • The iscsi_use_multipath option should be set to True in the nova.conf file within the [libvirt] stanza.
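The following is a minimal sketch of the two settings; the back-end stanza name [eseries_iscsi] in cinder.conf is illustrative only:
# cinder.conf
[eseries_iscsi]
use_multipath_for_image_xfer = True

# nova.conf
[libvirt]
iscsi_use_multipath = True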
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True
Note
To use the E-Series driver, you must override the default value of netapp_storage_family with eseries.
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
Table 2.18. Description of NetApp E-Series driver configuration options
Configuration option = Default value Description
[DEFAULT]
netapp_controller_ips = None (StrOpt) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning.
netapp_eseries_host_type = linux_dm_mp (StrOpt) This option is used to define how the controllers in the E-Series storage array will work with the particular operating system on the hosts that are connected to it.
netapp_login = None (StrOpt) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (StrOpt) The name of the cinder.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (StrOpt) Password for the administrative user account specified in the netapp_login option.
netapp_sa_password = None (StrOpt) Password for the NetApp E-Series storage array.
netapp_server_hostname = None (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_storage_family = ontap_cluster (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_pools = None (StrOpt) This option is used to restrict provisioning to the specified storage pools. Only dynamic disk pools are currently supported. Specify the value of this option to be a comma separated list of disk pool names to be used for provisioning.
netapp_transport_type = http (StrOpt) The transport protocol used when communicating with the storage system or proxy server.
netapp_webservice_path = /devmgr/v2 (StrOpt) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application.
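For reference, the driver combines netapp_transport_type, netapp_server_hostname, netapp_server_port, and netapp_webservice_path into the proxy URL; with the sample values shown earlier, the resulting URL would look something like:
http://myhostname:80/devmgr/v2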
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

2.2.13.4. Upgrading prior NetApp drivers to the NetApp unified driver

NetApp introduced a new unified block storage driver in Havana for configuring different storage families and storage protocols. This requires defining an upgrade path for NetApp drivers that existed in releases prior to Havana. This section covers the upgrade configuration from those drivers to the new unified configuration and lists the deprecated NetApp drivers.
2.2.13.4.1. Upgraded NetApp drivers
This section describes how to update OpenStack Block Storage configuration from a pre-Havana release to the unified driver format.
Driver upgrade configuration
  1. NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
  2. NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier).
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
    NetApp unified driver configuration.
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
  3. NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier)
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
    NetApp unified driver configuration
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = iscsi
  4. NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier)
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
    NetApp unified driver configuration
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = nfs
2.2.13.4.2. Deprecated NetApp drivers
This section lists the NetApp drivers in earlier releases that are deprecated in Havana.
  1. NetApp iSCSI driver for clustered Data ONTAP.
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
  2. NetApp NFS driver for clustered Data ONTAP.
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
  3. NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller.
    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
  4. NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller.
    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
Note
For support information on deprecated NetApp drivers in the Havana release, visit the NetApp OpenStack Deployment and Operations Guide.

2.2.14. NFS driver

The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local.

2.2.14.1. How the NFS driver works

The NFS driver, and other drivers based on it, work quite differently from a traditional block storage driver.
The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. This works in a similar way to QEMU, which stores instances in the /var/lib/nova/instances directory.

2.2.14.2. Enable the NFS driver and related options

To use Cinder with the NFS driver, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
The following table contains the options supported by the NFS driver.
Table 2.19. Description of NFS storage configuration options
Configuration option = Default value Description
[DEFAULT]
nfs_mount_attempts = 3 (IntOpt) The number of attempts to mount nfs shares before raising an error. At least one attempt will be made to mount an nfs share, regardless of the value specified.
nfs_mount_options = None (StrOpt) Mount options passed to the nfs client. See the nfs man page for details.
nfs_mount_point_base = $state_path/mnt (StrOpt) Base dir containing mount points for nfs shares.
nfs_oversub_ratio = 1.0 (FloatOpt) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid.
nfs_shares_config = /etc/cinder/nfs_shares (StrOpt) File with the list of available nfs shares
nfs_sparsed_volumes = True (BoolOpt) Create volumes as sparse files which take no space. If set to False, a volume is created as a regular file; in that case, volume creation takes significantly longer.
nfs_used_ratio = 0.95 (FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
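To illustrate how the two ratios interact, assume a 100 GB NFS export; the values below are illustrative only. With these settings, new volumes stop being allocated once roughly 95 GB of the export is actually in use, while nfs_oversub_ratio = 2.0 permits roughly twice the export's capacity to be provisioned as sparse volumes:
nfs_used_ratio = 0.95
nfs_oversub_ratio = 2.0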
Note
As of the Icehouse release, the NFS driver (and other drivers based off it) will attempt to mount shares using version 4.1 of the NFS protocol (including pNFS). If the mount attempt is unsuccessful due to a lack of client or server support, a subsequent mount attempt that requests the default behavior of the mount.nfs command will be performed. On most distributions, the default behavior is to attempt mounting first with NFS v4.0, then silently fall back to NFS v3.0 if necessary. If the nfs_mount_options configuration option contains a request for a specific version of NFS to be used, or if specific options are specified in the shares configuration file specified by the nfs_shares_config configuration option, the mount will be attempted as requested with no subsequent attempts.
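For example, to pin mounts to NFS version 3 instead of relying on this negotiation, you could set the following (illustrative) option in cinder.conf:
nfs_mount_options = nfsvers=3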

2.2.14.3. How to use the NFS driver

  1. Ensure that you have access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:
    • 192.168.1.200:/storage
    • 192.168.1.201:/storage
    • 192.168.1.202:/storage
    This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required; one is usually enough.
  2. Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
    # cat /etc/cinder/shares.txt
    192.168.1.200:/storage
    192.168.1.201:/storage
    192.168.1.202:/storage
    Comments are allowed in this file. They begin with a #.
  3. Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in shares.txt. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.
  4. Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in shares.txt. The name of each directory is a hashed name:
    # ls /var/lib/cinder/nfs/
    ...
    46c5db75dc3a3a50a10bfd1a456a9f3f
    ...
  5. You can now create volumes as you normally would:
    $ nova volume-create --display-name myvol 5
    # ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
    volume-a8862558-e6d6-4648-b5df-bb84f31c8935
    This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.

NFS driver notes

  • cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed as well as potentially more than one NFS server.
  • Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Please test accordingly.
  • Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files.
    Note
    Regular IO flushing and syncing still applies.

2.2.15. SolidFire

The SolidFire cluster is a high-performance, all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set, and modify during operation, specific QoS levels on a per-volume basis (see the example after Table 2.20). The SolidFire cluster offers this along with deduplication, compression, and an architecture that takes full advantage of SSDs.
To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf file as follows:
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182         # the address of your MVIP
san_login = sfadmin           # your cluster admin login
san_password = sfpassword     # your cluster admin password
sf_account_prefix = ''        # prefix for tenant account creation on solidfire cluster (see warning below)
Warning
The SolidFire driver creates a unique account prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant that accesses the cluster through the Volume API. Unfortunately, this account formation results in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. HA installations can return an Account Not Found error because the call to the SolidFire cluster is not always going to be sent from the same node. In installations where the cinder-volume service moves to a new node, the same issue can occur when you perform operations on existing volumes, such as clone, extend, delete, and so on.
Note
Set the sf_account_prefix option to an empty string ('') in the cinder.conf file. This setting results in unique accounts being created on the SolidFire cluster, but the accounts are prefixed with the tenant-id or any unique identifier that you choose and are independent of the host where the cinder-volume service resides.
Table 2.20. Description of SolidFire driver configuration options
Configuration option = Default value Description
[DEFAULT]
sf_account_prefix = None (StrOpt) Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (previous default behavior). The default is no prefix.
sf_allow_template_caching = True (BoolOpt) Create an internal cache of copies of images when a bootable volume is created, to eliminate fetching from Glance and qemu conversion on subsequent calls.
sf_allow_tenant_qos = False (BoolOpt) Allow tenants to specify QoS on create.
sf_api_port = 443 (IntOpt) SolidFire API port. Useful if the device API is behind a proxy on a different port.
sf_emulate_512 = True (BoolOpt) Set 512-byte emulation on volume creation.
sf_template_account_name = openstack-vtemplate (StrOpt) Account name on the SolidFire cluster to use as the owner of template/cache volumes (created if it does not exist).
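Per-volume QoS is typically applied through volume type extra specs. The following is a minimal sketch; the type name and IOPS values are illustrative, and the qos:minIOPS, qos:maxIOPS, and qos:burstIOPS keys assume the SolidFire driver's extra-spec support:
$ cinder type-create solidfire-gold
$ cinder type-key solidfire-gold set qos:minIOPS=1000 qos:maxIOPS=5000 qos:burstIOPS=10000
$ cinder create --volume-type solidfire-gold --display-name fast-vol 20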


[1] The configuration file location may differ.
[2] There is no relative precedence or weight among these four labels.
[3] Do not confuse differentiated services with the OpenStack Block Storage volume services.
[4] It is okay to manage multiple HUS arrays by using multiple OpenStack Block Storage instances (or servers).
[5] The configuration file location may differ.
[6] Each of these four labels has no relative precedence or weight.
[7] The get_volume_stats() always provides the available capacity based on the combined sum of all the HDPs that are used in these services labels.