Chapter 2. Block Storage
2.1. Volume drivers
To use different volume drivers for the cinder-volume service, use the parameters described in these sections.
To set a volume driver, use the volume_driver flag. The default is:
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
2.1.1. Ceph RADOS Block Device (RBD)
RADOS
- Object Storage Device (OSD) Daemon. The storage daemon for the RADOS service, which interacts with the OSD (the physical or logical storage unit for your data). You must run this daemon on each server in your cluster. Each OSD can have an associated hard disk; for performance, pool your disks with RAID arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
- Meta-Data Server (MDS). Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
- Monitor (MON). A lightweight daemon that handles all communications with external applications and clients. It also provides consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you must run at least three ceph-mon daemons on separate servers.
You can use Btrfs for testing, development, and any non-critical deployments. Btrfs has the correct feature set and roadmap to serve Ceph in the long term, but XFS and ext4 provide the necessary stability for today's deployments. If you use Btrfs, ensure that you use the correct version (see Ceph Dependencies).
Ways to store, use, and expose data
- RADOS. Use as an object, default storage mechanism.
- RBD. Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image.
- CephFS. Use as a file, POSIX-compliant file system.
- RADOS Gateway. OpenStack Object Storage and Amazon-S3 compatible RESTful interface (see RADOS_Gateway).
- librados, and its related C/C++ bindings.
- RBD and QEMU-RBD. Linux kernel and QEMU block devices that stripe data across multiple objects.
Driver options
The volume_tmp_dir option has been deprecated and replaced by image_conversion_dir.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
rados_connect_timeout = -1 | (IntOpt) Timeout value (in seconds) used when connecting to the Ceph cluster. If the value is < 0, no timeout is set and the default librados value is used. |
rados_connection_interval = 5 | (IntOpt) Interval value (in seconds) between connection retries to the Ceph cluster. |
rados_connection_retries = 3 | (IntOpt) Number of retries if the connection to the Ceph cluster fails. |
rbd_ceph_conf = | (StrOpt) Path to the Ceph configuration file. |
rbd_cluster_name = ceph | (StrOpt) The name of the Ceph cluster. |
rbd_flatten_volume_from_snapshot = False | (BoolOpt) Flatten volumes created from snapshots to remove the dependency from the volume to the snapshot. |
rbd_max_clone_depth = 5 | (IntOpt) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning. |
rbd_pool = rbd | (StrOpt) The RADOS pool where rbd volumes are stored. |
rbd_secret_uuid = None | (StrOpt) The libvirt UUID of the secret for the rbd_user volumes. |
rbd_store_chunk_size = 4 | (IntOpt) Volumes will be chunked into objects of this size (in megabytes). |
rbd_user = None | (StrOpt) The RADOS client name for accessing rbd volumes. Only set when using cephx authentication. |
volume_tmp_dir = None | (StrOpt) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated; use image_conversion_dir instead. |
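A minimal sketch of an RBD back end in cinder.conf that uses these options, assuming a pool named volumes, a cephx user named cinder, and a libvirt secret UUID that is already defined (all values are illustrative):
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5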
2.1.2. Dell EqualLogic volume driver
Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Clone a volume.
- Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools and multiple pools on a single array.
Configure the driver in the /etc/cinder/cinder.conf file (see Section 2.3, “Block Storage sample configuration files” for reference).
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
eqlx_chap_login = admin | (StrOpt) Existing CHAP account name. Note that this option is deprecated in favour of "chap_username" as specified in cinder/volume/driver.py and will be removed in the next release. |
eqlx_chap_password = password | (StrOpt) Password for the specified CHAP account name. Note that this option is deprecated in favour of "chap_password" as specified in cinder/volume/driver.py and will be removed in the next release. |
eqlx_cli_max_retries = 5 | (IntOpt) Maximum retry count for reconnection. Default is 5. |
eqlx_cli_timeout = 30 | (IntOpt) Timeout for the Group Manager cli command execution. Default is 30. Note that this option is deprecated in favour of "ssh_conn_timeout" as specified in cinder/volume/drivers/san/san.py and will be removed in the M release. |
eqlx_group_name = group-0 | (StrOpt) Group name to use for creating volumes. Defaults to "group-0". |
eqlx_pool = default | (StrOpt) Pool in which volumes will be created. Defaults to "default". |
eqlx_use_chap = False | (BoolOpt) Use CHAP authentication for targets. Note that this option is deprecated in favour of "use_chap_auth" as specified in cinder/volume/driver.py and will be removed in the next release. |
The following /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:
Example 2.1. Default (single-instance) configuration
- IP_EQLX: The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value.
- SAN_UNAME: The user name to log in to the Group manager via SSH at the san_ip. The default user name is grpadmin.
- SAN_PW: The corresponding password of SAN_UNAME. Not used when san_private_key is set. The default password is password.
- EQLX_GROUP: The group to be used for a pool where the Block Storage service will create volumes and snapshots. The default group is group-0.
- EQLX_POOL: The pool where the Block Storage service will create volumes and snapshots. The default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
- EQLX_UNAME: The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. The default account name is chapadmin.
- EQLX_PW: The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so you must set this password manually.
- SAN_KEY_PATH (optional): The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set. There is no default value.
To enable thin provisioning for SAN volumes, use the san_thin_provision = true setting.
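A minimal sketch of the single-instance configuration described above; the placeholder values (IP_EQLX, SAN_UNAME, and so on) must be replaced with site-specific values, and the driver path shown may differ between releases:
[DEFAULT]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL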
Example 2.2. Multi back-end Dell EqualLogic configuration
- Thin provisioning for SAN volumes is enabled (san_thin_provision = true). This is recommended when setting up Dell EqualLogic back ends.
- Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
- The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
- The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout in seconds for CLI commands over SSH.
- IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Group of backend1 and backend2 through SSH, respectively.
2.1.3. Dell Storage Center Fibre Channel and iSCSI drivers
The Dell Storage Center volume drivers manage Storage Center arrays through the Dell Enterprise Manager. Connection settings and driver options are defined in the cinder.conf file.
Supported operations
- Create, delete, attach (map), and detach (unmap) volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
Extra spec options
The extra spec storagetype:storageprofile can be set to the name of a Storage Profile on the Storage Center, allowing Storage Profiles other than the default to be used. For example, to define volume types that use the High Priority and Low Priority Storage Profiles:
$ cinder type-create "GoldVolumeType"
$ cinder type-key "GoldVolumeType" set storagetype:storageprofile=highpriority
$ cinder type-create "BronzeVolumeType"
$ cinder type-key "BronzeVolumeType" set storagetype:storageprofile=lowpriority
iSCSI configuration
Example 2.3. Sample iSCSI Configuration
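A hedged sketch of what such an iSCSI configuration typically contains; the Enterprise Manager address, credentials, target IP, and back-end name are all illustrative and the driver path may differ between releases:
[dell]
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
san_ip = 172.23.8.101
san_login = Admin
san_password = secret
dell_sc_ssn = 64702
dell_sc_api_port = 3033
dell_sc_server_folder = openstack
dell_sc_volume_folder = openstack
iscsi_ip_address = 192.168.0.20
iscsi_port = 3260
volume_backend_name = delliscsi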
Fibre Channel configuration
Example 2.4. Sample FC configuration
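A hedged sketch of an equivalent Fibre Channel configuration, with the same illustrative Enterprise Manager settings and an assumed FC driver path:
[dell]
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver
san_ip = 172.23.8.101
san_login = Admin
san_password = secret
dell_sc_ssn = 64702
dell_sc_api_port = 3033
dell_sc_server_folder = openstack
dell_sc_volume_folder = openstack
volume_backend_name = dellfc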
Driver options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
dell_sc_api_port = 3033 | (IntOpt) Dell API port |
dell_sc_server_folder = openstack | (StrOpt) Name of the server folder to use on the Storage Center |
dell_sc_ssn = 64702 | (IntOpt) Storage Center System Serial Number |
dell_sc_verify_cert = False | (BoolOpt) Enable HTTPS SC certificate verification. |
dell_sc_volume_folder = openstack | (StrOpt) Name of the volume folder to use on the Storage Center |
2.1.4. EMC ScaleIO Block Storage driver configuration
2.1.4.1. Support matrix
2.1.4.2. Deployment prerequisites
- ScaleIO Gateway must be installed and accessible in the network. For installation steps, refer to the Preparing the installation Manager and the Gateway section in ScaleIO Deployment Guide. See Section 2.1.4.2.1, “Official documentation”.
- ScaleIO Data Client (SDC) must be installed on all OpenStack nodes.
2.1.4.2.1. Official documentation
- Go to the ScaleIO product documentation page.
- From the left-side panel, select the relevant version (1.32 or 2.0).
- Search for "ScaleIO Installation Guide 1.32" or "ScaleIO 2.0 Deployment Guide" accordingly.
2.1.4.3. Supported operations
- Create, delete, clone, attach, and detach volumes
- Create and delete volume snapshots
- Create a volume from a snapshot
- Copy an image to a volume
- Copy a volume to an image
- Extend a volume
- Get volume statistics
- Manage and unmanage a volume
- Create, list, update, and delete consistency groups
- Create, list, update, and delete consistency group snapshots
2.1.4.4. ScaleIO QoS support
QoS support for the ScaleIO driver includes the ability to set the following capabilities in the cinder.api.contrib.qos_specs_manage QoS specs extension module:
- maxIOPS
- maxBWS
These QoS keys must be created and associated with a volume type.
- maxIOPS: The QoS I/O rate limit. If not set, the I/O rate has no limit. The setting must be larger than 10.
- maxBWS: The QoS I/O bandwidth rate limit in KBs. If not set, the I/O bandwidth rate has no limit. The setting must be a multiple of 1024.
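A hedged example of wiring these keys to a volume type with the standard cinder QoS commands; the spec name, type name, limits, and the IDs passed to qos-associate are illustrative:
$ cinder qos-create sio_qos maxIOPS=5000 maxBWS=10240
$ cinder type-create sio_gold
$ cinder qos-associate <qos_specs_id> <volume_type_id>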
2.1.4.5. ScaleIO thin provisioning support
sio:provisioning_type = thin\thick
2.1.4.6. ScaleIO Block Storage driver configuration
Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in the case of a single back end, or under a separate section in the case of multiple back ends (for example, [ScaleIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
2.1.4.6.1. ScaleIO driver name
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
2.1.4.6.2. ScaleIO MDM server IP
san_ip = ScaleIO GATEWAY IP
2.1.4.6.3. ScaleIO Protection Domain name
sio_protection_domain_name = ScaleIO Protection Domain
2.1.4.6.4. ScaleIO Storage Pool name
sio_storage_pool_name = ScaleIO Storage Pool
2.1.4.6.5. ScaleIO Storage Pools
sio_storage_pools = Comma-separated list of protection domain:storage pool name
2.1.4.6.6. ScaleIO user credentials
san_login = ScaleIO username
san_password = ScaleIO password
2.1.4.7. Multiple back ends
2.1.4.8. Configuration example
Configure the driver in the cinder.conf file by editing the necessary parameters as follows:
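A minimal sketch combining the parameters listed in the preceding subsections; the gateway address, credentials, back-end name, and pool list are illustrative:
[DEFAULT]
enabled_backends = scaleio

[scaleio]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
san_ip = GATEWAY_IP
san_login = SIO_USER
san_password = SIO_PASSWD
sio_storage_pools = Domain1:Pool1,Domain2:Pool2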
2.1.4.9. Configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
sio_force_delete = False | (BoolOpt) Whether to allow force delete. |
sio_protection_domain_id = None | (StrOpt) Protection domain id. |
sio_protection_domain_name = None | (StrOpt) Protection domain name. |
sio_rest_server_port = 443 | (StrOpt) REST server port. |
sio_round_volume_capacity = True | (BoolOpt) Whether to round volume capacity. |
sio_server_certificate_path = None | (StrOpt) Server certificate path. |
sio_storage_pool_id = None | (StrOpt) Storage pool id. |
sio_storage_pool_name = None | (StrOpt) Storage pool name. |
sio_storage_pools = None | (StrOpt) Storage pools. |
sio_unmap_volume_before_deletion = False | (BoolOpt) Whether to unmap volume before deletion. |
sio_verify_server_certificate = False | (BoolOpt) Whether to verify server certificate. |
2.1.5. EMC VMAX iSCSI and FC drivers
The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays under OpenStack Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.
2.1.5.1. System requirements
2.1.5.2. Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
- Retype a volume.
- Create a volume from a snapshot.
- FAST automated storage tiering policy.
- Dynamic masking view creation.
- Striped volume creation.
2.1.5.3. Set up the VMAX drivers
Procedure 2.1. To set up the EMC VMAX drivers
- Install the python-pywbem package for your distribution. To install the python-pywbem package for Red Hat Enterprise Linux, CentOS, or Fedora:
# yum install pywbem
- Download SMI-S from PowerLink and install it. Add your VMAX arrays to SMI-S. For information, see Section 2.1.5.3.1, “Set up SMI-S” and the SMI-S release notes.
- Change configuration files. See Section 2.1.5.3.2, “cinder.conf configuration file” and Section 2.1.5.3.3, “cinder_emc_config_CONF_GROUP_ISCSI.xml configuration file”.
- Configure connectivity. For the FC driver, see Section 2.1.5.3.4, “FC Zoning with VMAX”. For the iSCSI driver, see Section 2.1.5.3.5, “iSCSI with VMAX”.
2.1.5.3.1. Set up SMI-S
SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe.
2.1.5.3.2. cinder.conf configuration file
Make the following changes in /etc/cinder/cinder.conf.
Add the following entries, where 10.10.61.45 is the IP address of the VMAX iSCSI target:
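A hedged sketch of the settings this section describes; the IP address, configuration file paths, and back-end names follow the text, while the driver module paths are assumptions that may differ between releases:
enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC

[CONF_GROUP_ISCSI]
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name = ISCSI_backend

[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name = FC_backend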
In this example, two back-end configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.
Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:
$ cinder type-create VMAX_ISCSI
$ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
$ cinder type-create VMAX_FC
$ cinder type-key VMAX_FC set volume_backend_name=FC_backend
By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.
Restart the cinder-volume service.
2.1.5.3.3. cinder_emc_config_CONF_GROUP_ISCSI.xml configuration file
Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.
- EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server which is packaged with SMI-S.
- EcomUserName and EcomPassword are credentials for the ECOM server.
- PortGroups supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC).
- The Array tag holds the unique VMAX array serial number.
- The Pool tag holds the unique pool name within a given array. For backends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For backends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.
- The FastPolicy tag conveys the name of the FAST Policy to be used. By including this tag, volumes managed by this backend are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
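A hedged sketch of a cinder_emc_config_CONF_GROUP_ISCSI.xml file using the tags described above; the ECOM address, credentials, port group names, array serial number, pool, and policy names are all illustrative:
<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <EcomServerPort>00</EcomServerPort>
  <EcomUserName>user1</EcomUserName>
  <EcomPassword>password1</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>FC_GOLD1</Pool>
  <FastPolicy>GOLD1</FastPolicy>
</EMC>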
2.1.5.3.4. FC Zoning with VMAX
2.1.5.3.5. iSCSI with VMAX
- Make sure the iscsi-initiator-utils package is installed on the host (use apt-get, zypper, or yum, depending on Linux flavor).
- Verify host is able to ping VMAX iSCSI target ports.
2.1.5.4. VMAX masking view and group naming info
Masking view names
OS-[shortHostName][poolName]-I-MV (for Masking Views using iSCSI)
OS-[shortHostName][poolName]-F-MV (for Masking Views using FC)
Initiator group names
OS-[shortHostName]-I-IG (for iSCSI initiators)
OS-[shortHostName]-F-IG (for Fibre Channel initiators)
FA port groups
Storage group names
OS-[shortHostName][poolName]-I-SG (attached over iSCSI)
OS-[shortHostName][poolName]-F-SG (attached over Fibre Channel)
2.1.5.5. Concatenated or striped volumes
Define the extra spec storagetype:stripecount for the volume type, representing the number of meta members in the striped volume. The example below means that each volume created under the GoldStriped volume type will be striped and made up of 4 meta members.
$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4
2.1.6. EMC VNX driver
EMCCLIISCSIDriver (VNX iSCSI driver) and EMCCLIFCDriver (VNX FC driver) are separately based on the ISCSIDriver and FCDriver defined in Block Storage.
2.1.6.1. Overview
2.1.6.1.1. System requirements
- VNX Operational Environment for Block version 5.32 or higher.
- VNX Snapshot and Thin Provisioning license should be activated for VNX.
- Navisphere CLI v7.32 or higher is installed along with the driver.
2.1.6.1.2. Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Clone a volume.
- Extend a volume.
- Migrate a volume.
- Retype a volume.
- Get volume statistics.
- Create and delete consistency groups.
- Create, list, and delete consistency group snapshots.
- Modify consistency groups.
- Efficient non-disruptive volume backup.
2.1.6.2. Preparation
2.1.6.2.2. Check array software
Feature | Software Required |
---|---|
All | ThinProvisioning |
All | VNXSnapshots |
FAST cache support | FASTCache |
Create volume with type compressed | Compression |
Create volume with type deduplicated | Deduplication |
2.1.6.2.3. Install EMC VNX driver
Both EMCCLIISCSIDriver and EMCCLIFCDriver are included in the Block Storage installer package:
- emc_vnx_cli.py
- emc_cli_fc.py (for EMCCLIFCDriver)
- emc_cli_iscsi.py (for EMCCLIISCSIDriver)
2.1.6.2.4. Network configuration
You can use the initiator_auto_registration=True configuration to avoid registering the ports manually. Check the details of this configuration in Section 2.1.6.3, “Backend configuration” for reference.
2.1.6.3. Backend configuration
Make the following changes in the /etc/cinder/cinder.conf file:
2.1.6.3.1. Minimum configuration
Change EMCCLIFCDriver to EMCCLIISCSIDriver if you are using the iSCSI driver.
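A minimal sketch of such a back-end configuration; the array addresses, credentials, and back-end name are illustrative:
[DEFAULT]
enabled_backends = vnx_array1

[vnx_array1]
san_ip = 10.10.72.41
san_secondary_ip = 10.10.72.42
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
initiator_auto_registration = True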
2.1.6.3.2. Multi-backend configuration
Change EMCCLIFCDriver to EMCCLIISCSIDriver if you are using the iSCSI driver.
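A hedged sketch of a two-back-end layout; the back-end names, pool names, addresses, and credentials are illustrative:
[DEFAULT]
enabled_backends = backendA, backendB

[backendA]
storage_vnx_pool_names = Pool_01_SAS
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver

[backendB]
storage_vnx_pool_names = Pool_02_SAS
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver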
2.1.6.3.3. Required configurations
2.1.6.3.3.1. IP of the VNX Storage Processors
san_ip = <IP of VNX Storage Processor A>
san_secondary_ip = <IP of VNX Storage Processor B>
2.1.6.3.3.2. VNX login credentials
- Use plain text username and password.
san_login = <VNX account with administrator role>
san_password = <password for VNX account>
storage_vnx_authentication_type = global
Valid values for storage_vnx_authentication_type are: global (default), local, and ldap.
- Use a security file:
storage_vnx_security_file_dir=<path to security file>
2.1.6.3.3.3. Path to your Unisphere CLI
naviseccli_path = /opt/Navisphere/bin/naviseccli
2.1.6.3.3.4. Driver name
- For the FC driver, add the following option:
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
- For the iSCSI driver, add the following option:
volume_driver=cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
2.1.6.3.4. Optional configurations
2.1.6.3.4.1. VNX pool names
storage_vnx_pool_names = pool 1, pool 2
2.1.6.3.4.2. Initiator auto registration
When initiator_auto_registration=True, the driver will automatically register initiators to all working target ports of the VNX array during volume attaching (the driver will skip those initiators that have already been registered) if the option io_port_list is not specified in cinder.conf.
When the user configures io_port_list, the driver will only register the initiator to the ports specified in the list and will only return target ports which belong to the target ports in the io_port_list, instead of all target ports.
- Example for FC ports:
io_port_list=a-1,B-3
a or B is the Storage Processor, and numbers 1 and 3 are the Port IDs.
- Example for iSCSI ports:
io_port_list=a-1-0,B-3-0
a or B is the Storage Processor, the first numbers 1 and 3 are the Port IDs, and the second number 0 is the Virtual Port ID.
- Rather than being deregistered, the registered ports will simply be bypassed, whether or not they are in io_port_list.
- The driver will raise an exception if ports in io_port_list do not exist in the VNX during startup.
2.1.6.3.4.3. Force delete volumes in storage group
Some available volumes may remain in a storage group on the VNX array due to OpenStack timeout issues, but the VNX array does not allow the user to delete volumes which are in a storage group. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete the available volumes in this situation.
When force_delete_lun_in_storagegroup=True is set in the back-end section, the driver will move the volumes out of the storage groups and then delete them if the user tries to delete volumes that remain in a storage group on the VNX array.
The default value of force_delete_lun_in_storagegroup is False.
2.1.6.3.4.4. Over subscription in thin provisioning
The option max_over_subscription_ratio in the back-end section is the ratio of provisioned capacity over total capacity.
If max_over_subscription_ratio is greater than 1.0, the provisioned capacity can exceed the total capacity. The default value of max_over_subscription_ratio is 20.0, which means the provisioned capacity can be 20 times the total physical capacity.
2.1.6.3.4.5. Storage group automatic deletion
When destroy_empty_storage_group=True, the driver will remove the empty storage group after its last volume is detached. For data safety, it is not suggested to set destroy_empty_storage_group=True unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required for operation synchronization for this behavior.
2.1.6.3.4.6. Initiator auto deregistration
When initiator_auto_deregistration=True is set, the driver will deregister all the initiators of the host after its storage group is deleted.
2.1.6.3.4.7. FC SAN auto zoning
Set zoning_mode to fabric in the DEFAULT section to enable this feature. For ZoneManager configuration, refer to the Block Storage official guide.
2.1.6.3.4.8. Volume number threshold
The default value of check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end will check the limit and will report 0 free capacity to the scheduler if the limit is reached, so the scheduler will be able to skip this kind of pool-based back end when it runs out of pool volume numbers.
2.1.6.3.4.9. iSCSI initiators
The option iscsi_initiators is a dictionary of IP addresses of the iSCSI initiator ports on OpenStack Nova/Cinder nodes which want to connect to VNX via iSCSI. If this option is configured, the driver will leverage this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal will be chosen in a relatively random way.
In the example below, the driver will connect host1 with 10.0.0.1 and 10.0.0.2, and it will connect host2 with 10.0.0.3. The key name (host1 in the example) should be the output of the hostname command.
iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}
2.1.6.3.4.10. Default timeout
default_timeout = 10
2.1.6.3.4.11. Max LUNs per storage group
The option max_luns_per_storage_group specifies the maximum number of LUNs in a storage group. The default value is 255, which is also the maximum value supported by VNX.
2.1.6.3.4.12. Ignore pool full threshold
If ignore_pool_full_threshold is set to True, the driver will force LUN creation even if the full threshold of the pool is reached. The default is False.
2.1.6.4. Extra spec options
$ cinder type-create "demoVolumeType"
$ cinder type-key "demoVolumeType" set provisioning:type=thin
2.1.6.4.1. Provisioning type
- Key: provisioning:type
- Possible Values:
  - thick: Volume is fully provisioned.
    Example 2.5. Creating a thick volume type:
    $ cinder type-create "ThickVolumeType"
    $ cinder type-key "ThickVolumeType" set provisioning:type=thick thick_provisioning_support='<is> True'
  - thin: Volume is virtually provisioned.
    Example 2.6. Creating a thin volume type:
    $ cinder type-create "ThinVolumeType"
    $ cinder type-key "ThinVolumeType" set provisioning:type=thin thin_provisioning_support='<is> True'
  - deduplicated: Volume is thin and deduplication is enabled. The administrator shall go to VNX to configure the system-level deduplication settings. To create a deduplicated volume, the VNX Deduplication license must be activated on VNX, and specify deduplication_support=True to let the Block Storage scheduler find the proper volume back end.
    Example 2.7. Creating a deduplicated volume type:
    $ cinder type-create "DeduplicatedVolumeType"
    $ cinder type-key "DeduplicatedVolumeType" set provisioning:type=deduplicated deduplication_support='<is> True'
  - compressed: Volume is thin and compression is enabled. The administrator shall go to the VNX to configure the system-level compression settings. To create a compressed volume, the VNX Compression license must be activated on VNX, and use compression_support=True to let the Block Storage scheduler find a volume back end. VNX does not support creating snapshots on a compressed volume.
    Example 2.8. Creating a compressed volume type:
    $ cinder type-create "CompressedVolumeType"
    $ cinder type-key "CompressedVolumeType" set provisioning:type=compressed compression_support='<is> True'
- Default: thick
provisioning:type replaces the old spec key storagetype:provisioning. The latter will be obsoleted in the next release. If both provisioning:type and storagetype:provisioning are set in the volume type, the value of provisioning:type will be used.
2.1.6.4.2. Storage tiering support
- Key: storagetype:tiering
- Possible Values:
  - StartHighThenAuto
  - Auto
  - HighestAvailable
  - LowestAvailable
  - NoMovement
- Default: StartHighThenAuto
Use the extra spec key storagetype:tiering to set the tiering policy of a volume, and use the key fast_support='<is> True' to let the Block Storage scheduler find a volume back end which manages a VNX with the FAST license activated. The five values listed above are the supported values for the extra spec key storagetype:tiering.
Example 2.9. Creating a volume type with a tiering policy:
$ cinder type-create "ThinVolumeOnLowestAvaibleTier"
$ cinder type-key "ThinVolumeOnLowestAvaibleTier" set provisioning:type=thin storagetype:tiering=Auto fast_support='<is> True'
2.1.6.4.3. FAST cache support
- Key: fast_cache_enabled
- Possible Values: True, False
- Default: False
The volume will be created on a back end with FAST cache enabled when True is specified.
2.1.6.4.4. Snap-copy
- Key: copytype:snap
- Possible Values: True, False
- Default: False
Set copytype:snap=True in the extra specs of its volume type. Then the new volume cloned from the source, or copied from the snapshot of the source, will in fact be a snap-copy instead of a full copy. If a full copy is needed, retype or migration can be used to convert the snap-copy volume to a full-copy volume, which may be time-consuming.
cinder type-create "SnapCopy" cinder type-key "SnapCopy" set copytype:snap=True
$ cinder type-create "SnapCopy"
$ cinder type-key "SnapCopy" set copytype:snap=True
cinder metadata-show <volume>
$ cinder metadata-show <volume>
- copytype:snap=True is not allowed in the volume type of a consistency group.
- Clone and snapshot creation are not allowed on a copied volume created through the snap-copy before it is converted to a full copy.
- The number of snap-copy volumes created from a source volume is limited to 255 at one point in time.
- The source volume which has a snap-copy volume cannot be deleted.
2.1.6.4.5. Pool name
- Key: pool_name
- Possible Values: name of the storage pool managed by cinder
- Default: None
Example 2.10. Creating the volume type:
cinder type-create "HighPerf" cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41
2.1.6.4.6. Obsoleted extra specs in Mitaka
storagetype:provisioning
storagetype:pool
2.1.6.5. Advanced features
2.1.6.5.1. Read-only volumes
$ cinder readonly-mode-update <volume> True
2.1.6.5.2. Efficient non-disruptive volume backup
- Backup creation for a snap-copy volume is not allowed if the volume status is in-use, since a snapshot cannot be taken from this volume.
2.1.6.6. Best practice
2.1.6.6.1. Multipath setup
- Install multipath-tools, sysfsutils and sg3-utils on nodes hosting Nova-Compute and Cinder-Volume services. (Check the operating system manual for the system distribution for specific installation steps. For Red Hat based distributions, they should be device-mapper-multipath, sysfsutils and sg3_utils.)
- Specify use_multipath_for_image_xfer=true in cinder.conf for each FC/iSCSI back end.
- Specify iscsi_use_multipath=True in the libvirt section of nova.conf. This option is valid for both the iSCSI and FC drivers.
The multipath configuration is maintained in /etc/multipath.conf.
user_friendly_names is not specified in the configuration and thus it will take the default value no. It is NOT recommended to set it to yes because it may cause operations such as VM live migration to fail.
The script faulty_device_cleanup.py mitigates this issue when VNX iSCSI storage is used. Cloud administrators can deploy the script on all Nova-Compute nodes and use a CRON job to run the script on each Nova-Compute node periodically so that faulty devices will not stay too long. See VNX faulty device cleanup for detailed usage and the script.
2.1.6.7. Restrictions and limitations
2.1.6.7.1. iSCSI port cache
The EMC VNX iSCSI driver caches the iSCSI port information. After changing the iSCSI port configurations, the user should restart the cinder-volume service or wait a few seconds (as configured by periodic_interval in cinder.conf) before any volume attachment operation. Otherwise the attachment may fail because the old iSCSI port configurations were used.
2.1.6.7.2. No extending for volume with snapshots
VNX does not support extending a volume which has a snapshot. If the user tries to extend such a volume, its status changes to error_extending.
It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume. Otherwise, cinder upload-to-image --force True will terminate the data access of the VM instance to the volume.
2.1.6.7.4. Storage group with host names in VNX
2.1.6.7.5. EMC storage-assisted volume migration
When the user uses cinder migrate --force-host-copy False <volume_id> <host> or cinder migrate <volume_id> <host>, cinder will try to leverage the VNX's native volume migration functionality.
In the following scenarios, VNX storage-assisted volume migration will not be triggered:
- Volume migration between back ends with different storage protocols, for example, FC and iSCSI.
- Volume is to be migrated across arrays.
2.1.6.8. Appendix
2.1.6.8.1. Authenticate by security file
- Find out the Linux user id of the cinder-volume processes. Assume the service cinder-volume is running under the account cinder.
- Run su as the root user.
- In /etc/passwd, change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash (this temporary change is to make step 4 work).
- Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the -secfilepath switch is used to specify the location to save the security file.
# su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath <location>'
- Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd.
- Remove the credentials options san_login, san_password and storage_vnx_authentication_type from cinder.conf (normally /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path of your security file generated in step 4. Omit this option if -secfilepath is not used in step 4.
- Restart the cinder-volume service to validate the change.
2.1.6.8.2. Register FC port with VNX
This configuration is only required when initiator_auto_registration=False.
To perform the "copy an image to a volume" operation, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.
- Assume 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of an FC initiator port of the compute node whose hostname and IP are myhost1 and 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
  - Log in to Unisphere and go to the host initiators list.
  - Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
  - Click the Register button, select CLARiiON/VNX, and enter the hostname (which is the output of the Linux hostname command) and IP address:
    - Hostname: myhost1
    - IP: 10.10.61.1
    - Click Register.
  - Then the host 10.10.61.1 will appear under the host list as well.
- Register the WWN with more ports if needed.
2.1.6.8.3. Register iSCSI port with VNX
This configuration is only required when initiator_auto_registration=False.
- On the compute node with IP address 10.10.61.1 and hostname myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):
  - Start the iSCSI initiator service on the node:
    # /etc/init.d/open-iscsi start
  - Discover the iSCSI target portals on VNX:
    # iscsiadm -m discovery -t st -p 10.10.61.35
  - Enter /etc/iscsi:
    # cd /etc/iscsi
  - Find out the iqn of the node:
    # more initiatorname.iscsi
- Login to VNX from the compute node using the target corresponding to the SPA port:
  # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
- Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:
  - Log in to Unisphere and go to the host initiators list.
  - Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
  - Click the Register button, select CLARiiON/VNX, and enter the hostname (which is the output of the Linux hostname command) and IP address:
    - Hostname: myhost1
    - IP: 10.10.61.1
    - Click Register.
  - Then the host 10.10.61.1 will appear under the host list as well.
- Logout iSCSI on the node:
  # iscsiadm -m node -u
- Login to VNX from the compute node using the target corresponding to the SPB port:
  # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
- In Unisphere, register the initiator with the SPB port.
- Logout iSCSI on the node:
  # iscsiadm -m node -u
- Register the iqn with more ports if needed.
2.1.7. EMC XtremIO Block Storage driver configuration
2.1.7.1. Support matrix
- Xtremapp: Version 3.0 and 4.0
2.1.7.2. Supported operations
- Create, delete, clone, attach, and detach volumes
- Create and delete volume snapshots
- Create a volume from a snapshot
- Copy an image to a volume
- Copy a volume to an image
- Extend a volume
- Manage and unmanage a volume
- Get volume statistics
2.1.7.3. XtremIO Block Storage driver configuration
Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in the case of a single back end, or under a separate section in the case of multiple back ends (for example, [XTREMIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
2.1.7.3.1. XtremIO driver name
- For iSCSI
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOIscsiDriver
- For Fibre Channel
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
2.1.7.3.2. XtremIO management server (XMS) IP
san_ip = XMS Management IP
2.1.7.3.3. XtremIO cluster name
xtremio_cluster_name = Cluster-Name
2.1.7.3.4. XtremIO user credentials
san_login = XMS username
san_password = XMS username password
2.1.7.4. Multiple back ends
- Thin Provisioning: All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter. The use_cow_images parameter in the nova.conf file should be set to False as follows:
use_cow_images = false
- Multipathing: The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:
use_multipath_for_image_xfer = true
2.1.7.6. Restarting OpenStack Block Storage
Save the changes to the cinder.conf file and restart cinder by running the following command:
$ openstack-service restart cinder-volume
2.1.7.7. Configuring CHAP
$ modify-chap chap-authentication-mode=initiator
2.1.7.8. Configuration example
cinder.conf example file
You can update the cinder.conf file by editing the necessary parameters as follows:
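A hedged sketch of the parameters described in the preceding subsections; the XMS address, credentials, cluster name, and back-end name are illustrative:
[DEFAULT]
enabled_backends = XTREMIO

[XTREMIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOIscsiDriver
san_ip = XMS_IP
san_login = XMS_USER
san_password = XMS_PASSWD
xtremio_cluster_name = Cluster01
volume_backend_name = XtremIOAFA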
2.1.8. Fujitsu ETERNUS DX driver
System requirements
- Firmware version V10L30 or later is required.
- An Advanced Copy Feature license is required to create a snapshot and a clone.
- The pywbem package should be installed on the Controller node.
Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume. [1]
- Get volume statistics.
2.1.8.1. Configure the Fujitsu ETERNUS device
Before you can define the Fujitsu ETERNUS device as a Block Storage back end, you need to configure storage pools and ports on the device first. Consult your device documentation for details on each step:
- Set up a LAN connection between the Controller nodes (where the Block Storage service is hosted) and MNT ports of the ETERNUS device.
- Set up a SAN connection between the Compute nodes and CA ports of the ETERNUS device.
- Log in to the ETERNUS device using an account with the Admin role.
- Enable the SMI-S of ETERNUS DX.
- Register an Advanced Copy Feature license and configure the copy table size.
Create a storage pool for volumes. This pool will be used later in the EternusPool setting in Section 2.1.8.2, “Configuring the Back End”.
Note: If you want to create volume snapshots on a different storage pool, create a storage pool for that as well. This pool will be used in the EternusSnapPool setting in Section 2.1.8.2, “Configuring the Back End”.
- Create a Snap Data Pool Volume (SDPV) to enable Snap Data Pool (SDP) for the create a snapshot function.
Configure storage ports to be used by the Block Storage service. Then:
- Set those ports to CA mode.
Enable the host-affinity settings of those storage ports. To enable host-affinity, run the following from the ETERNUS CLI for each port:
CLI> set PROTO-parameters -host-affinity enable -port CM# CA# PORT
Where:
- PROTO defines which storage protocol is in use, as in fc (Fibre Channel) or iscsi.
- CM# CA# refer to the controller enclosure where the port is located.
- PORT is the port number.
2.1.8.2. Configuring the Back End
- cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver (Fibre Channel)
- cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver (iSCSI)
Set volume_driver to the corresponding driver and cinder_eternus_config_file to point to the back end's XML configuration file. For example, if your Fibre Channel back end settings are defined in /etc/cinder/eternus_dx.xml, use:
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
If you do not set cinder_eternus_config_file, the driver will use /etc/cinder/cinder_fujitsu_eternus_dx.xml by default.
- EternusIP
- IP address of the SMI-S connection of the ETERNUS device. Specifically, use the IP address of the MNT port of the device.
- EternusPort
- Port number for the SMI-S connection port of the ETERNUS device.
- EternusUser
- User name to be used for the SMI-S connection (EternusIP).
- EternusPassword
- Corresponding password of EternusUser on EternusIP.
- EternusPool
- Name of the storage pool created for volumes (from Section 2.1.8.1, “Configure the Fujitsu ETERNUS device”). Specifically, use the pool’s RAID Group name or TPP name in the ETERNUS device.
- EternusSnapPool
- Name of the storage pool created for volume snapshots (from Section 2.1.8.1, “Configure the Fujitsu ETERNUS device”). Specifically, use the pool’s RAID Group name in the ETERNUS device. If you did not create a different pool for snapshots, use the same value as EternusPool.
- EternusISCSIIP
- (ISCSI only) IP address for iSCSI connections to the ETERNUS device. You can specify multiple IPs by creating an entry for each one.
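A hedged sketch of an ETERNUS XML configuration file using the parameters above; the addresses, port, credentials, and pool names are illustrative:
<?xml version="1.0" encoding="UTF-8" ?>
<FUJITSU>
  <EternusIP>0.0.0.0</EternusIP>
  <EternusPort>5988</EternusPort>
  <EternusUser>smisuser</EternusUser>
  <EternusPassword>smispassword</EternusPassword>
  <EternusPool>raid5_0001</EternusPool>
  <EternusSnapPool>raid5_0001</EternusSnapPool>
  <EternusISCSIIP>1.1.1.1</EternusISCSIIP>
  <EternusISCSIIP>1.1.1.2</EternusISCSIIP>
</FUJITSU>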
2.1.9. HDS HNAS iSCSI and NFS driver
2.1.9.1. Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
- Get volume statistics.
- Manage and unmanage a volume.
2.1.9.2. HNAS storage requirements
Make sure that the file system used is not created as a replication target. Additionally:
- For NFS:
  - Create NFS exports, choose a path for them (it must be different from "/"), and set the Show snapshots option to hide and disable access.
  - Also, in the "Access Configuration", set the option norootsquash, for example "* (rw, norootsquash)", so the HNAS cinder driver can change the permissions of its volumes.
  - In order to use the hardware accelerated features of HNAS NFS, we recommend setting max-nfs-version to 3. Refer to the HNAS command line reference to see how to configure this option.
- For iSCSI:
  - You need to set an iSCSI domain.
2.1.9.3. Block storage host requirements
2.1.9.4. Package installation
- Install the dependencies:
# yum install nfs-utils nfs-utils-lib
- Configure the driver as described in Section 2.1.9.5, “Driver configuration”.
- Restart all cinder services (volume, scheduler and backup).
2.1.9.5. Driver configuration
Below is the configuration needed in the cinder.conf configuration file [2]:
[DEFAULT]
enabled_backends = hnas_iscsi1, hnas_nfs1
[hnas_iscsi1]
volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-ISCSI
[hnas_nfs1]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-NFS
2.1.9.6. HNAS volume driver XML configuration options
Each service is defined by a svc_n tag (svc_0, svc_1, svc_2, or svc_3 [3], for example). These are the configuration options available for each service label:
Option | Type | Default | Description |
---|---|---|---|
volume_type | Required | default | When a create_volume call with a certain volume type happens, the volume type will try to be matched up with this tag. In each configuration file you must define the default volume type in the service labels and, if no volume type is specified, the default is used. Other labels are case sensitive and should match exactly. If no configured volume types match the incoming requested type, an error occurs in the volume creation. |
iscsi_ip | Required only for iSCSI | | An iSCSI IP address dedicated to the service. |
hdp | Required | | For the iSCSI driver: virtual file system label associated with the service. For the NFS driver: path to the volume (<ip_address>:/<path>) associated with the service. Additionally, this entry must be added in the file used to list available NFS shares. This file is located, by default, in /etc/cinder/nfs_shares, or you can specify the location in the nfs_shares_config option in the cinder.conf configuration file. |
These are the configuration options available in the config section of the XML config file:
Option | Type | Default | Description |
---|---|---|---|
mgmt_ip0 | Required | | Management Port 0 IP address. Should be the IP address of the "Admin" EVS. |
hnas_cmd | Optional | ssc | Command to communicate to HNAS array. |
chap_enabled | Optional (iSCSI only) | True | Boolean tag used to enable CHAP authentication protocol. |
username | Required | supervisor | It's always required on HNAS. |
password | Required | supervisor | Password is always required on HNAS. |
svc_0, svc_1, svc_2, svc_3 | Optional | (at least one label has to be defined) | Service labels: these four predefined names help four different sets of configuration options. Each can specify HDP and a unique volume type. |
cluster_admin_ip0 | Optional if ssh_enabled is True | | The address of HNAS cluster admin. |
ssh_enabled | Optional | False | Enables SSH authentication between Block Storage host and the SMU. |
ssh_private_key | Required if ssh_enabled is True | False | Path to the SSH private key used to authenticate in HNAS SMU. The public key must be uploaded to HNAS SMU using ssh-register-public-key (this is an SSH subcommand). Note that copying the public key to HNAS using ssh-copy-id doesn't work properly as the SMU periodically wipes out those keys. |
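A hedged sketch of an HNAS XML configuration file combining the options in the two tables above; the addresses, credentials, and labels are illustrative:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.24.44.15</mgmt_ip0>
  <hnas_cmd>ssc</hnas_cmd>
  <chap_enabled>True</chap_enabled>
  <username>supervisor</username>
  <password>supervisor</password>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.24.44.20</iscsi_ip>
    <hdp>fs01-husvm</hdp>
  </svc_0>
</config>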
2.1.9.7. Service labels
You must configure one volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the <volume_type> section of that service. If this is not set, OpenStack Block Storage will schedule the volume creation to the pool with the largest available free space or other criteria configured in volume filters.
$ cinder type-create default
$ cinder type-key default set service_label=default
$ cinder type-create platinum-tier
$ cinder type-key platinum set service_label=platinum
2.1.9.8. Multi-back-end configuration
Set the volume_backend_name option to the appropriate back end. Then, create volume_type configurations with the same volume_backend_name.
$ cinder type-create 'iscsi'
$ cinder type-key 'iscsi' set volume_backend_name = 'HNAS-ISCSI'
$ cinder type-create 'nfs'
$ cinder type-key 'nfs' set volume_backend_name = 'HNAS-NFS'
The configured pools (svc_0, svc_1, svc_2, svc_3) on the instances need to have a volume_type and service_label metadata associated with them. If no metadata is associated with a pool, the OpenStack Block Storage filtering algorithm selects the pool with the largest available free space.
2.1.9.9. SSH configuration
- If you don't have a pair of public keys already generated, create one on the Block Storage host (leave the pass-phrase empty):
$ mkdir -p /opt/hds/ssh
$ ssh-keygen -f /opt/hds/ssh/hnaskey
- Change the owner of the key to cinder (or the user under which the volume service will be run):
# chown -R cinder.cinder /opt/hds/ssh
- Create the directory "ssh_keys" in the SMU server:
$ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
- Copy the public key to the "ssh_keys" directory:
$ scp /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
- Access the SMU server:
$ ssh [manager|supervisor]@<smu-ip>
- Run the command to register the SSH keys:
$ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
- Check the communication with HNAS from the Block Storage host:
$ ssh -i /opt/hds/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
<cluster_admin_ip0> is "localhost" for single-node deployments. This should return a list of available file systems on HNAS.
2.1.9.10. Editing the XML config file
- Set the "username".
- Enable SSH by adding the line "<ssh_enabled> True</ssh_enabled>" under the "<config>" section.
- Set the private key path: "<ssh_private_key> /opt/hds/ssh/hnaskey</ssh_private_key>" under the "<config>" section.
- If the HNAS is in a multi-cluster configuration, set "<cluster_admin_ip0>" to the cluster node admin IP. In a single-node HNAS, leave it empty.
- Restart cinder services.
2.1.9.11. Manage and unmanage
- Under the System -> Volumes tab, choose the Manage Volume option.
- Fill in the Identifier, Host and Volume Type fields with the information of the volume to be managed:
- Under the System -> Volumes tab, choose the Manage Volume option.
- Fill in the Identifier, Host, Volume Name and Volume Type fields with the information of the volume to be managed:
$ cinder --os-volume-api-version 2 manage [--source-name <source-name>][--id-type <id-type>] [--name <name>][--description <description>][--volume-type <volume-type>] [--availability-zone <availability-zone>][--metadata [<key=value> [<key=value> ...]]][--bootable] <host> [<key=value> [<key=value> ...]]
$ cinder --os-volume-api-version 2 manage --name <volume-test> --volume-type <silver> --source-name <172.24.44.34:/silver/volume-test> <myhost@hnas-nfs#test_silver>
$ cinder --os-volume-api-version 2 manage --name <volume-test> --volume-type <silver> --source-name <filesystem-test/volume-test> <myhost@hnas-iscsi#test_silver>
- Under the tab [ System -> Volumes ] choose a volume
- On the volume options, choose
- Check the data and confirm.
$ cinder --os-volume-api-version 2 unmanage <volume>
For example:
$ cinder --os-volume-api-version 2 unmanage <voltest>
2.1.9.12. Additional notes
- The get_volume_stats() function always reports the available capacity as the combined sum of all the HDPs used by these service labels.
- After changing the configuration on the storage, the OpenStack Block Storage driver must be restarted.
- On Red Hat, if the system is configured to use SELinux, you need to set "virt_use_nfs = on" for the NFS driver to work properly:
  # setsebool -P virt_use_nfs on
- It is not possible to manage a volume if there is a slash ('/') or a colon (':') in the volume name.
2.1.10. Hitachi storage volume driver
2.1.10.1. System requirements
- Hitachi Virtual Storage Platform G1000 (VSP G1000)
- Hitachi Virtual Storage Platform (VSP)
- Hitachi Unified Storage VM (HUS VM)
- Hitachi Unified Storage 100 Family (HUS 100 Family)
- RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM
- Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later for HUS 100 Family
HSNM2 must be installed under /usr/stonavm.
- Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM
- (Mandatory) ShadowImage in-system replication for HUS 100 Family
- (Optional) Copy-on-Write Snapshot for HUS 100 Family
2.1.10.2. Supported operations
- Create, delete, attach and detach volumes.
- Create, list and delete volume snapshots.
- Create a volume from a snapshot.
- Copy a volume to an image.
- Copy an image to a volume.
- Clone a volume.
- Extend a volume.
- Get volume statistics.
2.1.10.3. Configuration
Set up Hitachi storage
- Create a Dynamic Provisioning pool.
- Connect the ports at the storage to the Controller node and Compute nodes.
- For VSP G1000/VSP/HUS VM, set "port security" to "enable" for the ports at the storage.
- For HUS 100 Family, set "Host Group security"/"iSCSI target security" to "ON" for the ports at the storage.
- For the ports at the storage, create host groups (iSCSI targets) whose names begin with HBSD- for the Controller node and each Compute node. Then register a WWN (initiator IQN) for each of the Controller node and Compute nodes.
- For VSP G1000/VSP/HUS VM, perform the following:
- Create a storage device account belonging to the Administrator User Group. (To use multiple storage devices, create the same account name for all the target storage devices, and specify the same resource group and permissions.)
- Create a command device (In-Band), and set user authentication to ON.
- Register the created command device to the host group for the Controller node.
- To use the Thin Image function, create a pool for Thin Image.
- For HUS 100 Family, perform the following:
- Use the command auunitaddauto to register the unit name and controller of the storage device to HSNM2.
- When connecting via iSCSI, if you are using CHAP certification, specify the same user and password as that used for the storage port.
Set up Hitachi Gigabit Fibre Channel adaptor
# /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1
# dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION
# reboot
Set up Hitachi storage volume driver
- Create the lock directory and set its owner:
  # mkdir /var/lock/hbsd
  # chown cinder:cinder /var/lock/hbsd
- Create a "volume type" and "volume key". This example creates HUS100_SAMPLE as the "volume type" and registers hus100_backend as the "volume key":
  $ cinder type-create HUS100_SAMPLE
  $ cinder type-key HUS100_SAMPLE set volume_backend_name=hus100_backend
  Specify any identical "volume type" name and "volume key". To confirm the created "volume type", execute the following command:
  $ cinder extra-specs-list
- Edit /etc/cinder/cinder.conf as follows.
  If you use Fibre Channel:
  volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
  If you use iSCSI:
  volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
  Also, set the volume_backend_name created by cinder type-key:
  volume_backend_name = hus100_backend
This table shows configuration options for the Hitachi storage volume driver.
Table 2.8. Description of Hitachi storage volume driver configuration options
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
hitachi_add_chap_user = False | (BoolOpt) Add CHAP user |
hitachi_async_copy_check_interval = 10 | (IntOpt) Interval to check copy asynchronously |
hitachi_auth_method = None | (StrOpt) iSCSI authentication method |
hitachi_auth_password = HBSD-CHAP-password | (StrOpt) iSCSI authentication password |
hitachi_auth_user = HBSD-CHAP-user | (StrOpt) iSCSI authentication username |
hitachi_copy_check_interval = 3 | (IntOpt) Interval to check copy |
hitachi_copy_speed = 3 | (IntOpt) Copy speed of storage system |
hitachi_default_copy_method = FULL | (StrOpt) Default copy method of storage system |
hitachi_group_range = None | (StrOpt) Range of group number |
hitachi_group_request = False | (BoolOpt) Request for creating HostGroup or iSCSI Target |
hitachi_horcm_add_conf = True | (BoolOpt) Add to HORCM configuration |
hitachi_horcm_numbers = 200,201 | (StrOpt) Instance numbers for HORCM |
hitachi_horcm_password = None | (StrOpt) Password of storage system for HORCM |
hitachi_horcm_resource_lock_timeout = 600 | (IntOpt) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200. |
hitachi_horcm_user = None | (StrOpt) Username of storage system for HORCM |
hitachi_ldev_range = None | (StrOpt) Range of logical device of storage system |
hitachi_pool_id = None | (IntOpt) Pool ID of storage system |
hitachi_serial_number = None | (StrOpt) Serial number of storage system |
hitachi_target_ports = None | (StrOpt) Control port names for HostGroup or iSCSI Target |
hitachi_thin_pool_id = None | (IntOpt) Thin pool ID of storage system |
hitachi_unit_name = None | (StrOpt) Name of an array unit |
hitachi_zoning_request = False | (BoolOpt) Request for FC Zone creating HostGroup |
- Restart the Block Storage service. When the startup is done, "MSGID0003-I: The storage backend can be used." is output into /var/log/cinder/volume.log as follows:
  2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi.hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None] MSGID0003-I: The storage backend can be used. (config_group: hus100_backend)
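For illustration only, a hedged sketch of what a HUS 100 iSCSI back-end stanza in /etc/cinder/cinder.conf might look like when combined with the options above; the unit name, pool ID, and port names are placeholders and must match your array registration in HSNM2:
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
volume_backend_name = hus100_backend
hitachi_unit_name = HUS100_unit0      # placeholder array unit name registered with auunitaddauto
hitachi_pool_id = 0                   # placeholder Dynamic Provisioning pool ID
hitachi_target_ports = 0A,1A          # placeholder controller port names
hitachi_group_request = True          # let the driver create host groups / iSCSI targets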
2.1.11. HPE 3PAR Fibre Channel and iSCSI drivers
The HPE3PARFCDriver and HPE3PARISCSIDriver drivers, which are based on the Block Storage service (Cinder) plug-in architecture, run volume operations by communicating with the HPE 3PAR storage system over HTTP, HTTPS, and SSH connections. The HTTP and HTTPS communications use the python-3parclient package, which must be installed separately.
2.1.11.1. System requirements
- HPE 3PAR Operating System software version 3.1.3 MU1 or higher.
- Deduplication provisioning requires SSD disks and HPE 3PAR Operating System software version 3.2.1 MU1 or higher.
- Enabling Flash Cache Policy requires the following:
- Array must contain SSD disks.
- HPE 3PAR Operating System software version 3.2.1 MU2 or higher.
- python-3parclient version 4.2.0 or newer.
- Array must have the Adaptive Flash Cache license installed.
- Flash Cache must be enabled on the array with the CLI command createflashcache SIZE, where SIZE must be in 16 GB increments. For example, createflashcache 128g will create 128 GB of Flash Cache for each node pair in the array.
- The Dynamic Optimization license is required to support any feature that results in a volume changing provisioning type or CPG. This may apply to the volume migrate, retype, and manage commands.
- The Virtual Copy License is required to support any feature that involves volume snapshots. This applies to the volume snapshot-* commands.
- HPE 3PAR drivers will now check the licenses installed on the array and disable driver capabilities based on available licenses. This will apply to thin provisioning, QoS support and volume replication.
- HPE 3PAR Web Services API Server must be enabled and running
- One Common Provisioning Group (CPG)
- Additionally, you must install python-3parclient version 4.2.0 or newer (for example, with pip) on the system running the enabled Block Storage service volume drivers.
2.1.11.2. Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
- Migrate a volume with back-end assistance.
- Retype a volume.
- Manage and unmanage a volume.
- Create, delete, update, snapshot, and clone consistency groups.
- Create and delete consistency group snapshots.
- Create a consistency group from a consistency group snapshot or another group.
The cinder.api.contrib.types_extra_specs volume type extra specs extension module supports the following keys:
hpe3par:snap_cpg
hpe3par:provisioning
hpe3par:persona
hpe3par:vvs
hpe3par:flash_cache
All of these keys carry the hpe3par: prefix. For information about how to set the key-value pairs and associate them with a volume type, run the following command:
$ cinder help type-key
- hpe3par:cpg - Defaults to the hpe3par_cpg setting in the cinder.conf file.
- hpe3par:snap_cpg - Defaults to the hpe3par_snap setting in the cinder.conf file. If hpe3par_snap is not set, it defaults to the hpe3par_cpg setting.
- hpe3par:provisioning - Defaults to thin provisioning; the valid values are thin, full, and dedup.
- hpe3par:persona - Defaults to the 2 - Generic-ALUA persona. The valid values are 1 - Generic, 2 - Generic-ALUA, 3 - Generic-legacy, 4 - HPUX-legacy, 5 - AIX-legacy, 6 - EGENERA, 7 - ONTAP-legacy, 8 - VMware, 9 - OpenVMS, 10 - HPUX, and 11 - WindowsServer.
- hpe3par:flash_cache - Defaults to false; the valid values are true and false.
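As a hedged example of using these keys, the following creates a volume type that requests thin provisioning and flash cache; the type name 3par-thin is arbitrary:
$ cinder type-create 3par-thin
$ cinder type-key 3par-thin set hpe3par:provisioning=thin hpe3par:flash_cache=true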
The cinder.api.contrib.qos_specs_manage qos specs extension module supports the following keys:
minBWS
maxBWS
minIOPS
maxIOPS
latency
priority
For information about how to create and associate QoS specs with a volume type, run the following commands:
$ cinder help qos-create
$ cinder help qos-key
$ cinder help qos-associate
- hpe3par:vvs - The virtual volume set name that has been predefined by the Administrator with Quality of Service (QoS) rules associated to it. If you specify the extra_specs hpe3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.
- minBWS - The QoS I/O issue bandwidth minimum goal in MBs. If not set, the I/O issue bandwidth rate has no minimum goal.
- maxBWS - The QoS I/O issue bandwidth rate limit in MBs. If not set, the I/O issue bandwidth rate has no limit.
- minIOPS - The QoS I/O issue count minimum goal. If not set, the I/O issue count has no minimum goal.
- maxIOPS - The QoS I/O issue count rate limit. If not set, the I/O issue count rate has no limit.
- latency - The latency goal in milliseconds.
- priority - The priority of the QoS rule over other rules. If not set, the priority is normal; valid values are low, normal, and high.
- hpe3par:flash_cache - The flash-cache policy, which can be turned on and off by setting the value to true or false.
The HP3PARFCDriver and HP3PARISCSIDriver are installed with the OpenStack software.
- Install the hp3parclient Python package on the OpenStack Block Storage system:
  # pip install 'python-3parclient>=4.0,<5.0'
- Verify that the HPE 3PAR Web Services API server is enabled and running on the HPE 3PAR storage system.
  - Log onto the HP 3PAR storage system with administrator access:
    $ ssh 3paradm@<HP 3PAR IP Address>
  - View the current state of the Web Services API Server:
    # showwsapi
    -Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
    Enabled   Active  Enabled      8008      Enabled       8080       1.1
  - If the Web Services API Server is disabled, start it:
    # startwsapi
  - If the HTTP or HTTPS state is disabled, enable one of them:
    # setwsapi -http enable
    or
    # setwsapi -https enable
  Note: To stop the Web Services API Server, use the stopwsapi command. For other options, run the setwsapi -h command.
- If you are not using an existing CPG, create a CPG on the HPE 3PAR storage system to be used as the default location for creating volumes.
- Make the following changes in the /etc/cinder/cinder.conf file.
  Note: You can enable only one driver on each cinder instance unless you enable multiple back-end support.
  Note: You can configure one or more iSCSI addresses by using the hpe3par_iscsi_ips option. When you configure multiple addresses, the driver selects the iSCSI port with the fewest active volumes at attach time. The IP address might include an IP port by using a colon (:) to separate the address from the port. If you do not define an IP port, the default port 3260 is used. Separate IP addresses with a comma (,). The iscsi_ip_address/iscsi_port options might be used as an alternative to hpe3par_iscsi_ips for single-port iSCSI configuration.
- Save the changes to the cinder.conf file and restart the cinder-volume service.
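The exact cinder.conf settings depend on your deployment; the following is only a hedged sketch of a single 3PAR iSCSI back end with placeholder addresses and credentials (verify the volume_driver module path and option names against the driver shipped with your release):
[3par_iscsi]
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
volume_backend_name = 3par_iscsi
hpe3par_api_url = https://10.10.0.141:8080/api/v1     # placeholder WSAPI URL
hpe3par_username = 3paradm                            # placeholder credentials
hpe3par_password = 3parpass
hpe3par_cpg = OpenStackCPG                            # default CPG for volumes
hpe3par_iscsi_ips = 10.10.220.253:3260,10.10.222.234  # placeholder iSCSI port IPs
san_ip = 10.10.0.141                                  # placeholder array management IP
san_login = 3paradm
san_password = 3parpass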
2.1.12. Huawei storage driver
Supported operations
- Create, delete, expand, attach, and detach volumes.
- Create and delete a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Create a volume from a snapshot.
- Clone a volume.
Configure block storage nodes
- Modify the cinder.conf configuration file and add the volume_driver and cinder_huawei_conf_file items.
  - Example for configuring a storage system:
    volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
    cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
  - Example for configuring multiple storage systems:
- In /etc/cinder, create a driver configuration file. The driver configuration file name must be the same as the cinder_huawei_conf_file item in the cinder.conf configuration file.
- Configure product and protocol.
  Product and Protocol indicate the storage system type and link type respectively. For the OceanStor 18000 series V100R001 storage systems, the driver configuration file is as follows:
  Note for Fibre Channel driver configuration:
  - In the configuration files of OceanStor T series V200R002 and OceanStor V3 V300R002, parameter configurations are the same with the exception of the RestURL parameter. The following describes how to configure the RestURL parameter:
    <RestURL>https://x.x.x.x:8088/deviceManager/rest/</RestURL>
  - For a Fibre Channel driver, you do not need to configure an iSCSI target IP address. Delete the iSCSI configuration from the preceding examples:
    <iSCSI>
        <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
        <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
        <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
    </iSCSI>
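Because the full example file is not reproduced above, the following is only a hedged sketch of what a cinder_huawei_conf.xml for an OceanStor 18000 system might contain. It uses the parameters described in Table 2.9 below; the grouping of elements is an assumption, and the x.x.x.x and xxxxxxxx values are placeholders following the document's convention:
<?xml version="1.0" encoding="UTF-8"?>
<config>
    <Storage>
        <Product>18000</Product>
        <Protocol>iSCSI</Protocol>
        <RestURL>https://x.x.x.x:8088/deviceManager/rest/</RestURL>
        <UserName>xxxxxxxx</UserName>
        <UserPassword>xxxxxxxx</UserPassword>
    </Storage>
    <LUN>
        <LUNType>Thick</LUNType>
        <StoragePool>xxxxxxxx</StoragePool>
    </LUN>
    <iSCSI>
        <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
        <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
    </iSCSI>
</config>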
This table describes the Huawei storage driver configuration options.
Table 2.9. Huawei storage driver configuration options
Property | Type | Default | Description |
---|---|---|---|
Product | Mandatory | - | Type of a storage product. Valid values are T, TV3, or 18000. |
Protocol | Mandatory | - | Type of a protocol. Valid values are iSCSI or FC. |
RestURL | Mandatory | - | Access address of the Rest port (required only for the 18000) |
UserName | Mandatory | - | User name of an administrator |
UserPassword | Mandatory | - | Password of an administrator |
LUNType | Optional | Thin | Type of a created LUN. Valid values are Thick or Thin. |
StripUnitSize | Optional | 64 | Stripe depth of a created LUN. The value is expressed in KB. This flag is not valid for a thin LUN. |
WriteType | Optional | 1 | Cache write method. The method can be write back, write through, or Required write back. The default value is 1, indicating write back. |
MirrorSwitch | Optional | 1 | Cache mirroring policy. The default value is 1, indicating that a mirroring policy is used. |
Prefetch Type | Optional | 3 | Cache prefetch strategy. The strategy can be constant prefetch, variable prefetch, or intelligent prefetch. Default value is 3, which indicates intelligent prefetch and is not required for the OceanStor 18000 series. |
Prefetch Value | Optional | 0 | Cache prefetch value. |
LUNcopyWaitInterval | Optional | 5 | After LUN copy is enabled, the plug-in frequently queries the copy progress. You can set a value to specify the query interval. |
Timeout | Optional | 432,000 | Timeout period for waiting LUN copy of an array to complete. |
StoragePool | Mandatory | - | Name of a storage pool that you want to use. |
DefaultTargetIP | Optional | - | Default IP address of the iSCSI port provided for compute nodes. |
Initiator Name | Optional | - | Name of a compute node initiator. |
Initiator TargetIP | Optional | - | IP address of the iSCSI port provided for compute nodes. |
OSType | Optional | Linux | The OS type for a compute node. |
HostIP | Optional | - | The IPs for compute nodes. |
Note for the configuration:
- You can configure one iSCSI target port for each or all compute nodes. The driver checks whether a target port IP address is configured for the current compute node. If not, it selects DefaultTargetIP.
- Only one storage pool can be configured.
- For details about LUN configuration information, see the show lun general command in the command-line interface (CLI) documentation, or run help -c show lun general on the storage system CLI.
- After the driver is loaded, the storage system picks up any modification of the driver configuration file in real time; you do not need to restart the cinder-volume service.
- Restart the Cinder service.
2.1.13. IBM Storwize family and SVC volume driver
2.1.13.1. Configure the Storwize family and SVC system
Network configuration
If the storwize_svc_multipath_enabled flag is set to True in the Cinder configuration file, the driver uses all available WWPNs to attach the volume to the instance (details about the configuration flags appear in the next section). If the flag is not set, the driver uses the WWPN associated with the volume's preferred node (if available), otherwise it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the driver.
iSCSI CHAP authentication
If storwize_svc_iscsi_chap_enabled is set to True, the driver will associate randomly-generated CHAP secrets with all hosts on the Storwize family system. OpenStack compute nodes use these secrets when creating iSCSI connections.
Configure storage pools
Volumes are allocated in the storage pool specified by the storwize_svc_volpool_name configuration flag. Details about the configuration flags and how to provide the flags to the driver appear in the next section.
Configure user authentication for the driver
The management IP address of the system is provided using the san_ip flag, and the management port should be provided by the san_ssh_port flag. By default, the port value is configured to be port 22 (SSH).
Make sure that the compute node running the cinder-volume management driver has SSH network access to the storage system.
If using password authentication, set the username and password with the san_login and san_password flags, respectively.
If using SSH key authentication, provide the path to the private key file with the san_private_key configuration flag.
Create an SSH key pair with OpenSSH
$ ssh-keygen -t rsa
The command prompts for a file in which to save the key pair, for example, key and key.pub. The key file holds the private SSH key and key.pub holds the public SSH key. Provide the private key file name to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command line interface.
2.1.13.2. Configure the Storwize family and SVC driver
Enable the Storwize family and SVC driver
Set the volume_driver option in cinder.conf as follows:
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
Storwize family and SVC driver options in cinder.conf
Flag name | Type | Default | Description | ||||||||||||||||||||||||||||||||||||||||||||||
---|---|---|---|
san_ip
|
Required
|
|
Management IP or host name
|
||||||||||||||||||||||||||||||||||||||||||||||
san_ssh_port
|
Optional
|
22
|
Management port
|
||||||||||||||||||||||||||||||||||||||||||||||
san_login
|
Required
|
|
Management login username
|
||||||||||||||||||||||||||||||||||||||||||||||
san_password
|
Required [a]
|
|
Management login password
|
||||||||||||||||||||||||||||||||||||||||||||||
san_private_key
|
Required [a]
|
|
Management login SSH private key
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_volpool_name
|
Required
|
|
Default pool name for volumes
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_rsize
|
Optional
|
2
|
Initial physical allocation (percentage) [b]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_warning
|
Optional
|
0 (disabled)
|
Space allocation warning threshold (percentage) [b]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_autoexpand
|
Optional
|
True
|
Enable or disable volume auto expand [c]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_grainsize
|
Optional
|
256
|
Volume grain size [b] in KB
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_compression
|
Optional
|
False
|
Enable or disable Real-time Compression [d]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_easytier
|
Optional
|
True
|
Enable or disable Easy Tier [e]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_iogrp
|
Optional
|
0
|
The I/O group in which to allocate vdisks
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_flashcopy_timeout
|
Optional
|
120
|
FlashCopy timeout threshold [f] (seconds)
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_connection_protocol
|
Optional
|
iSCSI
|
Connection protocol to use (currently supports 'iSCSI' or 'FC')
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_iscsi_chap_enabled
|
Optional
|
True
|
Configure CHAP authentication for iSCSI connections
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_multipath_enabled
|
Optional
|
False
|
Enable multipath for FC connections [g]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_multihost_enabled
|
Optional
|
True
|
Enable mapping vdisks to multiple hosts [h]
|
||||||||||||||||||||||||||||||||||||||||||||||
storwize_svc_vol_nofmtdisk
|
Optional
|
False
|
Enable or disable fast format [i]
|
||||||||||||||||||||||||||||||||||||||||||||||
[a]
The authentication requires either a password ( san_password ) or SSH private key (san_private_key ). One must be specified. If both are specified, the driver uses only the SSH private key.
[b]
The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes, or if set to -1 , the driver creates full allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
[c]
Defines whether thin-provisioned volumes can be auto expanded by the storage system, a value of True means that auto expansion is enabled, a value of False disables auto expansion. Details about this option can be found in the –autoexpand flag of the Storwize family and SVC command line interface mkvdisk command.
[d]
Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
[e]
Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
[f]
The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
[g]
Multipath for iSCSI connections requires no storage-side configuration and is enabled if the compute host has multipath configured.
[h]
This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
[i]
Defines whether or not the fast formatting of thick-provisioned volumes is disabled at creation. The default value is False and a value of True means that fast format is disabled. Details about this option can be found in the –nofmtdisk flag of the Storwize family and SVC command line interface mkvdisk command.
|
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
storwize_svc_allow_tenant_qos = False
|
(BoolOpt) Allow tenants to specify QOS on create |
storwize_svc_connection_protocol = iSCSI
|
(StrOpt) Connection protocol (iSCSI/FC) |
storwize_svc_flashcopy_timeout = 120
|
(IntOpt) Maximum number of seconds to wait for FlashCopy to be prepared. |
storwize_svc_iscsi_chap_enabled = True
|
(BoolOpt) Configure CHAP authentication for iSCSI connections (Default: Enabled) |
storwize_svc_multihostmap_enabled = True
|
(BoolOpt) Allows vdisk to multi host mapping |
storwize_svc_multipath_enabled = False
|
(BoolOpt) Connect with multipath (FC only; iSCSI multipath is controlled by Nova) |
storwize_svc_npiv_compatibility_mode = True
|
(BoolOpt) Indicate whether svc driver is compatible for NPIV setup. If it is compatible, it will allow no wwpns being returned on get_conn_fc_wwpns during initialize_connection. It should always be set to True. It will be deprecated and removed in M release. |
storwize_svc_stretched_cluster_partner = None
|
(StrOpt) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored.Example: "pool2" |
storwize_svc_vol_autoexpand = True
|
(BoolOpt) Storage system autoexpand parameter for volumes (True/False) |
storwize_svc_vol_compression = False
|
(BoolOpt) Storage system compression option for volumes |
storwize_svc_vol_easytier = True
|
(BoolOpt) Enable Easy Tier for volumes |
storwize_svc_vol_grainsize = 256
|
(IntOpt) Storage system grain size parameter for volumes (32/64/128/256) |
storwize_svc_vol_iogrp = 0
|
(IntOpt) The I/O group in which to allocate volumes |
storwize_svc_vol_rsize = 2
|
(IntOpt) Storage system space-efficiency parameter for volumes (percentage) |
storwize_svc_vol_warning = 0
|
(IntOpt) Storage system threshold for volume capacity warnings (percentage) |
storwize_svc_volpool_name = volpool
|
(StrOpt) Storage system storage pool for volumes |
Placement with volume types
The driver exposes capabilities that can be added to the extra specs of volume types, and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with capabilities: to indicate that the scheduler should use them. The following extra specs are supported:
- capabilities:volume_back-end_name - Specify a specific back-end where the volume should be created. The back-end name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in lssystem, an underscore, and the name of the pool (mdisk group). For example:
  capabilities:volume_back-end_name=myV7000_openstackpool
- capabilities:compression_support - Specify a back-end according to compression support. A value of True should be used to request a back-end that supports compression, and a value of False will request a back-end that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying True does not enable compression; it only requests that the volume be placed on a back-end that supports compression. Example syntax:
  capabilities:compression_support='<is> True'
- capabilities:easytier_support - Similar semantics as the compression_support key, but for specifying according to support of the Easy Tier feature. Example syntax:
  capabilities:easytier_support='<is> True'
- capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are iSCSI and FC. This extra specs value is used for both placement and setting the protocol used for this volume. In the example syntax, note <in> is used as opposed to <is> used in the previous examples.
  capabilities:storage_protocol='<in> FC'
Configure per-volume creation options
The following extra specs keys are supported by the IBM Storwize/SVC driver:
- rsize
- warning
- autoexpand
- grainsize
- compression
- easytier
- multipath
- iogrp
These keys have the same semantics as their counterparts in the configuration file and are set similarly; for example, rsize=2 or compression=False.
Example: Volume types
$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True
$ cinder create --display-name "compressed volume" --volume-type compressed 50
- performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
- resiliency levels (such as allocating volumes in pools with different RAID levels)
- features (such as enabling/disabling Real-time Compression)
QOS
/etc/cinder/cinder.conf
file and setting the storwize_svc_allow_tenant_qos
to True.
There are three ways to set the IOThrottling parameter for storage volumes:
- Add the
qos:IOThrottling
key into a QOS specification and associate it with a volume type. - Add the
qos:IOThrottling
key into an extra specification with a volume type. - Add the
qos:IOThrottling
key to the storage volume metadata.
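For instance, a hedged sketch of the first approach (adding the qos:IOThrottling key to a QOS specification and associating it with a volume type); the spec name storwize-qos and the throttling value are placeholders:
$ cinder qos-create storwize-qos qos:IOThrottling=1000
$ cinder qos-associate <qos-spec-id> <volume-type-id>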
Migrate volumes
Volume migration is performed on the storage back end when the source and destination pools are on the same storage system and have the same extent_size. If the pools have different values for extent_size, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous.
Extend volumes
Snapshots and clones
Volume retype
- rsize
- warning
- autoexpand
- grainsize
- compression
- easytier
- iogrp
- nofmtdisk
rsize
, grainsize
or compression
properties, volume copies are asynchronously synchronized on the array.
iogrp
property, IBM Storwize/SVC firmware version 6.4.0 or later is required.
2.1.14. IBM XIV and DS8000 volume driver
Set the following in your cinder.conf file, and use the following options to configure it.
volume_driver = cinder.volume.drivers.xiv_ds8k.XIVDS8KDriver
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
san_clustername =
|
(StrOpt) Cluster name to use for creating volumes |
san_ip =
|
(StrOpt) IP address of SAN controller |
san_login = admin
|
(StrOpt) Username for SAN controller |
san_password =
|
(StrOpt) Password for SAN controller |
xiv_chap = disabled
|
(StrOpt) CHAP authentication mode, effective only for iscsi (disabled|enabled) |
xiv_ds8k_connection_type = iscsi
|
(StrOpt) Connection type to the IBM Storage Array |
xiv_ds8k_proxy = xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy
|
(StrOpt) Proxy driver that connects to the IBM Storage Array |
2.1.15. LVM
Set the following in your cinder.conf configuration file, and use the following options to configure for iSCSI transport:
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi
To use the iSER transport instead, set:
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iser
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
lvm_conf_file = /etc/cinder/lvm.conf
|
(StrOpt) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify 'None' to not use a conf file even if one exists). |
lvm_mirrors = 0
|
(IntOpt) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space |
lvm_type = default
|
(StrOpt) Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported. |
volume_group = cinder-volumes
|
(StrOpt) Name for the VG that will contain exported volumes |
2.1.16. NetApp unified driver
2.1.16.1. NetApp clustered Data ONTAP storage family
Set the volume_driver, netapp_storage_family, and netapp_storage_protocol options in cinder.conf as follows. To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
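A minimal sketch of these settings, consistent with the unified driver configuration shown in the driver upgrade section later in this chapter (connection details such as netapp_server_hostname, netapp_login, and netapp_password from the table below are also required):
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi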
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
netapp_login = None
|
(StrOpt) Administrative user account name used to access the storage system or proxy server. |
netapp_lun_ostype = None
|
(StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. |
netapp_lun_space_reservation = enabled
|
(StrOpt) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand. |
netapp_partner_backend_name = None
|
(StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None
|
(StrOpt) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+)
|
(StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_server_hostname = None
|
(StrOpt) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None
|
(IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_size_multiplier = 1.2
|
(FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. |
netapp_storage_family = ontap_cluster
|
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_storage_protocol = None
|
(StrOpt) The storage protocol to be used on the data path with the storage system. |
netapp_transport_type = http
|
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
netapp_vserver = None
|
(StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. |
If you specify an account in netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
Set the volume_driver, netapp_storage_family, and netapp_storage_protocol options in cinder.conf as follows:
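A minimal sketch of these settings for NFS, consistent with the unified driver configuration shown in the driver upgrade section later in this chapter:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs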
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
expiry_thres_minutes = 720
|
(IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. |
netapp_copyoffload_tool_path = None
|
(StrOpt) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file. |
netapp_host_type = None
|
(StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. |
netapp_login = None
|
(StrOpt) Administrative user account name used to access the storage system or proxy server. |
netapp_lun_ostype = None
|
(StrOpt) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. |
netapp_partner_backend_name = None
|
(StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None
|
(StrOpt) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+)
|
(StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_server_hostname = None
|
(StrOpt) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None
|
(IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_storage_family = ontap_cluster
|
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_storage_protocol = None
|
(StrOpt) The storage protocol to be used on the data path with the storage system. |
netapp_transport_type = http
|
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
netapp_vserver = None
|
(StrOpt) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. |
thres_avl_size_perc_start = 20
|
(IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. |
thres_avl_size_perc_stop = 60
|
(IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. |
If you specify an account in netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
NetApp NFS Copy Offload client
- The Image Service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image Service. Both FlexVols must be located within the same cluster.
- The source image from the Image Service has already been cached in an NFS image cache within a Block Storage backend. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.
- Set the
default_store
configuration option tofile
. - Set the
filesystem_store_datadir
configuration option to the path to the Image Service NFS export. - Set the
show_image_direct_url
configuration option toTrue
. - Set the
show_multiple_locations
configuration option toTrue
.ImportantIf configured without the proper policy settings, a non-admin user of the Image Service can replace active image data (that is, switch out a current image without other users knowing). See the OSSN announcement (recommended actions) for configuration information: https://wiki.openstack.org/wiki/OSSN/OSSN-0065 - Set the
filesystem_store_metadata_file
configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image Service, similar to:{ "share_location": "nfs://192.168.0.1/myGlanceExport", "mount_point": "/var/lib/glance/images", "type": "nfs" }
{ "share_location": "nfs://192.168.0.1/myGlanceExport", "mount_point": "/var/lib/glance/images", "type": "nfs" }
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
- Set the
netapp_copyoffload_tool_path
configuration option to the path to the NetApp Copy Offload binary. - Set the
glance_api_version
configuration option to2
.
- The storage system must have Data ONTAP v8.2 or greater installed.
- The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
- To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
To download the NetApp copy offload binary that is used with the netapp_copyoffload_tool_path configuration option, visit the Utility Toolchest page at the NetApp Support portal (login is required).
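As a hedged summary of the options discussed in this section, the Image Service and Block Storage settings might look like the following sketch; the metadata file path and copy offload binary path are placeholders:
# glance-api.conf (Image Service)
default_store = file
filesystem_store_datadir = /var/lib/glance/images
show_image_direct_url = True
show_multiple_locations = True
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json
# cinder.conf (Block Storage back end)
netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64
glance_api_version = 2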
Extra spec | Type | Description | |||||||||||||||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
netapp_raid_type
|
String |
Limit the candidate volume list based on one of the following raid types: raid4, raid_dp .
|
|||||||||||||||||||||||||||||||||||||||||||||||
netapp_disk_type
|
String |
Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
|
|||||||||||||||||||||||||||||||||||||||||||||||
netapp:qos_policy_group [a]
|
String | Specify the name of a QoS policy group, which defines measurable Service Level Objectives, that should be applied to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object within Data ONTAP should be defined before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume. | |||||||||||||||||||||||||||||||||||||||||||||||
netapp_mirrored
|
Boolean | Limit the candidate volume list to only the ones that are mirrored on the storage controller. | |||||||||||||||||||||||||||||||||||||||||||||||
netapp_unmirrored [b]
|
Boolean | Limit the candidate volume list to only the ones that are not mirrored on the storage controller. | |||||||||||||||||||||||||||||||||||||||||||||||
netapp_dedup
|
Boolean | Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller. | |||||||||||||||||||||||||||||||||||||||||||||||
netapp_nodedup [b]
|
Boolean | Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller. | |||||||||||||||||||||||||||||||||||||||||||||||
netapp_compression
|
Boolean | Limit the candidate volume list to only the ones that have compression enabled on the storage controller. | |||||||||||||||||||||||||||||||||||||||||||||||
netapp_nocompression [b]
|
Boolean | Limit the candidate volume list to only the ones that have compression disabled on the storage controller. | |||||||||||||||||||||||||||||||||||||||||||||||
netapp_thin_provisioned
|
Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller. | |||||||||||||||||||||||||||||||||||||||||||||||
netapp_thick_provisioned [b]
|
Boolean | Limit the candidate volume list to only the ones that support thick provisioning on the storage controller. | |||||||||||||||||||||||||||||||||||||||||||||||
[a]
Note that this extra spec has a colon ( : ) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
[b]
In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored ) with a value of true , use the corresponding positive-assertion extra spec (for example, netapp_mirrored ) with a value of false .
|
Set the volume_driver, netapp_storage_family, and netapp_storage_protocol options in cinder.conf as follows. To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
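A minimal sketch of these settings, consistent with the unified driver configuration shown in the driver upgrade section later in this chapter:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi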
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
netapp_login = None
|
(StrOpt) Administrative user account name used to access the storage system or proxy server. |
netapp_partner_backend_name = None
|
(StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None
|
(StrOpt) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+)
|
(StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_server_hostname = None
|
(StrOpt) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None
|
(IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_size_multiplier = 1.2
|
(FloatOpt) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of "reserved_percentage" in the Mitaka release. |
netapp_storage_family = ontap_cluster
|
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_storage_protocol = None
|
(StrOpt) The storage protocol to be used on the data path with the storage system. |
netapp_transport_type = http
|
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
netapp_vfiler = None
|
(StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system. |
Set the volume_driver, netapp_storage_family, and netapp_storage_protocol options in cinder.conf as follows:
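A minimal sketch of these settings for NFS on Data ONTAP operating in 7-Mode, consistent with the unified driver configuration shown in the driver upgrade section later in this chapter:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs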
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
expiry_thres_minutes = 720
|
(IntOpt) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. |
netapp_login = None
|
(StrOpt) Administrative user account name used to access the storage system or proxy server. |
netapp_partner_backend_name = None
|
(StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None
|
(StrOpt) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+)
|
(StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_server_hostname = None
|
(StrOpt) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None
|
(IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_storage_family = ontap_cluster
|
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_storage_protocol = None
|
(StrOpt) The storage protocol to be used on the data path with the storage system. |
netapp_transport_type = http
|
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
netapp_vfiler = None
|
(StrOpt) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system. |
thres_avl_size_perc_start = 20
|
(IntOpt) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. |
thres_avl_size_perc_stop = 60
|
(IntOpt) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. |
2.1.16.3. NetApp E-Series storage family
2.1.16.3.1. NetApp iSCSI configuration for E-Series
- The
use_multipath_for_image_xfer
option should be set toTrue
in thecinder.conf
file within the driver-specific stanza (for example,[myDriver]
). - The
iscsi_use_multipath
option should be set toTrue
in thenova.conf
file within the[libvirt]
stanza.
Set the volume_driver, netapp_storage_family, and netapp_storage_protocol options in cinder.conf as follows. You must override the default value of netapp_storage_family with eseries, and the default value of netapp_storage_protocol with iscsi.
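A minimal sketch of these settings for E-Series; the proxy connection options from the table below (for example, netapp_server_hostname and netapp_webservice_path) are also required:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi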
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
netapp_controller_ips = None | (StrOpt) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning. |
netapp_enable_multiattach = False | (BoolOpt) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host. |
netapp_host_type = None | (StrOpt) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. |
netapp_login = None | (StrOpt) Administrative user account name used to access the storage system or proxy server. |
netapp_partner_backend_name = None | (StrOpt) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None | (StrOpt) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+) | (StrOpt) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_sa_password = None | (StrOpt) Password for the NetApp E-Series storage array. |
netapp_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None | (IntOpt) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_storage_family = ontap_cluster | (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_transport_type = http | (StrOpt) The transport protocol used when communicating with the storage system or proxy server. |
netapp_webservice_path = /devmgr/v2 | (StrOpt) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application. |
2.1.16.4.1. Upgraded NetApp drivers
Driver upgrade configuration
- NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier):
  volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
  NetApp unified driver configuration:
  volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
  netapp_storage_family = ontap_cluster
  netapp_storage_protocol = iscsi
- NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier):
  volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
  NetApp unified driver configuration:
  volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
  netapp_storage_family = ontap_cluster
  netapp_storage_protocol = nfs
- NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):
  volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
  NetApp unified driver configuration:
  volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
  netapp_storage_family = ontap_7mode
  netapp_storage_protocol = iscsi
- NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):
  volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
  NetApp unified driver configuration:
  volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
  netapp_storage_family = ontap_7mode
  netapp_storage_protocol = nfs
2.1.16.4.2. Deprecated NetApp drivers
- NetApp iSCSI driver for clustered Data ONTAP:
  volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
- NetApp NFS driver for clustered Data ONTAP:
  volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
- NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller:
  volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
- NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller:
  volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
2.1.17. NFS driver
2.1.17.1. How the NFS driver works
The NFS driver does not give an instance access to a storage device at the block level. Instead, it creates a file on an NFS share and maps that file to the instance, which emulates a block device. This is similar to the way QEMU stores instance disk files in the /var/lib/nova/instances directory.
2.1.17.2. Enable the NFS driver and related options
To use the NFS driver, set volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
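A minimal sketch of the companion options that typically accompany this setting (the values shown are illustrative and match the walkthrough later in this section; defaults are listed in the table below):
# Illustrative values, not requirements
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/nfs
nfs_sparsed_volumes = True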
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nfs_mount_attempts = 3 | (IntOpt) The number of attempts to mount nfs shares before raising an error. At least one attempt will be made to mount an nfs share, regardless of the value specified. |
nfs_mount_options = None | (StrOpt) Mount options passed to the nfs client. See the nfs man page for details. |
nfs_mount_point_base = $state_path/mnt | (StrOpt) Base dir containing mount points for nfs shares. |
nfs_oversub_ratio = 1.0 | (FloatOpt) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid. Note that this option is deprecated in favor of "max_oversubscription_ratio" and will be removed in the Mitaka release. |
nfs_shares_config = /etc/cinder/nfs_shares | (StrOpt) File with the list of available nfs shares. |
nfs_sparsed_volumes = True | (BoolOpt) Create volumes as sparse files, which take no space. If set to False, the volume is created as a regular file; in that case, volume creation takes considerably longer. |
nfs_used_ratio = 0.95 | (FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. Note that this option is deprecated in favor of "reserved_percentage" and will be removed in the Mitaka release. |
If the nfs_mount_options configuration option contains a request for a specific version of NFS to be used, or if specific options are specified in the shares configuration file specified by the nfs_shares_config configuration option, the mount will be attempted as requested with no subsequent attempts.
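For example, to pin the mounts to NFS version 3 rather than letting the client negotiate (a deliberately simple illustration):
nfs_mount_options = vers=3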
2.1.17.3. How to use the NFS driver
- Access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required; one is usually enough.
- Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
  # cat /etc/cinder/shares.txt
  192.168.1.200:/storage
  192.168.1.201:/storage
  192.168.1.202:/storage
  Comments are allowed in this file. They begin with a #.
- Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in shares.txt. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.
- Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in shares.txt. The name of each directory is a hashed name:
  # ls /var/lib/cinder/nfs/
  ...
  46c5db75dc3a3a50a10bfd1a456a9f3f
  ...
- You can now create volumes as you normally would:
  $ nova volume-create --display-name myvol 5
  # ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
  volume-a8862558-e6d6-4648-b5df-bb84f31c8935
  This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.
NFS driver notes
- cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed, as well as potentially more than one NFS server.
- Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Test accordingly.
- Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files. Note: Regular IO flushing and syncing still applies.
2.1.18. SolidFire
To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf file as follows:
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182 # the address of your MVIP
san_login = sfadmin # your cluster admin login
san_password = sfpassword # your cluster admin password
sf_account_prefix = '' # prefix for tenant account creation on solidfire cluster
Older versions of the driver created accounts prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant. Unfortunately, this account formation resulted in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. The current default implementation does not experience this issue, as no prefix is used. For installations created on a prior release, the old default behavior can be configured by using the keyword "hostname" in sf_account_prefix.
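For example, to retain the legacy per-host account prefix on an installation upgraded from an older release:
sf_account_prefix = hostname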
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
sf_account_prefix = None | (StrOpt) Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (previous default behavior). The default is NO prefix. |
sf_allow_template_caching = True | (BoolOpt) Create an internal cache of copies of images when a bootable volume is created, to eliminate fetching from glance and qemu conversion on subsequent calls. |
sf_allow_tenant_qos = False | (BoolOpt) Allow tenants to specify QOS on create. |
sf_api_port = 443 | (IntOpt) SolidFire API port. Useful if the device API is behind a proxy on a different port. |
sf_emulate_512 = True | (BoolOpt) Set 512 byte emulation on volume creation. |
sf_enable_volume_mapping = True | (BoolOpt) Create an internal mapping of volume IDs and accounts. This optimizes lookups and performance at the expense of memory; very large deployments may want to consider setting this to False. |
sf_svip = None | (StrOpt) Overrides the default cluster SVIP with the one specified. This is required for deployments that have implemented the use of VLANs for iSCSI networks in their cloud. |
sf_template_account_name = openstack-vtemplate | (StrOpt) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if it does not exist). |
2.1.19. Tintri
- Edit the etc/cinder/cinder.conf file and set the cinder.volume.drivers.tintri options (see the sketch after this list for an example stanza).
- Edit the etc/nova/nova.conf file, and set the nfs_mount_options:
  nfs_mount_options=vers=3
- Edit the /etc/cinder/nfs_shares file, and add the Tintri VMstore mount points associated with the configured VMstore management IP in the cinder.conf file:
  {vmstore_data_ip}:/tintri/{submount1}
  {vmstore_data_ip}:/tintri/{submount2}
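A sketch of the cinder.volume.drivers.tintri options referenced in the first step, assembled from the options in the table below; the values are placeholders, and the exact driver class name should be confirmed against your release:
# Driver class name below is an assumption based on the cinder.volume.drivers.tintri module
volume_driver = cinder.volume.drivers.tintri.TintriDriver
tintri_server_hostname = {management_ip}
tintri_server_username = {username}
tintri_server_password = {password}
tintri_api_version = v310
nfs_shares_config = /etc/cinder/nfs_shares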
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
tintri_api_version = v310 | (StrOpt) API version for the storage system |
tintri_server_hostname = None | (StrOpt) The hostname (or IP address) for the storage system |
tintri_server_password = None | (StrOpt) Password for the storage system |
tintri_server_username = None | (StrOpt) User name for the storage system |
2.1.20. Violin Memory 7000 Series FSP volume driver
2.1.20.1. System requirements
- Violin 7300/7700 series FSP with:
- Concerto OS version 7.5.3 or later
- Fibre channel host interfaces
- The Violin block storage driver: This driver implements the block storage API calls. The driver is included with the OpenStack Liberty release.
- The vmemclient library: This is the Violin Array Communications library for the Flash Storage Platform, accessed through a REST-like interface. The client can be installed using the python pip installer tool. Further information on vmemclient can be found on PyPI.
  pip install vmemclient
2.1.20.2. Supported operations
- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Copy a volume to an image.
- Clone a volume.
- Extend a volume.
Note: Listed operations are supported for thick, thin, and dedup LUNs, with the exception of cloning. Cloning operations are supported only on thick LUNs.
2.1.20.3. Driver configuration
2.1.20.3.1. Fibre channel configuration
To configure the driver for Fibre Channel, set the backend options in the cinder.conf configuration file, replacing the variables using the guide in the following section:
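A sketch of such a stanza, using the VMEM_* variables described in the next section; the volume_driver class path and the volume_backend_name shown here are assumptions to confirm against your release:
# volume_driver path and backend name below are assumptions; confirm against your release
volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
volume_backend_name = vmem_violinfsp
extra_capabilities = VMEM_CAPABILITIES
san_ip = VMEM_MGMT_IP
san_login = VMEM_USER_NAME
san_password = VMEM_PASSWORD
use_multipath_for_image_xfer = true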
2.1.20.3.2. Configuration parameters
- VMEM_CAPABILITIES
- User-defined capabilities: a JSON-formatted string specifying key-value pairs (string value). The two capabilities specifically supported are dedup and thin. Listing them in the cinder.conf file indicates that this backend can be selected for creating LUNs whose associated volume type has dedup or thin specified in its extra_specs. For example, if the FSP is configured to support dedup LUNs, set the associated driver capabilities to: {"dedup":"True","thin":"True"}.
- VMEM_MGMT_IP
- External IP address or host name of the Violin 7300 Memory Gateway. This can be an IP address or host name.
- VMEM_USER_NAME
- Log-in user name for the Violin 7300 Memory Gateway or 7700 FSP controller. This user must have administrative rights on the array or controller.
- VMEM_PASSWORD
- Log-in user's password.
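As an illustration of the volume-type side of that matching, a sketch using the standard volume type commands; the type name, the "dedup" extra-spec key, and the "vmem_violinfsp" backend name are assumptions (the backend name comes from the stanza sketch above) and should be verified against the driver documentation for your release:
cinder type-create vmem-dedup
# "dedup" as an extra-spec key and "vmem_violinfsp" as the backend name are assumptions from the sketch above
cinder type-key vmem-dedup set volume_backend_name=vmem_violinfsp dedup=True
cinder create --volume-type vmem-dedup --display-name dedup-vol 10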