Custom Block Storage Back End Deployment Guide
A Guide to Deploying a Custom Block Storage Back End in a Red Hat OpenStack Platform Overcloud
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Introduction
The Red Hat OpenStack Platform (RHOSP) director is a toolset for installing and managing a complete RHOSP environment. It is based primarily on the upstream TripleO (OpenStack-on-OpenStack) project. The primary objective of director is to fully orchestrate a functional, enterprise-grade RHOSP deployment with minimal manual configuration. It helps to address many of the issues inherent in manually configuring individual OpenStack components.
The end-result RHOSP deployment provided by director is called the overcloud. The overcloud contains all the components that provide services to end users, including Block Storage. This document provides guidance on how to deploy custom back ends to the Block Storage service of the overcloud.
This document presumes existing knowledge of concepts relating to manual Block Storage configuration. In a test deployment of OpenStack (for example, through Packstack), configuring this service involves editing the /etc/cinder/cinder.conf file of its host node. Most of the Block Storage settings in that file are documented in better detail elsewhere; this document describes how to apply those same settings to the overcloud to attach a custom back end.
This procedure has been tested successfully in limited use cases. Ensure that you test your planned deployment on a non-production environment first. If you have any questions, contact Red Hat support.
1.1. Custom Back Ends
For the purposes of this document, a custom back end is defined as a storage server, appliance, or configuration that is not yet fully integrated into the Red Hat OpenStack Platform director. Some supported Block Storage back ends are already integrated into director, which means that pre-configured director files are already provided. An integrated back end can be configured and deployed to the overcloud through these files. Examples of integrated back ends include Red Hat Ceph and single-back-end configurations of Dell EMC PS Series, Dell Storage Center, and NetApp appliances.
Further, some storage appliances already integrated into director only support a single-instance back end. For example, the pre-configured director files for Dell Storage Center only support the deployment of a single back end. Deploying multiple back end instances of this appliance requires a custom configuration, as demonstrated in this document.
Although you can manually configure the Block Storage service by directly editing the /etc/cinder/cinder.conf file of its node, director overwrites your configuration when you run the openstack overcloud deploy command. For this reason, the recommended method for deploying a Block Storage back end is through director, which ensures that your settings persist through overcloud deployments and updates.
If a back end configuration is already fully integrated, you can edit and invoke its packaged environment files. With custom back ends, however, you must write your own environment file. For more information, see Including Environment Files in Overcloud Creation in the Director Installation and Usage guide. This document includes an annotated sample that you can edit for your own deployment: /home/stack/templates/custom-env.yaml. This sample file is suitable for configuring the Block Storage service to use two NetApp back ends.
1.2. Requirements
Prerequisites
- You have prior knowledge of manually configuring the Block Storage service and of the back end that you want to deploy.
- If you are using third-party back end appliances, they must already be properly configured as storage repositories.
- The overcloud has already been deployed through director. See the Director Installation and Usage guide.
- You have the username and password of an account with elevated privileges. You can use the same stack user account that you created to deploy the overcloud; the stack user is created for this purpose. See Creating the stack user in the Director Installation and Usage guide.
- You have already mapped out the resulting configuration that you want for the Block Storage back end in /etc/cinder/cinder.conf.
Chapter 2. Process Description
The Block Storage service’s settings are stored in /etc/cinder/cinder.conf; these settings include back end definitions. Most third-party back ends usable with (or even supported by) the Block Storage service provide setup instructions that involve editing /etc/cinder/cinder.conf settings. As mentioned in Chapter 1, Introduction, doing so configures the Block Storage service; however, those settings are overwritten during future overcloud updates.
Regardless, any documentation relating to manual configuration through /etc/cinder/cinder.conf is still useful for overcloud deployments. Director, after all, applies the same configuration to /etc/cinder/cinder.conf, albeit through heat. As such, planning the back end configuration requires that you:
- Thoroughly plan the Block Storage back end configuration that you want, and
- Map out the resulting /etc/cinder/cinder.conf file for this configuration.
After you map out the resulting /etc/cinder/cinder.conf file, create the environment file that orchestrates the back end settings. Chapter 3, Create the Environment File describes this step in greater detail, using the sample file /home/stack/templates/custom-env.yaml. Having the environment file handy helps ensure that the back end settings persist through future overcloud updates.
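The correspondence between the two formats is direct. The following minimal sketch, using one setting taken from the sample in Chapter 3, Create the Environment File, shows how a planned cinder.conf entry maps to a section/setting key in the environment file:
# A planned /etc/cinder/cinder.conf entry such as:
#
#   [netapp1]
#   netapp_login = root
#
# becomes the following key under cinder::config::cinder_config:
parameter_defaults:
  ControllerExtraConfig:
    cinder::config::cinder_config:
      netapp1/netapp_login:
        value: root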
Chapter 3. Create the Environment File
The environment file contains the settings for each back end that you want to define, and other relevant settings. For more information about environment files, see Environment Files in the Advanced Overcloud Customization guide.
The following sample environment file defines two NetApp back ends: netapp1 and netapp2:
/home/stack/templates/custom-env.yaml
parameter_defaults: # 1
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: false
  GlanceBackend: file # 2
  ControllerExtraConfig: # 3
    cinder::config::cinder_config:
      netapp1/volume_driver: # 4
        value: cinder.volume.drivers.netapp.common.NetAppDriver
      netapp1/netapp_storage_family:
        value: ontap_7mode
      netapp1/netapp_storage_protocol:
        value: iscsi
      netapp1/netapp_server_hostname:
        value: 10.35.64.11
      netapp1/netapp_server_port:
        value: 80
      netapp1/netapp_login:
        value: root
      netapp1/netapp_password:
        value: p@$$w0rd
      netapp1/volume_backend_name:
        value: netapp1
      netapp2/volume_driver: # 5
        value: cinder.volume.drivers.netapp.common.NetAppDriver # 6
      netapp2/netapp_storage_family:
        value: ontap_7mode
      netapp2/netapp_storage_protocol:
        value: iscsi
      netapp2/netapp_server_hostname:
        value: 10.35.64.11
      netapp2/netapp_server_port:
        value: 80
      netapp2/netapp_login:
        value: root
      netapp2/netapp_password:
        value: p@$$w0rd
      netapp2/volume_backend_name:
        value: netapp2
    cinder_user_enabled_backends: ['netapp1','netapp2'] # 7
1. The following parameters are set to false, and thereby disable other back end types:
    - CinderEnableIscsiBackend: other iSCSI back ends.
    - CinderEnableRbdBackend: Red Hat Ceph.
    - CinderEnableNfsBackend: NFS.
    - NovaEnableRbdBackend: ephemeral Red Hat Ceph storage.
2. The GlanceBackend parameter sets what the Image service should use to store images. The following values are supported:
    - file: store images in /var/lib/glance/images on each Controller node.
    - swift: use the Object Storage service for image storage.
    - cinder: use the Block Storage service for image storage.
3. ControllerExtraConfig defines custom settings that will be applied to all Controller nodes. The cinder::config::cinder_config class means that the settings should be applied to the Block Storage (cinder) service. This, in turn, means that the back end settings will ultimately end up in the /etc/cinder/cinder.conf file of each Controller node.
4. The netapp1/volume_driver and netapp2/volume_driver settings follow the section/setting syntax. With the Block Storage service, each back end is defined in its own section in /etc/cinder/cinder.conf. Each setting that uses the netapp1 prefix is defined in a new [netapp1] back end section.
5. Likewise, netapp2 settings are defined in a separate [netapp2] section.
6. The value prefix configures the preceding setting.
7. The cinder_user_enabled_backends class sets and enables custom back ends. As the name implies, this class should only be used for user-enabled back ends, specifically those defined in the cinder::config::cinder_config class.
Do not use cinder_user_enabled_backends to list back ends that you can enable natively through director. These include Red Hat Ceph, NFS, and single back ends for supported NetApp or Dell appliances. For example, if you are also enabling a Red Hat Ceph back end, do not list it in cinder_user_enabled_backends; rather, enable it by setting CinderEnableRbdBackend: true, as shown in the sketch after this list.
For more information on defining a Red Hat Ceph back end for OpenStack Block Storage, see Deploying an Overcloud with Containerized Red Hat Ceph.
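As a short sketch of that note, a deployment that also uses Red Hat Ceph for Block Storage would keep the Ceph back end out of cinder_user_enabled_backends and enable it through its own parameter instead; the custom NetApp entries from the sample above stay unchanged:
parameter_defaults:
  # Enable the natively integrated Ceph back end through its own parameter.
  CinderEnableRbdBackend: true
  ControllerExtraConfig:
    # Only the custom back ends are listed here.
    cinder_user_enabled_backends: ['netapp1','netapp2']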
Chapter 4, Deploy the Configured Back Ends describes how to use the environment file /home/stack/templates/custom-env.yaml to orchestrate the deployment of the custom back ends. To see the resulting /etc/cinder/cinder.conf settings from /home/stack/templates/custom-env.yaml, see Section A.2, “Resulting Configuration from Sample Environment File”.
Chapter 4. Deploy the Configured Back Ends
When you have created the custom-env.yaml file in /home/stack/templates/, log in as the stack user. Then, deploy the custom back end configuration by running:
$ openstack overcloud deploy --templates -e /home/stack/templates/custom-env.yaml
If you passed any extra environment files when you created the overcloud, pass them again here by using the -e option to avoid making undesired changes to the overcloud. For more information, see Modifying the Overcloud Environment in the Director Installation and Usage guide.
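For example, if the overcloud was originally created with additional environment files, the deployment command might look like the following sketch, where network-environment.yaml is a hypothetical placeholder for whatever environment files you originally passed:
$ openstack overcloud deploy --templates \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/custom-env.yaml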
Test the back end after director orchestration is complete. See Chapter 5, Test the Configured Back End.
Chapter 5. Test the Configured Back End
After you deploy the back ends to the overcloud, test if you can successfully create volumes on them. You must load the necessary environment variables first. The variables are defined in /home/stack/overcloudrc by default.
- To load the variables, run the following command as the stack user:
$ source /home/stack/overcloudrc
For more information, see Accessing the overcloud in the Director Installation and Usage guide.
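To confirm that the variables are loaded, you can check for the OS_ credential variables that the overcloudrc file exports. This is an optional sanity check; the exact list of variables depends on your deployment:
$ env | grep OS_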
- Create a volume type for each back end. Log in to the Controller node of the overcloud as the stack user and run the following commands:
$ cinder type-create backend1
$ cinder type-create backend2
These commands create the volume types backend1 and backend2, one for each back end defined through the cinder::config::cinder_config class of the environment file (/home/stack/templates/custom-env.yaml).
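You can optionally confirm that both volume types were created by listing them. This is a quick check; the output columns can vary with the client version:
$ cinder type-list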
- Map each volume type to the volume_backend_name of a back end enabled through the cinder_user_enabled_backends class of the environment file. The following commands map the volume type backend1 to netapp1 and backend2 to netapp2:
$ cinder type-key backend1 set volume_backend_name=netapp1
$ cinder type-key backend2 set volume_backend_name=netapp2
- You can now test each back end. Create a 1GB volume named netappvolume_1 on the netapp1 back end by invoking the backend1 volume type:
$ cinder create --volume-type backend1 --display_name netappvolume_1 1
- Create a similar volume on the netapp2 back end by invoking the backend2 volume type:
$ cinder create --volume-type backend2 --display_name netappvolume_2 1
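To confirm that both volumes were created successfully, you can list them and check that each reaches the available status:
$ cinder list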
Appendix A. Appendix
A.1. The stack user
You can use the stack user account to run commands that require elevated privileges, such as deploying the back end or loading environment variables for accessing the overcloud. For more information about the stack user, see Creating the stack user in the Director Installation and Usage guide.
A.2. Resulting Configuration from Sample Environment File
The environment file in Chapter 3, Create the Environment File, configures the Block Storage service to use two NetApp back ends. The following snippet displays the relevant settings:
enabled_backends = netapp1,netapp2

[netapp1]
volume_backend_name=netapp1
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_login=root
netapp_storage_protocol=iscsi
netapp_password=p@$$w0rd
netapp_storage_family=ontap_7mode
netapp_server_port=80
netapp_server_hostname=10.35.64.11

[netapp2]
volume_backend_name=netapp2
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_login=root
netapp_storage_protocol=iscsi
netapp_password=p@$$w0rd
netapp_storage_family=ontap_7mode
netapp_server_port=80
netapp_server_hostname=10.35.64.11
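If you want to verify the rendered settings after deployment, you can inspect the Block Storage configuration on a Controller node, for example with grep. This is an optional check; on containerized deployments, the file consumed by the cinder services may be rendered under a host config-data path rather than directly at /etc/cinder/cinder.conf:
$ sudo grep -A 10 '^\[netapp1\]' /etc/cinder/cinder.conf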