Chapter 17. Storage configuration
This chapter outlines several methods that you can use to configure storage options for your overcloud.
The overcloud uses local ephemeral storage and Logical Volume Manager (LVM) storage as the default storage options. Local ephemeral storage is supported in production environments, but LVM storage is not.
17.1. Configuring NFS storage
You can configure the overcloud to use shared NFS storage.
17.1.1. Supported configurations and limitations
Supported NFS storage
- Red Hat recommends that you use a certified storage back end and driver. Red Hat does not recommend that you use NFS storage that comes from the generic NFS back end, because its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
- For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. Red Hat OpenStack Platform (RHOSP) does not support earlier versions of NFS.
Unsupported NFS configuration
RHOSP does not support the NetApp feature NAS secure, because it interferes with normal volume operations. Director disables the feature by default. Therefore, do not edit the following heat parameters that control whether an NFS back end or a NetApp NFS Block Storage back end supports NAS secure:
- CinderNetappNasSecureFileOperations
- CinderNetappNasSecureFilePermissions
- CinderNasSecureFileOperations
- CinderNasSecureFilePermissions
Limitations when using NFS shares
- Instances that have a swap disk cannot be resized or rebuilt when the back end is an NFS share.
17.1.2. Configuring NFS storage
You can configure the overcloud to use shared NFS storage.
Procedure
- Create an environment file to configure your NFS storage, for example, nfs_storage.yaml. Add the following parameters to your new environment file to configure NFS storage:

  parameter_defaults:
    CinderEnableIscsiBackend: false
    CinderEnableNfsBackend: true
    GlanceBackend: file
    CinderNfsServers: 192.0.2.230:/cinder
    GlanceNfsEnabled: true
    GlanceNfsShare: 192.0.2.230:/glance

  Note: Do not configure the CinderNfsMountOptions and GlanceNfsOptions parameters, as their default values enable NFS mount options that are suitable for most Red Hat OpenStack Platform (RHOSP) environments. You can see the value of the GlanceNfsOptions parameter in the environments/storage/glance-nfs.yaml file. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.

- Add your NFS storage environment file to the stack with your other environment files and deploy the overcloud:

  (undercloud)$ openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /home/stack/templates/nfs_storage.yaml
17.2. Configuring Ceph Storage
Use one of the following methods to integrate Red Hat Ceph Storage into a Red Hat OpenStack Platform overcloud.
- Creating an overcloud with its own Ceph Storage cluster
- You can create a Ceph Storage cluster during the creation of the overcloud. Director creates a set of Ceph Storage nodes that use Ceph Object Storage Daemons (OSDs) to store data. Director also installs the Ceph Monitor service on the overcloud Controller nodes. This means that if an organization creates an overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service. For more information, see Deploying an Overcloud with Containerized Red Hat Ceph.
- Integrating an existing Ceph Storage cluster into an overcloud
- If you have an existing Ceph Storage cluster, you can integrate this cluster into a Red Hat OpenStack Platform overcloud during deployment. This means that you can manage and scale the cluster outside of the overcloud configuration. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster.
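In both cases, you enable the integration by including the appropriate Ceph environment file in the overcloud deployment command. The following sketch is illustrative only: it assumes a release that ships the ceph-ansible environment files, and the full list of environment files and parameters that your deployment needs is covered in the guides referenced above.

  # Overcloud with its own director-deployed Ceph Storage cluster:
  (undercloud)$ openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml

  # Overcloud that integrates an existing, externally managed Ceph Storage cluster:
  (undercloud)$ openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml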
17.3. Using an external Object Storage cluster
You can reuse an external OpenStack Object Storage (swift) cluster by disabling the default Object Storage service deployment on your Controller nodes. This disables both the proxy and storage services for Object Storage and configures haproxy and OpenStack Identity (keystone) to use the given external Object Storage endpoint.
You must manage user accounts on the external Object Storage (swift) cluster manually.
Prerequisites
- You need the endpoint IP address of the external Object Storage cluster, as well as the authtoken password from the external Object Storage proxy-server.conf file. You can find the endpoint IP address by using the openstack endpoint list command.
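For example, the following command, run against the external cluster, lists its Object Storage endpoints; the output has the same format as the verification example later in this chapter:

  $ openstack endpoint list --service object-store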
Procedure
- Create a new file named swift-external-params.yaml with the following content. Replace EXTERNAL.IP:PORT with the IP address and port of the external proxy, and replace AUTHTOKEN with the authtoken password for the external proxy on the SwiftPassword line:

  parameter_defaults:
    ExternalPublicUrl: 'https://EXTERNAL.IP:PORT/v1/AUTH_%(tenant_id)s'
    ExternalInternalUrl: 'http://192.168.24.9:8080/v1/AUTH_%(tenant_id)s'
    ExternalAdminUrl: 'http://192.168.24.9:8080'
    ExternalSwiftUserTenant: 'service'
    SwiftPassword: AUTHTOKEN
- Save this file as swift-external-params.yaml.
- Deploy the overcloud with the following external Object Storage service environment files, as well as any other environment files that are relevant to your deployment:

  openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml
17.4. Configuring Ceph Object Store to use external Ceph Object Gateway
Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).
For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.
Procedure
- Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml, and adjust the values to suit your deployment:

  parameter_defaults:
    ExternalSwiftPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s'
    ExternalSwiftInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
    ExternalSwiftAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
    ExternalSwiftUserTenant: 'service'
    SwiftPassword: 'choose_a_random_password'
  Note: The example code snippet contains parameter values that might differ from the values that you use in your environment:
  - The default port where the remote RGW instance listens is 8080. The port might be different depending on how the external RGW is configured.
  - The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service by using the rgw_keystone_admin_password setting.
- Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment:

  rgw_keystone_api_version = 3
  rgw_keystone_url = http://<public Keystone endpoint>:5000/
  rgw_keystone_accepted_roles = member, Member, admin
  rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
  rgw_keystone_admin_domain = default
  rgw_keystone_admin_project = service
  rgw_keystone_admin_user = swift
  rgw_keystone_admin_password = <password_as_defined_in_the_environment_parameters>
  rgw_keystone_implicit_tenants = true
  rgw_keystone_revocation_interval = 0
  rgw_s3_auth_use_keystone = true
  rgw_swift_versioning_enabled = true
  rgw_swift_account_in_url = true
  Note: Director creates the following roles and users in the Identity service by default:
- rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
- rgw_keystone_admin_domain: default
- rgw_keystone_admin_project: service
- rgw_keystone_admin_user: swift
- Deploy the overcloud with the additional environment files, along with any other environment files that are relevant to your deployment:

  openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml
Verification
- Log in to the undercloud as the stack user.
- Source the overcloudrc file:

  $ source ~/overcloudrc
- Verify that the endpoints exist in the Identity service (keystone):

  $ openstack endpoint list --service object-store
  +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
  | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                                |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
  | 233b7ea32aaf40c1ad782c696128aa0e | regionOne | swift        | object-store | True    | admin     | http://192.168.24.3:8080/v1/AUTH_%(project_id)s    |
  | 4ccde35ac76444d7bb82c5816a97abd8 | regionOne | swift        | object-store | True    | public    | https://192.168.24.2:13808/v1/AUTH_%(project_id)s  |
  | b4ff283f445348639864f560aa2b2b41 | regionOne | swift        | object-store | True    | internal  | http://192.168.24.3:8080/v1/AUTH_%(project_id)s    |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------+
- Create a test container:

  $ openstack container create <testcontainer>
  +---------------------------------------+---------------+------------------------------------+
  | account                               | container     | x-trans-id                         |
  +---------------------------------------+---------------+------------------------------------+
  | AUTH_2852da3cf2fc490081114c434d1fc157 | testcontainer | tx6f5253e710a2449b8ef7e-005f2d29e8 |
  +---------------------------------------+---------------+------------------------------------+
- Upload a configuration file to confirm that you can upload data to the container:

  $ openstack object create testcontainer undercloud.conf
  +-----------------+---------------+----------------------------------+
  | object          | container     | etag                             |
  +-----------------+---------------+----------------------------------+
  | undercloud.conf | testcontainer | 09fcffe126cac1dbac7b89b8fd7a3e4b |
  +-----------------+---------------+----------------------------------+
- Delete the test container:

  $ openstack container delete -r <testcontainer>
17.5. Configuring cinder back end for the Image service
Use the GlanceBackend parameter to set the back end that the Image service uses to store images.
The default maximum number of volumes you can create for a project is 10.
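Because each image stored in the cinder back end is kept as a Block Storage volume, you might need to raise the volume quota for the project that owns those volumes (the service project, by default) if you store more than 10 images. The following command is a sketch; the limit of 50 is an arbitrary example:

  $ openstack quota set --volumes 50 service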
Procedure
- To configure cinder as the Image service back end, add the following line to an environment file:

  parameter_defaults:
    GlanceBackend: cinder

- If the cinder back end is enabled, the following parameters and values are set by default:

  cinder_store_auth_address = http://172.17.1.19:5000/v3
  cinder_store_project_name = service
  cinder_store_user_name = glance
  cinder_store_password = ****secret****
- To use a custom user name, or any custom value for the cinder_store_ parameters, add the ExtraConfig parameter to parameter_defaults and include your custom values:

  ExtraConfig:
    glance::config::api_config:
      glance_store/cinder_store_auth_address:
        value: "%{hiera('glance::api::authtoken::auth_url')}/v3"
      glance_store/cinder_store_user_name:
        value: <user-name>
      glance_store/cinder_store_password:
        value: "%{hiera('glance::api::authtoken::password')}"
      glance_store/cinder_store_project_name:
        value: "%{hiera('glance::api::authtoken::project_name')}"
17.6. Configuring the maximum number of storage devices to attach to one instance
By default, you can attach an unlimited number of storage devices to a single instance. To limit the maximum number of devices, add the max_disk_devices_to_attach parameter to your Compute environment file. Use the following example to change the value of max_disk_devices_to_attach to "30":

parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      compute/max_disk_devices_to_attach:
        value: '30'
Guidelines and considerations
- The number of storage disks supported by an instance depends on the bus that the disk uses. For example, the IDE disk bus is limited to 4 attached devices.
- Changing the max_disk_devices_to_attach on a Compute node with active instances can cause rebuilds to fail if the maximum number is lower than the number of devices already attached to instances. For example, if instance A has 26 devices attached and you change max_disk_devices_to_attach to 20, a request to rebuild instance A fails.
- During cold migration, the configured maximum number of storage devices is enforced only on the source for the instance that you want to migrate. The destination is not checked before the move. This means that if Compute node A has 26 attached disk devices, and Compute node B has a configured maximum of 20 attached disk devices, a cold migration of an instance with 26 attached devices from Compute node A to Compute node B succeeds. However, a subsequent request to rebuild the instance on Compute node B fails because 26 devices are already attached, which exceeds the configured maximum of 20.
- The configured maximum is not enforced on shelved offloaded instances, as they have no Compute node.
- Attaching a large number of disk devices to instances can degrade performance on the instance. Tune the maximum number based on the boundaries of what your environment can support.
- Instances with machine type Q35 can attach a maximum of 500 disk devices.
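To apply the setting, include the environment file that contains the max_disk_devices_to_attach parameter when you deploy or redeploy the overcloud. The file name in this sketch is illustrative:

  (undercloud)$ openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /home/stack/templates/max-disk-devices.yaml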
17.7. Improving scalability with Image service caching
Use the glance-api caching mechanism to store copies of images on Image service (glance) API servers and retrieve them automatically to improve scalability. With Image service caching, glance-api can run on multiple hosts. This means that it does not need to retrieve the same image from back end storage multiple times. Image service caching does not affect any Image service operations.
Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates:
Procedure
- In an environment file, set the value of the GlanceCacheEnabled parameter to true, which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template:

  parameter_defaults:
    GlanceCacheEnabled: true
- Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud.
- Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes:

  parameter_defaults:
    ControllerExtraConfig:
      glance::cache::pruner::minute: '*/5'
Adjust the frequency according to your needs to avoid file system full scenarios. Consider the following elements when you choose an alternative frequency:
- The size of the files that you want to cache in your environment.
- The amount of available file system space.
- The frequency at which the environment caches images.
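For reference, a single environment file that combines the caching settings from this procedure might look like the following; the file name glance-cache.yaml and the 5-minute pruner interval are illustrative:

  # /home/stack/templates/glance-cache.yaml (illustrative path)
  parameter_defaults:
    GlanceCacheEnabled: true
    ControllerExtraConfig:
      glance::cache::pruner::minute: '*/5'

Include this file with -e /home/stack/templates/glance-cache.yaml in the openstack overcloud deploy command when you redeploy the overcloud.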
17.8. Configuring third party storage
The following environment files are present in the core heat template collection /usr/share/openstack-tripleo-heat-templates.
- Dell EMC Storage Center
  Deploys a single Dell EMC Storage Center back end for the Block Storage (cinder) service.
  The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml.
- Dell EMC PS Series
  Deploys a single Dell EMC PS Series back end for the Block Storage (cinder) service.
  The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellps-config.yaml.
- NetApp Block Storage
  Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service.
  The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml.
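To enable one of these back ends, include its environment file in the overcloud deployment command, together with any environment file that sets the back-end-specific parameters for your device. The following sketch uses the NetApp environment file listed above and follows the same pattern as the other deployment commands in this chapter:

  (undercloud)$ openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml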