Chapter 17. Storage configuration
This chapter outlines several methods that you can use to configure storage options for your overcloud.
The overcloud uses local ephemeral storage and Logical Volume Manager (LVM) storage as its default storage options. Local ephemeral storage is supported in production environments, but LVM storage is not.
17.1. Configuring NFS storage
You can configure the overcloud to use shared NFS storage.
17.1.1. Supported configurations and limitations
Supported NFS storage
- Red Hat recommends that you use a certified storage back end and driver. Red Hat does not recommend that you use NFS storage that comes from the generic NFS back end, because its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
- For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. Red Hat OpenStack Platform (RHOSP) does not support earlier versions of NFS.
Unsupported NFS configuration
RHOSP does not support the NetApp feature NAS secure, because it interferes with normal volume operations. Director disables the feature by default. Therefore, do not edit the following heat parameters that control whether an NFS back end or a NetApp NFS Block Storage back end supports NAS secure:
- CinderNetappNasSecureFileOperations
- CinderNetappNasSecureFilePermissions
- CinderNasSecureFileOperations
- CinderNasSecureFilePermissions
Limitations when using NFS shares
- Instances that have a swap disk cannot be resized or rebuilt when the back end is an NFS share.
17.1.2. Configuring NFS storage
You can configure the overcloud to use shared NFS storage.
Procedure
-
Create an environment file to configure your NFS storage, for example, nfs_storage.yaml. Add the following parameters to your new environment file to configure NFS storage:
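For example, a minimal nfs_storage.yaml might look like the following. This is a sketch that assumes a single NFS server backing both the Block Storage (cinder) and Image (glance) services; the IP address and export paths are placeholders that you replace with your own values:
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableNfsBackend: true
  CinderNfsServers: 192.0.2.230:/cinder
  GlanceBackend: file
  GlanceNfsEnabled: true
  GlanceNfsShare: 192.0.2.230:/glance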
Note
Do not configure the CinderNfsMountOptions and GlanceNfsOptions parameters, as their default values enable NFS mount options that are suitable for most Red Hat OpenStack Platform (RHOSP) environments. You can see the value of the GlanceNfsOptions parameter in the environments/storage/glance-nfs.yaml file. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.
-
Add your NFS storage environment file to the stack with your other environment files and deploy the overcloud:
(undercloud)$ openstack overcloud deploy --templates \
-e [your environment files] \
-e /home/stack/templates/nfs_storage.yaml
17.2. Configuring Ceph Storage
Use one of the following methods to integrate Red Hat Ceph Storage into a Red Hat OpenStack Platform overcloud.
- Creating an overcloud with its own Ceph Storage cluster
- You can create a Ceph Storage cluster during the creation of the overcloud. Director creates a set of Ceph Storage nodes that use Ceph OSDs to store data. Director also installs the Ceph Monitor service on the overcloud Controller nodes. This means that if an organization creates an overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service. For more information, see Deploying an Overcloud with Containerized Red Hat Ceph.
- Integrating an existing Ceph Storage cluster into an overcloud
- If you have an existing Ceph Storage cluster, you can integrate this cluster into a Red Hat OpenStack Platform overcloud during deployment. This means that you can manage and scale the cluster outside of the overcloud configuration. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster.
17.3. Using an external Object Storage cluster
You can reuse an external OpenStack Object Storage (swift) cluster by disabling the default Object Storage service deployment on your Controller nodes. This disables both the proxy and storage services for Object Storage and configures haproxy and the OpenStack Identity service (keystone) to use the given external Object Storage endpoint.
You must manage user accounts on the external Object Storage (swift) cluster manually.
Prerequisites
-
You need the endpoint IP address of the external Object Storage cluster, as well as the authtoken password from the external Object Storage proxy-server.conf file. You can find this information by using the openstack endpoint list command.
Procedure
-
Create a new file named swift-external-params.yaml with the following content:
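The exact contents depend on your deployment; the following sketch assumes the ExternalSwift* heat parameters that the swift-external.yaml environment file consumes, with EXTERNAL.IP:PORT and AUTHTOKEN as placeholders:
parameter_defaults:
  ExternalSwiftPublicUrl: 'http://EXTERNAL.IP:PORT/v1/AUTH_%(tenant_id)s'
  ExternalSwiftInternalUrl: 'http://EXTERNAL.IP:PORT/v1/AUTH_%(tenant_id)s'
  ExternalSwiftAdminUrl: 'http://EXTERNAL.IP:PORT'
  ExternalSwiftUserTenant: 'service'
  SwiftPassword: AUTHTOKEN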
-
Replace EXTERNAL.IP:PORT with the IP address and port of the external proxy, and replace AUTHTOKEN with the authtoken password for the external proxy on the SwiftPassword line.
-
Save this file as swift-external-params.yaml.
-
Deploy the overcloud with the following external Object Storage service environment files, as well as any other environment files that are relevant to your deployment:
openstack overcloud deploy --templates \
-e [your environment files] \
-e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
-e swift-external-params.yaml
17.4. Configuring Ceph Object Store to use external Ceph Object Gateway
Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).
For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.
Procedure
-
Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml, and adjust the values to suit your deployment:
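The following sketch assumes the same ExternalSwift* heat parameters that are used for an external Object Storage cluster, pointed at the RGW Swift API on the default port 8080; the endpoint host and password are placeholders:
parameter_defaults:
  ExternalSwiftPublicUrl: 'http://<rgw_endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftInternalUrl: 'http://<rgw_endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftAdminUrl: 'http://<rgw_endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftUserTenant: 'service'
  SwiftPassword: '<swift_password>'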
Note
The example code snippet contains parameter values that might differ from values that you use in your environment:
-
The default port where the remote RGW instance listens is 8080. The port might be different depending on how the external RGW is configured.
-
The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service by using the rgw_keystone_admin_password.
Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment:
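The following is a sketch of the relevant options, using standard Ceph Object Gateway Keystone settings; the Identity service endpoint, accepted roles, and password are placeholders that you adjust to match your environment and the SwiftPassword value from your environment file:
rgw_keystone_api_version = 3
rgw_keystone_url = http://<keystone_endpoint>:5000/
rgw_keystone_accepted_roles = member, admin
rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
rgw_keystone_admin_domain = default
rgw_keystone_admin_project = service
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = <swift_password>
rgw_keystone_implicit_tenants = true
rgw_s3_auth_use_keystone = true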
Note
Director creates the following roles and users in the Identity service by default:
- rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
- rgw_keystone_admin_domain: default
- rgw_keystone_admin_project: service
- rgw_keystone_admin_user: swift
Deploy the overcloud with the additional environment files, along with any other environment files that are relevant to your deployment:
openstack overcloud deploy --templates \
-e <your_environment_files> \
-e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
-e swift-external-params.yaml
Verification
-
Log in to the undercloud as the stack user.
-
Source the overcloudrc file:
$ source ~/overcloudrc
-
Verify that the endpoints exist in the Identity service (keystone):
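For example, assuming the Object Storage endpoints are registered with the object-store service type:
$ openstack endpoint list | grep object-store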
-
Create a test container:
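For example, the container name here is only a placeholder:
$ openstack container create <testcontainer>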
-
Create a configuration file to confirm that you can upload data to the container:
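For example, create a small file and upload it; the file and container names here are only placeholders:
$ touch testfile.conf
$ openstack object create <testcontainer> testfile.conf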
-
Delete the test container:
$ openstack container delete -r <testcontainer>
17.5. Configuring cinder back end for the Image service
Use the GlanceBackend parameter to set the back end that the Image service uses to store images.
The default maximum number of volumes you can create for a project is 10.
Procedure
-
To configure cinder as the Image service back end, add the following line to an environment file:
parameter_defaults:
  GlanceBackend: cinder
-
If the cinder back end is enabled, the following parameters and values are set by default:
cinder_store_auth_address = http://172.17.1.19:5000/v3
cinder_store_project_name = service
cinder_store_user_name = glance
cinder_store_password = ****secret****
-
To use a custom user name, or any custom value for the cinder_store_ parameters, add the ExtraConfig parameter to parameter_defaults and include your custom values:
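The following sketch uses the glance::config::api_config hieradata hook to override the glance_store options; verify the exact hook and option names for your release, and replace the placeholder values:
parameter_defaults:
  ExtraConfig:
    glance::config::api_config:
      glance_store/cinder_store_user_name:
        value: <user_name>
      glance_store/cinder_store_password:
        value: <password>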
17.6. Configuring the maximum number of storage devices to attach to one instance
By default, you can attach an unlimited number of storage devices to a single instance. To limit the maximum number of devices, add the max_disk_devices_to_attach parameter to your Compute environment file. Use the following example to change the value of max_disk_devices_to_attach to "30":
parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      compute/max_disk_devices_to_attach:
        value: '30'
Guidelines and considerations
- The number of storage disks supported by an instance depends on the bus that the disk uses. For example, the IDE disk bus is limited to 4 attached devices.
-
Changing the value of max_disk_devices_to_attach on a Compute node with active instances can cause rebuilds to fail if the maximum number is lower than the number of devices already attached to instances. For example, if instance A has 26 devices attached and you change max_disk_devices_to_attach to 20, a request to rebuild instance A will fail.
- During cold migration, the configured maximum number of storage devices is enforced only on the source for the instance that you want to migrate. The destination is not checked before the move. This means that if Compute node A has 26 attached disk devices, and Compute node B has a configured maximum of 20 attached disk devices, a cold migration of an instance with 26 attached devices from Compute node A to Compute node B succeeds. However, a subsequent request to rebuild the instance in Compute node B fails because 26 devices are already attached, which exceeds the configured maximum of 20.
- The configured maximum is not enforced on shelved offloaded instances, as they have no Compute node.
- Attaching a large number of disk devices to instances can degrade performance on the instance. Tune the maximum number based on the boundaries of what your environment can support.
- Instances with machine type Q35 can attach a maximum of 500 disk devices.
17.7. Improving scalability with Image service caching
Use the glance-api caching mechanism to store copies of images on Image service (glance) API servers and retrieve them automatically to improve scalability. With Image service caching, glance-api can run on multiple hosts. This means that it does not need to retrieve the same image from back end storage multiple times. Image service caching does not affect any Image service operations.
Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates:
Procedure
-
In an environment file, set the value of the GlanceCacheEnabled parameter to true, which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template:
parameter_defaults:
  GlanceCacheEnabled: true
-
Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud.
-
Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes:
parameter_defaults:
  ControllerExtraConfig:
    glance::cache::pruner::minute: '*/5'
Adjust the frequency according to your needs to avoid file system full scenarios. Include the following elements when you choose an alternative frequency:
- The size of the files that you want to cache in your environment.
- The amount of available file system space.
- The frequency at which the environment caches images.
17.8. Configuring third party storage
The following environment files are present in the core heat template collection /usr/share/openstack-tripleo-heat-templates.
- Dell EMC Storage Center
Deploys a single Dell EMC Storage Center back end for the Block Storage (cinder) service.
The environment file is located at
/usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml.
- Dell EMC PS Series
Deploys a single Dell EMC PS Series back end for the Block Storage (cinder) service.
The environment file is located at
/usr/share/openstack-tripleo-heat-templates/environments/cinder-dellps-config.yaml.
- NetApp Block Storage
Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service.
The environment file is located at
/usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml.