Chapter 18. Storage Configuration

This chapter outlines several methods of configuring storage options for your Overcloud.

Important

The Overcloud uses local and LVM storage for the default storage options. However, these options are not supported for enterprise-level Overclouds. It is recommended to use one of the storage options in this chapter.

18.1. Configuring NFS Storage

This section describes configuring the Overcloud to use an NFS share. The installation and configuration process is based on the modification of an existing environment file in the core Heat template collection.

Note

Do not use NFS v3 for the Block Storage and Compute services.

The core Heat template collection contains a set of environment files in /usr/share/openstack-tripleo-heat-templates/environments/. These environment templates help with custom configuration of some of the supported features in a director-created Overcloud. This includes an environment file to help configure storage. This file is located at /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml. Copy this file to the stack user’s template directory.

$ cp /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml ~/templates/.

The environment file contains some parameters to help configure different storage options for OpenStack’s block and image storage components, cinder and glance. In this example, you will configure the Overcloud to use an NFS share. Modify the following parameters:

CinderEnableIscsiBackend
Enables the iSCSI backend. Set to false.
CinderEnableRbdBackend
Enables the Ceph Storage backend. Set to false.
CinderEnableNfsBackend
Enables the NFS backend. Set to true.
NovaEnableRbdBackend
Enables Ceph Storage for Nova ephemeral storage. Set to false.
GlanceBackend
Defines the back end to use for Glance. Set to file to use file-based storage for images. The Overcloud saves these files in a mounted NFS share for Glance.
CinderNfsMountOptions
The NFS mount options for the volume storage.
CinderNfsServers
The NFS share to mount for volume storage. For example, 192.168.122.1:/export/cinder.
GlanceNfsEnabled
Enables Pacemaker to manage the share for image storage. If disabled, the Overcloud stores images in the Controller node’s file system. Set to true.
GlanceNfsShare
The NFS share to mount for image storage. For example, 192.168.122.1:/export/glance.
GlanceNfsOptions
The NFS mount options for the image storage.

The environment file’s options should look similar to the following:

parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: true
  NovaEnableRbdBackend: false
  GlanceBackend: 'file'

  CinderNfsMountOptions: 'rw,sync'
  CinderNfsServers: '192.0.2.230:/cinder'

  GlanceNfsEnabled: true
  GlanceNfsShare: '192.0.2.230:/glance'
  GlanceNfsOptions: 'rw,sync,context=system_u:object_r:glance_var_lib_t:s0'
Important

Include the context=system_u:object_r:glance_var_lib_t:s0 option in the GlanceNfsOptions parameter to allow glance to access the /var/lib directory. Without this SELinux context, glance fails to write to the mount point.

These parameters are integrated as part of the Heat template collection. Setting them in this way creates two NFS mount points for cinder and glance to use.

Save this file for inclusion in the Overcloud creation.
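For example, when you later create the Overcloud, you include this environment file with the -e option of the deployment command, similar to the following sketch (your deployment usually includes other environment files and options as well):

$ openstack overcloud deploy --templates \
  -e ~/templates/storage-environment.yaml \
  -e [your other environment files]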

18.2. Configuring Ceph Storage

The director provides two main methods for integrating Red Hat Ceph Storage into an Overcloud.

Creating an Overcloud with its own Ceph Storage Cluster
The director can create a Ceph Storage Cluster during the creation of the Overcloud. The director creates a set of Ceph Storage nodes that use Ceph OSDs to store the data. In addition, the director installs the Ceph Monitor service on the Overcloud’s Controller nodes. This means that if an organization creates an Overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service. For more information, see the Deploying an Overcloud with Containerized Red Hat Ceph guide.
Integrating an Existing Ceph Storage Cluster into an Overcloud
If you already have an existing Ceph Storage Cluster, you can integrate this during an Overcloud deployment. This means you manage and scale the cluster outside of the Overcloud configuration. For more information, see the Integrating an Overcloud with an Existing Red Hat Ceph Cluster guide.

18.3. Using an External Object Storage Cluster

You can reuse an external Object Storage (swift) cluster by disabling the default Object Storage service deployment on the controller nodes. Doing so disables both the proxy and storage services for Object Storage and configures haproxy and keystone to use the given external Swift endpoint.

Note

User accounts on the external Object Storage (swift) cluster have to be managed by hand.

You need the endpoint IP address of the external Object Storage cluster as well as the authtoken password from the external Object Storage proxy-server.conf file. You can find this information by using the openstack endpoint list command.
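For example, you can list the endpoint from the external cluster’s Identity service, and retrieve the password from the proxy server configuration on the external cluster (a sketch; the proxy-server.conf path and the exact option name depend on your Swift deployment):

$ openstack endpoint list --service object-store
$ grep password /etc/swift/proxy-server.conf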

To deploy the Overcloud with an external Swift cluster:

  1. Create a new file named swift-external-params.yaml with the following content:

    • Replace EXTERNAL.IP:PORT with the IP address and port of the external proxy and
    • Replace AUTHTOKEN with the authtoken password for the external proxy on the SwiftPassword line.

      parameter_defaults:
        ExternalPublicUrl: 'https://EXTERNAL.IP:PORT/v1/AUTH_%(tenant_id)s'
        ExternalInternalUrl: 'http://192.168.24.9:8080/v1/AUTH_%(tenant_id)s'
        ExternalAdminUrl: 'http://192.168.24.9:8080'
        ExternalSwiftUserTenant: 'service'
        SwiftPassword: AUTHTOKEN
  2. Deploy the overcloud using these additional environment files:

    openstack overcloud deploy --templates \
    -e [your environment files] \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml
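After the deployment completes, you can verify that the overcloud’s object-store endpoints point to the external cluster, for example (overcloudrc is the default credentials file that the director creates):

$ source ~/overcloudrc
$ openstack endpoint list --service object-store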

18.4. Configuring the Image Import Method and Shared Staging Area

The default settings for the OpenStack Image service (glance) are determined by the Heat templates used when OpenStack is installed. The Image service Heat template is tht/puppet/services/glance-api.yaml.

Interoperable image import allows two methods for importing images:

  • web-download and
  • glance-direct.

The web-download method lets you import an image from a URL; the glance-direct method lets you import an image from a local volume.
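For reference, an import with the glance command-line client typically looks similar to the following sketch (command options can vary between client versions; the image name, file, and URL are placeholders):

$ glance image-create --name my-image --disk-format qcow2 --container-format bare
$ glance image-import <IMAGE_ID> --import-method web-download \
    --uri http://example.com/path/to/my-image.qcow2

For the glance-direct method, stage the image data from a local file before the import call:

$ glance image-stage <IMAGE_ID> --file ./my-image.qcow2
$ glance image-import <IMAGE_ID> --import-method glance-direct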

18.4.1. Creating and Deploying the glance-settings.yaml File

You use an environment file to configure the import parameters. These parameters override the default values established in the Heat template. The example environment content provides parameters for the interoperable image import.

parameter_defaults:
  # Configure NFS backend
  GlanceBackend: file
  GlanceNfsEnabled: true
  GlanceNfsShare: 192.168.122.1:/export/glance

  # Enable glance-direct import method
  GlanceEnabledImportMethods: glance-direct,web-download

  # Configure NFS staging area (required for glance-direct import method)
  GlanceStagingNfsShare: 192.168.122.1:/export/glance-staging

The GlanceBackend, GlanceNfsEnabled, and GlanceNfsShare parameters are defined in the Storage Configuration section in the Advanced Overcloud Customization Guide.

Two new parameters for interoperable image import define the import method and a shared NFS staging area.

GlanceEnabledImportMethods
Defines the available import methods, web-download (default) and glance-direct. This line is only necessary if you wish to enable additional methods besides web-download.
GlanceStagingNfsShare
Configures the NFS staging area used by the glance-direct import method. This space can be shared among nodes in a high-availability cluster setup. Requires GlanceNfsEnabled to be set to true.

To configure the settings:

  1. Create a new file called, for example, glance-settings.yaml. The contents of this file should be similar to the example above.
  2. Add the file to your OpenStack environment using the openstack overcloud deploy command:

    $ openstack overcloud deploy --templates -e glance-settings.yaml

    For additional information about using environment files, see the Including Environment Files in Overcloud Creation section in the Advanced Overcloud Customization Guide.

18.4.2. Controlling Image Web-Import Sources

You can limit the sources of Web-import image downloads by adding URI blacklists and whitelists to the optional glance-image-import.conf file.

You can whitelist or blacklist image source URIs at three levels:

  • scheme (allowed_schemes, disallowed_schemes)
  • host (allowed_hosts, disallowed_hosts)
  • port (allowed_ports, disallowed_ports)

If you specify both a whitelist and a blacklist at the same level, the whitelist is honored and the blacklist is ignored.

The Image service applies the following decision logic to validate image source URIs:

  1. The scheme is checked.

    A. Missing scheme: reject.
    B. If there’s a whitelist, and the scheme is not in it: reject. Otherwise, skip C and continue on to 2.
    C. If there’s a blacklist, and the scheme is in it: reject.
  2. The host name is checked.

    A. Missing host name: reject.
    B. If there’s a whitelist, and the host name is not in it: reject. Otherwise, skip C and continue on to 3.
    C. If there’s a blacklist, and the host name is in it: reject.
  3. If there’s a port in the URI, the port is checked.

    A. If there’s a whitelist, and the port is not in it: reject. Otherwise, skip B and continue on to 4.
    B. If there’s a blacklist, and the port is in it: reject.
  4. The URI is accepted as valid.

Note that if you allow a scheme, either by whitelisting it or by not blacklisting it, any URI that uses the default port for that scheme (by not including a port in the URI) is allowed. If the URI does include a port, it is validated according to the above rules.

18.4.2.1. Example

Consider a glance-image-import.conf file with the following settings:

allowed_schemes = [http,https,ftp]
disallowed_schemes = []
allowed_hosts = []
disallowed_hosts = []
allowed_ports = [80,443]
disallowed_ports = []

The default port for FTP is 21. Because ftp is a whitelisted scheme, the URL ftp://example.org/some/resource is allowed. However, because 21 is not in the port whitelist, the URL ftp://example.org:21/some/resource to the same resource is rejected.
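In the glance-image-import.conf file itself, these filtering options belong in the [import_filtering_opts] section. A minimal sketch, assuming the option names shown above:

[import_filtering_opts]
allowed_schemes = [http,https,ftp]
allowed_ports = [80,443]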

18.4.2.2. Default Image Import Blacklist and Whitelist Settings

The glance-image-import.conf file is an optional file. Here are the default settings for these options:

  • allowed_schemes - [http, https]
  • disallowed_schemes - empty list
  • allowed_hosts - empty list
  • disallowed_hosts - empty list
  • allowed_ports - [80, 443]
  • disallowed_ports - empty list

Thus if you use the defaults, end users will only be able to access URIs using the http or https scheme. The only ports users will be able to specify are 80 and 443. (Users do not have to specify a port, but if they do, it must be either 80 or 443.)

You can find the glance-image-import.conf file in the etc/ subdirectory of the Image service source code tree. Make sure that you are looking in the correct branch for the OpenStack release you are working with.

18.4.3. Injecting Metadata on Image Import to Control Where VMs Launch

End users can add images in the Image service and use these images to launch VMs. These user-provided (non-admin) images should be launched on a specific set of compute nodes. The assignment of an instance to a compute node is controlled by image metadata properties.

The Image Property Injection plugin injects metadata properties to images on import. Specify the properties by editing the [image_import_opts] and [inject_metadata_properties] sections of the glance-image-import.conf file.

To enable the Image Property Injection plugin, add this line to the [image_import_opts] section:

[image_import_opts]
image_import_plugins = [inject_image_metadata]

To limit the metadata injection to images provided by a certain set of users, set the ignore_user_roles parameter. For instance, the following configuration injects one value for PROPERTY1 and two values for PROPERTY2 into images downloaded by any non-admin user.

[DEFAULT]
[image_conversion]
[image_import_opts]
image_import_plugins = [inject_image_metadata]
[import_filtering_opts]
[inject_metadata_properties]
ignore_user_roles = admin
inject = PROPERTY1:value,PROPERTY2:value;another value

The parameter ignore_user_roles is a comma-separated list of Keystone roles that the plugin will ignore. In other words, if the user making the image import call has any of these roles, the plugin will not inject any properties into the image.

The parameter inject is a comma-separated list of properties and values that will be injected into the image record for the imported image. Each property and value should be quoted and separated by a colon (‘:’) as shown in the example above.
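After an import completes, you can confirm that the plugin injected the properties by viewing the image details; the injected properties appear in the image’s properties field. For example:

$ openstack image show <IMAGE_ID>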

You can find the glance-image-import.conf file in the etc/ subdirectory of the Image service source code tree. Make sure that you are looking in the correct branch for the OpenStack release you are working with.

18.5. Configuring cinder back end for the Image service

The GlanceBackend parameter sets the back end that the Image service uses to store images. To configure cinder as the Image service back end, add the following to the environment file:

parameter_defaults:
  GlanceBackend: cinder

If the cinder back end is enabled, the following parameters and values are set by default:

cinder_store_auth_address = http://172.17.1.19:5000/v3
cinder_store_project_name = service
cinder_store_user_name = glance
cinder_store_password = ****secret****

To use a custom user name, or any custom value for the cinder_store_ parameters, add the ExtraConfig settings to parameter_defaults and pass the custom values. For example:

ExtraConfig:
    glance::config::api_config:
      glance_store/cinder_store_auth_address:
        value: "%{hiera('glance::api::authtoken::auth_url')}/v3"
      glance_store/cinder_store_user_name:
        value: <user-name>
      glance_store/cinder_store_password:
        value: "%{hiera('glance::api::authtoken::password')}"
      glance_store/cinder_store_project_name:
        value: "%{hiera('glance::api::authtoken::project_name')}"

18.6. Configuring Third Party Storage

The director includes a number of environment files to help configure third-party storage providers. These include:

Dell EMC Storage Center

Deploys a single Dell EMC Storage Center back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml.

See the Dell Storage Center Back End Guide for full configuration information.

Dell EMC PS Series

Deploys a single Dell EMC PS Series back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellps-config.yaml.

See the Dell EMC PS Series Back End Guide for full configuration information.

NetApp Block Storage

Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service.

The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml.

See the NetApp Block Storage Back End Guide for full configuration information.
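As with the other storage environment files, you include the relevant file with the -e option of the deployment command. For example, a sketch for the NetApp back end (in practice, you usually copy the environment file and edit its parameters first, as described in the respective back end guide):

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml \
  -e [your other environment files]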
