Chapter 6. Configuring Compute service storage
You create an instance from a base image, which the Compute service copies from the Image service (glance) and caches locally on the Compute nodes. The instance disk, which is the back end for the instance, is also based on the base image.
You can configure the Compute service to store ephemeral instance disk data locally on the host Compute node or remotely on either an NFS share or a Ceph cluster. Alternatively, you can configure the Compute service to store instance disk data in persistent storage provided by the Block Storage service (cinder).
You can configure image caching for your environment, and you can configure the performance and security of the instance disks. When the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end, you can also configure the Compute service to download images directly from the RBD image repository without using the Image service API.
6.1. Configuration options for image caching
Use the parameters detailed in the following table to configure how the Compute service implements and manages an image cache on Compute nodes.
Configuration method | Parameter | Description |
---|---|---|
Puppet | `nova::compute::image_cache::manager_interval` | Specifies the number of seconds to wait between runs of the image cache manager, which manages base image caching on Compute nodes. The Compute service uses this period to perform automatic removal of unused cached images when `remove_unused_base_images` is set to `True`. Set to `0` to run at the default metrics interval of 60 seconds (not recommended). Default: `2400` (40 minutes) |
Puppet | `nova::compute::image_cache::precache_concurrency` | Specifies the maximum number of Compute nodes that can pre-cache images in parallel. Note: Setting this parameter to a high number can cause slower pre-cache performance and can place excessive load on the Image service. Setting it to a low number reduces the load on the Image service, but the pre-cache runs as a more sequential operation and takes longer to complete. Default: `1` |
Puppet | `nova::compute::image_cache::remove_unused_base_images` | Set to `True` to automatically remove unused base images from the cache at the interval configured by `manager_interval`. Images are unused if they have not been accessed during the time configured by `NovaImageCacheTTL`. Default: `True` |
Puppet | `nova::compute::image_cache::remove_unused_resized_minimum_age_seconds` | Specifies the minimum age that an unused resized base image must be to be removed from the cache, in seconds. Unused resized base images younger than this are not removed. Set to `undef` to disable. Default: `3600` (1 hour) |
Puppet | `nova::compute::image_cache::subdirectory_name` | Specifies the name of the folder where cached images are stored, relative to `$instances_path`. Default: `_base` |
Heat | `NovaImageCacheTTL` | Specifies the length of time in seconds that the Compute service continues caching an image when it is no longer used by any instances on the Compute node. The Compute service deletes images cached on the Compute node that are older than this configured lifetime from the cache directory until they are needed again. Default: `86400` (24 hours) |
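You can set these options from a heat environment file. The following is a minimal sketch: `NovaImageCacheTTL` is the heat parameter from the table, and the Puppet parameters are assumed here to be applied as hieradata through `ComputeExtraConfig`; all values shown are illustrative only:

    parameter_defaults:
      # Heat parameter: stop caching an unused image after 2 hours (illustrative value)
      NovaImageCacheTTL: 7200
      ComputeExtraConfig:
        # Assumed hieradata overrides for the Puppet parameters (illustrative values)
        nova::compute::image_cache::manager_interval: 1200
        nova::compute::image_cache::precache_concurrency: 2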
6.2. Configuration options for instance ephemeral storage properties
Use the parameters detailed in the following table to configure the performance and security of ephemeral storage used by instances.
Red Hat OpenStack Platform (RHOSP) does not support the LVM image type for instance disks. Therefore, the `[libvirt]/volume_clear` configuration option, which wipes ephemeral disks when instances are deleted, is not supported, because it applies only when the instance disk image type is LVM.
Configuration method | Parameter | Description |
---|---|---|
Puppet | `nova::compute::default_ephemeral_format` | Specifies the default format that is used for a new ephemeral volume. Set to one of the following valid values: `ext2`, `ext3`, `ext4`. The `ext4` format provides much faster initialization times than `ext3` for new, large disks. Default: `ext4` |
Puppet | `nova::compute::force_raw_images` | Set to `True` to convert non-raw cached base images to raw format. The raw image format uses more space than other image formats, such as qcow2, but raw base images avoid the CPU overhead of decompression when instances use them. Set to `False` to keep cached base images in their original format. Default: `True` |
Puppet | `nova::compute::use_cow_images` | Set to `True` to use CoW (copy-on-write) images for instance disks, in qcow2 format. With CoW, depending on the backing store and host caching, each instance operates on its own copy, which can achieve better concurrency. Set to `False` to use the raw format. The raw format uses more space for common parts of the disk image. Default: `True` |
Puppet | `nova::compute::libvirt::preallocate_images` | Specifies the preallocation mode for instance disks. Set to one of the following valid values: `none` - No storage is provisioned at instance start. `space` - Storage is fully allocated at instance start by running fallocate on the instance disk images, which can improve both space guarantees and I/O performance. Default: `none` |
Hieradata override | `DEFAULT/resize_fs_using_block_device` | Set to `True` to enable direct resizing of the base image by accessing the image over a block device. This is only necessary for images with older versions of cloud-init that cannot resize themselves. This parameter is not enabled by default because it enables the direct mounting of images, which might otherwise be disabled for security reasons. Default: `False` |
Hieradata override | `[libvirt]/images_type` | Specifies the image type to use for instance disks. Set to one of the following valid values: `raw`, `qcow2`, `flat`, `rbd`, `default`. Note: RHOSP does not support the LVM image type for instance disks. When set to a valid value other than `default`, the image type supersedes the configuration of `use_cow_images`. If `default` is specified, the configuration of `use_cow_images` determines the image type: `True` results in `qcow2`, and `False` results in `flat`. The default value is determined by the configuration of `NovaEnableRbdBackend`: if `NovaEnableRbdBackend` is `False`, the default is `default`; if `True`, the default is `rbd`. |
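The hieradata override parameters in this table can be set from a heat environment file by using the `nova::config::nova_config` hieradata pattern shown later in this chapter. A minimal sketch, assuming that override mechanism and an illustrative value:

    parameter_defaults:
      ComputeExtraConfig:
        nova::config::nova_config:
          # Enable block-device resizing for images with older cloud-init versions
          DEFAULT/resize_fs_using_block_device:
            value: 'True'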
6.3. Configuring the maximum number of storage devices to attach to one instance
By default, you can attach an unlimited number of storage devices to a single instance. Attaching a large number of disk devices to an instance can degrade performance on the instance. You can tune the maximum number of devices that can be attached to an instance based on the boundaries of what your environment can support. The number of storage disks supported by an instance depends on the bus that the disk uses. For example, the IDE disk bus is limited to 4 attached devices. You can attach a maximum of 500 disk devices to instances with machine type Q35.
From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter `NovaLibvirtNumPciePorts`. The number of devices that can attach to PCIe ports is fewer than on instances running on previous versions. If you want to use more devices, you must use the `hw_disk_bus=scsi` or `hw_scsi_model=virtio-scsi` image property. For more information, see Metadata properties for virtual hardware.
Be aware of the following limitations when you configure the maximum number of storage devices:

- Changing the value of the `NovaMaxDiskDevicesToAttach` parameter on a Compute node with active instances can cause rebuilds to fail if the maximum number is lower than the number of devices already attached to instances. For example, if instance A has 26 devices attached and you change `NovaMaxDiskDevicesToAttach` to 20, a request to rebuild instance A fails.
- During cold migration, the configured maximum number of storage devices is enforced only on the source for the instance that you want to migrate. The destination is not checked before the move. This means that if Compute node A has 26 attached disk devices, and Compute node B has a configured maximum of 20 attached disk devices, a cold migration of an instance with 26 attached devices from Compute node A to Compute node B succeeds. However, a subsequent request to rebuild the instance on Compute node B fails because 26 devices are already attached, which exceeds the configured maximum of 20.
- The configured maximum number of storage devices is not enforced on shelved offloaded instances, as they have no Compute node.
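Before you lower the limit on a Compute node with active instances, you can check how many Block Storage volumes are already attached to an instance. A sketch using the OpenStack CLI; the server name is hypothetical:

    $ openstack server show my-instance -c volumes_attached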
Procedure
- Log in to the undercloud host as the `stack` user.
- Source the `stackrc` undercloud credentials file:

    $ source ~/stackrc
- Create a new environment file, or open an existing environment file.
- Configure the limit on the maximum number of storage devices that can be attached to a single instance by adding the following configuration to your environment file:

    parameter_defaults:
      ...
      NovaMaxDiskDevicesToAttach: <max_device_limit>
      ...
  - Replace `<max_device_limit>` with the maximum number of storage devices that can be attached to an instance.
- Save the updates to your environment file.
- Add your environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -e /home/stack/templates/<environment_file>.yaml
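For example, a minimal environment file that limits instances to a maximum of 20 attached storage devices might look like the following; the file name and value are illustrative:

    # /home/stack/templates/max-disk-devices.yaml
    parameter_defaults:
      NovaMaxDiskDevicesToAttach: 20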
6.5. Configuring image downloads directly from Red Hat Ceph RADOS Block Device (RBD)
When the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end, and the Compute service uses local file-based ephemeral storage, you can configure the Compute service to download images directly from the RBD image repository without using the Image service API. This reduces the time it takes to download an image to the Compute node image cache at instance boot time, which improves instance launch time.
Prerequisites
- The Image service back end is a Red Hat Ceph RADOS Block Device (RBD).
- The Compute service is using a local file-based ephemeral store for the image cache and instance disks.
Procedure
- Log in to the undercloud as the `stack` user.
- Open your Compute environment file.
- To download images directly from the RBD back end, add the following configuration to your Compute environment file:
    parameter_defaults:
      ComputeParameters:
        NovaGlanceEnableRbdDownload: True
        NovaEnableRbdBackend: False
      ...
- Optional: If the Image service is configured to use multiple Red Hat Ceph Storage back ends, add the following configuration to your Compute environment file to identify the RBD back end to download images from:

    parameter_defaults:
      ComputeParameters:
        NovaGlanceEnableRbdDownload: True
        NovaEnableRbdBackend: False
        NovaGlanceRbdDownloadMultistoreID: <rbd_backend_id>
      ...
  - Replace `<rbd_backend_id>` with the ID used to specify the back end in the `GlanceMultistoreConfig` configuration, for example `rbd2_store`.
- Add the following configuration to your Compute environment file to specify the Image service RBD back end, and the maximum length of time that the Compute service waits to connect to the Image service RBD back end, in seconds:
    parameter_defaults:
      ComputeExtraConfig:
        nova::config::nova_config:
          glance/rbd_user:
            value: 'glance'
          glance/rbd_pool:
            value: 'images'
          glance/rbd_ceph_conf:
            value: '/etc/ceph/ceph.conf'
          glance/rbd_connect_timeout:
            value: '5'
- Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -e /home/stack/templates/<compute_environment_file>.yaml
- To verify that the Compute service downloads images directly from RBD, create an instance, then check the instance debug log for the entry "Attempting to export RBD image:".
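For example, after you launch a test instance, you can search the Compute service log on the Compute node for the export entry. The log path shown assumes the standard RHOSP containerized service layout; verify it for your release:

    $ sudo grep "Attempting to export RBD image" /var/log/containers/nova/nova-compute.log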