Chapter 6. Configuring Compute service storage

You create an instance from a base image, which the Compute service copies from the Image (glance) service, and caches locally on the Compute nodes. The instance disk, which is the back end for the instance, is also based on the base image.

You can configure the Compute service to store ephemeral instance disk data locally on the host Compute node or remotely on either an NFS share or Ceph cluster. Alternatively, you can also configure the Compute service to store instance disk data in persistent storage provided by the Block Storage (Cinder) service.

You can configure image caching for your environment, and configure the performance and security of the instance disks. You can also configure the Compute service to download images directly from the RBD image repository without using the Image service API, when the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end.

6.1. Configuration options for image caching

Use the parameters detailed in the following table to configure how the Compute service implements and manages an image cache on Compute nodes.

Table 6.1. Compute (nova) service image cache parameters
Configuration method | Parameter | Description

Puppet

nova::compute::image_cache::manager_interval

Specifies the number of seconds to wait between runs of the image cache manager, which manages base image caching on Compute nodes. The Compute service uses this period to perform automatic removal of unused cached images when nova::compute::image_cache::remove_unused_base_images is set to True.

Set to 0 to run at the default metrics interval of 60 seconds (not recommended). Set to -1 to disable the image cache manager.

Default: 2400

Puppet

nova::compute::image_cache::precache_concurrency

Specifies the maximum number of Compute nodes that can pre-cache images in parallel.

Note
  • Setting this parameter to a high number can cause slower pre-cache performance and might result in a DDoS on the Image service.
  • Setting this parameter to a low number reduces the load on the Image service, but can cause longer runtime to completion as the pre-cache is performed as a more sequential operation.

Default: 1

Puppet

nova::compute::image_cache::remove_unused_base_images

Set to True to automatically remove unused base images from the cache at intervals configured by using manager_interval. Images are defined as unused if they have not been accessed during the time specified by using NovaImageCacheTTL.

Default: True

Puppet

nova::compute::image_cache::remove_unused_resized_minimum_age_seconds

Specifies the minimum age that an unused resized base image must be to be removed from the cache, in seconds. Unused resized base images younger than this will not be removed. Set to undef to disable.

Default: 3600

Puppet

nova::compute::image_cache::subdirectory_name

Specifies the name of the folder where cached images are stored, relative to $instances_path.

Default: _base

Heat

NovaImageCacheTTL

Specifies the length of time in seconds that the Compute service should continue caching an image when it is no longer used by any instances on the Compute node. The Compute service deletes images cached on the Compute node that are older than this configured lifetime from the cache directory until they are needed again.

Default: 86400 (24 hours)
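
For reference, the following example shows how these parameters might be combined in one heat environment file. Passing the Puppet parameters as ComputeExtraConfig hieradata is an assumption based on the standard director override pattern, and the values are illustrative rather than recommendations:

    parameter_defaults:
      # Heat parameter: evict cached images unused for 24 hours.
      NovaImageCacheTTL: 86400
      ComputeExtraConfig:
        # Run the image cache manager every 40 minutes.
        nova::compute::image_cache::manager_interval: 2400
        # Automatically remove cached base images that are no longer used.
        nova::compute::image_cache::remove_unused_base_images: true
        # Keep cached images in <instances_path>/_base.
        nova::compute::image_cache::subdirectory_name: '_base'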

6.2. Configuration options for instance ephemeral storage properties

Use the parameters detailed in the following table to configure the performance and security of ephemeral storage used by instances.

Note

Red Hat OpenStack Platform (RHOSP) does not support the LVM image type for instance disks. Therefore, the [libvirt]/volume_clear configuration option, which wipes ephemeral disks when instances are deleted, is not supported because it only applies when the instance disk image type is LVM.

Table 6.2. Compute (nova) service instance ephemeral storage parameters
Configuration method | Parameter | Description

Puppet

nova::compute::default_ephemeral_format

Specifies the default format that is used for a new ephemeral volume. Set to one of the following valid values:

  • ext2
  • ext3
  • ext4

The ext4 format provides much faster initialization times than ext3 for new, large disks.

Default: ext4

Puppet

nova::compute::force_raw_images

Set to True to convert non-raw cached base images to raw format. The raw image format uses more space than other image formats, such as qcow2, but converting the base image removes its compression, which avoids CPU bottlenecks when instances read the disk. Set to False if you have a system with slow I/O or limited available space, to reduce input bandwidth.

Default: True

Puppet

nova::compute::use_cow_images

Set to True to use CoW (Copy on Write) images in qcow2 format for instance disks. With CoW, depending on the backing store and host caching, each instance operates on its own copy, which might achieve better concurrency.

Set to False to use the raw format. Raw format uses more space for common parts of the disk image.

Default: True

Puppet

nova::compute::libvirt::preallocate_images

Specifies the preallocation mode for instance disks. Set to one of the following valid values:

  • none - No storage is provisioned at instance start.
  • space - The Compute service fully allocates storage at instance start by running fallocate(1) on the instance disk images. This reduces CPU overhead and file fragmentation, improves I/O performance, and helps guarantee the required disk space.

Default: none

Hieradata override

DEFAULT/resize_fs_using_block_device

Set to True to enable direct resizing of the base image by accessing the image over a block device. This is only necessary for images with older versions of cloud-init that cannot resize themselves.

This parameter is not enabled by default because it enables the direct mounting of images which might otherwise be disabled for security reasons.

Default: False

Hieradata override

[libvirt]/images_type

Specifies the image type to use for instance disks. Set to one of the following valid values:

  • raw
  • qcow2
  • flat
  • rbd
  • default
Note

RHOSP does not support the LVM image type for instance disks.

When set to a valid value other than default, the image type supersedes the configuration of use_cow_images. If default is specified, the configuration of use_cow_images determines the image type:

  • If use_cow_images is set to True (default), the image type is qcow2.
  • If use_cow_images is set to False, the image type is flat.

The default value is determined by the configuration of NovaEnableRbdBackend:

  • NovaEnableRbdBackend: False

    Default: default

  • NovaEnableRbdBackend: True

    Default: rbd
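
For reference, the following example shows how the ephemeral storage parameters might be set in one heat environment file. The ComputeExtraConfig hieradata pattern, including nova::config::nova_config for the hieradata-override options, mirrors the configuration shown in Section 6.5 and is a sketch rather than a definitive template; the values are illustrative:

    parameter_defaults:
      ComputeExtraConfig:
        # Puppet parameters from Table 6.2.
        nova::compute::force_raw_images: true
        nova::compute::use_cow_images: true
        nova::compute::libvirt::preallocate_images: 'space'
        # Hieradata override for the [libvirt]/images_type option.
        nova::config::nova_config:
          libvirt/images_type:
            value: 'qcow2'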

6.3. Configuring the maximum number of storage devices to attach to one instance

By default, you can attach an unlimited number of storage devices to a single instance. Attaching a large number of disk devices to an instance can degrade performance on the instance. You can tune the maximum number of devices that can be attached to an instance based on the boundaries of what your environment can support. The number of storage disks supported by an instance depends on the bus that the disk uses. For example, the IDE disk bus is limited to 4 attached devices. You can attach a maximum of 500 disk devices to instances with machine type Q35.

Note

From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts. The number of devices that can attach to PCIe ports is fewer than on instances running previous machine types. If you want to attach more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware.
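
If you plan to attach many devices, you can set the image properties mentioned in this note with the openstack client. The image name in the following example is a placeholder:

    $ openstack image set --property hw_disk_bus=scsi \
      --property hw_scsi_model=virtio-scsi <image_name>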

Warning
  • Changing the value of the NovaMaxDiskDevicesToAttach parameter on a Compute node with active instances can cause rebuilds to fail if the maximum number is lower than the number of devices already attached to instances. For example, if instance A has 26 devices attached and you change NovaMaxDiskDevicesToAttach to 20, a request to rebuild instance A will fail.
  • During cold migration, the configured maximum number of storage devices is enforced only on the source for the instance that you want to migrate. The destination is not checked before the move. This means that if Compute node A has 26 attached disk devices, and Compute node B has a configured maximum of 20 attached disk devices, a cold migration of an instance with 26 attached devices from Compute node A to Compute node B succeeds. However, a subsequent request to rebuild the instance in Compute node B fails because 26 devices are already attached, which exceeds the configured maximum of 20.
Note

The configured maximum number of storage devices is not enforced on shelved offloaded instances, as they have no Compute node.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Create a new environment file, or open an existing environment file.
  4. Configure the limit on the maximum number of storage devices that can be attached to a single instance by adding the following configuration to your environment file:

    parameter_defaults:
      ...
      NovaMaxDiskDevicesToAttach: <max_device_limit>
      ...
    • Replace <max_device_limit> with the maximum number of storage devices that can be attached to an instance.
  5. Save the updates to your environment file.
  6. Add your environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -e /home/stack/templates/<environment_file>.yaml
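
After the deployment completes, you can confirm the limit that the Compute service applies by inspecting the generated nova.conf on a Compute node. The path and user in the following example assume a default director deployment with the containerized nova_libvirt service and might differ in your environment:

    [heat-admin@compute-0 ~]$ sudo grep max_disk_devices_to_attach \
      /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf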

6.4. Configuring shared instance storage

By default, when you launch an instance, the instance disk is stored as a file in the instance directory, /var/lib/nova/instances. You can configure an NFS storage back end for the Compute service to store these instance files on shared NFS storage.

Prerequisites

  • You must be using NFSv4 or later. Red Hat OpenStack Platform (RHOSP) does not support earlier versions of NFS. For more information, see the Red Hat Knowledgebase solution RHOS NFSv4-Only Support Notes.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc file:

    [stack@director ~]$ source ~/stackrc
  3. Create an environment file to configure shared instance storage, for example, nfs_instance_disk_backend.yaml.
  4. To configure an NFS backend for instance files, add the following configuration to nfs_instance_disk_backend.yaml:

    parameter_defaults:
      ...
      NovaNfsEnabled: True
      NovaNfsShare: <nfs_share>

    Replace <nfs_share> with the NFS share directory to mount for instance file storage, for example, '192.168.122.1:/export/nova' or '192.168.24.1:/var/nfs'. If using IPv6, use both double and single-quotes, e.g. "'[fdd0::1]:/export/nova'".

  5. Optional: When NFS back-end storage is enabled, the default SELinux mount context for NFS storage is 'context=system_u:object_r:nfs_t:s0'. Add the following parameter to amend the mount options for the NFS instance file storage mount point:

    parameter_defaults:
      ...
      NovaNfsOptions: 'context=system_u:object_r:nfs_t:s0,<additional_nfs_mount_options>'

    Replace <additional_nfs_mount_options> with a comma-separated list of the mount options you want to use for NFS instance file storage. For more information on the available mount options, see the mount man page:

    $ man 8 mount
  6. Save the updates to your environment file.
  7. Add your new environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
      -e [your environment files] \
      -e /home/stack/templates/nfs_instance_disk_backend.yaml
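
After the deployment completes, you can confirm that the instance directory is backed by the NFS share by checking the mounts on a Compute node. The host and user in the following example are illustrative, and the output is abbreviated:

    [heat-admin@compute-0 ~]$ mount | grep /var/lib/nova/instances
    192.168.122.1:/export/nova on /var/lib/nova/instances type nfs4 (rw,context=system_u:object_r:nfs_t:s0,...)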

6.5. Configuring image downloads directly from Red Hat Ceph RADOS Block Device (RBD)

When the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end, and the Compute service uses local file-based ephemeral storage, you can configure the Compute service to download images directly from the RBD image repository without using the Image service API. This reduces the time it takes to download an image to the Compute node image cache at instance boot time, which improves instance launch time.

Prerequisites

  • The Image service back end is a Red Hat Ceph RADOS Block Device (RBD).
  • The Compute service is using a local file-based ephemeral store for the image cache and instance disks.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Open your Compute environment file.
  3. To download images directly from the RBD back end, add the following configuration to your Compute environment file:

    parameter_defaults:
      ComputeParameters:
        NovaGlanceEnableRbdDownload: True
        NovaEnableRbdBackend: False
        ...
  4. Optional: If the Image service is configured to use multiple Red Hat Ceph Storage back ends, add the following configuration to your Compute environment file to identify the RBD back end to download images from:

    parameter_defaults:
      ComputeParameters:
        NovaGlanceEnableRbdDownload: True
        NovaEnableRbdBackend: False
        NovaGlanceRbdDownloadMultistoreID: <rbd_backend_id>
        ...

    Replace <rbd_backend_id> with the ID used to specify the back end in the GlanceMultistoreConfig configuration, for example rbd2_store.

  5. Add the following configuration to your Compute environment file to specify the Image service RBD back end, and the maximum length of time that the Compute service waits to connect to the Image service RBD back end, in seconds:

    parameter_defaults:
      ComputeExtraConfig:
        nova::config::nova_config:
          glance/rbd_user:
            value: 'glance'
          glance/rbd_pool:
            value: 'images'
          glance/rbd_ceph_conf:
            value: '/etc/ceph/ceph.conf'
          glance/rbd_connect_timeout:
            value: '5'
  6. Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -e /home/stack/templates/<compute_environment_file>.yaml
  7. To verify that the Compute service downloads images directly from RBD, create an instance then check the instance debug log for the entry "Attempting to export RBD image:".
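
On director deployments, the Compute service log is typically available on the Compute node at the containerized log path shown in the following example. The path is an assumption that can vary with your log configuration, and the entry appears only when debug logging is enabled:

    [heat-admin@compute-0 ~]$ sudo grep "Attempting to export RBD image" \
      /var/log/containers/nova/nova-compute.log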
