Chapter 6. Configuring the Image service (glance)


The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or store a snapshot of a server image. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services.

You can configure the following back ends (stores) for the Image service:

  • RADOS Block Device (RBD) is the default back end when you use Red Hat Ceph Storage.
  • RBD multistore. You can use multiple stores only with distributed edge architecture or distributed zones so that you can have an image pool at every edge site or zone.
  • Block Storage (cinder).
  • Block Storage (cinder) multistore. You can use multiple stores only with distributed zones so that you can have an image pool in every zone.
  • Object Storage (swift).
  • S3.
  • NFS.

For more information about Red Hat Ceph Storage, distributed edge architecture, and distributed zones, see the related Red Hat OpenStack Services on OpenShift and Red Hat Ceph Storage documentation.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged in to a workstation that has access to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane as a user with cluster-admin privileges.

6.1. Configuring a Block Storage back end

You can configure the Image service (glance) with the Block Storage service (cinder) as the storage back end.

Prerequisites

  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
  • Ensure that placement, network, and transport protocol requirements are met. For example, if your Block Storage service back end is Fibre Channel (FC), the nodes on which the Image service API (glanceAPI) is running must have a host bus adapter (HBA). For FC, iSCSI, and NVMe over Fabrics (NVMe-oF), configure the nodes to support the protocol and use multipath. For more information, see Configuring transport protocols.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Block Storage service as the back end for the Image service:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              replicas: 3
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:cinder
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            description = Default cinder backend
            cinder_store_auth_address = {{ .KeystoneInternalURL }}
            cinder_store_user_name = {{ .ServiceUser }}
            cinder_store_password = {{ .ServicePassword }}
            cinder_store_project_name = service
            cinder_catalog_info = volumev3::internalURL
            cinder_use_multipath = true
    ...
    • Set replicas to 3 for high availability across APIs.
    • Replace <backend_name> with the name of the default cinder back end, for example nfs_store.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

Creating a multi-attach volume type

When you use the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume), ideally in the Block Storage service project that the glance user owns.

When a user creates multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume to copy the data multiple times. By default, a Block Storage volume cannot be attached more than once to the same host, so this causes performance issues and some of these instances or volumes are not created. However, most Block Storage back ends support the multi-attach property, which enables a volume to be attached multiple times to the same host. You can prevent these issues by creating a Block Storage volume type that enables the multi-attach property and configuring the Image service to use this multi-attach volume type.

Note

By default, only the Block Storage project administrator can create volume types.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create a Block Storage volume type for the Image service back end that enables the multi-attach property, as follows:

    $ openstack volume type create glance-multiattach
    $ openstack volume type set --property multiattach="<is> True"  glance-multiattach

    If you do not specify a back end for this volume type, the Block Storage scheduler service determines which back end to use when it creates each image volume, so these volumes might be saved on different back ends. You can specify the back end by adding the volume_backend_name property to this volume type. Ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. This example uses iscsi as the back-end name.

    $ openstack volume type set glance-multiattach --property volume_backend_name=iscsi
  3. Exit the openstackclient pod:

    $ exit
  4. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml. In the glance template, add the following parameter to the end of the [<backend_name>] section of customServiceConfig to configure the Image service to use the Block Storage multi-attach volume type:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          ...
          customServiceConfig: |
            ...
            [<backend_name>]
            ...
            cinder_volume_type = glance-multiattach
    ...
    • Replace <backend_name> with the name of the default back end.
  5. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  6. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

You can add the following parameters to the end of the [<backend_name>] section of customServiceConfig in the glance template of your OpenStackControlPlane CR file.

Table 6.1. Block Storage back-end parameters for the Image service

Parameter = Default value | Type | Description of use

cinder_use_multipath = False

boolean value

Set to True when multipath is supported for your deployment.

cinder_enforce_multipath = False

boolean value

Set to True to abort the attachment of volumes for image transfer when multipath is not running.

cinder_mount_point_base = /var/lib/glance/mnt

string value

Specify the absolute path of the mount point, which is the directory where the Image service mounts the NFS share.

Note

This parameter is only applicable when using an NFS Block Storage back end for the Image service.

cinder_do_extend_attached = False

boolean value

Set to True when the images are > 1 GB to optimize the Block Storage process of creating the required volume sizes for each image.

The Block Storage service creates an initial 1 GB volume and extends it in 1 GB increments until it contains the data of the entire image. When this parameter is not added or is set to False, this incremental process is very time-consuming: each extension requires the volume to be detached, extended by 1 GB if it is still smaller than the image size, and then reattached. Setting this parameter to True optimizes the process by performing the consecutive 1 GB extensions while the volume remains attached.

Note

This parameter requires your Block Storage back end to support the extension of attached (in-use) volumes. See your back-end driver documentation for information on which features are supported.

cinder_volume_type = __DEFAULT__

string value

Specify the name of the Block Storage volume type that can be optimized for creating volumes for images. For example, you can create a volume type that enables the creation of multiple instances or volumes from a volume-backed image. For more information, see Creating a multi-attach volume type.

When this parameter is not used, volumes are created by using the default Block Storage volume type.
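Taken together, these parameters form the back-end section of customServiceConfig. The following sketch shows them in context; the back-end name iscsi_backend and the volume type glance-multiattach are illustrative values, not defaults:

```yaml
customServiceConfig: |
  [DEFAULT]
  enabled_backends = iscsi_backend:cinder
  [glance_store]
  default_backend = iscsi_backend
  [iscsi_backend]
  description = Default cinder backend
  # Example values; enable only the options that your back end supports
  cinder_use_multipath = true
  cinder_enforce_multipath = true
  cinder_do_extend_attached = true
  cinder_volume_type = glance-multiattach
```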

6.2. Configuring an Object Storage back end

You can configure the Image service (glance) with the Object Storage service (swift) as the storage back end.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Object Storage service as the back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              replicas: 3
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:swift
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            swift_store_create_container_on_put = True
            swift_store_auth_version = 3
            swift_store_auth_address = {{ .KeystoneInternalURL }}
            swift_store_key = {{ .ServicePassword }}
            swift_store_user = service:glance
            swift_store_endpoint_type = internalURL
    ...
    • Set replicas to 3 for high availability across APIs.
    • Replace <backend_name> with the name of the default back end.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.3. Configuring an S3 back end

To configure the Image service (glance) with S3 as the storage back end, you require the following details:

  • S3 access key
  • S3 secret key
  • S3 endpoint

For security, these details are stored in a Kubernetes secret.

Procedure

  1. Create a configuration file, for example, glance-s3.conf, where you can store the S3 configuration details.
  2. Generate the secret and access keys for your S3 storage.

    • If your S3 storage is provisioned by the Ceph Object Gateway (RGW), run the following command to generate the secret and access keys:

      $ radosgw-admin user create --uid="<user_1>" \
      --display-name="<Jane Doe>"
      • Replace <user_1> with the user ID.
      • Replace <Jane Doe> with a display name for the user.
    • If your S3 storage is provisioned by the Object Storage service (swift), run the following command to generate the secret and access keys:

      $ openstack credential create --type ec2 \
      --project admin admin \
      '{"access": "<access_key>", "secret": "<secret_key>"}'
      • Replace <access_key> with the access key that you want to create for the user.
      • Replace <secret_key> with the secret key that you want to create for the user.
  3. Add the S3 configuration details to your glance-s3.conf configuration file:

    [default_backend]
    s3_store_host = <s3_endpoint>
    s3_store_access_key = <s3_access_key>
    s3_store_secret_key = <s3_secret_key>
    s3_store_bucket = <s3_bucket>
    • Replace <s3_endpoint> with the host where the S3 server is listening. This option can contain a DNS name, for example, s3.amazonaws.com, or an IP address.
    • Replace <s3_access_key> and <s3_secret_key> with the keys generated by the S3 back end.
    • Replace <s3_bucket> with the name of the bucket where you want to store images in the S3 back end. If you set s3_store_create_bucket_on_put to True in your OpenStackControlPlane CR file, the bucket is created automatically if it does not already exist.
  4. Create a secret from the glance-s3.conf file:

    $ oc create secret generic glances3 \
    --from-file glance-s3.conf
  5. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure S3 as the back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:s3
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            s3_store_create_bucket_on_put = True
            s3_store_bucket_url_format = "path"
            s3_store_cacert = "/etc/pki/tls/certs/ca-bundle.crt"
            s3_store_large_object_size = 0
          glanceAPIs:
            default:
              customServiceConfigSecrets:
              - glances3
    ...
    • Replace <backend_name> with the name of the default back end.
    • Optional: If your S3 storage is accessed by HTTPS, you must set the s3_store_cacert field and point it to the ca-bundle.crt path. The OpenStack control plane is deployed by default with TLS enabled, and a CA certificate is mounted to the pod in /etc/pki/tls/certs/ca-bundle.crt.
    • Optional: Set s3_store_large_object_size to 0 to force multipart upload when you create an image in the S3 back end from a Block Storage service (cinder) volume.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.4. Configuring an NFS back end

You can configure the Image service (glance) with NFS as the storage back end. NFS is not native to the Image service. When you mount an NFS share to use for the Image service, the Image service writes data to the file system but does not validate the availability of the NFS share.

If you use NFS as a back end for the Image service, refer to the following best practices to mitigate risk:

  • Use a reliable production-grade NFS back end.
  • Make sure the network is available to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane where the Image service is deployed, and that the Image service has a NetworkAttachmentDefinition custom resource (CR) that points to the network. This configuration ensures that the Image service pods can reach the NFS server.
  • Set export permissions. Write permissions must be present in the shared file system that you use as a store.

Limitations

  • In Red Hat OpenStack Services on OpenShift (RHOSO), you cannot set client-side NFS mount options in a pod spec. You can set NFS mount options in one of the following ways:

    • Set server-side mount options.
    • Use /etc/nfsmount.conf.
    • Mount NFS volumes by using PersistentVolumes, which have mount options.
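As a sketch of the PersistentVolume option, a PersistentVolume can carry client-side NFS mount options in its mountOptions field; the volume name, server address, export path, capacity, and option values below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glance-nfs          # example name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  mountOptions:             # client-side NFS mount options
  - nfsvers=4.1
  - noatime
  nfs:
    path: /exports/glance   # example export path
    server: 192.168.122.3   # example NFS server address
```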

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the extraMounts parameter in the spec section to add the export path and IP address of the NFS share. The path is mapped to /var/lib/glance/images, where the Image service API (glanceAPI) stores and retrieves images:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    ...
    spec:
      extraMounts:
      - extraVol:
        - extraVolType: Nfs
          mounts:
          - mountPath: /var/lib/glance/images
            name: nfs
          propagation:
          - Glance
          volumes:
          - name: nfs
            nfs:
              path: <nfs_export_path>
              server: <nfs_ip_address>
        name: r1
        region: r1
    ...
    • Replace <nfs_export_path> with the export path of your NFS share.
    • Replace <nfs_ip_address> with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service.
  2. Add the following parameters to the glance template to configure NFS as the back end:

    ...
    spec:
      extraMounts:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              type: single
              replicas: 3
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:file
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            filesystem_store_datadir = /var/lib/glance/images
          databaseInstance: openstack
    ...
    • Set replicas to 3 for high availability across APIs.
    • Replace <backend_name> with the name of the default back end.

      Note

      When you configure an NFS back end, you must set the type to single. By default, the Image service has a split deployment type for an external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone), and an internal API service, which is accessible only through the internal endpoint for the Identity service. The split deployment type is invalid for a file back end because different pods access the same file share.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.5. Configuring multiple stores

You can configure the Image service (glance) with multiple storage back ends.

To configure multiple back ends for a single Image service API (glanceAPI) instance, you set the enabled_backends parameter with key-value pairs. The key is the identifier for the store and the value is the type of store. The following values are valid:

  • file
  • http
  • rbd
  • swift
  • cinder
  • s3
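Each key in enabled_backends is an identifier that you choose, and each value is one of the store types in this list. For example, the following line, in which the identifiers fast-rbd and archive-s3 are hypothetical, enables one RBD store and one S3 store:

```ini
enabled_backends = fast-rbd:rbd,archive-s3:s3
```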

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the parameters to the glance template to configure the back ends. In the following example, there are two Ceph RBD stores and one Object Storage service (swift) store:

    ...
    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug=True
            enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
    ...
  2. Specify the back end to use as the default back end. In the following example, the default back end is ceph-1:

    ...
          customServiceConfig: |
            [DEFAULT]
            debug=True
            enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
            [glance_store]
            default_backend = ceph-1
    ...
  3. Add the configuration for each back end type you want to use:

    • Add the configuration for the first Ceph RBD store, ceph-0:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              rbd_store_ceph_conf = /etc/ceph/ceph-0.conf
              store_description = "RBD backend"
              rbd_store_pool = images
              rbd_store_user = openstack
      ...
    • Add the configuration for the second Ceph RBD store, ceph-1:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              ...
              [ceph-1]
              rbd_store_ceph_conf = /etc/ceph/ceph-1.conf
              store_description = "RBD backend 1"
              rbd_store_pool = images
              rbd_store_user = openstack
      ...
    • Add the configuration for the Object Storage service store, swift-0:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              ...
              [ceph-1]
              ...
              [swift-0]
              swift_store_create_container_on_put = True
              swift_store_auth_version = 3
              swift_store_auth_address = {{ .KeystoneInternalURL }}
              swift_store_key = {{ .ServicePassword }}
              swift_store_user = service:glance
              swift_store_endpoint_type = internalURL
      ...
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.6. Configuring multiple Image service API instances

You can deploy multiple Image service API (glanceAPI) instances to serve different workloads, for example in an edge deployment. When you deploy multiple glanceAPI instances, the same glance-operator orchestrates them, but you can connect them to a single back end or to different back ends.

Multiple glanceAPI instances inherit the same configuration from the main customServiceConfig parameter in your OpenStackControlPlane CR file. You use the extraMounts parameter to connect each instance to a back end. For example, you can connect each instance to a single Red Hat Ceph Storage cluster or to different Red Hat Ceph Storage clusters.

You can also deploy multiple glanceAPI instances in an availability zone (AZ) to serve different workloads in that AZ.

Note

You can only register one glanceAPI instance as an endpoint for OpenStack CLI operations in the Keystone catalog, but you can change the default endpoint by updating the keystoneEndpoint parameter in your OpenStackControlPlane CR file.

For information about adding and decommissioning glanceAPIs, see Performing operations with the Image service (glance).
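For example, to switch the default endpoint in the Keystone catalog from an instance named api0 to one named api1, you would change only the keystoneEndpoint value in the glance template, as in this sketch:

```yaml
spec:
  glance:
    template:
      keystoneEndpoint: api1   # previously api0
```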

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the glanceAPIs parameter to the glance template to configure multiple glanceAPI instances. In the following example, you create three glanceAPI instances that are named api0, api1, and api2:

    ...
    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:rbd
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            rbd_store_ceph_conf = /etc/ceph/ceph.conf
            store_description = "RBD backend"
            rbd_store_pool = images
            rbd_store_user = openstack
          databaseInstance: openstack
          databaseUser: glance
          keystoneEndpoint: api0
          glanceAPIs:
            api0:
              replicas: 1
            api1:
              replicas: 1
            api2:
              replicas: 1
        ...
    • Replace <backend_name> with the name of the default back end.
    • api0 is registered in the Keystone catalog and is the default endpoint for OpenStack CLI operations.
    • api1 and api2 are not default endpoints, but they are active APIs that users can use for image uploads by specifying the --os-image-url parameter when they upload an image.
    • You can update the keystoneEndpoint parameter to change the default endpoint in the Keystone catalog.
  2. Add the extraMounts parameter to connect each of the three glanceAPI instances to a different back end. In the following example, you connect api0, api1, and api2 to three different Ceph Storage clusters that are named ceph0, ceph1, and ceph2:

    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            ...
          extraMounts:
            - name: api0
              region: r1
              extraVol:
                - propagation:
                  - api0
                  volumes:
                  - name: ceph0
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph0
                    mountPath: "/etc/ceph"
                    readOnly: true
            - name: api1
              region: r1
              extraVol:
                - propagation:
                  - api1
                  volumes:
                  - name: ceph1
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph1
                    mountPath: "/etc/ceph"
                    readOnly: true
            - name: api2
              region: r1
              extraVol:
                - propagation:
                  - api2
                  volumes:
                  - name: ceph2
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph2
                    mountPath: "/etc/ceph"
                    readOnly: true
    ...
    • Replace <secret_name> with the name of the secret associated with the Ceph Storage cluster that you are using as the back end for the specific glanceAPI, for example, ceph-conf-files-0 for the ceph0 cluster.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.7. Split and single Image service API layouts

By default, the Image service (glance) has a split deployment type:

  • An external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone)
  • An internal API service, which is accessible only through the internal endpoint for the Identity service

The split deployment type is invalid for an NFS or file back end because different pods access the same file share. When you configure an NFS or file back end, you must set the type to single in your OpenStackControlPlane CR.

Split layout example

In the following example of a split layout type in an edge deployment, two glanceAPI instances are deployed in an availability zone (AZ) to serve different workloads in that AZ.

...
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
...
      keystoneEndpoint: api0
      glanceAPIs:
        api0:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:rbd
          replicas: 1
          type: split
        api1:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:swift
          replicas: 1
          type: split
        ...
  • Replace <backend_name> with the name of the default back end.

Single layout example

In the following example of a single layout type in an NFS back-end configuration, different pods access the same file share:

...
spec:
  extraMounts:
  ...
  glance:
    template:
      glanceAPIs:
        default:
          type: single
          replicas: 3
      ...
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:file
        [glance_store]
        default_backend = <backend_name>
        [<backend_name>]
        filesystem_store_datadir = /var/lib/glance/images
      databaseInstance: openstack
...
  • Set replicas to 3 for high availability across APIs.
  • Replace <backend_name> with the name of the default back end.

6.8. Configuring multistore with edge architecture

When you use multiple stores with distributed edge architecture, you can have a Ceph RADOS Block Device (RBD) image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites.

The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores.

With an RBD image pool at every edge site, you can launch instances quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. This means that you can launch instances from volumes and have live migration. For more information about layering with Ceph RBD, see Ceph block device layering in the Red Hat Ceph Storage Block Device Guide.

When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central Image service store to edge sites to save time during instance launch.

Refer to the following requirements to use images with edge sites:

  • A copy of each image must exist in the Image service at the central location.
  • You must copy images from an edge site to the central location before you can copy them to other edge sites.
  • You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage.
  • RBD must be the storage driver for the Image, Compute, and Block Storage services.

For more information about using images with DCN, see Deploying a Distributed Compute Node (DCN) architecture.
