Chapter 6. Configuring the Image service (glance)
The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or store a snapshot of a server image. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services.
You can configure the following back ends (stores) for the Image service:
- RADOS Block Device (RBD) is the default back end when you use Red Hat Ceph Storage. For more information, see Configuring the control plane to use the Red Hat Ceph Storage cluster.
- Block Storage (cinder).
- Object Storage (swift).
- NFS.
- RBD multistore. You can use multiple stores with distributed edge architecture so that you can have an image pool at every edge site.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
6.1. Configuring a Block Storage back end for the Image service
You can configure the Image service (glance) with the Block Storage service (cinder) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
- Ensure that placement, network, and transport protocol requirements are met. For example, if your Block Storage service back end is Fibre Channel (FC), the nodes on which the Image service API is running must have a host bus adapter (HBA). For FC, iSCSI, and NVMe over Fabrics (NVMe-oF), configure the nodes to support the protocol and use multipath. For more information, see Configuring transport protocols.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Block Storage service as the back end:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  ...
  glance:
    template:
      glanceAPIs:
        default:
          replicas: 3 # Configure back end; set to 3 when deploying service
          ...
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:cinder
        [glance_store]
        default_backend = default_backend
        [default_backend]
        rootwrap_config = /etc/glance/rootwrap.conf
        description = Default cinder backend
        cinder_store_user_name = {{ .ServiceUser }}
        cinder_store_password = {{ .ServicePassword }}
        cinder_store_project_name = service
        cinder_catalog_info = volumev3::publicURL
  ...
- Set replicas to 3 for high availability across APIs.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
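After the control plane is ready, you can optionally verify the back end by uploading a test image. The image file name in this sketch is illustrative; the Block Storage store typically saves each image as a volume in the project configured by cinder_store_project_name, the service project in this example, so an admin user can check for the new volume:

$ openstack image create --disk-format qcow2 --container-format bare \
    --file ./test-image.qcow2 test-image
$ openstack volume list --project service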
6.2. Configuring an Object Storage back end
You can configure the Image service (glance) with the Object Storage service (swift) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Object Storage service as the back end:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  ...
  glance:
    template:
      glanceAPIs:
        default:
          replicas: 3 # Configure back end; set to 3 when deploying service
          ...
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:swift
        [glance_store]
        default_backend = default_backend
        [default_backend]
        swift_store_create_container_on_put = True
        swift_store_auth_version = 3
        swift_store_auth_address = {{ .KeystoneInternalURL }}
        swift_store_key = {{ .ServicePassword }}
        swift_store_user = service:glance
        swift_store_endpoint_type = internalURL
  ...
- Set replicas to 3 for high availability across APIs.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
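After the control plane is ready, you can optionally verify the back end by uploading a test image and confirming that it reaches the active status. The image file name in this sketch is illustrative:

$ openstack image create --disk-format qcow2 --container-format bare \
    --file ./test-image.qcow2 test-image
$ openstack image show test-image -c status -f value

The second command returns active when the Image service has written the image data to the Object Storage back end.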
6.3. Configuring an NFS back end
You can configure the Image service (glance) with NFS as the storage back end. NFS is not native to the Image service. When you mount an NFS share to use for the Image service, the Image service writes data to the file system but does not validate the availability of the NFS share.
If you use NFS as a back end for the Image service, refer to the following best practices to mitigate risk:
- Use a reliable production-grade NFS back end.
- Make sure the network is available to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane where the Image service is deployed, and that the Image service has a NetworkAttachmentDefinition custom resource (CR) that points to the network. This configuration ensures that the Image service pods can reach the NFS server.
- Set export permissions. Write permissions must be present in the shared file system that you use as a store.
Limitations
In Red Hat OpenStack Services on OpenShift (RHOSO), you cannot set client-side NFS mount options in a pod spec. You can set NFS mount options in one of the following ways:
- Set server-side mount options.
- Use /etc/nfsmount.conf.
- Mount NFS volumes by using PersistentVolumes, which have mount options, as shown in the example after this list.
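The following PersistentVolume definition is a minimal sketch of the third option. The volume name, capacity, export path, server address, and mount options are illustrative values that you must adapt to your environment:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glance-nfs-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  mountOptions: # client-side NFS mount options applied when the volume is mounted
    - nfsvers=4.2
    - noatime
  nfs:
    path: /exports/glance
    server: 192.168.122.3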
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the extraMounts parameter in the spec section to add the export path and IP address of the NFS share. The path is mapped to /var/lib/glance/images, where the Image service API stores and retrieves images:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
  ...
spec:
  extraMounts:
    - extraVol:
        - extraVolType: Nfs
          mounts:
            - mountPath: /var/lib/glance/images
              name: nfs
          propagation:
            - Glance
          volumes:
            - name: nfs
              nfs:
                path: <nfs_export_path>
                server: <nfs_ip_address>
      name: r1
      region: r1
  ...
- Replace <nfs_export_path> with the export path of your NFS share.
- Replace <nfs_ip_address> with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service.
Add the following parameters to the glance template to configure NFS as the back end:

...
spec:
  extraMounts:
    ...
  glance:
    template:
      glanceAPIs:
        default:
          type: single
          replicas: 3 # Configure back end; set to 3 when deploying service
          ...
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:file
        [glance_store]
        default_backend = default_backend
        [default_backend]
        filesystem_store_datadir = /var/lib/glance/images
      databaseInstance: openstack
  ...
- Set replicas to 3 for high availability across APIs.

Note: When you configure an NFS back end, you must set the type to single. By default, the Image service has a split deployment type: an external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone), and an internal API service, which is accessible only through the internal endpoint for the Identity service. The split deployment type is invalid for a file back end because different pods access the same file share.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
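To confirm that the Image service pods mounted the NFS share, you can check the mount point inside one of the running Image service API pods. The pod name is a placeholder in this sketch; list the pods first and substitute the actual name:

$ oc get pods -n openstack | grep glance
$ oc rsh -n openstack <glance_api_pod_name> df -h /var/lib/glance/images

The output should show your NFS server and export path as the mounted file system for /var/lib/glance/images.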
6.4. Configuring multistore for a single Image service API instance
You can configure the Image service (glance) with multiple storage back ends.
To configure multiple back ends for a single Image service API (glanceAPI) instance, you set the enabled_backends parameter with key-value pairs. The key is the identifier for the store and the value is the type of store. The following values are valid:
- file
- http
- rbd
- swift
- cinder
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back ends, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the parameters to the glance template to configure the back ends. In the following example, there are two Ceph RBD stores and one Object Storage service (swift) store:

...
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        debug=True
        enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
...
Specify the back end to use as the default back end. In the following example, the default back end is ceph-1:

...
customServiceConfig: |
  [DEFAULT]
  debug=True
  enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
  [glance_store]
  default_backend = ceph-1
...
Add the configuration for each back end type you want to use:
Add the configuration for the first Ceph RBD store, ceph-0:

...
customServiceConfig: |
  [DEFAULT]
  ...
  [ceph-0]
  rbd_store_ceph_conf = /etc/ceph/ceph-0.conf
  store_description = "RBD backend"
  rbd_store_pool = images
  rbd_store_user = openstack
...
Add the configuration for the second Ceph RBD store, ceph-1:

...
customServiceConfig: |
  [DEFAULT]
  ...
  [ceph-0]
  ...
  [ceph-1]
  rbd_store_ceph_conf = /etc/ceph/ceph-1.conf
  store_description = "RBD backend 1"
  rbd_store_pool = images
  rbd_store_user = openstack
...
Add the configuration for the Object Storage service store, swift-0:

...
customServiceConfig: |
  [DEFAULT]
  ...
  [ceph-0]
  ...
  [ceph-1]
  ...
  [swift-0]
  swift_store_create_container_on_put = True
  swift_store_auth_version = 3
  swift_store_auth_address = {{ .KeystoneInternalURL }}
  swift_store_key = {{ .ServicePassword }}
  swift_store_user = service:glance
  swift_store_endpoint_type = internalURL
...
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
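After the control plane is ready, you can optionally confirm that all configured stores are advertised by the Image service API by querying its store discovery endpoint. This is a sketch that assumes you have admin credentials loaded; the token and endpoint lookups use standard OpenStack CLI commands:

$ TOKEN=$(openstack token issue -f value -c id)
$ GLANCE_URL=$(openstack endpoint list --service image --interface public -f value -c URL)
$ curl -s -H "X-Auth-Token: $TOKEN" "$GLANCE_URL/v2/info/stores" | python3 -m json.tool

The response lists each enabled store and indicates which one is the default.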
6.5. Configuring multiple Image service API instances
You can deploy multiple Image service API (glanceAPI) instances to serve different workloads, for example in an edge deployment. When you deploy multiple glanceAPI instances, they are orchestrated by the same glance-operator, but you can connect them to a single back end or to different back ends.

Multiple glanceAPI instances inherit the same configuration from the main customServiceConfig parameter in your OpenStackControlPlane CR file. You use the extraMounts parameter to connect each instance to a back end. For example, you can connect each instance to a single Red Hat Ceph Storage cluster or to different Red Hat Ceph Storage clusters.

You can also deploy multiple glanceAPI instances in an availability zone (AZ) to serve different workloads in that AZ.

You can register only one glanceAPI instance in the Keystone catalog as the endpoint for OpenStack CLI operations, but you can change the default endpoint by updating the keystoneEndpoint parameter in your OpenStackControlPlane CR file.

For information about adding and decommissioning glanceAPIs, see Performing operations with the Image service (glance).
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the glanceAPIs parameter to the glance template to configure multiple glanceAPI instances. In the following example, you create three glanceAPI instances that are named api0, api1, and api2:

...
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:rbd
        [glance_store]
        default_backend = default_backend
        [default_backend]
        rbd_store_ceph_conf = /etc/ceph/ceph.conf
        store_description = "RBD backend"
        rbd_store_pool = images
        rbd_store_user = openstack
      databaseInstance: openstack
      databaseUser: glance
      keystoneEndpoint: api0
      glanceAPIs:
        api0:
          replicas: 1
        api1:
          replicas: 1
        api2:
          replicas: 1
...
- api0 is registered in the Keystone catalog and is the default endpoint for OpenStack CLI operations.
- api1 and api2 are not default endpoints, but they are active APIs that users can use for image uploads by specifying the --os-image-url parameter when they upload an image, as shown in the example after this list.
- You can update the keystoneEndpoint parameter to change the default endpoint in the Keystone catalog.
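For example, a user can target the api1 instance directly when uploading an image. This is a sketch that uses the glance client; the endpoint URL, image file, and image name are placeholders that you must replace with values from your environment:

$ glance --os-image-url <api1_endpoint_url> image-create \
    --disk-format qcow2 --container-format bare \
    --file ./test-image.qcow2 --name test-image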
Add the extraMounts parameter to connect each of the three glanceAPI instances to a different back end. In the following example, you connect api0, api1, and api2 to three different Ceph Storage clusters that are named ceph0, ceph1, and ceph2:

spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        ...
  extraMounts:
    - name: api0
      region: r1
      extraVol:
        - propagation:
            - api0
          volumes:
            - name: ceph0
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph0
              mountPath: "/etc/ceph"
              readOnly: true
    - name: api1
      region: r1
      extraVol:
        - propagation:
            - api1
          volumes:
            - name: ceph1
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph1
              mountPath: "/etc/ceph"
              readOnly: true
    - name: api2
      region: r1
      extraVol:
        - propagation:
            - api2
          volumes:
            - name: ceph2
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph2
              mountPath: "/etc/ceph"
              readOnly: true
...
- Replace <secret_name> with the name of the secret associated with the Ceph Storage cluster that you are using as the back end for the specific glanceAPI, for example, ceph-conf-files-0 for the ceph0 cluster.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
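You can confirm which glanceAPI instance is registered in the Keystone catalog by listing the Image service endpoints:

$ openstack endpoint list --service image

The listed public and internal URLs belong to the instance set in the keystoneEndpoint parameter, api0 in this example.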
6.6. Split and single Image service API layouts
By default, the Image service (glance) has a split deployment type:
- An external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone)
- An internal API service, which is accessible only through the internal endpoint for the Identity service
The split deployment type is invalid for an NFS or file back end because different pods access the same file share. When you configure an NFS or file back end, you must set the type to single in your OpenStackControlPlane CR.
Split layout example
In the following example of a split layout type in an edge deployment, two glanceAPI instances are deployed in an availability zone (AZ) to serve different workloads in that AZ.
...
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        ...
      keystoneEndpoint: api0
      glanceAPIs:
        api0:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:rbd
          replicas: 1
          type: split
        api1:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:swift
          replicas: 1
          type: split
...
Single layout example
In the following example of a single layout type in an NFS back-end configuration, different pods access the same file share:
...
spec:
  extraMounts:
    ...
  glance:
    template:
      glanceAPIs:
        default:
          type: single
          replicas: 3 # Configure back end; set to 3 when deploying service
          ...
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:file
        [glance_store]
        default_backend = default_backend
        [default_backend]
        filesystem_store_datadir = /var/lib/glance/images
      databaseInstance: openstack
...
- Set replicas to 3 for high availability across APIs.
6.7. Configuring multistore with edge architecture
When you use multiple stores with distributed edge architecture, you can have a Ceph RADOS Block Device (RBD) image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites.
The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores.
With an RBD image pool at every edge site, you can launch instances quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. This means that you can launch instances from volumes and have live migration. For more information about layering with Ceph RBD, see Ceph block device layering in the Red Hat Ceph Storage Block Device Guide.
When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central Image service store to edge sites to save time during instance launch.
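For example, you can copy an existing image from the central store to an edge store by using the copy-image image import method. This is a sketch that assumes the copy-image import method is enabled in your deployment, that your glance client version supports the --stores option, and that ceph-1 matches an edge store identifier in your configuration; the image ID is a placeholder:

$ glance image-import <image_id> --import-method copy-image --stores ceph-1
$ glance image-show <image_id> | grep stores

The stores field of the image lists every store that holds a copy of the image data.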
Refer to the following requirements to use images with edge sites:
- A copy of each image must exist in the Image service at the central location.
- You must copy images from an edge site to the central location before you can copy them to other edge sites.
- You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage.
- RBD must be the storage driver for the Image, Compute, and Block Storage services.
For more information about using images with DCN, see Deploying a Distributed Compute Node (DCN) architecture.