Chapter 6. Configuring the Image service (glance)
The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or store a snapshot of a server image. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services.
You can configure the following back ends (stores) for the Image service:
- RADOS Block Device (RBD) is the default back end when you use Red Hat Ceph Storage.
- RBD multistore. You can use multiple stores only with distributed edge architecture or distributed zones so that you can have an image pool at every edge site or zone.
- Block Storage (cinder).
- Block Storage (cinder) multistore. You can use multiple stores only with distributed zones so that you can have an image pool in every zone.
- Object Storage (swift).
- S3.
- NFS.
For more information about Red Hat Ceph Storage, distributed edge architecture, and distributed zones, see the related product documentation.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
6.1. Configuring a Block Storage back end for the Image service
You can configure the Image service (glance) with the Block Storage service (cinder) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
- Ensure that placement, network, and transport protocol requirements are met. For example, if your Block Storage service back end is Fibre Channel (FC), the nodes on which the Image service API (glanceAPI) is running must have a host bus adapter (HBA). For FC, iSCSI, and NVMe over Fabrics (NVMe-oF), configure the nodes to support the protocol and use multipath. For more information, see Configuring transport protocols.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Block Storage service as the back end for the Image service:
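The glance template might look like the following sketch. The exact schema depends on your RHOSO release, and the store settings shown under [<backend_name>] are illustrative assumptions, not values taken from this document:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:cinder
        [glance_store]
        default_backend = <backend_name>
        [<backend_name>]
        description = Block Storage (cinder) store
      glanceAPIs:
        default:
          replicas: 3
```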
- Set replicas to 3 for high availability across APIs.
- Replace <backend_name> with the name of the default cinder back end, for example nfs_store.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the get command to track deployment progress.
6.1.1. Enabling the creation of multiple instances or volumes from a volume-backed image
When you use the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (an image volume), ideally in a Block Storage service project owned by the glance user.
When a user creates multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume and copy its data multiple times. By default, Block Storage volumes cannot be attached more than once to the same host, so some of these attachments fail and performance suffers. However, most Block Storage back ends support the multi-attach property, which enables a volume to be attached multiple times to the same host. You can prevent these issues by creating a Block Storage volume type for the Image service back end that enables the multi-attach property, and configuring the Image service to use this multi-attach volume type.
By default, only the Block Storage project administrator can create volume types.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient

Create a Block Storage volume type for the Image service back end that enables the multi-attach property:
$ openstack volume type create glance-multiattach
$ openstack volume type set --property multiattach="<is> True" glance-multiattach
If you do not specify a back end for this volume type, the Block Storage scheduler service determines which back end to use when it creates each image volume, so these volumes might be saved on different back ends. You can specify the name of the back end by adding the volume_backend_name property to this volume type. You might need to ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. This example uses iscsi as the back-end name:

$ openstack volume type set glance-multiattach --property volume_backend_name=iscsi
Exit the openstackclient pod:

$ exit
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml. In the glance template, add the following parameter to the end of the [<backend_name>] section of customServiceConfig to configure the Image service to use the Block Storage multi-attach volume type:
- Replace <backend_name> with the name of the default back end.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the get command to track deployment progress.
6.1.2. Parameters for configuring the Block Storage back end
You can add the following parameters to the end of the [<backend_name>] section of customServiceConfig in the glance template in your OpenStackControlPlane CR file.
Parameter = Default value | Type | Description of use
--- | --- | ---
| boolean value | Set to |
| boolean value | Set to |
| string value | Specify a string representing the absolute path of the mount point, the directory where the Image service mounts the NFS share. Note: This parameter is only applicable when using an NFS Block Storage back end for the Image service. |
| boolean value | Set to  The Block Storage service creates an initial 1 GB volume and extends the volume size in 1 GB increments until it contains the data of the entire image. When this parameter is either not added or set to  Note: This parameter requires your Block Storage back end to support the extension of attached (in-use) volumes. See your back-end driver documentation for information on which features are supported. |
| string value | Specify the name of the Block Storage volume type that can be optimized for creating volumes for images. For example, you can create a volume type that enables the creation of multiple instances or volumes from a volume-backed image. For more information, see Creating a multi-attach volume type. When this parameter is not used, volumes are created by using the default Block Storage volume type. |
6.2. Configuring an Object Storage back end
You can configure the Image service (glance) with the Object Storage service (swift) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Object Storage service as the back end:
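A minimal sketch of such a glance template follows. The swift store option shown is an illustrative assumption; verify the settings against your deployment:

```yaml
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:swift
        [glance_store]
        default_backend = <backend_name>
        [<backend_name>]
        swift_store_create_container_on_put = True
      glanceAPIs:
        default:
          replicas: 3
```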
- Set replicas to 3 for high availability across APIs.
- Replace <backend_name> with the name of the default back end.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the get command to track deployment progress.
6.3. Configuring an S3 back end
To configure the Image service (glance) with S3 as the storage back end, you require the following details:
- S3 access key
- S3 secret key
- S3 endpoint
For security, these details are stored in a Kubernetes secret.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
- Create a configuration file, for example, glance-s3.conf, where you can store the S3 configuration details.
- Generate the secret and access keys for your S3 storage.
If your S3 storage is provisioned by the Ceph Object Gateway (RGW), run the following command to generate the secret and access keys:
$ radosgw-admin user create --uid="<user_1>" \
    --display-name="<Jane Doe>"
- Replace <user_1> with the user ID.
- Replace <Jane Doe> with a display name for the user.
If your S3 storage is provisioned by the Object Storage service (swift), run the following command to generate the secret and access keys:
$ openstack credential create --type ec2 \
    --project admin admin \
    '{"access": "<access_key>", "secret": "<secret_key>"}'
- Replace <access_key> with the access key for the credential.
- Replace <secret_key> with the secret key for the credential.
Add the S3 configuration details to your glance-s3.conf configuration file:

[default_backend]
s3_store_host = <_s3_endpoint_>
s3_store_access_key = <_s3_access_key_>
s3_store_secret_key = <_s3_secret_key_>
s3_store_bucket = <_s3_bucket_>
- Replace <_s3_endpoint_> with the host where the S3 server is listening. This option can contain a DNS name, for example, s3.amazonaws.com, or an IP address.
- Replace <_s3_access_key_> and <_s3_secret_key_> with the data generated by the S3 back end.
- Replace <_s3_bucket_> with the bucket name where you want to store images in the S3 back end. If you set s3_store_create_bucket_on_put to True in your OpenStackControlPlane CR file, the bucket is created automatically if it does not already exist.
Create a secret from the glance-s3.conf file:

$ oc create secret generic glances3 \
    --from-file glance-s3.conf
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure S3 as the back end:
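A minimal sketch of the glance template follows. It assumes the secret created earlier is mounted through the customServiceConfigSecrets field, which merges the secret contents into the service configuration; the field layout is illustrative of the general CR shape:

```yaml
spec:
  glance:
    template:
      customServiceConfigSecrets:
      - glances3
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:s3
        [glance_store]
        default_backend = <backend_name>
```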
- Replace <backend_name> with the name of the default back end.
- Optional: If your S3 storage is accessed by HTTPS, set the s3_store_cacert field to the ca-bundle.crt path. The OpenStack control plane is deployed by default with TLS enabled, and a CA certificate is mounted to the pod at /etc/pki/tls/certs/ca-bundle.crt.
- Optional: Set s3_store_large_object_size to 0 to force multipart upload when you create an image in the S3 back end from a Block Storage service (cinder) volume.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the get command to track deployment progress.
6.4. Configuring an NFS back end
You can configure the Image service (glance) with NFS as the storage back end. NFS is not native to the Image service. When you mount an NFS share to use for the Image service, the Image service writes data to the file system but does not validate the availability of the NFS share.
If you use NFS as a back end for the Image service, refer to the following best practices to mitigate risk:
- Use a reliable production-grade NFS back end.
- Make sure the network is available to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane where the Image service is deployed, and that the Image service has a NetworkAttachmentDefinition custom resource (CR) that points to the network. This configuration ensures that the Image service pods can reach the NFS server.
- Set export permissions. Write permissions must be present in the shared file system that you use as a store.
Limitations
In Red Hat OpenStack Services on OpenShift (RHOSO), you cannot set client-side NFS mount options in a pod spec. You can set NFS mount options in one of the following ways:
- Set server-side mount options.
- Use /etc/nfsmount.conf.
- Mount NFS volumes by using PersistentVolumes, which have mount options.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the extraMounts parameter in the spec section to add the export path and IP address of the NFS share. The path is mapped to /var/lib/glance/images, where the Image service API (glanceAPI) stores and retrieves images:
- Replace <nfs_export_path> with the export path of your NFS share.
- Replace <nfs_ip_address> with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service.
Add the following parameters to the glance template to configure NFS as the back end:
- Set replicas to 3 for high availability across APIs.
- Replace <backend_name> with the name of the default back end.

Note: When you configure an NFS back end, you must set the type to single. By default, the Image service has a split deployment type: an external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone), and an internal API service, which is accessible only through the internal endpoint for the Identity service. The split deployment type is invalid for a file back end because different pods access the same file share.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the get command to track deployment progress.
6.5. Configuring multistore for a single Image service API instance
You can configure the Image service (glance) with multiple storage back ends.
To configure multiple back ends for a single Image service API (glanceAPI) instance, you set the enabled_backends parameter with key-value pairs. The key is the identifier for the store and the value is the type of store. The following values are valid:
- file
- http
- rbd
- swift
- cinder
- s3
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back ends, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the parameters to the glance template to configure the back ends. In the following example, there are two Ceph RBD stores and one Object Storage service (swift) store.

Specify the back end to use as the default back end. In the following example, the default back end is ceph-1.

Add the configuration for each back end type you want to use:

- Add the configuration for the first Ceph RBD store, ceph-0.
- Add the configuration for the second Ceph RBD store, ceph-1.
- Add the configuration for the Object Storage service store, swift-0.
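Taken together, these steps might produce a customServiceConfig similar to this sketch. The pool name, user, ceph.conf paths, and store descriptions are illustrative assumptions:

```yaml
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
        [glance_store]
        default_backend = ceph-1
        [ceph-0]
        store_description = "First RBD store"
        rbd_store_ceph_conf = /etc/ceph/ceph-0.conf
        rbd_store_user = openstack
        rbd_store_pool = images
        [ceph-1]
        store_description = "Second RBD store (default)"
        rbd_store_ceph_conf = /etc/ceph/ceph-1.conf
        rbd_store_user = openstack
        rbd_store_pool = images
        [swift-0]
        store_description = "Object Storage store"
        swift_store_create_container_on_put = True
```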
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the get command to track deployment progress.
6.6. Configuring multiple Image service API instances
You can deploy multiple Image service API (glanceAPI) instances to serve different workloads, for example in an edge deployment. When you deploy multiple glanceAPI instances, they are orchestrated by the same glance-operator, but you can connect them to a single back end or to different back ends.
Multiple glanceAPI instances inherit the same configuration from the main customServiceConfig parameter in your OpenStackControlPlane CR file. You use the extraMounts parameter to connect each instance to a back end. For example, you can connect each instance to a single Red Hat Ceph Storage cluster or to different Red Hat Ceph Storage clusters.
You can also deploy multiple glanceAPI instances in an availability zone (AZ) to serve different workloads in that AZ.
You can register only one glanceAPI instance as an endpoint for OpenStack CLI operations in the Keystone catalog, but you can change the default endpoint by updating the keystoneEndpoint parameter in your OpenStackControlPlane CR file.
For information about adding and decommissioning glanceAPIs, see Performing operations with the Image service (glance).
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the glanceAPIs parameter to the glance template to configure multiple glanceAPI instances. In the following example, you create three glanceAPI instances that are named api0, api1, and api2:
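The glanceAPIs map might look like the following sketch. The replica counts and back-end settings are illustrative assumptions:

```yaml
spec:
  glance:
    template:
      keystoneEndpoint: api0
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:rbd
        [glance_store]
        default_backend = <backend_name>
      glanceAPIs:
        api0:
          replicas: 1
        api1:
          replicas: 1
        api2:
          replicas: 1
```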
- Replace <backend_name> with the name of the default back end.
- api0 is registered in the Keystone catalog and is the default endpoint for OpenStack CLI operations.
- api1 and api2 are not default endpoints, but they are active APIs that users can use for image uploads by specifying the --os-image-url parameter when they upload an image.
- You can update the keystoneEndpoint parameter to change the default endpoint in the Keystone catalog.
Add the extraMounts parameter to connect the three glanceAPI instances to different back ends. In the following example, you connect api0, api1, and api2 to three different Ceph Storage clusters that are named ceph0, ceph1, and ceph2:
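One way to express this is one extraMounts entry per instance, each propagating a different Ceph secret to one glanceAPI. The sketch below shows only the entry for api0 and ceph0, and the field layout is an assumption to verify against your CRD:

```yaml
spec:
  extraMounts:
  - name: api0-ceph
    region: r1
    extraVol:
    - propagation:
      - api0
      extraVolType: Ceph
      volumes:
      - name: ceph0
        secret:
          secretName: <secret_name>
      mounts:
      - name: ceph0
        mountPath: /etc/ceph
        readOnly: true
```

Repeat the pattern with the secrets for the ceph1 and ceph2 clusters propagated to api1 and api2.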
- Replace <secret_name> with the name of the secret associated with the Ceph Storage cluster that you are using as the back end for the specific glanceAPI, for example, ceph-conf-files-0 for the ceph0 cluster.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the get command to track deployment progress.
6.7. Split and single Image service API layouts
By default, the Image service (glance) has a split deployment type:
- An external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone)
- An internal API service, which is accessible only through the internal endpoint for the Identity service
The split deployment type is invalid for an NFS or file back end because different pods access the same file share. When you configure an NFS or file back end, you must set the type to single in your OpenStackControlPlane CR.
Split layout example
In the following example of a split layout type in an edge deployment, two glanceAPI instances are deployed in an availability zone (AZ) to serve different workloads in that AZ.
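A sketch of such a layout follows. The instance names api0 and api1, the replica counts, and the back-end settings are illustrative assumptions:

```yaml
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:rbd
        [glance_store]
        default_backend = <backend_name>
      glanceAPIs:
        api0:
          type: split
          replicas: 1
        api1:
          type: split
          replicas: 1
```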
- Replace <backend_name> with the name of the default back end.
Single layout example
In the following example of a single layout type in an NFS back-end configuration, different pods access the same file share.
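A sketch of a single layout follows. Aside from the type: single setting, the fields shown are illustrative assumptions:

```yaml
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:file
        [glance_store]
        default_backend = <backend_name>
      glanceAPIs:
        default:
          type: single
          replicas: 3
```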
- Set replicas to 3 for high availability across APIs.
- Replace <backend_name> with the name of the default back end.
6.8. Configuring multistore with edge architecture
When you use multiple stores with distributed edge architecture, you can have a Ceph RADOS Block Device (RBD) image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites.
The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores.
With an RBD image pool at every edge site, you can launch instances quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. This means that you can launch instances from volumes and have live migration. For more information about layering with Ceph RBD, see Ceph block device layering in the Red Hat Ceph Storage Block Device Guide.
When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central Image service store to edge sites to save time during instance launch.
Refer to the following requirements to use images with edge sites:
- A copy of each image must exist in the Image service at the central location.
- You must copy images from an edge site to the central location before you can copy them to other edge sites.
- You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage.
- RBD must be the storage driver for the Image, Compute, and Block Storage services.
For more information about using images with DCN, see Deploying a Distributed Compute Node (DCN) architecture.