Chapter 1. The Image service (glance)
The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or snapshot a server image, and immediately store it. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services.
1.1. Virtual machine image formats
A virtual machine (VM) image is a file that contains a virtual disk with a bootable OS installed. Red Hat OpenStack Platform (RHOSP) supports VM images in different formats.
The disk format of a VM image is the format of the underlying disk image. The container format indicates if the VM image is in a file format that also contains metadata about the VM.
When you add an image to the Image service (glance), you can set the disk or container format for your image to any of the values in the following tables by using the `--disk-format` and `--container-format` command options with the `glance image-create`, `glance image-create-via-import`, and `glance image-update` commands. If you are not sure of the container format of your VM image, you can set it to `bare`.
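For example, the following command registers a local QCOW2 image file with the `bare` container format; `<name>` and `<file_name>` are placeholders for your image name and image file:

```
$ glance image-create \
    --disk-format qcow2 \
    --container-format bare \
    --name <name> \
    --file <file_name>.qcow2
```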
Table 1.1. Disk formats

| Format | Description |
| --- | --- |
| `aki` | Indicates an Amazon kernel image that is stored in the Image service. |
| `ami` | Indicates an Amazon machine image that is stored in the Image service. |
| `ari` | Indicates an Amazon ramdisk image that is stored in the Image service. |
| `iso` | Sector-by-sector copy of the data on a disk, stored in a binary file. Although an ISO file is not normally considered a VM image format, these files contain bootable file systems with an installed operating system, and you use them in the same way as other VM image files. |
| `ploop` | A disk format supported and used by Virtuozzo to run OS containers. |
| `qcow2` | Supported by the QEMU emulator. This format includes QCOW2v3 (sometimes referred to as QCOW3), which requires QEMU 1.1 or higher. |
| `raw` | Unstructured disk image format. |
| `vdi` | Supported by the VirtualBox VM monitor and the QEMU emulator. |
| `vhd` | Virtual Hard Disk. Used by VM monitors from VMware, VirtualBox, and others. |
| `vhdx` | Virtual Hard Disk v2. Disk image format with a larger storage capacity than VHD. |
| `vmdk` | Virtual Machine Disk. Disk image format that allows incremental backups of data changes from the time of the last backup. |
Table 1.2. Container formats

| Format | Description |
| --- | --- |
| `aki` | Indicates an Amazon kernel image that is stored in the Image service. |
| `ami` | Indicates an Amazon machine image that is stored in the Image service. |
| `ari` | Indicates an Amazon ramdisk image that is stored in the Image service. |
| `bare` | Indicates there is no container or metadata envelope for the image. |
| `docker` | Indicates a TAR archive of the file system of a Docker container that is stored in the Image service. |
| `ova` | Indicates an Open Virtual Appliance (OVA) TAR archive file that is stored in the Image service. The archive packages the image in the Open Virtualization Format (OVF). |
| `ovf` | OVF container file format. Open standard for packaging and distributing virtual appliances or software to be run on virtual machines. |
1.2. Supported Image service back ends
The following Image service (glance) back-end scenarios are supported:
- RADOS Block Device (RBD) is the default back end when you use Ceph.
- RBD multi-store.
- Object Storage (swift). If you do not use Ceph, the Image service uses Object Storage as the default back end.
- Block Storage (cinder).
- NFS

Important
Although NFS is a supported Image service deployment option, more robust options are available.
NFS is not native to the Image service. When you mount an NFS share on the Image service, the Image service does not manage the operation. The Image service writes data to the file system but is unaware that the back end is an NFS share.
In this type of deployment, the Image service cannot retry a request if the share fails. This means that when a failure occurs on the back end, the store might enter read-only mode, or it might continue to write data to the local file system, in which case you risk data loss. To recover from this situation, you must ensure that the share is mounted and in sync, and then restart the Image service. For these reasons, Red Hat does not recommend NFS as an Image service back end.
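For example, a minimal recovery sketch; the mount point `/var/lib/glance/images` and the `tripleo_glance_api` systemd unit are assumptions that depend on your deployment:

```
# Check whether the NFS share is still mounted; remount it if necessary
$ mount | grep /var/lib/glance/images || sudo mount -a

# After the share is mounted and in sync, restart the Image service API
$ sudo systemctl restart tripleo_glance_api
```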
However, if you do choose to use NFS as an Image service back end, some of the following best practices can help to mitigate risks:
- Use a reliable production-grade NFS back end.
- Ensure that you have a strong and reliable connection between Controller nodes and the NFS back end: Layer 2 (L2) network connectivity is recommended.
- Include monitoring and alerts for the mounted share.
- Set underlying file system permissions. Write permissions must be present in the shared file system that you use as a store.
- Ensure that the user and group that the glance-api process runs as do not have write permissions on the mount point in the local file system. With this configuration, the process can detect a possible mount failure and put the store into read-only mode during a write attempt, instead of silently writing to the local disk (see the sketch that follows).
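The following is a minimal sketch of that permission arrangement, assuming the store is mounted at `/var/lib/glance/images`, the export lives at `/srv/nfs/glance-images` on the NFS server, and glance-api runs as the `glance` user and group; adjust all three for your environment:

```
# On the NFS server: the exported file system must be writable by the
# UID/GID that maps to the glance user on the Controller nodes
$ sudo chown glance:glance /srv/nfs/glance-images
$ sudo chmod 0750 /srv/nfs/glance-images

# On the Controller node, with the share unmounted: remove write access
# from the local mount point so that a failed mount surfaces as a write
# error instead of silently filling the local disk
$ sudo chown root:root /var/lib/glance/images
$ sudo chmod 0555 /var/lib/glance/images
$ sudo mount -a
```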
1.3. Image signing and verification
Image signing and verification protects image integrity and authenticity by enabling deployers to sign images and save the signatures and public key certificates as image properties.
Image signing and verification is not supported if the Compute service (nova) uses RADOS Block Device (RBD) to store virtual machine disks.
For information on image signing and verification, see Validating Image service (glance) images in the Managing secrets with the Key Manager service guide.
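As an illustrative sketch, a signed image carries its signature material as image properties. The property names below follow the upstream image signature feature, and the values are placeholders:

```
$ glance image-create \
    --name <signed_image_name> \
    --disk-format qcow2 \
    --container-format bare \
    --property img_signature='<base64_encoded_signature>' \
    --property img_signature_hash_method='SHA-256' \
    --property img_signature_key_type='RSA-PSS' \
    --property img_signature_certificate_uuid='<certificate_uuid_in_barbican>'
```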
1.4. Image format conversion
You can convert images to a different format by activating the image conversion plugin when you import images to the Image service (glance).
You can activate or deactivate the image conversion plugin based on your Red Hat OpenStack Platform (RHOSP) deployment configuration. The deployer configures the preferred format of images for the deployment.
Internally, the Image service receives the bits of the image in a particular format and stores the bits in a temporary location. The Image service triggers the plugin to convert the image to the target format and move the image to a final destination. When the task is finished, the Image service deletes the temporary location. The Image service does not retain the format that was uploaded initially.
You can trigger image conversion only when importing an image. It does not run when uploading an image.
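As a sketch, you can activate the plugin and set the target format with heat parameters similar to the following; the parameter names `GlanceImageImportPlugins` and `GlanceImageConversionOutputFormat` are assumptions to verify against the heat templates for your RHOSP version:

```
parameter_defaults:
  # Activate the image conversion import plugin (assumed parameter name)
  GlanceImageImportPlugins: ['image_conversion']
  # Target format for converted images (assumed parameter name)
  GlanceImageConversionOutputFormat: 'raw'
```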
Use the Image service command-line client for image management.
For example:

```
$ glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name <name> \
    --visibility public \
    --import-method web-download \
    --uri http://server/image.qcow2
```

Replace `<name>` with the name of your image.
1.5. Improving scalability with Image service caching
Use the Image service (glance) API caching mechanism to store copies of images on Image service API servers and retrieve them automatically to improve scalability. With Image service caching, you can run glance-api on multiple hosts, and the hosts do not need to retrieve the same image from back-end storage multiple times. Image service caching does not affect any Image service operations.
Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates:
Procedure
1. In an environment file, set the value of the `GlanceCacheEnabled` parameter to `true`, which automatically sets the `flavor` value to `keystone+cachemanagement` in the `glance-api.conf` heat template:

   ```
   parameter_defaults:
     GlanceCacheEnabled: true
   ```
2. Include the environment file in the `openstack overcloud deploy` command when you redeploy the overcloud.
3. Optional: Tune the `glance_cache_pruner` to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes:

   ```
   parameter_defaults:
     ControllerExtraConfig:
       glance::cache::pruner::minute: '*/5'
   ```
Adjust the frequency according to your needs to avoid file system full scenarios. Consider the following factors when you choose an alternative frequency; a sizing sketch follows this list:
- The size of the files that you want to cache in your environment.
- The amount of available file system space.
- The frequency at which the environment caches images.
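If pruning frequency alone does not bound disk usage, you can also cap the cache size. The following is a sketch; the `GlanceImageCacheMaxSize` parameter name is an assumption to verify against the heat templates for your RHOSP version:

```
parameter_defaults:
  # Maximum image cache size in bytes (assumed parameter name); the
  # pruner removes cached images above this limit
  GlanceImageCacheMaxSize: 10737418240
```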
1.6. Image pre-caching
You can use Red Hat OpenStack Platform (RHOSP) director to pre-cache images as part of the `glance-api` service.
Use the Image service (glance) command-line client for image management.
1.6.1. Configuring the default interval for periodic image pre-caching
The Image service (glance) pre-caching periodic job runs every 300 seconds (5 minutes) by default on each Controller node where the `glance-api` service is running. To change the default interval, set the `cache_prefetcher_interval` parameter under the `[DEFAULT]` section of the `glance-api.conf` file.
Procedure
1. Add a new interval with the `ExtraConfig` parameter in an environment file on the undercloud according to your requirements:

   ```
   parameter_defaults:
     ControllerExtraConfig:
       glance::config::glance_api_config:
         DEFAULT/cache_prefetcher_interval:
           value: '<300>'
   ```

   Replace `<300>` with the number of seconds that you want as an interval to pre-cache images.
2. After you adjust the interval in the environment file in `/home/stack/templates/`, log in as the `stack` user and deploy the configuration:

   ```
   $ openstack overcloud deploy --templates \
     -e /home/stack/templates/<env_file>.yaml
   ```

   Replace `<env_file>` with the name of the environment file that contains the `ExtraConfig` settings that you added.

   Important: If you passed any extra environment files when you created the overcloud, pass them again here by using the `-e` option to avoid making undesired changes to the overcloud.
Additional resources
For more information about the `openstack overcloud deploy` command, see Deployment command in the Installing and managing Red Hat OpenStack Platform with director guide.
1.6.2. Preparing to use a periodic job to pre-cache an image
To use a periodic job to pre-cache an image, you must use the `glance-cache-manage` command connected directly to the node where the `glance_api` service is running. Do not use a proxy, which hides the node that answers a service request. Because the undercloud might not have access to the network where the `glance_api` service is running, run commands on the first overcloud node, which is called `controller-0` by default.

Complete the following prerequisite procedure to ensure that you run commands from the correct host, have the necessary credentials, and are running the `glance-cache-manage` commands from inside the `glance-api` container.
Procedure
1. Log in to the undercloud as the `stack` user and identify the provisioning IP address of `controller-0`:

   ```
   (undercloud) [stack@site-undercloud-0 ~]$ openstack server list -f value -c Name -c Networks | grep controller
   overcloud-controller-1 ctlplane=192.168.24.40
   overcloud-controller-2 ctlplane=192.168.24.13
   overcloud-controller-0 ctlplane=192.168.24.71
   ```
2. To authenticate to the overcloud, copy the credentials that are stored, by default, in `/home/stack/overcloudrc` to `controller-0`:

   ```
   $ scp ~/overcloudrc tripleo-admin@192.168.24.71:/home/tripleo-admin/
   ```
3. Connect to `controller-0`:

   ```
   $ ssh tripleo-admin@192.168.24.71
   ```
4. On `controller-0` as the `tripleo-admin` user, identify the IP address of the `glance_api` service. In the following example, the IP address is `172.25.1.105`:

   ```
   (overcloud) [root@controller-0 ~]# grep -A 10 '^listen glance_api' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

   listen glance_api
     server central-controller0-0.internalapi.redhat.local 172.25.1.105:9292 check fall 5 inter 2000 rise 2
   ```
5. Because the `glance-cache-manage` command is only available in the `glance_api` container, create a script to exec into that container, where the environment variables to authenticate to the overcloud are already set. Create a script called `glance_pod.sh` in `/home/tripleo-admin` on `controller-0` with the following contents:

   ```
   sudo podman exec -ti \
     -e NOVA_VERSION=$NOVA_VERSION \
     -e COMPUTE_API_VERSION=$COMPUTE_API_VERSION \
     -e OS_USERNAME=$OS_USERNAME \
     -e OS_PROJECT_NAME=$OS_PROJECT_NAME \
     -e OS_USER_DOMAIN_NAME=$OS_USER_DOMAIN_NAME \
     -e OS_PROJECT_DOMAIN_NAME=$OS_PROJECT_DOMAIN_NAME \
     -e OS_NO_CACHE=$OS_NO_CACHE \
     -e OS_CLOUDNAME=$OS_CLOUDNAME \
     -e no_proxy=$no_proxy \
     -e OS_AUTH_TYPE=$OS_AUTH_TYPE \
     -e OS_PASSWORD=$OS_PASSWORD \
     -e OS_AUTH_URL=$OS_AUTH_URL \
     -e OS_IDENTITY_API_VERSION=$OS_IDENTITY_API_VERSION \
     -e OS_COMPUTE_API_VERSION=$OS_COMPUTE_API_VERSION \
     -e OS_IMAGE_API_VERSION=$OS_IMAGE_API_VERSION \
     -e OS_VOLUME_API_VERSION=$OS_VOLUME_API_VERSION \
     -e OS_REGION_NAME=$OS_REGION_NAME \
     glance_api /bin/bash
   ```
6. Source the `overcloudrc` file and run the `glance_pod.sh` script to exec into the `glance_api` container with the necessary environment variables to authenticate to the overcloud Controller node:

   ```
   [tripleo-admin@controller-0 ~]$ source overcloudrc
   (overcloudrc) [tripleo-admin@central-controller-0 ~]$ bash glance_pod.sh
   ()[glance@controller-0 /]$
   ```
7. Use a command such as `glance image-list` to verify that the container can run authenticated commands against the overcloud:

   ```
   ()[glance@controller-0 /]$ glance image-list
   +--------------------------------------+------------------------------+
   | ID                                   | Name                         |
   +--------------------------------------+------------------------------+
   | ad2f8daf-56f3-4e10-b5dc-d28d3a81f659 | cirros-0.4.0-x86_64-disk.img |
   +--------------------------------------+------------------------------+
   ```
1.6.3. Using a periodic job to pre-cache an image
When you have completed the prerequisite procedure in Section 1.6.2, “Preparing to use a periodic job to pre-cache an image”, you can use a periodic job to pre-cache an image.
Procedure
1. As the admin user, queue an image to cache:

   ```
   $ glance-cache-manage --host=<host_ip> queue-image <image_id>
   ```

   Replace `<host_ip>` with the IP address of the Controller node where the `glance-api` container is running. Replace `<image_id>` with the ID of the image that you want to queue.

   When you have queued the images that you want to pre-cache, the `cache_images` periodic job prefetches all queued images concurrently.

   Note: Because the image cache is local to each node, if your Red Hat OpenStack Platform (RHOSP) deployment is HA, with 3, 5, or 7 Controllers, then you must specify the host address with the `--host` option when you run the `glance-cache-manage` command.
2. Run the following command to view the images in the image cache:

   ```
   $ glance-cache-manage --host=<host_ip> list-cached
   ```

   Replace `<host_ip>` with the IP address of the host in your environment.
1.6.4. Image caching command options
You can use the following `glance-cache-manage` command options to queue images for caching and manage cached images; a usage example follows the list:

- `list-cached` to list all images that are currently cached.
- `list-queued` to list all images that are currently queued for caching.
- `queue-image` to queue an image for caching.
- `delete-cached-image` to purge an image from the cache.
- `delete-all-cached-images` to remove all images from the cache.
- `delete-queued-image` to delete an image from the cache queue.
- `delete-all-queued-images` to delete all images from the cache queue.
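For example, to purge a single image from the cache on a given host and confirm the result, combine the options above; replace `<host_ip>` and `<image_id>` with values from your environment:

```
$ glance-cache-manage --host=<host_ip> delete-cached-image <image_id>
$ glance-cache-manage --host=<host_ip> list-cached
```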
1.7. Using the Image service API to enable sparse image upload
With the Image service (glance) API, you can use sparse image upload to reduce network traffic and save storage space. This feature is particularly useful in distributed compute node (DCN) environments. With a sparse image file, the Image service does not write null byte sequences. The Image service writes data with a given offset. Storage back ends interpret these offsets as null bytes that do not actually consume storage space.
Use the Image service command-line client for image management.
Limitations
- Sparse image upload is supported only with Ceph RADOS Block Device (RBD).
- Sparse image upload is not supported for file systems.
- Sparseness is not maintained during the transfer between the client and the Image service API. Sparseness is applied at the Image service API level.
Prerequisites
- Your Red Hat OpenStack Platform (RHOSP) deployment uses RBD for the Image service back end.
Procedure
1. Log in to the undercloud node as the `stack` user.
2. Source the `stackrc` credentials file:

   ```
   $ source stackrc
   ```
3. Create an environment file with the following content:

   ```
   parameter_defaults:
     GlanceSparseUploadEnabled: true
   ```
4. Add your new environment file to the stack with your other environment files and deploy the overcloud:

   ```
   $ openstack overcloud deploy \
     --templates \
     …
     -e <existing_overcloud_environment_files> \
     -e <new_environment_file>.yaml \
     ...
   ```
For more information about uploading images, see Uploading images to the Image service.
Verification
You can import an image and check its size to verify sparse image upload.
The following procedure uses example commands. Replace the values with those from your environment where appropriate.
1. Download the image file locally:

   ```
   $ wget <file_location>/<file_name>
   ```

   Replace `<file_location>` with the location of the file and `<file_name>` with the name of the file.

   For example:

   ```
   $ wget https://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-1508.qcow2
   ```
2. Check the disk size and the virtual size of the image to be uploaded:

   ```
   $ qemu-img info <file_name>
   ```

   For example:

   ```
   $ qemu-img info CentOS-6-x86_64-GenericCloud-1508.qcow2
   image: CentOS-6-x86_64-GenericCloud-1508.qcow2
   file format: qcow2
   virtual size: 8 GiB (8589934592 bytes)
   disk size: 1.09 GiB
   cluster_size: 65536
   Format specific information:
       compat: 0.10
       refcount bits: 1
   ```
3. Import the image:

   ```
   $ glance image-create-via-import --disk-format qcow2 --container-format bare --name centos_1 --file <file_name>
   ```

   Record the image ID. It is required in a subsequent step.
4. Verify that the image is imported and in an active state:

   ```
   $ glance image show <image_id>
   ```
5. From a Ceph Storage node, verify that the size of the image is less than the virtual size reported in step 2:

   ```
   $ sudo rbd -p images diff <image_id> | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'
   1.03906 GB
   ```
6. Optional: Confirm that `rbd_thin_provisioning` is configured in the Image service configuration file on the Controller nodes:

   1. Use SSH to access a Controller node:

      ```
      $ ssh -A -t tripleo-admin@<controller_node_IP_address>
      ```

   2. Confirm that `rbd_thin_provisioning` equals `True` on that Controller node:

      ```
      $ sudo podman exec -it glance_api sh -c 'grep ^rbd_thin_provisioning /etc/glance/glance-api.conf'
      ```
1.8. Secure metadef APIs
In Red Hat OpenStack Platform (RHOSP), cloud administrators can define key value pairs and tag metadata with metadata definition (metadef) APIs. There is no limit on the number of metadef namespaces, objects, properties, resources, or tags that cloud administrators can create.
Image service policies control the metadef APIs. By default, only cloud administrators can create, update, or delete (CUD) metadef resources. This limitation prevents the metadef APIs from exposing information to unauthorized users and mitigates the risk of a malicious user filling the Image service (glance) database with unlimited resources, which can create a Denial of Service (DoS) style attack. However, cloud administrators can override the default policy.
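For example, under the default policy a cloud administrator can create and inspect metadef resources with the Image service command-line client. This is a sketch; the namespace and property names are placeholders, and you should verify the `md-*` subcommands against your client version:

```
# Create a metadef namespace (admin-only under the default policy)
$ glance md-namespace-create MyNamespace --display-name "My Namespace"

# Add a property definition to the namespace
$ glance md-property-create MyNamespace --name my_property --title "My Property" --type string

# List namespaces to confirm
$ glance md-namespace-list
```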
1.9. Enabling metadef API access for cloud users
Cloud administrators with users who depend on write access to metadata definition (metadef) APIs can make those APIs accessible to all users by overriding the default admin-only policy. In this type of configuration, however, there is the potential to unintentionally leak sensitive resource names, such as customer names and internal projects. Administrators must audit their systems to identify previously created resources that might be vulnerable even if only read-access is enabled for all users.
Procedure
1. As a cloud administrator, log in to the undercloud and create a file for policy overrides. For example:

   ```
   $ cat open-up-glance-api-metadef.yaml
   ```
2. Configure the policy override file to allow metadef API read-write access to all users:

   ```
   GlanceApiPolicies: {
     glance-metadef_default: { key: 'metadef_default', value: '' },
     glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' },
     glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' },
     glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_default' },
     glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_default' },
     glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_default' },
     glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' },
     glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' },
     glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_default' },
     glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_default' },
     glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_default' },
     glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' },
     glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' },
     glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_default' },
     glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_default' },
     glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' },
     glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' },
     glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_default' },
     glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_default' },
     glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_default' },
     glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' },
     glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' },
     glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_default' },
     glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_default' },
     glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_default' },
     glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_default' },
     glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_default' }
   }
   ```
   Note: You must configure all metadef policies to use `rule:metadef_default`. For information about policies and policy syntax, see the Policies chapter.

3. Include the new policy file in the deployment command with the `-e` option when you deploy the overcloud:

   ```
   $ openstack overcloud deploy -e open-up-glance-api-metadef.yaml
   ```