Chapter 1. The Image service (glance)
Manage images and storage in Red Hat OpenStack Platform (RHOSP).
1.1. Virtual Machine (VM) image formats
A VM image is a file that contains a virtual disk with a bootable operating system installed. VM images are supported in different formats. The following formats are available in Red Hat OpenStack Platform (RHOSP):
- RAW: Unstructured disk image format.
- QCOW2: Disk format supported by the QEMU emulator. This format includes QCOW2v3 (sometimes referred to as QCOW3), which requires QEMU 1.1 or higher.
- ISO: Sector-by-sector copy of the data on a disk, stored in a binary file.
- AKI: Indicates an Amazon Kernel Image.
- AMI: Indicates an Amazon Machine Image.
- ARI: Indicates an Amazon RAMDisk Image.
- VDI: Disk format supported by the VirtualBox VM monitor and the QEMU emulator.
- VHD: Common disk format used by VM monitors from VMware, VirtualBox, and others.
- PLOOP: A disk format supported and used by Virtuozzo to run OS containers.
- OVA: Indicates that what is stored in the Image service (glance) is an OVA tar archive file.
- DOCKER: Indicates that what is stored in the Image service (glance) is a Docker tar archive of the container file system.
Although ISO is not normally considered a VM image format, ISOs contain bootable file systems with an installed operating system, so you use them in the same way as other VM image files.
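Before you upload an image, you can inspect or convert its format with the qemu-img tool from the QEMU package. A minimal sketch; the file names are illustrative:

$ qemu-img info image.qcow2                                # report format, virtual size, and disk size
$ qemu-img convert -f qcow2 -O raw image.qcow2 image.raw   # convert from QCOW2 to RAW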
1.2. Supported Image service back ends
The following Image service (glance) back-end scenarios are supported:
- RADOS Block Device (RBD) is the default back end when you use Ceph.
- RBD multi-store.
- Object Storage (swift). The Image service uses the Object Storage type and back end as the default.
- Block Storage (cinder).
- NFS
Important
Although NFS is a supported Image service deployment option, more robust options are available.
NFS is not native to the Image service. When you mount an NFS share on the Image service, the Image service does not manage the operation. The Image service writes data to the file system but is unaware that the back end is an NFS share.
In this type of deployment, the Image service cannot retry a request if the share fails. This means that when a failure occurs on the back end, the store might enter read-only mode, or it might continue to write data to the local file system, in which case you risk data loss. To recover from this situation, you must ensure that the share is mounted and in sync, and then restart the Image service. For these reasons, Red Hat does not recommend NFS as an Image service back end.
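Recovery is a manual operation outside the Image service. A hedged sketch, assuming the share is listed in /etc/fstab and glance-api runs as the tripleo_glance_api container service (both names are assumptions for illustration):

$ sudo mount -a                               # remount the NFS share defined in /etc/fstab
$ sudo systemctl restart tripleo_glance_api   # restart the Image service API container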
However, if you do choose to use NFS as an Image service back end, the following best practices can help to mitigate risks:
- Use a reliable production-grade NFS back end.
- Ensure that you have a strong and reliable connection between Controller nodes and the NFS back end: Layer 2 (L2) network connectivity is recommended.
- Include monitoring and alerts for the mounted share.
- Set underlying file system permissions. Write permissions must be present in the shared file system that you use as a store.
- Ensure that the user and the group that the glance-api process runs as do not have write permissions on the mount point in the local file system. This means that the process can detect a possible mount failure and put the store into read-only mode during a write attempt, as shown in the sketch after this list.
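A hedged sketch of this permission arrangement on a Controller node, assuming /var/lib/glance/images is the local mount point and glance is the service user (both are illustrative):

# With the share unmounted, make the local mount point unwritable for the
# glance user so that a failed mount cannot silently absorb image writes:
$ sudo chown root:root /var/lib/glance/images
$ sudo chmod 755 /var/lib/glance/images
# Grant write access for the glance user on the exported file system
# itself, on the NFS server.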
1.3. Image signing and verification
Image signing and verification protects image integrity and authenticity by enabling deployers to sign images and save the signatures and public key certificates as image properties.
Image signing and verification is not supported if the Compute service (nova) uses RADOS Block Device (RBD) to store virtual machine disks.
For information on image signing and verification, see Validating Image service (glance) images in the Manage Secrets with OpenStack Key Manager guide.
1.4. Image conversion
Image conversion converts images to the preferred format for your deployment by calling the task API while importing an image.
As part of the import workflow, a plugin provides the image conversion. You can activate or deactivate this plugin based on the deployment configuration. The deployer needs to specify the preferred format of images for the deployment.
Internally, the Image service (glance) receives the bits of the image in a particular format and stores the bits in a temporary location. The Image service triggers the plugin to convert the image to the target format and move the image to a final destination. When the task is finished, the Image service deletes the temporary location. The Image service does not retain the format that was uploaded initially.
You can trigger image conversion only when importing an image. It does not run when uploading an image.
Use the Image service command-line client for image management.
For example:
$ glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name NAME \
    --visibility public \
    --import-method web-download \
    --uri http://server/image.qcow2
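The conversion plugin itself is enabled through director. A hedged sketch of the relevant heat parameters, assuming the GlanceImageImportPlugins and GlanceImageConversionOutputFormat parameter names match your RHOSP release (verify them before use):

parameter_defaults:
  GlanceImageImportPlugins: ['image_conversion']
  GlanceImageConversionOutputFormat: 'raw'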
1.5. Interoperable image import
The interoperable image import workflow enables you to import images in three ways:

- Use the web-download (default) method to import images from a URI.
- Use the glance-direct method to import images from a local file system.
- Use the copy-image method to copy an existing image to other Image service (glance) back ends that are in your deployment. Use this import method only if multiple Image service back ends are enabled in your deployment. Hedged command sketches for the glance-direct and copy-image methods follow this list.
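The following sketches are illustrative: the image name, file path, image ID, and store names are placeholders, and for copy-image the --stores option is an assumption based on the multi-store feature of your glance client version:

$ glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name NAME \
    --import-method glance-direct \
    --file /path/to/image.qcow2

$ glance image-import <image_id> \
    --import-method copy-image \
    --stores <store_1>,<store_2>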
1.6. Improving scalability with Image service caching
Use the glance-api caching mechanism to store copies of images on Image service (glance) API servers and retrieve them automatically to improve scalability. With Image service caching, glance-api can run on multiple hosts. This means that it does not need to retrieve the same image from back end storage multiple times. Image service caching does not affect any Image service operations.
Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates:
Procedure
- In an environment file, set the value of the GlanceCacheEnabled parameter to true, which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template:

parameter_defaults:
  GlanceCacheEnabled: true
- Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud.
- Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes:

parameter_defaults:
  ControllerExtraConfig:
    glance::cache::pruner::minute: '*/5'
Adjust the frequency according to your needs to avoid file system full scenarios. Consider the following factors when you choose an alternative frequency (a related cache-size sketch follows this list):
- The size of the files that you want to cache in your environment.
- The amount of available file system space.
- The frequency at which the environment caches images.
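The total cache size is capped by the image_cache_max_size option in glance-api.conf. The following hedged sketch sets it through the same ControllerExtraConfig pattern that this guide uses for other glance-api.conf options; the 10 GiB value is illustrative:

parameter_defaults:
  ControllerExtraConfig:
    glance::config::glance_api_config:
      DEFAULT/image_cache_max_size:
        value: '10737418240'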
1.7. Image pre-caching
Red Hat OpenStack Platform (RHOSP) director can pre-cache images as part of the glance-api service.
Use the Image service (glance) command-line client for image management.
1.7.1. Configuring the default interval for periodic image pre-caching
Red Hat OpenStack Platform (RHOSP) director can pre-cache images as part of the glance-api service.
The pre-caching periodic job runs every 300 seconds (5 minutes) by default on each Controller node where the glance-api service is running. To change the default time, you can set the cache_prefetcher_interval parameter under the DEFAULT section in glance-api.conf.
Procedure
- Add a new interval with the ExtraConfig parameter in an environment file on the undercloud according to your requirements:

parameter_defaults:
  ControllerExtraConfig:
    glance::config::glance_api_config:
      DEFAULT/cache_prefetcher_interval:
        value: '<300>'

Replace <300> with the number of seconds that you want as an interval to pre-cache images.
- After you adjust the interval in the environment file in /home/stack/templates/, log in as the stack user and deploy the configuration:

$ openstack overcloud deploy --templates \
    -e /home/stack/templates/<env_file>.yaml
Replace <env_file> with the name of the environment file that contains the ExtraConfig settings that you added.
Important
If you passed any extra environment files when you created the overcloud, pass them again here by using the -e option to avoid making undesired changes to the overcloud.
For more information about the openstack overcloud deploy command, see Deployment command in the Director Installation and Usage guide.
1.7.2. Using a periodic job to pre-cache an image
Use a periodic job to pre-cache an image.
Prerequisites
To use a periodic job to pre-cache an image, you must use the glance-cache-manage command connected directly to the node where the glance_api service is running. Do not use a proxy, which hides the node that answers a service request. Because the undercloud might not have access to the network where the glance_api service is running, run commands on the first overcloud node, which is called controller-0 by default.
Complete the following prerequisite procedure to ensure that you run commands from the correct host, have the necessary credentials, and are running the glance-cache-manage commands from inside the glance-api container.
Procedure
- Log in to the undercloud as the stack user and identify the provisioning IP address of controller-0:

(undercloud) [stack@site-undercloud-0 ~]$ openstack server list -f value -c Name -c Networks | grep controller
overcloud-controller-1 ctlplane=192.168.24.40
overcloud-controller-2 ctlplane=192.168.24.13
overcloud-controller-0 ctlplane=192.168.24.71
(undercloud) [stack@site-undercloud-0 ~]$
- To authenticate to the overcloud, copy the credentials that are stored in /home/stack/overcloudrc, by default, to controller-0:

$ scp ~/overcloudrc tripleo-admin@192.168.24.71:/home/tripleo-admin/
- Connect to controller-0:

$ ssh tripleo-admin@192.168.24.71
- On controller-0 as the tripleo-admin user, identify the IP address of the glance_api service. In the following example, the IP address is 172.25.1.105:

(overcloud) [root@controller-0 ~]# grep -A 10 '^listen glance_api' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
listen glance_api
 server central-controller0-0.internalapi.redhat.local 172.25.1.105:9292 check fall 5 inter 2000 rise 2
- Because the glance-cache-manage command is only available in the glance_api container, create a script to exec into that container, where the environment variables to authenticate to the overcloud are already set. Create a script called glance_pod.sh in /home/tripleo-admin on controller-0 with the following contents:

sudo podman exec -ti \
  -e NOVA_VERSION=$NOVA_VERSION \
  -e COMPUTE_API_VERSION=$COMPUTE_API_VERSION \
  -e OS_USERNAME=$OS_USERNAME \
  -e OS_PROJECT_NAME=$OS_PROJECT_NAME \
  -e OS_USER_DOMAIN_NAME=$OS_USER_DOMAIN_NAME \
  -e OS_PROJECT_DOMAIN_NAME=$OS_PROJECT_DOMAIN_NAME \
  -e OS_NO_CACHE=$OS_NO_CACHE \
  -e OS_CLOUDNAME=$OS_CLOUDNAME \
  -e no_proxy=$no_proxy \
  -e OS_AUTH_TYPE=$OS_AUTH_TYPE \
  -e OS_PASSWORD=$OS_PASSWORD \
  -e OS_AUTH_URL=$OS_AUTH_URL \
  -e OS_IDENTITY_API_VERSION=$OS_IDENTITY_API_VERSION \
  -e OS_COMPUTE_API_VERSION=$OS_COMPUTE_API_VERSION \
  -e OS_IMAGE_API_VERSION=$OS_IMAGE_API_VERSION \
  -e OS_VOLUME_API_VERSION=$OS_VOLUME_API_VERSION \
  -e OS_REGION_NAME=$OS_REGION_NAME \
  glance_api /bin/bash
- Source the overcloudrc file and run the glance_pod.sh script to exec into the glance_api container with the necessary environment variables to authenticate to the overcloud Controller node:

[tripleo-admin@controller-0 ~]$ source overcloudrc
(overcloudrc) [tripleo-admin@central-controller-0 ~]$ bash glance_pod.sh
()[glance@controller-0 /]$
- Use a command such as glance image-list to verify that the container can run authenticated commands against the overcloud:

()[glance@controller-0 /]$ glance image-list
+--------------------------------------+------------------------------+
| ID                                   | Name                         |
+--------------------------------------+------------------------------+
| ad2f8daf-56f3-4e10-b5dc-d28d3a81f659 | cirros-0.4.0-x86_64-disk.img |
+--------------------------------------+------------------------------+
()[glance@controller-0 /]$
Procedure
As the admin user, queue an image to cache:
$ glance-cache-manage --host=<host_ip> queue-image <image_id>
- Replace <host_ip> with the IP address of the Controller node where the glance-api container is running.
- Replace <image_id> with the ID of the image that you want to queue.
When you have queued the images that you want to pre-cache, the cache_images periodic job prefetches all queued images concurrently.
Note
Because the image cache is local to each node, if your Red Hat OpenStack Platform is deployed with HA (with 3, 5, or 7 Controllers), you must specify the host address with the --host option when you run the glance-cache-manage command.
Run the following command to view the images in the image cache:
$ glance-cache-manage --host=<host_ip> list-cached
Replace <host_ip> with the IP address of the host in your environment.
Related information
You can use additional glance-cache-manage commands for the following purposes (an example follows the list):

- list-cached to list all images that are currently cached.
- list-queued to list all images that are currently queued for caching.
- queue-image to queue an image for caching.
- delete-cached-image to purge an image from the cache.
- delete-all-cached-images to remove all images from the cache.
- delete-queued-image to delete an image from the cache queue.
- delete-all-queued-images to delete all images from the cache queue.
1.8. Using the Image service API to enable sparse image upload
With the Image service (glance) API, you can use sparse image upload to reduce network traffic and save storage space. This feature is particularly useful in distributed compute node (DCN) environments. With a sparse image file, the Image service does not write null byte sequences; it writes only the actual data at the appropriate offsets. Storage back ends return the skipped ranges as null bytes without actually consuming storage space.
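Sparseness is a general file system behavior, not specific to the Image service. A quick illustration with standard coreutils commands; the file name is illustrative:

$ truncate -s 1G sparse.img          # create a 1 GiB file without writing any data
$ du -h --apparent-size sparse.img   # 1.0G apparent (virtual) size
$ du -h sparse.img                   # 0 blocks actually allocated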
Use the Image service command-line client for image management.
Limitations
- Sparse image upload is supported only with Ceph RADOS Block Device (RBD).
- Sparse image upload is not supported for file systems.
- Sparseness is not maintained during the transfer between the client and the Image service API. The image is made sparse at the Image service API level.
Prerequisites
- Your Red Hat OpenStack Platform (RHOSP) deployment uses RBD for the Image service back end.
Procedure
- Log in to the undercloud node as the stack user.
- Source the stackrc credentials file:

$ source stackrc
- Create an environment file with the following content:

parameter_defaults:
  GlanceSparseUploadEnabled: true
- Add your new environment file to the stack with your other environment files and deploy the overcloud:

$ openstack overcloud deploy \
    --templates \
    …
    -e <existing_overcloud_environment_files> \
    -e <new_environment_file>.yaml \
    ...
For more information about uploading images, see Uploading an image.
Verification
You can import an image and check its size to verify sparse image upload.
The following procedure uses example commands. Replace the values with those from your environment where appropriate.
- Download the image file locally:

$ wget <file_location>/<file_name>

Replace <file_location> with the location of the file.
Replace <file_name> with the name of the file.
For example:

$ wget https://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-1508.qcow2
- Check the disk size and the virtual size of the image to be uploaded:

$ qemu-img info <file_name>

For example:

$ qemu-img info CentOS-6-x86_64-GenericCloud-1508.qcow2
image: CentOS-6-x86_64-GenericCloud-1508.qcow2
file format: qcow2
virtual size: 8 GiB (8589934592 bytes)
disk size: 1.09 GiB
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 1
- Import the image:

$ glance image-create-via-import --disk-format qcow2 --container-format bare --name centos_1 --file <file_name>
- Record the image ID. It is required in a subsequent step.
- Verify that the image is imported and in an active state:

$ glance image-show <image_id>
- From a Ceph Storage node, verify that the size of the image is less than the virtual size reported by qemu-img info in the earlier step:

$ sudo rbd -p images diff <image_id> | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'
1.03906 GB
- Optional: You can confirm that rbd_thin_provisioning is configured in the Image service configuration file on the Controller nodes:
- Use SSH to access a Controller node:

$ ssh -A -t tripleo-admin@<controller_node_IP_address>
- Confirm that rbd_thin_provisioning equals True on that Controller node:

$ sudo podman exec -it glance_api sh -c 'grep ^rbd_thin_provisioning /etc/glance/glance-api.conf'
1.9. Secure metadef APIs
In Red Hat OpenStack Platform (RHOSP), users can define key value pairs and tag metadata with metadata definition (metadef) APIs. Currently, there is no limit on the number of metadef namespaces, objects, properties, resources, or tags that users can create.
Metadef APIs can leak information to unauthorized users. A malicious user can exploit the lack of restrictions and fill the Image service (glance) database with unlimited resources, which can create a Denial of Service (DoS) style attack.
Image service policies control metadef APIs. However, the default policy setting for metadef APIs allows all users to create or read the metadef information. Because metadef resources are not isolated to the owner, metadef resources with potentially sensitive names, such as internal infrastructure details or customer names, can expose that information to malicious users.
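For context, metadef resources are reachable through ordinary client commands. A hedged sketch, assuming your glance client version provides the metadef (md-*) subcommands; the namespace name is illustrative:

$ glance md-namespace-list                 # list metadef namespaces visible to you
$ glance md-namespace-create my-namespace  # create a namespace (admin-only under the policy in the next section)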
1.9.1. Configuring a policy to restrict metadef APIs
To make the Image service (glance) more secure, restrict metadef modification APIs to admin-only access by default in your Red Hat OpenStack Platform (RHOSP) deployments.
Procedure
- As a cloud administrator, create a separate heat template environment file, such as lock-down-glance-metadef-api.yaml, to contain policy overrides for the Image service metadef API:

...
parameter_defaults:
  GlanceApiPolicies: {
    glance-metadef_default: { key: 'metadef_default', value: '' },
    glance-metadef_admin: { key: 'metadef_admin', value: 'role:admin' },
    glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' },
    glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' },
    glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_admin' },
    glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_admin' },
    glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_admin' },
    glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' },
    glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' },
    glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_admin' },
    glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_admin' },
    glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_admin' },
    glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' },
    glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' },
    glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_admin' },
    glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_admin' },
    glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' },
    glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' },
    glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_admin' },
    glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_admin' },
    glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_admin' },
    glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' },
    glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' },
    glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_admin' },
    glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_admin' },
    glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_admin' },
    glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_admin' },
    glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_admin' }
  }
…
- Include the environment file that contains the policy overrides in the deployment command with the -e option when you deploy the overcloud:

$ openstack overcloud deploy -e lock-down-glance-metadef-api.yaml
1.9.2. Enabling metadef APIs
If you previously restricted metadata definition (metadef) APIs or want to relax the new defaults, you can override metadef modification policies to allow users to update their respective resources.
Cloud administrators with users who depend on write access to the metadef APIs can make those APIs accessible to all users. In this type of configuration, however, there is the potential to unintentionally leak sensitive resource names, such as customer names and internal projects. Administrators must audit their systems to identify previously created resources that might be vulnerable even if only read access is enabled for all users.
Procedure
- As a cloud administrator, log in to the undercloud and create a file for policy overrides. For example:

$ cat open-up-glance-api-metadef.yaml

- Configure the policy override file to allow metadef API read-write access to all users:

GlanceApiPolicies: {
  glance-metadef_default: { key: 'metadef_default', value: '' },
  glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' },
  glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' },
  glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_default' },
  glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_default' },
  glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_default' },
  glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' },
  glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' },
  glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_default' },
  glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_default' },
  glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_default' },
  glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' },
  glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' },
  glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_default' },
  glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_default' },
  glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' },
  glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' },
  glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_default' },
  glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_default' },
  glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_default' },
  glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' },
  glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' },
  glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_default' },
  glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_default' },
  glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_default' },
  glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_default' },
  glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_default' }
}
Note
You must configure all metadef policies to use rule:metadef_default.

- Include the new policy file in the deployment command with the -e option when you deploy the overcloud:

$ openstack overcloud deploy -e open-up-glance-api-metadef.yaml