Chapter 1. The Image service (glance)
The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or snapshot a server image, and immediately store it. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services.
1.1. Virtual machine image formats
A virtual machine (VM) image is a file that contains a virtual disk with a bootable OS installed. Red Hat OpenStack Platform (RHOSP) supports VM images in different formats.
The disk format of a VM image is the format of the underlying disk image. The container format indicates if the VM image is in a file format that also contains metadata about the VM.
When you add an image to the Image service (glance), you can set the disk or container format for your image to any of the values in the following tables by using the --disk-format and --container-format command options with the glance image-create, glance image-create-via-import, and glance image-update commands. If you are not sure of the container format of your VM image, you can set it to bare.
Table 1.1. Disk formats

| Format | Description |
|---|---|
| aki | Indicates an Amazon kernel image that is stored in the Image service. |
| ami | Indicates an Amazon machine image that is stored in the Image service. |
| ari | Indicates an Amazon ramdisk image that is stored in the Image service. |
| iso | Sector-by-sector copy of the data on a disk, stored in a binary file. Although an ISO file is not normally considered a VM image format, these files contain bootable file systems with an installed operating system, and you use them in the same way as other VM image files. |
| ploop | A disk format supported and used by Virtuozzo to run OS containers. |
| qcow2 | Supported by the QEMU emulator. This format includes QCOW2v3 (sometimes referred to as QCOW3), which requires QEMU 1.1 or higher. |
| raw | Unstructured disk image format. |
| vdi | Supported by the VirtualBox VM monitor and the QEMU emulator. |
| vhd | Virtual Hard Disk. Used by VM monitors from VMware, VirtualBox, and others. |
| vhdx | Virtual Hard Disk v2. Disk image format with a larger storage capacity than VHD. |
| vmdk | Virtual Machine Disk. Disk image format that allows incremental backups of data changes from the time of the last backup. |
Table 1.2. Container formats

| Format | Description |
|---|---|
| aki | Indicates an Amazon kernel image that is stored in the Image service. |
| ami | Indicates an Amazon machine image that is stored in the Image service. |
| ari | Indicates an Amazon ramdisk image that is stored in the Image service. |
| bare | Indicates there is no container or metadata envelope for the image. |
| docker | Indicates a TAR archive of the file system of a Docker container that is stored in the Image service. |
| ova | Indicates an Open Virtual Appliance (OVA) TAR archive file that is stored in the Image service. This file is stored in the Open Virtualization Format (OVF) container file. |
| ovf | OVF container file format. Open standard for packaging and distributing virtual appliances or software to be run on virtual machines. |
1.2. Supported Image service back ends
The following Image service (glance) back-end scenarios are supported:
- RADOS Block Device (RBD) is the default back end when you use Ceph.
- RBD multi-store.
- Object Storage (swift). The Image service uses the Object Storage type and back end as the default.
- Block Storage (cinder). Each image is stored as a volume (image volume). By default, a user cannot create multiple instances or volumes from a volume-backed image. However, you can configure both the Image service and the Block Storage back end to support this. For more information, see Enabling the creation of multiple instances or volumes from a volume-backed image.
- NFS

Important
Although NFS is a supported Image service deployment option, more robust options are available.
NFS is not native to the Image service. When you mount an NFS share on the Image service, the Image service does not manage the operation. The Image service writes data to the file system but is unaware that the back end is an NFS share.
In this type of deployment, the Image service cannot retry a request if the share fails. This means that when a failure occurs on the back end, the store might enter read-only mode, or it might continue to write data to the local file system, in which case you risk data loss. To recover from this situation, you must ensure that the share is mounted and in sync, and then restart the Image service. For these reasons, Red Hat does not recommend NFS as an Image service back end.
However, if you choose to use NFS as an Image service back end, the following best practices can help to mitigate risks:
- Use a reliable production-grade NFS back end.
- Ensure that you have a strong and reliable connection between Controller nodes and the NFS back end: Layer 2 (L2) network connectivity is recommended.
- Include monitoring and alerts for the mounted share.
- Set underlying file system permissions. Write permissions must be present in the shared file system that you use as a store.
- Ensure that the user and group that the glance-api process runs as do not have write permissions on the mount point in the local file system. This means that the process can detect a possible mount failure and put the store into read-only mode during a write attempt.
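The last practice above can be sketched as follows. This is an illustrative check, not the Image service's actual implementation; the function name and the read-only fallback are hypothetical:

```python
import os
import tempfile


def store_is_writable(mount_point: str) -> bool:
    """Return True if the current process can create a file under mount_point.

    Mirrors the practice above: when the service user has no write
    permission on the bare local mount point, a write attempted while the
    share is unmounted fails immediately instead of silently landing on
    the local disk, and the store can be switched to read-only mode.
    """
    try:
        # Attempt a real write rather than trusting os.access(), which can
        # be misleading on network file systems.
        with tempfile.NamedTemporaryFile(dir=mount_point):
            pass
        return True
    except OSError:
        return False


# Usage sketch: degrade the store when the mount point is not writable.
mode = "rw" if store_is_writable(tempfile.gettempdir()) else "ro"
print(mode)  # → rw
```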
1.2.1. Enabling the creation of multiple instances or volumes from a volume-backed image
When using the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume) ideally in the Block Storage service project owned by the glance user.
When a user wants to create multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume to copy the data multiple times. This causes performance issues, and some of the instances or volumes fail to create, because by default a Block Storage volume cannot be attached to the same host more than once. However, most Block Storage back ends support the volume multi-attach property, which enables a volume to be attached multiple times to the same host. You can therefore prevent these performance issues by creating a Block Storage volume type for the Image service back end that enables the multi-attach property, and by configuring the Image service to use this multi-attach volume type.
By default, only the Block Storage project administrator can create volume types.
Procedure
1. Source the overcloud credentials file:

       $ source ~/<credentials_file>

   Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.

2. Create a Block Storage volume type for the Image service back end that enables the multi-attach property:

       $ cinder type-create glance-multiattach
       $ cinder type-key glance-multiattach set multiattach="<is> True"

   If you do not specify a back end for this volume type, the Block Storage scheduler service determines which back end to use when creating each image volume, so these volumes might be saved on different back ends. You can specify the name of the back end by adding the volume_backend_name property to this volume type. You might need to ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. This example uses iscsi as the back-end name:

       $ cinder type-key glance-multiattach set volume_backend_name=iscsi

3. To configure the Image service to use this Block Storage multi-attach volume type, add the following parameter to the end of the [default_backend] section of the glance-api.conf file:

       cinder_volume_type = glance-multiattach
1.3. Image signing and verification
Image signing and verification protects image integrity and authenticity by enabling deployers to sign images and save the signatures and public key certificates as image properties.
Image signing and verification is not supported if the Compute service (nova) uses RADOS Block Device (RBD) to store virtual machine disks.
For information on image signing and verification, see Validating Image service (glance) images in the Managing secrets with the Key Manager service guide.
1.4. Image format conversion
You can convert images to a different format by activating the image conversion plugin when you import images to the Image service (glance).
You can activate or deactivate the image conversion plugin based on your Red Hat OpenStack Platform (RHOSP) deployment configuration. The deployer configures the preferred format of images for the deployment.
Internally, the Image service receives the bits of the image in a particular format and stores the bits in a temporary location. The Image service triggers the plugin to convert the image to the target format and move the image to a final destination. When the task is finished, the Image service deletes the temporary location. The Image service does not retain the format that was uploaded initially.
You can trigger image conversion only when importing an image. It does not run when uploading an image.
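The staging flow described above can be sketched as follows. The conversion step is a placeholder (a real plugin shells out to a conversion tool rather than copying the bits unchanged), and the function name and paths are illustrative:

```python
import os
import shutil
import tempfile


def import_with_conversion(source_bytes: bytes, final_path: str) -> None:
    """Sketch of the staged import flow: receive, convert, move, clean up."""
    staging_dir = tempfile.mkdtemp(prefix="glance-staging-")
    staged = os.path.join(staging_dir, "incoming.img")
    try:
        # 1. The service receives the image bits and stores them in a
        #    temporary location.
        with open(staged, "wb") as f:
            f.write(source_bytes)

        # 2. The plugin converts the staged image to the target format.
        #    Placeholder step: a real plugin runs a format converter here.
        converted = staged + ".converted"
        shutil.copyfile(staged, converted)

        # 3. The converted image moves to its final destination.
        shutil.move(converted, final_path)
    finally:
        # 4. The temporary location is deleted, so the originally uploaded
        #    format is not retained.
        shutil.rmtree(staging_dir, ignore_errors=True)
```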
Use the Image service command-line client for image management.
For example:
$ glance image-create-via-import \
--disk-format qcow2 \
--container-format bare \
--name <name> \
--visibility public \
--import-method web-download \
--uri http://server/image.qcow2
Replace <name> with the name of your image.
1.5. Improving scalability with Image service caching
Use the Image service (glance) API caching mechanism to store copies of images on Image service API servers and retrieve them automatically to improve scalability. With Image service caching, you can run glance-api on multiple hosts without each host retrieving the same image from back-end storage multiple times. Image service caching does not affect any Image service operations.
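Conceptually, the per-host cache behaves like the following sketch. The class and the fetch callback stand in for the real store round trip; all names are illustrative:

```python
import os
import tempfile


class LocalImageCache:
    """Minimal sketch of a per-host image cache.

    Each glance-api host keeps local copies of served images, so repeated
    requests for the same image skip the round trip to back-end storage.
    """

    def __init__(self, cache_dir, fetch_from_backend):
        self.cache_dir = cache_dir
        self.fetch_from_backend = fetch_from_backend  # stand-in for the store
        self.backend_fetches = 0

    def get(self, image_id: str) -> bytes:
        path = os.path.join(self.cache_dir, image_id)
        if not os.path.exists(path):  # cache miss: fetch from the back end once
            self.backend_fetches += 1
            with open(path, "wb") as f:
                f.write(self.fetch_from_backend(image_id))
        with open(path, "rb") as f:   # cache hit on every later request
            return f.read()


# Usage: the second request is served from the local cache.
cache = LocalImageCache(tempfile.mkdtemp(), lambda image_id: b"image-bits")
cache.get("ad2f8daf")
cache.get("ad2f8daf")
print(cache.backend_fetches)  # → 1
```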
Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates:
Procedure
1. In an environment file, set the value of the GlanceCacheEnabled parameter to true, which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template:

       parameter_defaults:
         GlanceCacheEnabled: true

2. Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud.

3. Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes:

       parameter_defaults:
         ControllerExtraConfig:
           glance::cache::pruner::minute: '*/5'

   Adjust the frequency according to your needs to avoid file-system-full scenarios. Consider the following elements when you choose an alternative frequency:
- The size of the files that you want to cache in your environment.
- The amount of available file system space.
- The frequency at which the environment caches images.
1.6. Image pre-caching
You can use Red Hat OpenStack Platform (RHOSP) director to pre-cache images as part of the glance-api service.
Use the Image service (glance) command-line client for image management.
1.6.1. Configuring the default interval for periodic image pre-caching
The Image service (glance) pre-caching periodic job runs every 300 seconds (5 minutes) by default on each Controller node where the glance-api service is running. To change the default time, you can set the cache_prefetcher_interval parameter, which lives under the [DEFAULT] section of the glance-api.conf file, by using an environment file.
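For illustration, the resulting setting in glance-api.conf would look like the following fragment; the 150-second value is an example only, and in a director-based deployment you set it through an environment file rather than by editing the file directly:

```ini
[DEFAULT]
# Run the pre-caching periodic job every 150 seconds instead of the
# default 300 seconds.
cache_prefetcher_interval = 150
```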
Procedure
1. Add a new interval with the ExtraConfig parameter in an environment file on the undercloud according to your requirements:

       parameter_defaults:
         ControllerExtraConfig:
           glance::config::glance_api_config:
             DEFAULT/cache_prefetcher_interval:
               value: '<300>'

   Replace <300> with the number of seconds that you want as the interval to pre-cache images.

2. After you adjust the interval in the environment file in /home/stack/templates/, log in as the stack user and deploy the configuration:

       $ openstack overcloud deploy --templates \
       -e /home/stack/templates/<env_file>.yaml

   Replace <env_file> with the name of the environment file that contains the ExtraConfig settings that you added.

   Important

   If you passed any extra environment files when you created the overcloud, pass them again here by using the -e option to avoid making undesired changes to the overcloud.
Additional resources
For more information about the openstack overcloud deploy command, see Deployment command in the Installing and managing Red Hat OpenStack Platform with director guide.
1.6.2. Preparing to use a periodic job to pre-cache an image
To use a periodic job to pre-cache an image, you must use the glance-cache-manage command connected directly to the node where the glance_api service is running. Do not use a proxy, which hides the node that answers a service request. Because the undercloud might not have access to the network where the glance_api service is running, run commands on the first overcloud node, which is called controller-0 by default.
Complete the following prerequisite procedure to ensure that you run commands from the correct host, have the necessary credentials, and are also running the glance-cache-manage commands from inside the glance-api container.
Procedure
1. Log in to the undercloud as the stack user and identify the provisioning IP address of controller-0:

       (undercloud) [stack@site-undercloud-0 ~]$ openstack server list -f value -c Name -c Networks | grep controller
       overcloud-controller-1 ctlplane=192.168.24.40
       overcloud-controller-2 ctlplane=192.168.24.13
       overcloud-controller-0 ctlplane=192.168.24.71
       (undercloud) [stack@site-undercloud-0 ~]$

2. To authenticate to the overcloud, copy the credentials that are stored, by default, in /home/stack/overcloudrc to controller-0:

       $ scp ~/overcloudrc tripleo-admin@192.168.24.71:/home/tripleo-admin/

3. Connect to controller-0:

       $ ssh tripleo-admin@192.168.24.71

4. On controller-0 as the tripleo-admin user, identify the IP address of the glance_api service. In the following example, the IP address is 172.25.1.105:

       (overcloud) [root@controller-0 ~]# grep -A 10 '^listen glance_api' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
       listen glance_api
        server central-controller0-0.internalapi.redhat.local 172.25.1.105:9292 check fall 5 inter 2000 rise 2

5. Because the glance-cache-manage command is only available in the glance_api container, create a script to exec into that container, where the environment variables to authenticate to the overcloud are already set. Create a script called glance_pod.sh in /home/tripleo-admin on controller-0 with the following contents:

       sudo podman exec -ti \
       -e NOVA_VERSION=$NOVA_VERSION \
       -e COMPUTE_API_VERSION=$COMPUTE_API_VERSION \
       -e OS_USERNAME=$OS_USERNAME \
       -e OS_PROJECT_NAME=$OS_PROJECT_NAME \
       -e OS_USER_DOMAIN_NAME=$OS_USER_DOMAIN_NAME \
       -e OS_PROJECT_DOMAIN_NAME=$OS_PROJECT_DOMAIN_NAME \
       -e OS_NO_CACHE=$OS_NO_CACHE \
       -e OS_CLOUDNAME=$OS_CLOUDNAME \
       -e no_proxy=$no_proxy \
       -e OS_AUTH_TYPE=$OS_AUTH_TYPE \
       -e OS_PASSWORD=$OS_PASSWORD \
       -e OS_AUTH_URL=$OS_AUTH_URL \
       -e OS_IDENTITY_API_VERSION=$OS_IDENTITY_API_VERSION \
       -e OS_COMPUTE_API_VERSION=$OS_COMPUTE_API_VERSION \
       -e OS_IMAGE_API_VERSION=$OS_IMAGE_API_VERSION \
       -e OS_VOLUME_API_VERSION=$OS_VOLUME_API_VERSION \
       -e OS_REGION_NAME=$OS_REGION_NAME \
       glance_api /bin/bash

6. Source the overcloudrc file and run the glance_pod.sh script to exec into the glance_api container with the necessary environment variables to authenticate to the overcloud Controller node:

       [tripleo-admin@controller-0 ~]$ source overcloudrc
       (overcloudrc) [tripleo-admin@central-controller-0 ~]$ bash glance_pod.sh
       ()[glance@controller-0 /]$

7. Use a command such as glance image-list to verify that the container can run authenticated commands against the overcloud:

       ()[glance@controller-0 /]$ glance image-list
       +--------------------------------------+------------------------------+
       | ID                                   | Name                         |
       +--------------------------------------+------------------------------+
       | ad2f8daf-56f3-4e10-b5dc-d28d3a81f659 | cirros-0.4.0-x86_64-disk.img |
       +--------------------------------------+------------------------------+
       ()[glance@controller-0 /]$
1.6.3. Using a periodic job to pre-cache an image
When you have completed the prerequisite procedure in Section 1.6.2, “Preparing to use a periodic job to pre-cache an image”, you can use a periodic job to pre-cache an image.
Procedure
1. As the admin user, queue an image to cache:

       $ glance-cache-manage --host=<host_ip> queue-image <image_id>

   - Replace <host_ip> with the IP address of the Controller node where the glance-api container is running.
   - Replace <image_id> with the ID of the image that you want to queue.

   When you have queued the images that you want to pre-cache, the cache_images periodic job prefetches all queued images concurrently.

   Note

   Because the image cache is local to each node, if your Red Hat OpenStack Platform (RHOSP) deployment is HA, with 3, 5, or 7 Controllers, then you must specify the host address with the --host option when you run the glance-cache-manage command.

2. Run the following command to view the images in the image cache:

       $ glance-cache-manage --host=<host_ip> list-cached

   Replace <host_ip> with the IP address of the host in your environment.
1.6.4. Image caching command options
You can use the following glance-cache-manage command options to queue images for caching and manage cached images:
- list-cached to list all images that are currently cached.
- list-queued to list all images that are currently queued for caching.
- queue-image to queue an image for caching.
- delete-cached-image to purge an image from the cache.
- delete-all-cached-images to remove all images from the cache.
- delete-queued-image to delete an image from the cache queue.
- delete-all-queued-images to delete all images from the cache queue.
1.7. Using the Image service API to enable sparse image upload
With the Image service (glance) API, you can use sparse image upload to reduce network traffic and save storage space. This feature is particularly useful in distributed compute node (DCN) environments. With a sparse image file, the Image service does not write null byte sequences; it writes data at the given offsets. Storage back ends interpret these offsets as null bytes that do not actually consume storage space.
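A short sketch illustrates the offset-based writes described above: seeking past a run of null bytes instead of writing it produces a file whose apparent size exceeds the data actually written, and sparse-aware storage allocates space only for the written data. The file name is illustrative:

```python
import os
import tempfile

# Write 4 KiB of data at a 1 MiB offset; the 1 MiB gap is never written.
path = os.path.join(tempfile.mkdtemp(), "sparse.img")
with open(path, "wb") as f:
    f.seek(1024 * 1024)      # skip the null-byte region instead of writing it
    f.write(b"\x01" * 4096)  # real data at the given offset

st = os.stat(path)
print(st.st_size)            # → 1052672 (apparent size: 1 MiB gap + 4 KiB data)
# Allocated space (st.st_blocks * 512) is file-system dependent and is
# typically far smaller than the apparent size on sparse-aware storage.
```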
Use the Image service command-line client for image management.
Limitations
- Sparse image upload is supported only with Ceph RADOS Block Device (RBD).
- Sparse image upload is not supported for file systems.
- Sparseness is not maintained during the transfer between the client and the Image service API. The image is sparsed at the Image service API level.
Prerequisites
- Your Red Hat OpenStack Platform (RHOSP) deployment uses RBD for the Image service back end.
Procedure
1. Log in to the undercloud node as the stack user.

2. Source the stackrc credentials file:

       $ source stackrc

3. Create an environment file with the following content:

       parameter_defaults:
         GlanceSparseUploadEnabled: true

4. Add your new environment file to the stack with your other environment files and deploy the overcloud:

       $ openstack overcloud deploy \
       --templates \
       …
       -e <existing_overcloud_environment_files> \
       -e <new_environment_file>.yaml \
       ...
For more information about uploading images, see Uploading images to the Image service.
Verification
You can import an image and check its size to verify sparse image upload.
The following procedure uses example commands. Replace the values with those from your environment where appropriate.
1. Download the image file locally:

       $ wget <file_location>/<file_name>

   - Replace <file_location> with the location of the file.
   - Replace <file_name> with the name of the file.

   For example:

       $ wget https://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-1508.qcow2

2. Check the disk size and the virtual size of the image to be uploaded:

       $ qemu-img info <file_name>

   For example:

       $ qemu-img info CentOS-6-x86_64-GenericCloud-1508.qcow2
       image: CentOS-6-x86_64-GenericCloud-1508.qcow2
       file format: qcow2
       virtual size: 8 GiB (8589934592 bytes)
       disk size: 1.09 GiB
       cluster_size: 65536
       Format specific information:
           compat: 0.10
           refcount bits: 1

3. Import the image:

       $ glance image-create-via-import --disk-format qcow2 --container-format bare --name centos_1 --file <file_name>

   Record the image ID. It is required in a subsequent step.

4. Verify that the image is imported and in an active state:

       $ glance image show <image_id>

5. From a Ceph Storage node, verify that the size of the image is less than the virtual size from the output of step 2:

       $ sudo rbd -p images diff <image_id> | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'
       1.03906 GB

6. Optional: You can confirm that rbd_thin_provisioning is configured in the Image service configuration file on the Controller nodes:

   a. Use SSH to access a Controller node:

          $ ssh -A -t tripleo-admin@<controller_node_IP_address>

   b. Confirm that rbd_thin_provisioning equals True on that Controller node:

          $ sudo podman exec -it glance_api sh -c 'grep ^rbd_thin_provisioning /etc/glance/glance-api.conf'
1.8. Secure metadef APIs
In Red Hat OpenStack Platform (RHOSP), cloud administrators can define key value pairs and tag metadata with metadata definition (metadef) APIs. There is no limit on the number of metadef namespaces, objects, properties, resources, or tags that cloud administrators can create.
Image service policies control metadef APIs. By default, only cloud administrators can create, update, or delete (CUD) metadef APIs. This limitation prevents metadef APIs from exposing information to unauthorized users and mitigates the risk of a malicious user filling the Image service (glance) database with unlimited resources, which can create a Denial of Service (DoS) style attack. However, cloud administrators can override the default policy.
1.9. Enabling metadef API access for cloud users
Cloud administrators with users who depend on write access to metadata definition (metadef) APIs can make those APIs accessible to all users by overriding the default admin-only policy. In this type of configuration, however, there is the potential to unintentionally leak sensitive resource names, such as customer names and internal projects. Administrators must audit their systems to identify previously created resources that might be vulnerable even if only read-access is enabled for all users.
Procedure
1. As a cloud administrator, log in to the undercloud and create a file for policy overrides. For example:

       $ cat open-up-glance-api-metadef.yaml

2. Configure the policy override file to allow metadef API read-write access to all users:

       GlanceApiPolicies: {
           glance-metadef_default: { key: 'metadef_default', value: '' },
           glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' },
           glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' },
           glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_default' },
           glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_default' },
           glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_default' },
           glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' },
           glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' },
           glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_default' },
           glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_default' },
           glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_default' },
           glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' },
           glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' },
           glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_default' },
           glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_default' },
           glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' },
           glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' },
           glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_default' },
           glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_default' },
           glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_default' },
           glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' },
           glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' },
           glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_default' },
           glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_default' },
           glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_default' },
           glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_default' },
           glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_default' }
       }

   Note

   You must configure all metadef policies to use rule:metadef_default. For information about policies and policy syntax, see the Policies chapter.

3. Include the new policy file in the deployment command with the -e option when you deploy the overcloud:

       $ openstack overcloud deploy -e open-up-glance-api-metadef.yaml