
Chapter 1. The Image service (glance)


The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or snapshot a server image, and immediately store it. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services.

1.1. Virtual machine image formats

A virtual machine (VM) image is a file that contains a virtual disk with a bootable OS installed. Red Hat OpenStack Platform (RHOSP) supports VM images in different formats.

The disk format of a VM image is the format of the underlying disk image. The container format indicates if the VM image is in a file format that also contains metadata about the VM.

When you add an image to the Image service (glance), you can set the disk or container format for your image to any of the values in the following tables by using the --disk-format and --container-format command options with the glance image-create, glance image-create-via-import, and glance image-update commands. If you are not sure of the container format of your VM image, you can set it to bare.
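
For example, a minimal upload command that sets both formats explicitly might look like the following (the image name and file are placeholders):

$ glance image-create \
    --disk-format qcow2 \
    --container-format bare \
    --name <name> \
    --file <file_name>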

Table 1.1. Disk image formats

  • aki: An Amazon kernel image that is stored in the Image service.
  • ami: An Amazon machine image that is stored in the Image service.
  • ari: An Amazon ramdisk image that is stored in the Image service.
  • iso: A sector-by-sector copy of the data on a disk, stored in a binary file. Although an ISO file is not normally considered a VM image format, these files contain bootable file systems with an installed operating system, and you use them in the same way as other VM image files.
  • ploop: A disk format supported and used by Virtuozzo to run OS containers.
  • qcow2: Supported by the QEMU emulator. This format includes QCOW2v3 (sometimes referred to as QCOW3), which requires QEMU 1.1 or later.
  • raw: An unstructured disk image format.
  • vdi: Supported by the VirtualBox VM monitor and the QEMU emulator.
  • vhd: Virtual Hard Disk. Used by VM monitors from VMware, VirtualBox, and others.
  • vhdx: Virtual Hard Disk v2. A disk image format with a larger storage capacity than VHD.
  • vmdk: Virtual Machine Disk. A disk image format that supports incremental backups of data changes since the time of the last backup.

Table 1.2. Container image formats

  • aki: An Amazon kernel image that is stored in the Image service.
  • ami: An Amazon machine image that is stored in the Image service.
  • ari: An Amazon ramdisk image that is stored in the Image service.
  • bare: Indicates that there is no container or metadata envelope for the image.
  • docker: A TAR archive of the file system of a Docker container that is stored in the Image service.
  • ova: An Open Virtual Appliance (OVA) TAR archive file that is stored in the Image service. The archive packages the image in the Open Virtualization Format (OVF).
  • ovf: The OVF container file format, an open standard for packaging and distributing virtual appliances or software to be run on virtual machines.
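
If you are not sure of the disk format of a local image file, or you need to convert it before you upload it, you can inspect and convert the file with qemu-img (the file names are placeholders, and qemu-img must be installed locally):

$ qemu-img info <file_name>
$ qemu-img convert -f raw -O qcow2 <file_name>.raw <file_name>.qcow2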

1.2. Supported Image service back ends

The following Image service (glance) back-end scenarios are supported:

  • RADOS Block Device (RBD) is the default back end when you use Ceph.
  • RBD multi-store.
  • Object Storage (swift). The Image service uses Object Storage as the default back end when Ceph is not deployed.
  • Block Storage (cinder). Each image is stored as a volume (image volume). By default, users cannot create multiple instances or volumes from a volume-backed image, but you can configure both the Image service and the Block Storage back end to support this. For more information, see Enabling the creation of multiple instances or volumes from a volume-backed image.
  • NFS

    Important

    Although NFS is a supported Image service deployment option, more robust options are available.

    NFS is not native to the Image service. When you mount an NFS share on the Image service, the Image service does not manage the operation. The Image service writes data to the file system but is unaware that the back end is an NFS share.

    In this type of deployment, the Image service cannot retry a request if the share fails. This means that when a failure occurs on the back end, the store might enter read-only mode, or it might continue to write data to the local file system, in which case you risk data loss. To recover from this situation, you must ensure that the share is mounted and in sync, and then restart the Image service. For these reasons, Red Hat does not recommend NFS as an Image service back end.

    However, if you do choose to use NFS as an Image service back end, some of the following best practices can help to mitigate risks:

    • Use a reliable production-grade NFS back end.
    • Ensure that you have a strong and reliable connection between Controller nodes and the NFS back end: Layer 2 (L2) network connectivity is recommended.
    • Include monitoring and alerts for the mounted share. A minimal mount check is sketched after this list.
    • Set underlying file system permissions. Write permissions must be present in the shared file system that you use as a store.
    • Ensure that the user and the group that the glance-api process runs as do not have write permissions on the mount point in the local file system, so that the process can detect a possible mount failure and put the store into read-only mode during a write attempt.
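
    A minimal sketch of the mount check mentioned in the monitoring recommendation follows. The store path /var/lib/glance/images is an assumption; substitute the path that your deployment uses:

    $ mountpoint -q /var/lib/glance/images || echo "glance NFS share is not mounted"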

When using the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume) ideally in the Block Storage service project owned by the glance user.

When a user wants to create multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume to copy the data multiple times. This causes performance issues, and some of the instances or volumes fail to be created, because by default a Block Storage volume cannot be attached to the same host more than once. However, most Block Storage back ends support the volume multi-attach property, which enables a volume to be attached multiple times to the same host. You can therefore prevent these problems by creating a Block Storage volume type that enables the multi-attach property for the Image service back end and configuring the Image service to use this multi-attach volume type.

Note

By default, only the Block Storage project administrator can create volume types.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Create a Block Storage volume type for the Image service back end that enables the multi-attach property, as follows:

    $ cinder type-create glance-multiattach
    $ cinder type-key glance-multiattach set multiattach="<is> True"

    If you do not specify a back end for this volume type, the Block Storage scheduler service determines which back end to use when it creates each image volume, so these volumes might be saved on different back ends. You can pin the back end by adding the volume_backend_name property to this volume type. You might need to ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. This example uses iscsi as the back-end name.

    $ cinder type-key glance-multiattach set volume_backend_name=iscsi
  3. To configure the Image service to use this Block Storage multi-attach volume type, add the following parameter to the end of the [default_backend] section of the glance-api.conf file:

    cinder_volume_type = glance-multiattach
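
To verify the volume type and its multi-attach property, you can inspect it with the Block Storage client, for example:

$ cinder type-show glance-multiattach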

1.3. Image signing and verification

Image signing and verification protects image integrity and authenticity by enabling deployers to sign images and save the signatures and public key certificates as image properties.

Note

Image signing and verification is not supported if the Compute service (nova) uses RADOS Block Device (RBD) to store virtual machine disks.

For information on image signing and verification, see Validating Image service (glance) images in the Managing secrets with the Key Manager service guide.

1.4. Image format conversion

You can convert images to a different format by activating the image conversion plugin when you import images to the Image service (glance).

You can activate or deactivate the image conversion plugin based on your Red Hat OpenStack Platform (RHOSP) deployment configuration. The deployer configures the preferred format of images for the deployment.

Internally, the Image service receives the bits of the image in a particular format and stores the bits in a temporary location. The Image service triggers the plugin to convert the image to the target format and move the image to a final destination. When the task is finished, the Image service deletes the temporary location. The Image service does not retain the format that was uploaded initially.

You can trigger image conversion only when importing an image. It does not run when uploading an image.

Use the Image service command-line client for image management.

For example:

$ glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name <name> \
    --visibility public \
    --import-method web-download \
    --uri http://server/image.qcow2
  • Replace <name> with the name of your image.
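
After the import task completes, the disk_format field of the image record reflects the converted target format rather than the format you uploaded. You can check it with a command such as the following:

$ glance image-show <image_id> | grep -E 'disk_format|status'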

1.5. Improving scalability with Image service caching

Use the Image service (glance) API caching mechanism to store copies of images on Image service API servers and retrieve them automatically to improve scalability. With Image service caching, you can run glance-api on multiple hosts, and each host does not need to retrieve the same image from back-end storage multiple times. Image service caching does not affect any Image service operations.

Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates:

Procedure

  1. In an environment file, set the value of the GlanceCacheEnabled parameter to true, which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template:

    parameter_defaults:
        GlanceCacheEnabled: true
  2. Include the environment file in the openstack overcloud deploy command when you redeploy the overcloud. A minimal example follows this procedure.
  3. Optional: Tune the glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes:

    parameter_defaults:
      ControllerExtraConfig:
        glance::cache::pruner::minute: '*/5'

    Adjust the frequency according to your needs to avoid file system full scenarios. Consider the following elements when you choose an alternative frequency:

    • The size of the files that you want to cache in your environment.
    • The amount of available file system space.
    • The frequency at which the environment caches images.
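
After you set GlanceCacheEnabled in step 1 and any pruner tuning in step 3, redeploy the overcloud with your environment files. A minimal form of the command might look like this (the environment file name is a placeholder):

$ openstack overcloud deploy --templates \
  -e <existing_overcloud_environment_files> \
  -e glance-cache.yaml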

1.6. Image pre-caching

You can use Red Hat OpenStack Platform (RHOSP) director to pre-cache images as part of the glance-api service.

Use the Image service (glance) command-line client for image management.

1.6.1. Configuring the default interval for periodic image pre-caching

The Image service (glance) pre-caching periodic job runs every 300 seconds (5 minutes) by default on each Controller node where the glance-api service is running. To change the default time, you can set the cache_prefetcher_interval parameter in the [DEFAULT] section of the glance-api.conf file.
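
In the rendered glance-api.conf file, the setting takes the following form (300 seconds is the default value):

[DEFAULT]
cache_prefetcher_interval = 300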

Procedure

  1. Add a new interval with the ExtraConfig parameter in an environment file on the undercloud according to your requirements:

    parameter_defaults:
      ControllerExtraConfig:
        glance::config::glance_api_config:
          DEFAULT/cache_prefetcher_interval:
            value: '<300>'
    • Replace <300> with the number of seconds that you want as an interval to pre-cache images.
  2. After you adjust the interval in the environment file in /home/stack/templates/, log in as the stack user and deploy the configuration:

    $ openstack overcloud deploy --templates \
    -e /home/stack/templates/<env_file>.yaml
    • Replace <env_file> with the name of the environment file that contains the ExtraConfig settings that you added.

      Important

      If you passed any extra environment files when you created the overcloud, pass them again here by using the -e option to avoid making undesired changes to the overcloud.

Additional resources

For more information about the openstack overcloud deploy command, see Deployment command in the Installing and managing Red Hat OpenStack Platform with director guide.

1.6.2. Preparing to use a periodic job to pre-cache an image

To use a periodic job to pre-cache an image, you must use the glance-cache-manage command connected directly to the node where the glance_api service is running. Do not use a proxy, which hides the node that answers a service request. Because the undercloud might not have access to the network where the glance_api service is running, run commands on the first overcloud node, which is called controller-0 by default.

Complete the following prerequisite procedure to ensure that you run commands from the correct host, have the necessary credentials, and are also running the glance-cache-manage commands from inside the glance-api container.

Procedure

  1. Log in to the undercloud as the stack user and identify the provisioning IP address of controller-0:

    (undercloud) [stack@site-undercloud-0 ~]$ openstack server list -f value -c Name -c Networks | grep controller
    overcloud-controller-1 ctlplane=192.168.24.40
    overcloud-controller-2 ctlplane=192.168.24.13
    overcloud-controller-0 ctlplane=192.168.24.71
    (undercloud) [stack@site-undercloud-0 ~]$
  2. To authenticate to the overcloud, copy the credentials that are stored in /home/stack/overcloudrc, by default, to controller-0:

    $ scp ~/overcloudrc tripleo-admin@192.168.24.71:/home/tripleo-admin/
  3. Connect to controller-0:

    $ ssh tripleo-admin@192.168.24.71
  4. On controller-0 as the tripleo-admin user, identify the IP address of the glance_api service. In the following example, the IP address is 172.25.1.105:

    (overcloud) [root@controller-0 ~]# grep -A 10 '^listen glance_api' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    listen glance_api
     server central-controller0-0.internalapi.redhat.local 172.25.1.105:9292 check fall 5 inter 2000 rise 2
  5. Because the glance-cache-manage command is only available in the glance_api container, create a script to exec into that container where the environment variables to authenticate to the overcloud are already set. Create a script called glance_pod.sh in /home/tripleo-admin on controller-0 with the following contents:

    sudo podman exec -ti \
     -e NOVA_VERSION=$NOVA_VERSION \
     -e COMPUTE_API_VERSION=$COMPUTE_API_VERSION \
     -e OS_USERNAME=$OS_USERNAME \
     -e OS_PROJECT_NAME=$OS_PROJECT_NAME \
     -e OS_USER_DOMAIN_NAME=$OS_USER_DOMAIN_NAME \
     -e OS_PROJECT_DOMAIN_NAME=$OS_PROJECT_DOMAIN_NAME \
     -e OS_NO_CACHE=$OS_NO_CACHE \
     -e OS_CLOUDNAME=$OS_CLOUDNAME \
     -e no_proxy=$no_proxy \
     -e OS_AUTH_TYPE=$OS_AUTH_TYPE \
     -e OS_PASSWORD=$OS_PASSWORD \
     -e OS_AUTH_URL=$OS_AUTH_URL \
     -e OS_IDENTITY_API_VERSION=$OS_IDENTITY_API_VERSION \
     -e OS_COMPUTE_API_VERSION=$OS_COMPUTE_API_VERSION \
     -e OS_IMAGE_API_VERSION=$OS_IMAGE_API_VERSION \
     -e OS_VOLUME_API_VERSION=$OS_VOLUME_API_VERSION \
     -e OS_REGION_NAME=$OS_REGION_NAME \
    glance_api /bin/bash
  6. Source the overcloudrc file and run the glance_pod.sh script to exec into the glance_api container with the necessary environment variables to authenticate to the overcloud Controller node.

    [tripleo-admin@controller-0 ~]$ source overcloudrc
    (overcloudrc) [tripleo-admin@controller-0 ~]$ bash glance_pod.sh
    ()[glance@controller-0 /]$
  7. Use a command such as glance image-list to verify that the container can run authenticated commands against the overcloud.

    ()[glance@controller-0 /]$ glance image-list
    +--------------------------------------+----------------------------------+
    | ID                                   | Name                             |
    +--------------------------------------+----------------------------------+
    | ad2f8daf-56f3-4e10-b5dc-d28d3a81f659 | cirros-0.4.0-x86_64-disk.img     |
    +--------------------------------------+----------------------------------+
    ()[glance@controller-0 /]$

1.6.3. Using a periodic job to pre-cache an image

When you have completed the prerequisite procedure in Section 1.6.2, “Preparing to use a periodic job to pre-cache an image”, you can use a periodic job to pre-cache an image.

Procedure

  1. As the admin user, queue an image to cache:

    $ glance-cache-manage --host=<host_ip> queue-image <image_id>
    • Replace <host_ip> with the IP address of the Controller node where the glance-api container is running.
    • Replace <image_id> with the ID of the image that you want to queue.

      When you have queued the images that you want to pre-cache, the cache_images periodic job prefetches all queued images concurrently.

      Note

      Because the image cache is local to each node, if your Red Hat OpenStack Platform (RHOSP) deployment is HA, with 3, 5, or 7 Controllers, then you must specify the host address with the --host option when you run the glance-cache-manage command.

  2. Run the following command to view the images in the image cache:

    $ glance-cache-manage --host=<host_ip> list-cached
    • Replace <host_ip> with the IP address of the host in your environment.

1.6.4. Image caching command options

You can use the following glance-cache-manage command options to queue images for caching and manage cached images:

  • list-cached to list all images that are currently cached.
  • list-queued to list all images that are currently queued for caching.
  • queue-image to queue an image for caching.
  • delete-cached-image to purge an image from the cache.
  • delete-all-cached-images to remove all images from the cache.
  • delete-queued-image to delete an image from the cache queue.
  • delete-all-queued-images to delete all images from the cache queue.
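
For example, to purge a single image from the cache on a specific node (the host IP and image ID are placeholders):

$ glance-cache-manage --host=<host_ip> delete-cached-image <image_id>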

1.7. Using the Image service API to enable sparse image upload

With the Image service (glance) API, you can use sparse image upload to reduce network traffic and save storage space. This feature is particularly useful in distributed compute node (DCN) environments. With a sparse image file, the Image service does not write null byte sequences; it writes only the actual data, at the appropriate offsets. The storage back end interprets the gaps as null bytes that do not consume storage space.
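
The behavior is analogous to a sparse file on a local file system, where the apparent size can be much larger than the space the file actually consumes. You can observe this locally (this is a local demonstration, not an Image service command):

$ truncate -s 1G sparse.img
$ ls -lh sparse.img    # apparent size: 1.0G
$ du -h sparse.img     # actual usage: 0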

Use the Image service command-line client for image management.

Limitations

  • Sparse image upload is supported only with Ceph RADOS Block Device (RBD).
  • Sparse image upload is not supported for file systems.
  • Sparseness is not maintained during the transfer between the client and the Image service API. The image is made sparse at the Image service API level.

Prerequisites

  • Your Red Hat OpenStack Platform (RHOSP) deployment uses RBD for the Image service back end.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Source the stackrc credentials file:

    $ source stackrc
  3. Create an environment file with the following content:

    parameter_defaults:
        GlanceSparseUploadEnabled: true
  4. Add your new environment file to the stack with your other environment files and deploy the overcloud:

    $ openstack overcloud deploy \
        --templates \
        …
        -e <existing_overcloud_environment_files> \
        -e <new_environment_file>.yaml \
        ...

For more information about uploading images, see Uploading images to the Image service.

Verification

You can import an image and check its size to verify sparse image upload.

The following procedure uses example commands. Replace the values with those from your environment where appropriate.

  1. Download the image file locally:

    $ wget <file_location>/<file_name>
    • Replace <file_location> with the location of the file.
    • Replace <file_name> with the name of the file.

      For example:

      $ wget https://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud-1508.qcow2
  2. Check the disk size and the virtual size of the image to be uploaded:

    $ qemu-img info <file_name>

    For example:

    $ qemu-img info CentOS-6-x86_64-GenericCloud-1508.qcow2
    
    image: CentOS-6-x86_64-GenericCloud-1508.qcow2
    file format: qcow2
    virtual size: 8 GiB (8589934592 bytes)
    disk size: 1.09 GiB
    cluster_size: 65536
    Format specific information:
        compat: 0.10
        refcount bits: 1
  3. Import the image:

    $ glance image-create-via-import --disk-format qcow2 --container-format bare --name centos_1 --file <file_name>
  4. Record the image ID. It is required in a subsequent step.
  5. Verify that the image is imported and in an active state:

    $ glance image-show <image_id>
  6. From a Ceph Storage node, verify that the size of the image is less than the virtual size reported in step 2:

    $ sudo rbd -p images diff <image_id> | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'
    
    1.03906 GB
  7. Optional: You can confirm that rbd_thin_provisioning is configured in the Image service configuration file on the Controller nodes:

    1. Use SSH to access a Controller node:

      $ ssh -A -t tripleo-admin@<controller_node_IP_address>
    2. Confirm that rbd_thin_provisioning equals True on that Controller node:

      $ sudo podman exec -it glance_api sh -c 'grep ^rbd_thin_provisioning /etc/glance/glance-api.conf'

1.8. Secure metadef APIs

In Red Hat OpenStack Platform (RHOSP), cloud administrators can define key value pairs and tag metadata with metadata definition (metadef) APIs. There is no limit on the number of metadef namespaces, objects, properties, resources, or tags that cloud administrators can create.

Image service policies control metadef APIs. By default, only cloud administrators can create, update, or delete (CUD) metadef resources. This limitation prevents metadef APIs from exposing information to unauthorized users and mitigates the risk of a malicious user filling the Image service (glance) database with unlimited resources, which can create a Denial of Service (DoS) style attack. However, cloud administrators can override the default policy.
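
Read access to metadef resources uses the same Image service client as other operations; the default policy restricts only the CUD operations. For example, listing the available namespaces is a read operation:

$ glance md-namespace-list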

1.9. Enabling metadef API access for cloud users

Cloud administrators with users who depend on write access to metadata definition (metadef) APIs can make those APIs accessible to all users by overriding the default admin-only policy. In this type of configuration, however, there is the potential to unintentionally leak sensitive resource names, such as customer names and internal projects. Administrators must audit their systems to identify previously created resources that might be vulnerable, even if only read access is enabled for all users.

Procedure

  1. As a cloud administrator, log in to the undercloud and create a file for policy overrides. For example:

    $ cat open-up-glance-api-metadef.yaml
  2. Configure the policy override file to allow metadef API read-write access to all users:

    GlanceApiPolicies: {
        glance-metadef_default: { key: 'metadef_default', value: '' },
        glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' },
        glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' },
        glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_default' },
        glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_default' },
        glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_default' },
        glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' },
        glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' },
        glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_default' },
        glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_default' },
        glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_default' },
        glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' },
        glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' },
        glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_default' },
        glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_default' },
        glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' },
        glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' },
        glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_default' },
        glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_default' },
        glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_default' },
        glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' },
        glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' },
        glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_default' },
        glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_default' },
        glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_default' },
        glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_default' },
        glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_default' }
      }
    Note

    You must configure all metadef policies to use rule:metadef_default. For information about policies and policy syntax, see the Policies chapter.

  3. Include the new policy file in the deployment command with the -e option when you deploy the overcloud:

    $ openstack overcloud deploy -e open-up-glance-api-metadef.yaml