Chapter 6. Deploying RHEL bootc images
Deploy Red Hat Enterprise Linux bootc images to provision your operating system on physical hardware, virtual machines, or cloud platforms. By using container-native images for deployment, you can standardize system configurations and simplify lifecycle management across your hybrid infrastructure.
6.1. Available methods for deploying RHEL bootc images
Identify the supported installation paths for Red Hat Enterprise Linux bootc images to determine the optimal strategy for your infrastructure. Selecting the appropriate deployment method ensures you can successfully provision containerized operating systems across physical, virtual, or cloud environments.
You can choose the installation method that best fits your infrastructure and deployment requirements.
Anaconda - You can use the RHEL installer with Kickstart automation to install the layered image directly to bare metal or virtual machines. This method does not require a customized ISO image and is suitable for bare metal, virtual machine, and cloud instance deployments.
PXE - You can use existing Anaconda PXE boot environments with modified Kickstart configurations.
For more information, see Deploying a container image from the network by using Anaconda and Kickstart.
bootc-image-builder - You can use bootc-image-builder to convert container images to a bootc image, or create pre-configured disk images, and deploy them to bare metal or to a cloud environment. The following bootc image types are available:
- ISO: unattended installation method, by using a USB drive or install-on-boot
- QCOW2 (QEMU copy-on-write, virtual disk)
- Raw (.raw disk image)
- AMI (Amazon Machine Image, for AWS)
bootc install - You can use bootc install to install a bootc image onto a target system. The bootc install command handles tasks such as partitioning, setting up the boot loader, and extracting the image content to make it bootable.
The installation happens only one time. After you deploy your image, any future updates apply directly from the container registry as they are published.
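The Anaconda and Kickstart path above boils down to pointing the installer at a container image with the ostreecontainer Kickstart command. The following sketch generates a minimal Kickstart file; the image reference and the partitioning directives are illustrative placeholders, not values from this chapter:

```shell
# Generate a minimal Kickstart file for a bootc installation.
# quay.io/example/rhel-bootc:latest is a placeholder image reference.
cat > bootc.ks <<'EOF'
# Install from a bootc container image instead of RPM packages
ostreecontainer --url=quay.io/example/rhel-bootc:latest

# Basic, illustrative storage and installer directives
text
zerombr
clearpart --all --initlabel
autopart
reboot
EOF

# Show the line Anaconda uses to locate the container image.
grep '^ostreecontainer' bootc.ks
```

You can then serve this Kickstart over HTTP and reference it from the kernel command line of a PXE client, as in standard Anaconda deployments.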
Figure 6.1. Deploying a bootc image by using the basic build installer (bootc install), or deploying a container image by using Anaconda and Kickstart
Figure 6.2. Using bootc-image-builder to create disk images from bootc images, and deploying the disk images in different environments, such as the edge, servers, and clouds, by using Anaconda, bootc-image-builder, or bootc install
6.2. Deploying an ISO bootc image over PXE boot
Provision bare metal systems by deploying Red Hat Enterprise Linux bootc ISO images by using PXE boot. By using network-based deployment, you can install bootable container images efficiently without requiring physical media for every machine.
Prerequisites
- You have downloaded a RHEL 10 Boot ISO for your architecture from Red Hat. See Downloading RHEL boot images.
You have configured the server for PXE boot. Choose one of the following options:
- For HTTP clients, see Configuring the DHCPv4 server for HTTP and PXE boot.
- For UEFI-based clients, see Configuring a TFTP server for UEFI-based clients.
- For BIOS-based clients, see Configuring a TFTP server for BIOS-based clients.
- You have a client, that is, the system to which you are installing the ISO image.
Procedure
- Export the RHEL installation ISO image to the HTTP server. The PXE boot server is now ready to serve PXE clients.
- Boot the client and start the installation.
- Select PXE Boot when prompted to specify a boot source. If the boot options are not displayed, press Enter or wait until the boot window opens.
- From the Red Hat Enterprise Linux boot window, select the boot option that you want, and press Enter.
- Start the network installation.
Next steps
- You can push an updated version of this container image to the registry to deliver operating system updates to your running systems. See Managing RHEL bootc images.
6.3. Deploying a container image by using KVM with a QEMU disk image
After you use the bootc-image-builder tool to convert a RHEL bootc image into a QEMU disk image (QCOW2), you can use virtualization software to boot the disk image as a virtual machine.
Prerequisites
- You created a QEMU disk image (QCOW2) by using bootc-image-builder. For instructions, see Creating QCOW2 images by using bootc-image-builder.
Procedure
- By using libvirt, create a virtual machine (VM) with the disk image that you previously created from the container image. For more details, see Creating virtual machines by using the command line. The following example uses virt-install to create a VM. Replace <qcow2/disk.qcow2> with the path to your QEMU disk image (QCOW2) file:

```
$ sudo virt-install \
    --name bootc \
    --memory 4096 \
    --vcpus 2 \
    --disk <qcow2/disk.qcow2> \
    --import
```
Verification
- Connect to the VM in which you are running the container image. See Configuring bridges on a network bond to connect virtual machines with the network for more details.
Next steps
- You can push an updated version of this container image to the registry to deliver operating system updates to your running systems. See Managing RHEL bootc images.
6.4. Deploying a container image and creating a RHEL virtual machine in vSphere
After creating a Virtual Machine Disk (VMDK) from a RHEL bootc image by using the bootc-image-builder tool, you can deploy it to VMware vSphere by using the vSphere GUI client. The deployment creates a VM which can be customized further before booting.
Prerequisites
- You created a Virtual Machine Disk (VMDK) image. See Creating VMDK images by using bootc-image-builder.
- You pushed the container image to an accessible repository.
You configured the govc VMware CLI tool client. To use the govc client, you must set the following environment variables:
- GOVC_URL
- GOVC_DATACENTER
- GOVC_FOLDER
- GOVC_DATASTORE
- GOVC_RESOURCE_POOL
- GOVC_NETWORK
Procedure
- Create a metadata.yaml file and add the following information to it:

```
instance-id: cloud-vm
local-hostname: vmname
```

- Create a userdata.yaml file and add the following information to it:

```
#cloud-config
users:
  - name: admin
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    ssh_authorized_keys:
      - ssh-rsa AAA...fhHQ== your.email@example.com
```

ssh_authorized_keys is your SSH public key. You can find your SSH public key in ~/.ssh/id_rsa.pub.

- Export the metadata.yaml and userdata.yaml files to the environment, compressed with gzip and encoded in base64. You will use these values in later steps:

```
$ export METADATA=$(gzip -c9 <metadata.yaml | { base64 -w0 2>/dev/null || base64; }) \
    USERDATA=$(gzip -c9 <userdata.yaml | { base64 -w0 2>/dev/null || base64; })
```

- Launch the image on vSphere with the metadata.yaml and userdata.yaml files:

  - Import the .vmdk image into vSphere:

```
$ govc import.vmdk ./composer-api.vmdk <foldername>
```

  - Create the VM in vSphere without powering it on:

```
$ govc vm.create \
    -net.adapter=vmxnet3 \
    -m=4096 -c=2 -g=rhel8_64Guest \
    -firmware=bios -disk="<foldername>/composer-api.vmdk" \
    -disk.controller=ide -on=false \
    vmname
```

  - Change the VM to add the ExtraConfig variables with the cloud-init configuration:

```
$ govc vm.change -vm vmname \
    -e guestinfo.metadata="${METADATA}" \
    -e guestinfo.metadata.encoding="gzip+base64" \
    -e guestinfo.userdata="${USERDATA}" \
    -e guestinfo.userdata.encoding="gzip+base64"
```

  - Power on the VM:

```
$ govc vm.power -on vmname
```

  - Retrieve the VM IP address:

```
$ HOST=$(govc vm.ip vmname)
```
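The gzip+base64 encoding used for METADATA and USERDATA can be sanity-checked locally before launching the VM. This round trip uses only gzip and base64, with a sample metadata file matching the shape used in this procedure:

```shell
# Create a sample metadata file (same shape as metadata.yaml above).
cat > metadata.yaml <<'EOF'
instance-id: cloud-vm
local-hostname: vmname
EOF

# Encode the way the procedure does: gzip, then single-line base64.
METADATA=$(gzip -c9 < metadata.yaml | { base64 -w0 2>/dev/null || base64; })

# Decode again and confirm the round trip is lossless.
printf '%s' "$METADATA" | base64 -d | gzip -d > roundtrip.yaml
diff metadata.yaml roundtrip.yaml && echo "encoding OK"
```

If the decoded file differs from the original, the guest would receive corrupt cloud-init data, so this check is cheap insurance before the govc vm.change step.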
Verification
- Connect to the VM in which you are running the container image. See Connecting to virtual machines for more details.
- Use SSH to log in to the VM, by using the user specified in the cloud-init configuration:

```
$ ssh admin@HOST
```
Next steps
- You can push an updated version of this container image to the registry to deliver operating system updates to your running systems. See Managing RHEL bootc images.
6.5. Deploying a container image to AWS by using an AMI disk image
After using the bootc-image-builder tool to create an AMI from a bootc image and uploading it to an AWS S3 bucket, you can deploy a container image to AWS by using the AMI disk image.
Prerequisites
- You created an Amazon Machine Image (AMI) from a bootc image. See Creating AMI images by using bootc-image-builder and uploading them to AWS.
- cloud-init is available in the Containerfile that you previously created, so that you can create a layered image for your use case.
Procedure
- In a browser, access Service → EC2 and log in.
- On the AWS console dashboard menu, choose the correct region. The image must have the Available status, to indicate that it was correctly uploaded.
- On the AWS dashboard, select your image and launch an instance from it.
- In the new window that opens, choose an instance type according to the resources you need to start your image.
- Review your instance details. You can edit each section if you need to make any changes.
- Before you start the instance, select a public key to access it. You can either use a key pair you already have or create a new key pair.
- Start your instance. You can check its status, which displays as Initializing.
- After the instance status is Running, the Connect option becomes available.
- Click Connect. A window appears with instructions on how to connect by using SSH.
- Set the permissions of your private key file so that only you can read it. See Connect to your Linux instance.

```
$ chmod 400 <your-instance-name.pem>
```

- Connect to your instance by using its public DNS:

```
$ ssh -i <your-instance-name.pem> ec2-user@<your-instance-IP-address>
```

Note: Your instance continues to run unless you stop it.
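SSH refuses keys that are readable by other users, which is why the chmod 400 step matters. You can confirm the resulting permissions like this; the file here is a throwaway stand-in for your real .pem key:

```shell
# Create a dummy key file standing in for <your-instance-name.pem>.
touch demo-instance.pem
chmod 400 demo-instance.pem

# Print the octal mode. stat -c is GNU coreutils syntax;
# on BSD/macOS use: stat -f %Lp demo-instance.pem
stat -c '%a' demo-instance.pem
# prints 400
```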
Verification
After launching your image, you can:
- Try to connect to http://<your_instance_ip_address> in a browser.
- Check if you are able to perform any action while connected to your instance by using SSH.
Next steps
- You can push an updated version of this container image to the registry to deliver operating system updates to your running systems.
6.6. Deploying a custom ISO container image in disconnected environments
By using bootc-image-builder to convert a bootc image to an ISO image, you create a system similar to the RHEL ISOs available for download, except that your container image content is embedded in the ISO disk image. You do not need to have access to the network during installation. You can install the ISO disk image that you created from bootc-image-builder to a bare metal system.
Prerequisites
- You have created a customized container image.
Procedure
- Create a custom installer ISO disk image with bootc-image-builder. See Creating ISO images by using bootc-image-builder.
- Copy the ISO disk image to a USB flash drive.
- Perform a bare-metal installation in the disconnected environment by using the content on the USB flash drive.
Next steps
- After you deploy your container image, you can push an updated version of this container image to the registry to deliver operating system updates to your running systems. See Managing RHEL bootc images.
6.7. Injecting configuration into the resulting disk images with bootc-image-builder
You can inject configuration into a customized image by using a build config, that is, a .toml or .json file with customizations for the resulting image. The build config file is mapped to /config.toml inside the container. Specify the customizations under a customizations object.
Procedure
- Create a ./config.toml file. The following example shows how to add a user to the disk image:

```
[[customizations.user]]
name = "user"
password = "pass"
key = "ssh-rsa AAA ... user@email.com"
groups = ["wheel"]
```

  - name - Mandatory. The name of the user.
  - password - Optional. A nonencrypted password.
  - key - Optional. The public SSH key contents.
  - groups - Optional. An array of groups to add the user to.
-
- Run bootc-image-builder and pass the following arguments, including the ./config.toml file:

```
$ sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v ./config.toml:/config.toml \
    -v ./output:/output \
    registry.redhat.io/rhel10/bootc-image-builder:latest \
    --type qcow2 \
    --config config.toml \
    quay.io/<namespace>/<image>:<tag>
```

- Launch a VM, for example, by using virt-install:

```
$ sudo virt-install \
    --name bootc \
    --memory 4096 \
    --vcpus 2 \
    --disk qcow2/disk.qcow2 \
    --import \
    --os-variant rhel10
```
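A build config is easy to get subtly wrong before a long image build, so generating it from a script and running a couple of cheap sanity checks first can save a rebuild. The user name, password, and key below are placeholders:

```shell
# Generate the same config.toml shape used in this procedure.
# "user", "pass", and the key are placeholder values.
cat > config.toml <<'EOF'
[[customizations.user]]
name = "user"
password = "pass"
key = "ssh-rsa AAA ... user@email.com"
groups = ["wheel"]
EOF

# Minimal sanity checks: a user table exists and the mandatory
# "name" field is present.
grep -q '^\[\[customizations.user\]\]' config.toml
grep -q '^name = ' config.toml && echo "config.toml looks sane"
```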
Verification
- Access the system with SSH:

```
# ssh -i <path_to_private_ssh_key> <user1>@<ip-address>
```
6.8. Advanced installation with to-filesystem and to-disk
Install image mode for Red Hat Enterprise Linux content directly to a mounted filesystem to support advanced provisioning scenarios. By using the bootc install to-filesystem command, you can populate custom partition layouts or generate bootable disk images without requiring a standard installer boot process.
The bootc install command contains two subcommands: bootc install to-disk and bootc install to-filesystem.
- The bootc install to-filesystem subcommand performs installation to a target filesystem that you have already mounted.
- The bootc install to-disk subcommand is a wrapper for a set of opinionated lower-level steps that you can also run independently:

```
mkfs.$fs /dev/disk
mount /dev/disk /mnt
bootc install to-filesystem --karg=root=UUID=<uuid of /mnt> --imgref $self /mnt
```
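Because this lower-level flow formats a disk, a wrapper script around it usually validates the target before any destructive call. The following is an illustrative sketch, not part of bootc itself; the mkfs, mount, and bootc calls are only reached when the target really is a block device:

```shell
# Illustrative guard around the lower-level to-disk flow; not part of bootc.
cat > install-to-disk.sh <<'EOF'
#!/bin/sh
# Usage: ./install-to-disk.sh /dev/sdX <image-ref>
disk="$1"; imgref="$2"
if [ ! -b "$disk" ]; then
    echo "refusing: $disk is not a block device" >&2
    exit 1
fi
# Destructive steps, only reached for a real block device:
mkfs.xfs -f "$disk"
mount "$disk" /mnt
exec bootc install to-filesystem \
    --karg="root=UUID=$(blkid -o value -s UUID "$disk")" \
    --imgref "$imgref" /mnt
EOF
chmod +x install-to-disk.sh

# Demonstration against a plain file: the guard refuses and exits 1.
touch not-a-disk.img
./install-to-disk.sh not-a-disk.img quay.io/example/image:latest \
    || echo "guard worked"
```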
6.9. Deploying a container image to bare metal by using bootc install
You can perform a bare-metal installation to a device by using a RHEL ISO image. bootc contains a basic build installer, which is available through the following methods: bootc install to-disk or bootc install to-filesystem.
- bootc install to-disk: With this method, you do not need to perform any additional steps to deploy the container image, because the container image includes a basic installer.
- bootc install to-filesystem: With this method, you can configure the target device and root filesystem by using a tool of your choice, for example, LVM.
Prerequisites
- You have downloaded a RHEL 10 Boot ISO from Red Hat for your architecture. See Downloading RHEL boot images.
- You have created a configuration file.
Procedure
- Inject a configuration into the running ISO image by using one of the following methods:

  - By using bootc install to-disk:

```
$ podman run \
    --rm --privileged \
    --pid=host -v /dev:/dev \
    -v /var/lib/containers:/var/lib/containers \
    --security-opt label=type:unconfined_t \
    <image> bootc install to-disk <path-to-disk>
```

  - By using bootc install to-filesystem:

```
$ podman run \
    --rm --privileged \
    --pid=host -v /:/target \
    -v /dev:/dev \
    -v /var/lib/containers:/var/lib/containers \
    --security-opt label=type:unconfined_t \
    <image> bootc install to-filesystem <path-to-disk>
```
Next steps
- After you deploy your container image to a bare-metal environment, you can push an updated version of this container image to the registry to deliver operating system updates to your running systems. See Managing RHEL bootable images.
6.10. Deploying a container image by using a single command
Deploy a container image to a RHEL cloud instance by using the system-reinstall-bootc command. With a single command, you can deploy a bootc image to a new RHEL instance, such as RHEL 10 on AWS. The instance launch requires you to select or create an SSH key for secure access.
The system-reinstall-bootc command provides an interactive CLI that wraps the bootc install to-existing-root command and performs two actions:
- Pull the supplied image to set up SSH keys or access the system.
- Run the bootc install to-existing-root command with all the bind mounts and SSH keys configured.
Prerequisites
- A Red Hat account or access to Red Hat RPMs.
- A package-based RHEL (9.6, 10.0, or later) virtual system running in an AWS environment.
- The ability and permissions to SSH into the package-based system and make destructive changes.
Procedure
- After the instance starts, connect to it by using SSH with the key you selected when creating the instance:

```
$ ssh -i <ssh-key-file> <cloud-user@ip>
```

- Make sure that the system-reinstall-bootc subpackage is installed:

```
# rpm -q system-reinstall-bootc
```

- If it is not installed, install the system-reinstall-bootc subpackage:

```
# dnf -y install system-reinstall-bootc
```

- Convert the system to use a bootc image:

```
# system-reinstall-bootc <image>
```

  - You can use the container image from the Red Hat Ecosystem Catalog or a customized bootc image built from a Containerfile.
  - Select users to import to the bootc image by pressing the "a" key.
  - Confirm your selection twice and wait until the image is downloaded.

- Reboot the system:

```
# reboot
```

- Remove the stored SSH host key for the given <ip> from your ~/.ssh/known_hosts file. The bootc system now uses a new public SSH host key; when you attempt to connect to the same IP address with a different key than the one stored locally, SSH raises a warning or refuses the connection because of the host key mismatch. Because this change is expected, you can safely remove the existing host key entry:

```
# ssh-keygen -R <ip>
```

- Connect to the bootc system:

```
# ssh -i <ssh-key-file> root@<ip>
```
Verification
- Confirm that the system OS has changed:

```
# bootc status
```
6.11. Accessing private bootc container registries
You can use private container registries on your bootc deployments. The bootc images use pull secrets, enabling you to use private images to manage your system provisioning.
6.11.1. Enabling bootc to access private registries
bootc has no built-in option to disable TLS verification when accessing a registry. Instead, you can disable TLS verification for a specific registry by using a drop-in configuration file in the /etc/containers/registries.conf.d directory.
Prerequisites
- Access to a private registry.
Procedure
- Disable TLS verification by creating a drop-in configuration file, for example /etc/containers/registries.conf.d/local-registry.conf:

```
[[registry]]
location="localhost:5000"
insecure=true
```
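The drop-in file can also be written from a script and checked in place; localhost:5000 is the example registry from this procedure, and the script writes to a local staging directory (on a real system, target /etc/containers/registries.conf.d/ as root):

```shell
# Write the drop-in to a local staging directory; on a real system,
# write to /etc/containers/registries.conf.d/ as root instead.
mkdir -p etc/containers/registries.conf.d
cat > etc/containers/registries.conf.d/local-registry.conf <<'EOF'
[[registry]]
location="localhost:5000"
insecure=true
EOF

# Confirm the registry entry is present.
grep -A2 '^\[\[registry\]\]' etc/containers/registries.conf.d/local-registry.conf
```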
Verification
- Check that TLS verification is disabled, for example by pulling an image from the registry by using podman.
Next steps
- Configure bootc to access private registries.
6.11.2. Configuring bootc to access private registries
To pull container images from private registries, you must provide valid authentication credentials to the bootc workflow. By symlinking the bootc and Podman credential paths to a common persistent file embedded in your container image, you can maintain a single source of truth for pull secrets across your environment.
Prerequisites
- You enabled bootc to access private registries.
Procedure
- Create a containers-auth.conf tmpfiles.d configuration file that makes the transient runtime credential path a symlink to a persistent /usr/lib/container-auth.json file:

```
# Make /run/containers/0/auth.json (a transient runtime file)
# a symlink to our /usr/lib/container-auth.json (a persistent file)
# which is also symlinked from /etc/ostree/auth.json.
d /run/containers/0 0755 root root -
L /run/user/0/containers/auth.json - - - - ../../../../usr/lib/container-auth.json
```

- In the same directory, create a Containerfile. For example:
```
# This example expects a secret named "creds" to contain
# the registry pull secret. To build, use e.g.
# podman build --secret id=creds,src=$HOME/.docker/config.json ...
FROM quay.io/<namespace>/<image>:<tag>

# Use a single pull secret for bootc and podman by symlinking both locations
# to a common persistent file embedded in the container image.
# We just make up /usr/lib/container-auth.json
COPY containers-auth.conf /usr/lib/tmpfiles.d/link-podman-credentials.conf
RUN --mount=type=secret,id=creds,required=true \
    cp /run/secrets/creds /usr/lib/container-auth.json && \
    chmod 0600 /usr/lib/container-auth.json && \
    ln -sr /usr/lib/container-auth.json /etc/ostree/auth.json
```
Place the
container-auth.jsonfile at/etc/ostree/auth.jsonto configure the private registry authentication.
6.11.3. Configuring private bootc container registries with an Anaconda script
You can use an Anaconda %pre script to configure a pull secret by creating an /etc/ostree/auth.json registry authentication file.
In addition to registry configuration, private registries require authentication. To use a private repository when deploying a fleet of bootc instances, complete the following steps:
Prerequisites
- You enabled bootc to access private registries.
Procedure
- Create the auth.json registry authentication file by using a %pre script:

```
%pre
mkdir -p /etc/ostree
cat > /etc/ostree/auth.json << 'EOF'
{
  "auths": {
    "quay.io": {
      "auth": "<your secret here>"
    }
  }
}
EOF
%end
```
Place the
auth.jsonfile at/etc/ostree/auth.jsonto configure the private registry authentication.
6.11.4. Accessing private registries with pull secrets on Anaconda installations
The default Anaconda installation ISO might also need a duplicate copy of a "bootstrap" configuration to access the targeted registry when fetching over the network.
You can use the Anaconda %pre command to perform arbitrary changes to the installation environment before it fetches the target bootc container image.
Prerequisites
- You enabled bootc to access private registries.
Procedure
- Configure the pull secret. For example:

```
%pre
mkdir -p /etc/ostree
cat > /etc/ostree/auth.json << 'EOF'
{
  "auths": {
    "quay.io": {
      "auth": "<your secret here>"
    }
  }
}
EOF
%end
```

- Disable TLS for an insecure registry:

```
%pre
mkdir -p /etc/containers/registries.conf.d/
cat > /etc/containers/registries.conf.d/local-registry.conf << 'EOF'
[[registry]]
location="[IP_Address]:5000"
insecure=true
EOF
%end
```
6.11.5. Anaconda access to private bootc container registries
When performing a network-based installation by using a default Anaconda ISO, the installation environment itself might not be able to access the target bootc container registry if it is private or insecure.
The Anaconda installer runs first, and then it fetches the bootc container image to install. If this image is on a private registry that requires authentication, or on an insecure registry that requires TLS verification to be disabled, the Anaconda installer fails because it has no bootstrap configuration by default.
To solve this, you must inject a duplicate copy of the necessary configuration into the installation environment before it attempts to pull the image.
You can solve this issue by following one of the following solutions:
- Use Anaconda %pre scripts to configure a pull secret.
- Use Anaconda %pre scripts to disable TLS for an insecure registry.
6.12. Deploying an image mode update in offline and air-gapped environments
With image mode for Red Hat Enterprise Linux, you can deploy updates to RHEL systems in offline and air-gapped environments by using external storage to transfer container images.
To deploy an image mode update to a host machine, you normally need a network connection to access a registry and get updates. However, when your operational environment imposes constraints, such as specific hardware, stringent security mandates, location-based network limitations, or update windows during which remote access is unavailable, you can perform system updates fully offline and air-gapped.
Offline updates can be time-consuming when you use them on many devices and might require on-site capability to deploy the updates.
Prerequisites
- A running RHEL system containing the updates that you want to make to the system.
- A running RHEL system with Red Hat Enterprise Linux 10 deployed on the target hardware.
- The container-tools meta-package is installed. This meta-package contains all container tools, such as Podman, Buildah, and Skopeo.
- An external storage device to transfer the container update.
Procedure
- Verify which storage devices are already connected to your system:

```
$ lsblk
NAME          MAJ:MIN   SIZE  RO TYPE MOUNTPOINTS
zram0         251:0       8G   0 disk [SWAP]
nvme0n1       259:0   476.9G   0 disk
├─nvme0n1p1   259:1     600M   0 part /boot/efi
├─nvme0n1p2   259:2       1G   0 part /boot
└─nvme0n1p3   259:3   475.4G   0 part
```

- Connect your external storage and run the same command. Compare the two outputs to find the name of your external storage device on your system:

```
$ lsblk
NAME          MAJ:MIN   SIZE  RO TYPE MOUNTPOINTS
sda             8:0    28.9G   0 disk
└─sda1          8:1    28.9G   0 part
zram0         251:0       8G   0 disk [SWAP]
nvme0n1       259:0   476.9G   0 disk
├─nvme0n1p1   259:1     600M   0 part /boot/efi
├─nvme0n1p2   259:2       1G   0 part /boot
└─nvme0n1p3   259:3   475.4G   0 part
```
sdahas ansda1partition.The
MOUNTPOINTScolumn lists the mount points of the partitions on your external storage. If your system automatically mounts external storage, then valid mount points already exist. However, if there are no mount points, you must mount it yourself before you can store anything on the device.Create an empty directory, or use an existing one, to mount your partition:
$ sudo mkdir /mnt/usb/Mount your device partition.
$ sudo mount /dev/sda1 /mnt/usbOptional: Verify if the partition was correctly created:
$ lsblk NAME MAJ:MIN SIZE RO TYPE MOUNTPOINTS sda 8:0 28.9G 0 disk └─sda1 8:1 28.9G 0 part /mnt/usb [...]Your external storage device is ready for copying files onto it.
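Comparing the two lsblk listings by hand works, but the comparison can also be scripted. This sketch uses canned listings in place of live lsblk -o NAME output, so it is illustrative only:

```shell
# Simulated "before" listing (in practice: lsblk -o NAME > before.txt).
cat > before.txt <<'EOF'
zram0
nvme0n1
nvme0n1p1
nvme0n1p2
nvme0n1p3
EOF

# Simulated "after" listing, taken once the USB drive is plugged in.
cat > after.txt <<'EOF'
sda
sda1
zram0
nvme0n1
nvme0n1p1
nvme0n1p2
nvme0n1p3
EOF

# comm needs sorted input; lines unique to the second listing
# are the newly attached device and its partitions.
sort before.txt > before.sorted
sort after.txt > after.sorted
comm -13 before.sorted after.sorted
# prints:
# sda
# sda1
```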
- Copy the locally stored container to your mounted device by using the skopeo command, adapting the paths and container names for your own environment:

  - For local storage:

```
$ sudo skopeo copy --preserve-digests --all \
    containers-storage:localhost/rhel-container:latest \
    oci://mnt/usb/
```

  - For a container stored on a remote registry:

```
$ sudo skopeo copy --preserve-digests --all \
    docker://quay.io/example:latest \
    oci://mnt/usb/
```

Note: Depending on the size of the container, these commands might take a few minutes to complete.
- Unmount and eject the external storage:

```
$ sudo umount /dev/sda1
$ sudo eject /dev/sda1
```

- Apply the update to the container on the offline system.
- Plug the external storage device into your offline system. If the storage device does not mount automatically, use the mkdir and mount commands to locate the external storage and mount it.
- Copy the container from the external device to the offline system's local container storage:
```
$ skopeo copy --preserve-digests --all \
    oci://mnt/usb \
    containers-storage:rhel-update:latest
```

In this case, the path after the oci transport is the mount point of your external storage, while the containers-storage name varies depending on the name and tag that you want the container to have.

- Use Podman to verify that your container is now local:

```
$ podman images
REPOSITORY                      TAG     IMAGE ID  CREATED  SIZE
example.io/library/rhel-update  latest  cdb6d...  1 min    1.48 GB
```

- Deploy the update on the offline system by using bootc:

```
$ bootc switch --transport containers-storage \
    example.io/library/rhel-update:latest
```

If you cannot copy your container to local storage, use the oci transport and the path to your storage device instead:

```
$ bootc switch --transport oci /mnt/usb
```

With the --transport flag, the bootc switch command can use an alternative source for the container. By default, bootc attempts to pull from a registry, because bootc-image-builder uses a registry to build the original image. When you use bootc upgrade, you cannot specify where an update is located. By using bootc switch and specifying local container storage, you not only remove the requirement for a remote registry, but can also deploy future updates from this local container. You can now use bootc upgrade, provided that your local container and the update share the same location. If you want to switch back to updates from a remote repository in the future, use bootc switch again.
Verification
- Ensure that the update was properly deployed:

```
$ bootc status
Staged image: containers-storage:example.io/library/rhel-update:latest
        Digest: sha256:05b1dfa791...
        Version: 10.0 (2025-07-07 18:33:19.380715153 UTC)

Booted image: localhost/rhel-intel:base
        Digest: sha256:7d6f312e09...
        Version: 10.0 (2025-06-23 15:58:12.228704562 UTC)
```

The output shows your current booted image along with any staged changes. The container that you used earlier is visible, but the staged changes do not take effect until the next reboot. The output also confirms that updates will be pulled from your container storage.
- Reboot the system:

```
# reboot
```

- After the reboot, check the status again:

```
$ bootc status
Booted image: containers-storage:example.io/library/rhel-update:latest
        Digest: sha256:05b1dfa791...
        Version: 10.0 (2025-07-07 18:33:19.380715153 UTC)

Rollback image: localhost/rhel-intel:base
        Digest: sha256:7d6f312e09...
        Version: 10.0 (2025-06-23 15:58:12.228704562 UTC)
```

You can verify that you have booted into the correct image:
- The booted image is your updated image.
- The rollback image is your previous image. You have successfully performed an offline image mode update.
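When checking a fleet of offline systems, the bootc status output can also be parsed in a script. This sketch runs against a canned copy of the output shown in the verification above rather than a live system:

```shell
# Canned `bootc status` output matching the verification step above.
cat > status.txt <<'EOF'
Booted image: containers-storage:example.io/library/rhel-update:latest
        Digest: sha256:05b1dfa791...
        Version: 10.0 (2025-07-07 18:33:19 UTC)
Rollback image: localhost/rhel-intel:base
        Digest: sha256:7d6f312e09...
        Version: 10.0 (2025-06-23 15:58:12 UTC)
EOF

# Extract the image reference from the "Booted image:" line.
# On a live system you would pipe `bootc status` in directly.
booted=$(awk -F': ' '/^Booted image:/ {print $2}' status.txt)
echo "$booted"
```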