Composing, installing, and managing RHEL for Edge images
Creating, deploying, and managing Edge systems with RHEL 10
Abstract
Providing feedback on Red Hat documentation
We are committed to providing high-quality documentation and value your feedback. To help us improve, you can submit suggestions or report errors through the Red Hat Jira tracking system.
Procedure
- Log in to the Jira website. If you do not have an account, select the option to create one.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the window.
Chapter 1. Introducing RHEL for Edge images
A RHEL for Edge image is an rpm-ostree image that includes system packages to remotely install RHEL on Edge servers.
The system packages include:
- Base OS package
- Podman as the container engine
- Additional RPM Package Manager (RPM) content

Unlike RHEL images, RHEL for Edge is an immutable operating system, that is, it contains a read-only root directory with the following characteristics:
- The packages are isolated from the root directory.
- Each version of the operating system is a separate deployment. Therefore, you can roll back the system to a previous deployment when needed.
- Offers efficient updates over the network.
- Supports multiple operating system branches and repositories.
- Contains a hybrid rpm-ostree package system.
You can deploy a RHEL for Edge image on Bare Metal, Appliance, and Edge servers.
With a RHEL for Edge image, you can achieve the following benefits:
- Atomic upgrades
- You know the state of each update, and no changes are seen until you reboot your system.
- Custom health checks and intelligent rollbacks
- You can create custom health checks, and if a health check fails, the operating system rolls back to the previous stable state.
- Container-focused workflow
- The image updates are staged in the background, minimizing any workload interruptions to the system.
- Optimized Over-the-Air updates
- You can make sure that your systems are up-to-date, even with intermittent connectivity, thanks to efficient over-the-air (OTA) delta updates.
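To illustrate why these upgrades and rollbacks are atomic, each operating system version lives in its own deployment, and activating a version is a single pointer switch. The following is a conceptual sketch only, not the actual ostree implementation; the paths and release strings are made up:

```shell
# Conceptual sketch of atomic deployment switching: each OS version is a
# separate directory, and "booting" a version is one symlink update, so a
# rollback is instant and the system is never half-upgraded.
mkdir -p /tmp/deploy-demo/v1 /tmp/deploy-demo/v2
echo "release 1" > /tmp/deploy-demo/v1/os-release
echo "release 2" > /tmp/deploy-demo/v2/os-release

ln -sfn /tmp/deploy-demo/v1 /tmp/deploy-demo/current   # initial deployment
ln -sfn /tmp/deploy-demo/v2 /tmp/deploy-demo/current   # atomic "upgrade"
cat /tmp/deploy-demo/current/os-release                # release 2

ln -sfn /tmp/deploy-demo/v1 /tmp/deploy-demo/current   # atomic "rollback"
cat /tmp/deploy-demo/current/os-release                # release 1
```

The real system stages the new deployment in the background and switches over only at reboot, which is why no changes are seen until you restart.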
1.1. Difference between RHEL RPM images and RHEL for Edge images
You can create RHEL system images in traditional package-based RPM format and also as RHEL for Edge (rpm-ostree) images.
You can use the traditional package-based RPMs to deploy RHEL in traditional data centers. However, with RHEL for Edge images you can deploy RHEL on servers outside traditional data centers, such as Edge servers: systems that process large amounts of data close to the source where the data is generated.
The RHEL for Edge (rpm-ostree) images are not managed by a package manager. They support only complete bootable file system trees, not individual files. These images carry no information about individual files, such as how the files were generated or their origin.
The rpm-ostree images need a separate mechanism, the package manager, to install additional applications in the /var directory. With that, the rpm-ostree image keeps the operating system unchanged, while maintaining the state of the /var and /etc directories. The atomic updates enable rollbacks and background staging of updates.
Refer to the following table to know how RHEL for Edge images differ from the package-based RHEL RPM images.
| Key attributes | RHEL RPM image | RHEL for Edge image |
|---|---|---|
| OS assembly | You can assemble the packages locally to form an image. | The packages are assembled in an OSTree which you can install on a system. |
| OS updates | You can use dnf update to apply the available updates from the enabled repositories. | You can use rpm-ostree upgrade to stage an update. The update takes effect on system reboot. |
| Repository | The package contains DNF repositories | The package contains OSTree remote repository |
| User access permissions | Read write | Read-only (/usr) |
| Data persistence | You can mount the image to any non-tmpfs mount point | /etc and /var are read-write enabled and include persisting data |
Chapter 2. Migrating from rpm-ostree-based deployed systems to bootc-based systems
Starting with RHEL 10.0, image mode for RHEL replaces RHEL image builder for creating edge artifacts. To build operating system images for edge deployments, you must now use image mode to produce bootable container images.
If you prefer, you can continue to use RHEL image builder on RHEL 9 to build RHEL for Edge artifacts.
To use image mode for RHEL, you can upgrade from RHEL 9 image builder to image mode for RHEL 10 and use image mode for RHEL to build bootable container images that you can use for your edge deployments.
By using the image mode for RHEL feature, you can customize your operating system with the registry.redhat.io/rhel10/rhel-bootc container image. You can also build a smaller bootc base image from scratch, similar in size and content to the standard RHEL for Edge OSTree commit.
2.1. Image mode for RHEL
Image mode for Red Hat Enterprise Linux (RHEL) is a deployment method that uses a container-native approach to build, deploy, and manage the operating system as a bootc base image (rhel-bootc). The rhel-bootc image contains the components necessary for a bootable operating system, such as the kernel, firmware, and boot loader.
Use image mode for RHEL to build, test, and deploy operating systems by using the same tools and techniques as application containers. Image mode for RHEL is available through the registry.redhat.io/rhel10/rhel-bootc bootc image. The RHEL bootc images differ from the existing application Universal Base Images (UBI) in that they contain the additional components necessary to boot that were traditionally excluded, such as the kernel, initrd, boot loader, and firmware.
Image mode for RHEL does not support rpm-ostree file system with blueprint customization. You cannot build disk images from bootc images by using osbuild-composer. Instead, use bootc-image-builder to generate disk images from bootc images.
2.2. Building customized images from a base image
You can use Podman to build and test your customized container image.
Prerequisites
- The container-tools meta-package is installed.
Procedure
- Create a Containerfile with the following structure:

```
FROM registry.redhat.io/rhel10/rhel-bootc:latest
RUN dnf -y install [software] [dependencies] && dnf clean all
ADD [application]
ADD [configuration files]
RUN [config scripts]
```

- Build the <image> image by using the Containerfile in the current directory:

```
$ podman build -t quay.io/<namespace>/<image>:<tag> .
```
Verification
- List the images to verify that the build succeeded:

```
$ podman images
```
2.3. Equivalence between blueprint customizations and Containerfile customizations
Map your blueprint customization options and their equivalent Containerfile commands.
| Blueprint | Command instructions |
|---|---|
| distro = "rhel-10." | FROM rhel-bootc:10 |
| [[packages]] name = "openssh-server" version = "8.*" | RUN dnf install <package name> |
| [[groups]] name = "anaconda-tools" | RUN dnf group install <group_name> |
| [[containers]] source = "quay.io/rhel/rhel:latest" | RUN podman pull docker.io/library/postgres:alpine |
| [customizations.kernel] name = "kernel-debug" append = "nosmt=force" | RUN mkdir -p /usr/lib/bootc/kargs.d && cat <<EOF >> /usr/lib/bootc/kargs.d/console.toml kargs = ["console=ttyS0,115200n8","kernel-debug"] match-architectures = ["x86_64"] EOF |
| [customizations.rhsm.config.dnf_plugins.product_id] enabled = true [customizations.rhsm.config.dnf_plugins.subscription_manager] enabled = true [customizations.rhsm.config.subscription_manager.rhsm] manage_repos = true [customizations.rhsm.config.subscription_manager.rhsmcertd] auto_registration = true | COPY ./rhsm.conf /etc/rhsm/rhsm.conf |
| [customizations.rpm.import_keys] files = [ "/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-18-primary", "/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-19-primary" ] | RUN mkdir -p /etc/pki/rpm-gpg/ COPY <host_path>/gpg_key /etc/pki/rpm-gpg/gpg_key |
| [[customizations.sshkey]] user = "root" key = "PUBLIC SSH KEY" | # SSH keys COPY test.pub container_key.pub RUN mkdir -p .ssh && \ cat container_key.pub >> .ssh/authorized_keys && \ chmod 600 .ssh/authorized_keys && \ rm -f container_path_to_key.pub |
| [customizations.timezone] timezone = "US/Eastern" ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"] | RUN ln -sf /usr/share/zoneinfo/Asia/Bangkok /etc/localtime |
| [customizations.locale] languages = ["en_US.UTF-8"] keyboard = "us" | RUN cat <<EOF >> /etc/locale.conf LANG="en_US.UTF-8" EOF |
| [customizations.firewall] ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp", "30000-32767:tcp", "30000-32767:udp"] | RUN dnf install -y firewalld && \ dnf clean all && \ firewall-offline-cmd --new-zone=customzone && \ firewall-offline-cmd --zone=customzone --set-description="Custom firewall rules for the container" && \ firewall-offline-cmd --zone=customzone --add-service=ftp && \ firewall-offline-cmd --zone=customzone --add-service=ntp && \ firewall-offline-cmd --zone=customzone --add-service=dhcp && \ firewall-offline-cmd --zone=customzone --add-port=22/tcp && \ firewall-offline-cmd --zone=customzone --add-port=80/tcp && \ firewall-offline-cmd --zone=customzone --add-port=53/tcp && \ firewall-offline-cmd --zone=customzone --add-port=53/udp && \ firewall-offline-cmd --zone=customzone --add-port=30000-32767/tcp && \ firewall-offline-cmd --zone=customzone --add-port=30000-32767/udp && \ firewall-offline-cmd --set-default-zone=customzone |
| [[customizations.directories]] path = "/etc/<dir-name>" mode = "0755" user = "root" group = "root" ensure_parents = false | #Directory: RUN mkdir /etc/<dir> RUN chown -R admin:wheel /etc/<dir> && \ chmod -R 644 /etc/<dir> #Files: RUN touch /etc/<myfile> RUN chown :widget /etc/<myfile> && \ chmod 600 /etc/<myfile> |
| [customizations] installation_device = "/dev/sda" | RUN mkdir -p /usr/lib/bootc/kargs.d && cat <<EOF >> /usr/lib/bootc/kargs.d/console.toml kargs = ["inst.device=/dev/sda"] EOF |
| [customizations.ignition.embedded] config = "eyJpZ25pdG….xIn1dfX0=" | RUN mkdir -p /usr/lib/bootc/kargs.d && cat <<EOF >> /usr/lib/bootc/kargs.d/console.toml kargs = ["ignition.config.url=http://192.168.122.1/fiot.ign","rd.neednet=1"] EOF |
| [customizations.fdo] manufacturing_server_url = "http://192.168.122.199:8080" diun_pub_key_insecure = "true" di_mfg_string_type_mac_iface = "enp2s0" | RUN dnf install -y fdo-init fdo-client && \ systemctl enable fdo-client-linuxapp.service |
| [customizations.openscap] datastream = "/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml" profile_id = "xccdf_org.ssgproject.content_profile_cis" [customizations.openscap.json_tailoring] profile_id = "<name-of-profile-used-in-json-tailoring>-file" filepath = "/some/path/tailoring-file.json" [[customizations.files]] path = "/the/path/tailoring-file.json" data = "<json-tailoring-file-contents>" | RUN dnf install -y openscap-utils && \ autotailor --output /some/path/tailoring-file.json \ --new-profile-id xccdf_org.ssgproject.content_profile_cis |
| [customizations] fips = true | RUN mkdir -p /usr/lib/bootc/kargs.d && cat <<EOF >> /usr/lib/bootc/kargs.d/01-fips.toml kargs = ["fips=1"] EOF |
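Several of the mappings in the table can be combined in a single Containerfile. The following is a minimal sketch; the package name, kernel argument, time zone, and drop-in file name are illustrative choices, not values mandated by the table:

```dockerfile
FROM registry.redhat.io/rhel10/rhel-bootc:latest

# Packages: equivalent of the [[packages]] blueprint entry
RUN dnf install -y openssh-server && dnf clean all

# Kernel arguments: equivalent of [customizations.kernel]
RUN mkdir -p /usr/lib/bootc/kargs.d && \
    cat <<EOF >> /usr/lib/bootc/kargs.d/10-custom.toml
kargs = ["nosmt=force"]
EOF

# Time zone: equivalent of [customizations.timezone]
RUN ln -sf /usr/share/zoneinfo/US/Eastern /etc/localtime
```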
2.4. Achieving parity with RHEL for Edge images by using image mode for RHEL
To create RHEL for Edge-like images by using image mode for RHEL, you must manually install some packages that are common in an OSTree commit but missing from the bootc images. Most required packages are already part of the bootc images; installing the missing ones produces an image similar to the RHEL for Edge image.
The following are examples of the missing packages:
- clevis
- clevis-dracut
- clevis-luks
- greenboot
- greenboot-default-health-checks
- fdo-client
- fdo-owner-cli
Prerequisites
- An existing RHEL for Edge rpm-ostree-based deployed system.
Procedure
- Create a Containerfile with the following content:

```
FROM registry.redhat.io/rhel10/rhel-bootc:latest

RUN dnf install -y \
    clevis \
    clevis-dracut \
    clevis-luks \
    greenboot \
    greenboot-default-health-checks \
    fdo-client \
    fdo-owner-cli

# (Optional) Extra packages often used in edge
# RUN dnf install -y \
#     dracut-config-generic \
#     platform-python \
#     pinentry \
#     firewalld \
#     iptables \
#     NetworkManager-wifi \
#     NetworkManager-wwan \
#     wpa_supplicant \
#     traceroute \
#     rootfiles \
#     policycoreutils-python-utils \
#     setools-console \
#     rsync \
#     usbguard

RUN systemctl enable NetworkManager.service \
    greenboot-grub2-set-counter.service \
    greenboot-grub2-set-success.service \
    greenboot-healthcheck.service \
    greenboot-rpm-ostree-grub2-check-fallback.service \
    greenboot-status.service \
    greenboot-task-runner.service \
    redboot-auto-reboot.service \
    redboot-task-runner.service
```

- Build your RHEL for Edge-like customized bootc image:

```
$ podman build -t quay.io/<namespace>/<image>:<tag> .
```

- Optional: Push the image:

```
$ podman push quay.io/<namespace>/<image>:<tag>
```
Verification
List all images:
```
$ podman images
```
2.4.1. Building RHEL 9.6 and later for Edge images by using image mode
In RHEL 9.6 and later, you can use image mode to compose bootable container images and generate disk images for edge deployments. To create an image mode for RHEL system for an edge host, define your configuration in a Containerfile and use bootc-image-builder to output a deployable artifact.
While image mode is the recommended path for container-native workflows, you can continue to use the traditional RHEL image builder to create standard RHEL 9.6 and later edge artifacts. See Composing, installing, and managing RHEL for Edge images.
Prerequisites
- You have Podman installed on your host machine.
- You have root access to run the bootc-image-builder tool, and to run the containers in --privileged mode, to build the images.
Procedure
- Create a Containerfile, for example:

```
$ cat Containerfile
FROM registry.redhat.io/rhel9/rhel-bootc:9.6

# Packages
RUN dnf install -y zsh && dnf clean all

# Group install
RUN dnf group -y install "Development Tools"

# Hostname
RUN echo "rock.paper.scissor" > /etc/hostname

# Kernel
RUN mkdir -p /usr/lib/bootc/kargs.d
RUN cat <<EOF >> /usr/lib/bootc/kargs.d/console.toml
kargs = ["console=ttyS0,115200n8","kernel-debug"]
match-architectures = ["x86_64"]
EOF

# Subscription-manager
RUN dnf install -y subscription-manager

# RPM config
RUN mkdir -p /etc/pki/rpm-gpg/
COPY <host_path>/gpg_key /etc/pki/rpm-gpg/gpg_key

# Timezones
RUN ln -sf /usr/share/zoneinfo/Asia/Bangkok /etc/localtime

# Locale
RUN cat <<EOF >> /etc/locale.conf
LANG="en_US.UTF-8"
EOF
RUN cat <<EOF >> /etc/vconsole.conf
KEYMAP=us
EOF

# Firewall
RUN dnf install -y firewalld && \
    mkdir -p /etc/firewalld/zones
RUN cat <<EOF >> /etc/firewalld/zones/customzone.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Customzone</short>
  <description>Custom firewall rules for the container.</description>
  <!-- Allowed services -->
  <service name="ftp"/>
  <service name="ntp"/>
  <service name="dhcp"/>
  <!-- Blocked services are not explicitly listed; removing telnet explicitly is unnecessary if it is not included -->
  <!-- Open specific ports -->
  <port protocol="tcp" port="22"/>
  <port protocol="tcp" port="80"/>
  <port protocol="tcp" port="53"/>
  <port protocol="udp" port="53"/>
  <port protocol="tcp" port="30000-32767"/>
  <port protocol="udp" port="30000-32767"/>
</zone>
EOF
RUN firewall-offline-cmd --set-default-zone=customzone

# systemd services
RUN systemctl enable sshd

# Ignition
RUN mkdir -p /usr/lib/bootc/kargs.d && \
    cat <<EOF >> /usr/lib/bootc/kargs.d/console.toml
kargs = ["ignition.config.url=http://192.168.122.1/fiot.ign","rd.neednet=1"]
EOF

# FDO
RUN dnf install -y fdo-init fdo-client && \
    systemctl enable fdo-client-linuxapp.service

# Repositories
RUN mkdir -p /etc/yum.repos.d
COPY custom.repo /etc/yum.repos.d/custom.repo

# FIPS
RUN mkdir -p /usr/lib/bootc/kargs.d && \
    cat <<EOF >> /usr/lib/bootc/kargs.d/01-fips.toml
kargs = ["fips=1"]
EOF
RUN dnf install -y crypto-policies-scripts && update-crypto-policies --no-reload --set FIPS
```

- Build the <image> image by using the Containerfile in the current directory:

```
$ podman build -t quay.io/<namespace>/<image>:<tag> .
```
Verification
List all images:
```
$ podman images
REPOSITORY                    TAG     IMAGE ID      CREATED             SIZE
quay.io/<namespace>/<image>   latest  b28cd00741b3  About a minute ago  2.1 GB
```
2.4.2. Building RHEL 10 and later for Edge images by using image mode
Starting with RHEL 10, bootc is the primary tool for creating RHEL for Edge installations. Because RHEL image builder no longer supports traditional edge artifacts, you must use image mode to build, deploy, and manage bootable container images for edge computing environments.
Not all the available RHEL image builder artifacts are available in image mode for RHEL. This means that you cannot create certain image types by using bootc-image-builder.
Notably, the simplified-installer no longer exists. Instead, use the bootc-image-builder Anaconda ISO for workflows such as FDO.
Prerequisites
- You have Podman installed on your host machine.
- You have root access to run the bootc-image-builder tool, and to run the containers in --privileged mode, to build the images.
Procedure
- Create a Containerfile. The following example contains several customizations that you can use as a starting point; remove any that do not suit your requirements.

```
$ cat Containerfile
FROM registry.redhat.io/rhel10/rhel-bootc:10.0

# Packages
RUN dnf install -y zsh && dnf clean all

# Group install
RUN dnf group -y install "Development Tools"

# Kernel
RUN mkdir -p /usr/lib/bootc/kargs.d
RUN cat <<EOF >> /usr/lib/bootc/kargs.d/console.toml
kargs = ["console=ttyS0,115200n8","kernel-debug"]
match-architectures = ["x86_64"]
EOF

# Subscription-manager
COPY ./rhsm.conf /etc/rhsm/rhsm.conf

# RPM config
RUN mkdir -p /etc/pki/rpm-gpg/
COPY <host_path>/gpg_key /etc/pki/rpm-gpg/gpg_key

# Additional groups
RUN groupadd -g 1001 widget

# Timezones
RUN ln -sf /usr/share/zoneinfo/Asia/Bangkok /etc/localtime

# Locale
RUN cat <<EOF >> /etc/locale.conf
LANG="en_US.UTF-8"
EOF
RUN cat <<EOF >> /etc/vconsole.conf
KEYMAP=us
EOF

# Firewall
RUN dnf install -y firewalld && \
    dnf clean all && \
    firewall-offline-cmd --new-zone=customzone && \
    firewall-offline-cmd --zone=customzone --set-description="Custom firewall rules for the container" && \
    firewall-offline-cmd --zone=customzone --add-service=ftp && \
    firewall-offline-cmd --zone=customzone --add-service=ntp && \
    firewall-offline-cmd --zone=customzone --add-service=dhcp && \
    firewall-offline-cmd --zone=customzone --add-port=22/tcp && \
    firewall-offline-cmd --zone=customzone --add-port=80/tcp && \
    firewall-offline-cmd --zone=customzone --add-port=53/tcp && \
    firewall-offline-cmd --zone=customzone --add-port=53/udp && \
    firewall-offline-cmd --zone=customzone --add-port=30000-32767/tcp && \
    firewall-offline-cmd --zone=customzone --add-port=30000-32767/udp && \
    firewall-offline-cmd --set-default-zone=customzone

# systemd services
RUN systemctl enable httpd sshd && \
    systemctl disable telnetd && \
    systemctl mask rpcbind
```

- Build the <image> image by using the Containerfile in the current directory:

```
$ podman build -t quay.io/<namespace>/<image>:<tag> .
```
Verification
List all images:
```
$ podman images
REPOSITORY                    TAG     IMAGE ID      CREATED             SIZE
quay.io/<namespace>/<image>   latest  b28cd00741b3  About a minute ago  2.1 GB
```
2.5. Installing bootc-image-builder
To install bootc-image-builder, use the Red Hat Container Registry. bootc-image-builder is intended to be used as a container and is not available as an RPM package in RHEL.
Prerequisites
- The container-tools meta-package is installed. The meta-package contains all container tools, such as Podman, Buildah, and Skopeo.
- You are authenticated to registry.redhat.io. For details, see Red Hat Container Registry Authentication.
Procedure
- Log in to authenticate to registry.redhat.io:

```
$ sudo podman login registry.redhat.io
```

- Install the bootc-image-builder tool:

```
$ sudo podman pull registry.redhat.io/rhel10/bootc-image-builder
```
Verification
List all images pulled to your local system:
```
$ sudo podman images
REPOSITORY                                      TAG     IMAGE ID      CREATED       SIZE
registry.redhat.io/rhel10/bootc-image-builder   latest  b361f3e845ea  24 hours ago  676 MB
```
2.5.1. Using bootc-image-builder to create RHEL 9.6 disk images
You can use bootc-image-builder to create bootable disk images from existing container images and deploy these disk images by using your traditional methods for your physical or virtual hosts.
Prerequisites
- You have Podman installed on your host machine.
- You have root access to run the bootc-image-builder tool, and to run the containers in --privileged mode, to build the images.
Procedure
- Optional: Create a config.toml to configure user access, for example:

```
[[customizations.user]]
name = "user"
password = "pass"
key = "ssh-rsa AAA ... user@email.com"
groups = ["wheel"]
```

- Manually pull the image:

```
$ sudo podman pull quay.io/<namespace>/<image>:<tag>
```

- Create the output directory for the image that you are building:

```
$ mkdir output
```

- Run bootc-image-builder to create the image. If you do not want to add any configuration, omit the -v $(pwd)/config.toml:/config.toml argument:

```
$ sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    -v $(pwd)/config.toml:/config.toml \
    -v $(pwd)/output:/output \
    registry.redhat.io/rhel9/bootc-image-builder:latest \
    --type iso \
    --config /config.toml \
    quay.io/<namespace>/<image>:<tag>
```

You can find the .iso image in the output folder.
2.5.2. Using bootc-image-builder to create RHEL 10.0 disk images
Starting with RHEL 10 and later, RHEL image builder no longer supports composing customized RHEL rpm-ostree images optimized for Edge. To create new RHEL images for Edge environments as part of RHEL 10, you must use image mode for RHEL.
Not all the available RHEL image builder artifacts are available in image mode. This means that you cannot create certain image types by using bootc-image-builder. The simplified-installer no longer exists. Instead, use the bootc-image-builder Anaconda ISO for FDO workflows.
Prerequisites
- You have Podman installed on your host machine.
- You have root access to run the bootc-image-builder tool, and to run the containers in --privileged mode, to build the images.
Procedure
- Optional: Create a config.toml to configure user access, for example:

```
[[customizations.user]]
name = "user"
password = "pass"
key = "ssh-rsa AAA ... user@email.com"
groups = ["wheel"]
```

- Manually pull the image:

```
$ sudo podman pull quay.io/<namespace>/<image>:<tag>
```

- Create the output directory for the image that you are building:

```
$ mkdir output
```

- Run bootc-image-builder to create the image. If you do not want to add any configuration, omit the -v $(pwd)/config.toml:/config.toml argument:

```
$ sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    -v $(pwd)/config.toml:/config.toml \
    -v $(pwd)/output:/output \
    registry.redhat.io/rhel10/bootc-image-builder:latest \
    --type iso \
    --config /config.toml \
    quay.io/<namespace>/<image>:<tag>
```

You can find the .iso image in the output folder.
2.6. Configuring Users and Groups in an Image Mode Containerfile
When switching between different host builds, you must define persistent users and groups in your Containerfile so that they are carried into the bootable container image. This enables you to maintain consistent permissions across your infrastructure when deploying or performing a bootc switch to migrate existing hosts.
Some user and group IDs differ between rhel-bootc images and RHEL for Edge. This affects several groups and users, such as ssh_keys. As a consequence, the private keys belong to a group with a misconfigured ID, and you cannot use public keys to access the edge system.
The image mode system uses altfiles to manage users at /usr/lib/passwd and groups at /usr/lib/group. To work around this, you must extract the users and groups information from the existing RHEL for Edge system and pin it as part of the Containerfile. Configure the Containerfile to copy a local lib/group file to the container image.
You can manually change the permissions of the private keys, because the /etc folder is mutable on edge systems. However, this does not solve the problem: after running the bootc switch command, the image mode system has the ssh_keys group configured with ID 999. This value comes from the RHEL bootc base image, and the drift from ID 101 to ID 999 leaves the edge system unreachable through SSH. To fix this issue, follow these steps:
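Before converting, you can confirm the drift by comparing the ssh_keys GID recorded on both sides. The following is a small sketch using sample data written to temporary files; the file contents and GID values (101 and 999) are illustrative, matching the scenario described above:

```shell
# Sample group entries: one as extracted from the edge host, one as
# shipped in the rhel-bootc base image (illustrative values only).
printf 'ssh_keys:x:101:\n' > /tmp/edge_group
printf 'ssh_keys:x:999:\n' > /tmp/bootc_group

# Extract the GID of ssh_keys from each file and compare.
edge_gid=$(awk -F: '$1 == "ssh_keys" {print $3}' /tmp/edge_group)
bootc_gid=$(awk -F: '$1 == "ssh_keys" {print $3}' /tmp/bootc_group)

if [ "$edge_gid" != "$bootc_gid" ]; then
    echo "GID drift detected: edge=$edge_gid image=$bootc_gid"
fi
```

On real systems, you would run the same awk extraction against the extracted /lib/group file from the edge host and the /usr/lib/group file inside the container image.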
Prerequisites
- An existing RHEL for Edge rpm-ostree-based system.
- You have a subscribed RHEL 9 system. For more information, see the Getting Started with RHEL System Registration documentation.
- You have a container registry. You can create your registry locally or create a free account on the Quay.io service. To create the Quay.io account, see Red Hat Quay.io page.
- You have a Red Hat account with either production or developer subscriptions. No cost developer subscriptions are available on the Red Hat Enterprise Linux Overview page.
- You have authenticated to registry.redhat.io. For more information, see Red Hat Container Registry Authentication article.
Procedure
- Extract the users and groups information from the RHEL for Edge system:

```
$ mkdir -p ./usr/lib
$ ssh admin@192.168.100.50 'cat /lib/passwd' > ./usr/lib/passwd
$ ssh admin@192.168.100.50 'cat /lib/group' > ./usr/lib/group
```

- Include the missing RHEL for Edge packages in the bootc-based system by specifying them in a Containerfile. Additionally, use the COPY command to include the group and passwd content that was extracted from the RHEL for Edge system. The following is an example:

```
FROM registry.redhat.io/rhel9/rhel-bootc
WORKDIR /tmp
RUN dnf -y install ModemManager \
    NetworkManager-wifi \
    NetworkManager-wwan \
    audit \
    checkpolicy \
    clevis \
    clevis-dracut \
    clevis-luks \
    clevis-pin-tpm2 \
    clevis-systemd \
    containernetworking-plugins \
    dnsmasq \
    dracut-config-generic \
    fdo-client \
    fdo-owner-cli \
    firewalld \
    firewalld-filesystem \
    greenboot \
    greenboot-default-health-checks \
    grubby \
    ignition \
    ignition-edge \
    ipset \
    iwl100-firmware \
    iwl1000-firmware \
    iwl105-firmware \
    iwl135-firmware \
    iwl2000-firmware \
    iwl2030-firmware \
    iwl3160-firmware \
    iwl5000-firmware \
    iwl5150-firmware \
    iwl6050-firmware \
    iwl7260-firmware \
    libsecret \
    pinentry \
    policycoreutils-python-utils \
    python3-distro \
    python3-setools \
    rsync \
    setools-console \
    tmux \
    traceroute \
    usbguard \
    usbguard-selinux \
    wireless-regdb \
    wpa_supplicant
COPY etc /etc
# The passwd and group content extracted from the RHEL for Edge system
# is in usr/lib/ in your current working directory. Copy it into the
# container image:
COPY usr /usr
```

- Build the bootc image and push it to the registry:

```
$ podman build -f Containerfile -t quay.io/<namespace>/<image>:<tag> .
$ podman push quay.io/<namespace>/<image>:<tag>
```

- Run the bootc switch command to switch to the newly created bootable container image:

```
$ ssh admin@192.168.100.50
$ sudo bootc switch quay.io/<namespace>/<image>:<tag>
$ sudo reboot
```
Verification
After rebooting the edge system into the bootable container image, confirm that the contents of /lib/passwd and /lib/group match the content that was extracted from the OSTree system.
- Check the content of /lib/passwd:

```
$ cat /lib/passwd
```

- Check the content of /lib/group:

```
$ cat /lib/group
```
2.6.1. Converting a RHEL 9.6 raw image deployment to image mode
You can convert a RHEL 9.6 or later for Edge system deployed as a raw image to image mode for RHEL. After the conversion, you can rebase your current operating system onto a bootable container image, enabling the bootc update workflow for future lifecycle management.
Prerequisites
- An existing RHEL 9.6 for Edge system installed with a raw image.
Procedure
- Update your image. See Updating RHEL for Edge images.
- Switch your existing image from RHEL image builder to image mode.
- Build an image from rhel-bootc. For example:

```
$ cat Containerfile
FROM registry.redhat.io/rhel9/rhel-bootc:latest
RUN dnf install -y \
    clevis \
    clevis-dracut \
    clevis-luks \
    fdo-client \
    fdo-owner-cli
```

- Build the <image> image by using the Containerfile in the current directory:

```
$ podman build -t quay.io/<namespace>/<image>:<tag> .
```

- Push the image to a registry:

```
$ podman push quay.io/<namespace>/<image>:<tag>
```

- Run bootc switch on the device:

```
$ bootc switch quay.io/<namespace>/<image>:<tag>
```

- Reboot the system:

```
$ sudo systemctl reboot
```
Verification
Connect to your RHEL for Edge system and use
bootc status:# bootc status
2.6.2. Converting a RHEL 9.6 for Edge simplified installer deployment to image mode
You can convert existing RHEL 9.6 and later for Edge systems, deployed by using the simplified-installer, to image mode for RHEL. The conversion moves the system to a bootc workflow based on a bootable container image, without requiring a physical reinstallation.
Prerequisites
- An existing RHEL 9.6 or later for Edge system installed with a simplified-installer image.
Procedure
- Check if bootc is installed:

```
$ rpm -qa | grep bootc
```

- Update your image to the latest rpm-ostree installation. See Updating RHEL for Edge images.
- Build an image from rhel-bootc. For example:

```
$ cat Containerfile
FROM registry.redhat.io/rhel9/rhel-bootc:latest
RUN dnf install -y \
    clevis \
    clevis-dracut \
    clevis-luks \
    fdo-client \
    fdo-owner-cli
```

- Build the <image> image by using the Containerfile in the current directory:

```
$ podman build -t quay.io/<namespace>/<image>:<tag> .
```

- Push the image to a registry:

```
$ podman push quay.io/<namespace>/<image>:<tag>
```

- Run bootc switch to switch the device to the image you pushed to the registry:

```
$ bootc switch quay.io/<namespace>/<image>:<tag>
```

- Reboot the system:

```
$ sudo systemctl reboot
```
Verification
Connect to your RHEL for Edge system and use
bootc status:# bootc status
2.6.3. Upgrading existing RHEL for Edge 9.6 to RHEL 10 and later image mode
You can upgrade an existing RHEL for Edge 9.6 system to RHEL 10.0 by using image mode for RHEL. By converting your current systems to a bootc workflow, you can manage host updates by using container images while retaining your existing edge deployments.
Prerequisites
- An existing 9.6 or later RHEL for Edge system.
Procedure
- Update your image. See Updating RHEL for Edge images.

```
$ sudo rpm-ostree upgrade
$ sudo systemctl reboot
```

- Build a bootc image that uses RHEL 10.0. For example:

```
$ cat Containerfile
FROM registry.redhat.io/rhel10/rhel-bootc:10.0
RUN dnf install -y \
    clevis \
    clevis-dracut \
    clevis-luks \
    fdo-client \
    fdo-owner-cli
```

- Build the <image> image by using the Containerfile in the current directory:

```
$ podman build -t quay.io/<namespace>/<image>:<tag> .
```

- Push the image to a registry:

```
$ podman push quay.io/<namespace>/<image>:<tag>
```

- Run bootc switch on the device:

```
$ bootc switch quay.io/<namespace>/<image>:<tag>
```

- Reboot the system:

```
$ sudo systemctl reboot
```
Verification
Connect to your RHEL for Edge system and use
bootc status:# bootc status
Chapter 3. Automatically provisioning and onboarding RHEL for Edge devices with FDO
Use image mode for RHEL to build operating system images optimized for edge deployments. These images support the FIDO Device Onboarding (FDO) process, which automates device provisioning and securely integrates your edge devices with your network infrastructure.
The FIDO Device Onboarding (FDO) process automatically provisions and onboards your Edge devices, and exchanges data with other devices and systems connected to the networks.
Red Hat provides the FDO process as a Technology Preview feature, which should run only on secure networks. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
3.1. The FIDO Device Onboarding (FDO) process overview
FIDO Device Onboarding (FDO) is an automated onboarding mechanism that securely integrates new devices into your IoT architecture. By using FDO, you can automatically verify device authenticity and deploy specific configurations, ensuring that each new device is trusted and fully synchronized with your existing infrastructure upon installation.
The FIDO Device Onboarding (FDO) is the process that performs the following steps:
- Provisions and onboards a device.
- Automatically configures credentials for this device. The installation of a new device automatically triggers the FDO onboarding process.
- Enables this device to securely connect and interact on the network.
The FDO protocol performs the following tasks:
- Solves the trust and chain of ownership along with the automation needed to securely onboard a device at scale.
- Performs device initialization at the manufacturing stage and late device binding for its actual use. This means that actual binding of the device to a management system happens on the first boot of the device without requiring manual configuration on the device.
- Supports automated secure device onboarding, that is, zero-touch installation and onboarding that does not need any specialized person at the edge location. After the device is onboarded, the management platform can connect to it and apply patches, updates, and rollbacks.
With FDO, you can benefit from the following:
- FDO is a secure and simple way to enroll a device to a management platform. Instead of embedding a Kickstart configuration in the image, FDO applies the device credentials directly during the first boot of the device.
- FDO solves the issue of late binding to a device, enabling any sensitive data to be shared over a secure FDO channel.
- FDO cryptographically verifies the system identity and ownership before enrolling the system and passing the configuration and other secrets to it. This enables non-technical users to power on the system.
The FDO protocol is based on the following servers:
- Manufacturing server
- Generates the device credentials.
- Creates an Ownership voucher that is used to set the ownership of the device, later in the process.
- Binds the device to a specific management platform.
- Owner management system
- Receives the Ownership voucher from the Manufacturing server and becomes the owner of the associated device.
- Later in the process, it creates a secure channel between the device and the Owner onboarding server after the device authentication.
- Uses the secure channel to send the required information, such as files and scripts for the onboarding automation to the device.
- Service-info API server
- Based on the Service-info API server's configuration and the modules available on the client, it performs the final steps of onboarding on target client devices, such as copying SSH keys and files, executing commands, creating users, and encrypting disks.
- Rendezvous server
- Gets the Ownership voucher from the Owner management system and makes a mapping of the device UUID to the Owner server IP. Then, the Rendezvous server matches the device UUID with a target platform and informs the device about which Owner onboarding server endpoint this device must use.
- During the first boot, the Rendezvous server will be the contact point for the device and it will direct the device to the owner, so that the device and the owner can establish a secure channel.
- Device client
This is installed on the device. The Device client performs the following actions:
- Starts the queries to the multiple servers where the onboarding automation is executed.
- Uses TCP/IP protocols to communicate with the servers.
At Device Initialization, the device contacts the Manufacturing server to get the FDO credentials, a set of certificates and keys to be installed on the operating system, together with the Rendezvous server endpoint (URL). It also gets the Ownership Voucher, which is maintained separately in case you need to change the owner assignment.
- The Device contacts the Manufacturing server
- The Manufacturing server generates an Ownership Voucher and the Device Credentials for the Device.
- The Ownership Voucher is transferred to the Owner onboarding server.
During on-site onboarding, the Device gets the Rendezvous server endpoint (URL) from its device credentials and contacts the Rendezvous server to start the onboarding process, which redirects it to the Owner management system, formed by the Owner onboarding server and the Service Info API server.
- The Owner onboarding server transfers the Ownership Voucher to the Rendezvous server, which makes a mapping of the Ownership Voucher to the Owner.
- The device client reads device credentials.
- The device client connects to the network.
- After connecting to the network, the Device client contacts the Rendezvous server.
- The Rendezvous server sends the owner endpoint URL to the Device Client, and registers the device.
- The Device client connects to the Owner onboarding server shared by the Rendezvous server.
- The Device proves that it is the correct device by signing a statement with a device key.
- The Owner onboarding server proves itself correct by signing a statement with the last key of the Ownership Voucher.
- The Owner onboarding server transfers the information of the Device to the Service Info API server.
- The Service info API server sends the configuration for the Device.
- The Device is onboarded.
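The central role of the Rendezvous server in the sequence above is a lookup: the Owner management system registers a device GUID against an owner endpoint, and the device client looks up its own GUID on first boot. The toy model below illustrates only that mapping; the GUID, URL, and file-based store are made up for illustration and bear no relation to the real FDO wire protocol:

```shell
#!/bin/sh
# Toy model of the Rendezvous mapping: register voucher -> owner
# endpoint pairs, then look one up as the device client would.
set -eu
store=$(mktemp)

register_voucher() {            # Owner management system -> Rendezvous
  echo "$1 $2" >> "$store"
}

lookup_owner() {                # Device client -> Rendezvous
  awk -v g="$1" '$1 == g { print $2; found = 1 } END { if (!found) print "unknown" }' "$store"
}

register_voucher "6d9f5a8b-0001" "http://192.0.2.10:8081"
echo "owner endpoint: $(lookup_owner 6d9f5a8b-0001)"
echo "unregistered:   $(lookup_owner no-such-guid)"
```

A device whose voucher was never registered cannot be directed to an owner, which is why the voucher transfer from the Owner management system must happen before the device first boots.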
3.2. FDO automatic onboarding technologies
The FDO automatic onboarding relies on the interaction between a set of technologies to enable secure and automated onboarding.
| Technology | Definition |
|---|---|
| UEFI | Unified Extensible Firmware Interface. |
| RHEL | Red Hat® Enterprise Linux® operating system |
| rpm-ostree | Background image-based upgrades. |
| Greenboot | Health check framework for systemd on rpm-ostree systems. |
| image mode for RHEL | The deployment method that uses containers to build and deploy operating system artifacts. |
| Container | A Linux® container is a set of one or more processes that are isolated from the rest of the system. |
| Coreos-installer | Assists installation of RHEL images and boots systems with UEFI. |
| FIDO FDO | Specification protocol to provision configuration and onboard devices. |
3.3. Automatically provisioning and onboarding RHEL for Edge devices with FDO
Build and automatically onboard RHEL for Edge systems by using podman build. By booting the resulting image, you can trigger an automated provisioning that installs RHEL for Edge onto local storage or virtualized environments.
Prerequisites
- You have installed and registered a RHEL system.
- You have an FDO infrastructure, that is, Manufacturing Server, Rendezvous server, and Owner Server, already deployed.
Procedure
Create a Containerfile to include the FDO client and kernel arguments. For example:

$ cat Containerfile
FROM registry.redhat.io/rhel10/rhel-bootc:10
#fdo
RUN dnf install -y fdo-init fdo-client && \
    systemctl enable fdo-client-linuxapp.service
RUN mkdir -p /usr/lib/bootc/kargs.d && \
    cat <<EOF >> /usr/lib/bootc/kargs.d/console.toml
kargs = ["inst.device=/dev/sda"]
EOF

Build the <image> container image by using the Containerfile in the current directory:

$ podman build -t quay.io/<namespace>/<image>:<tag> .

Install the resulting image on the target device and power it on.
After powering on, the system performs the following steps automatically:
- The FDO client reaches out to the Manufacturing Server to exchange initial credentials.
- The device contacts the Rendezvous Server to verify its voucher and locate its intended owner.
- The device establishes mutual trust with the Owner Server. The service-info API then pushes the final configuration, such as SSH keys, user accounts, and encrypted filesystems, to the device.
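The kargs.d fragment that the Containerfile above bakes into the image can be reproduced and sanity-checked outside a container build. The sketch below stages the same file under a temporary directory; the staging path and the check are illustrative:

```shell
#!/bin/sh
# Sketch: write the kargs.d TOML fragment from the Containerfile above
# into a temporary staging tree, then check its shape.
set -eu
root=$(mktemp -d)
mkdir -p "$root/usr/lib/bootc/kargs.d"

cat > "$root/usr/lib/bootc/kargs.d/console.toml" <<EOF
kargs = ["inst.device=/dev/sda"]
EOF

# a kargs.d fragment is a TOML file with a kargs list
grep -q '^kargs = \[' "$root/usr/lib/bootc/kargs.d/console.toml" && echo "kargs fragment OK"
```

Checking the fragment before the build catches quoting mistakes that would otherwise only surface as a kernel that boots without the intended arguments.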
3.4. Generating keys and certificates
When you install FIDO Device Onboarding (FDO) services, the infrastructure automatically generates the keys and certificates required to configure the manufacturing server. While you can manually re-create these credentials if needed, the services are designed to run with default settings immediately after installation and startup.
Red Hat provides the fdo-admin-tool as a Technology Preview feature, which should run only on secure networks. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
Prerequisites
- You installed the fdo-admin-cli RPM package.
Procedure
Generate the keys and certificates in the /etc/fdo directory:

$ for i in "diun" "manufacturer" "device-ca" "owner"; do fdo-admin-tool generate-key-and-cert $i; done
$ ls keys
device_ca_cert.pem  device_ca_key.der  diun_cert.pem  diun_key.der  manufacturer_cert.pem  manufacturer_key.der  owner_cert.pem  owner_key.der

Check the keys and certificates that were created in the /etc/fdo/keys directory:

$ tree keys

You can see the following output:

- device_ca_cert.pem
- device_ca_key.der
- diun_cert.pem
- diun_key.der
- manufacturer_cert.pem
- manufacturer_key.der
- owner_cert.pem
- owner_key.der

See the fdo-admin-tool generate-key-and-cert --help command output for more details.
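fdo-admin-tool may not be available on every host. As an illustration of the four key/cert pairs that the loop above produces, the stand-in below generates self-signed SECP384R1 pairs with openssl. Note this is only a sketch: the real tool writes DER-encoded keys, whereas PEM is used here for simplicity, and these files are not usable by the FDO services:

```shell
#!/bin/sh
# Stand-in for fdo-admin-tool generate-key-and-cert: one self-signed
# P-384 key and certificate per FDO role, written to a temp directory.
set -eu
keydir=$(mktemp -d)
for i in diun manufacturer device_ca owner; do
  openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-384 \
    -keyout "$keydir/${i}_key.pem" -out "$keydir/${i}_cert.pem" \
    -days 365 -nodes -subj "/CN=$i" 2>/dev/null
done
ls "$keydir" | wc -l
```

Eight files result, mirroring the key/cert pairs under /etc/fdo/keys in the listing above.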
3.5. Installing and configuring the manufacturing server
Use the fdo-manufacturing-server RPM package to deploy the Manufacturing Server for FDO automatic onboarding. This component handles the generation of device-specific metadata, GUIDs, and rendezvous information. By installing this service, you enable the secure creation of device credentials and owner vouchers required for the device to connect to the Rendezvous Server.
Red Hat provides the fdo-manufacturing-server tool as a Technology Preview feature, which should run only on secure networks. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
To install the manufacturing server RPM package, complete the following steps:
Procedure
Install the fdo-admin-cli package:

# dnf install -y fdo-admin-cli

Check if the fdo-manufacturing-server RPM package is installed:

$ rpm -qa | grep fdo-manufacturing-server

Check if the files were correctly installed:

$ ls /usr/share/doc/fdo

You can see the following output:

manufacturing-server.yml
owner-onboarding-server.yml
rendezvous-info.yml
rendezvous-server.yml
serviceinfo-api-server.yml

Optional: Check the content of each file, for example:

$ cat /usr/share/doc/fdo/manufacturing-server.yml

Configure the Manufacturing server. You must provide the following information:
- The Manufacturing server URL
- The IP address or DNS name for the Rendezvous server
The path to the keys and certificates that you generated.
You can find an example of a Manufacturing server configuration file in /usr/share/doc/fdo/manufacturing-server.yml. The following is a manufacturing-server.yml example that is created and saved in the /etc/fdo directory. It contains paths to the directories, certificates, and keys that you created, the Rendezvous server IP address, and the default port.

session_store_driver:
  Directory:
    path: /etc/fdo/stores/manufacturing_sessions/
ownership_voucher_store_driver:
  Directory:
    path: /etc/fdo/stores/owner_vouchers
public_key_store_driver:
  Directory:
    path: /etc/fdo/stores/manufacturer_keys
bind: "0.0.0.0:8080"
protocols:
  plain_di: false
  diun:
    mfg_string_type: SerialNumber
    key_type: SECP384R1
    allowed_key_storage_types:
      - Tpm
      - FileSystem
    key_path: /etc/fdo/keys/diun_key.der
    cert_path: /etc/fdo/keys/diun_cert.pem
rendezvous_info:
  - deviceport: 8082
    ip_address: 192.168.122.99
    ownerport: 8082
    protocol: http
manufacturing:
  manufacturer_cert_path: /etc/fdo/keys/manufacturer_cert.pem
  device_cert_ca_private_key: /etc/fdo/keys/device_ca_key.der
  device_cert_ca_chain: /etc/fdo/keys/device_ca_cert.pem
  owner_cert_path: /etc/fdo/keys/owner_cert.pem
  manufacturer_private_key: /etc/fdo/keys/manufacturer_key.der
Start the Manufacturing server.
Check if the systemd unit file is present on the server:

# systemctl list-unit-files | grep fdo | grep manufacturing
fdo-manufacturing-server.service disabled disabled

Enable and start the manufacturing server:

# systemctl enable --now fdo-manufacturing-server.service

Open the default ports in your firewall:

# firewall-cmd --add-port=8080/tcp --permanent
# systemctl restart firewalld

Ensure that the service is listening on port 8080:

# ss -ltn
- Install RHEL for Edge onto your system by using the RHEL for Edge Simplified Installer image.
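A common failure mode with a configuration like the one above is a key_path or cert_path that points at a file which does not exist, which only surfaces when the service starts. The sketch below extracts every key-directory path from a config and reports missing files; the toy config and temporary paths stand in for /etc/fdo/manufacturing-server.yml:

```shell
#!/bin/sh
# Sketch: verify that every key path referenced by a server config
# exists before starting the service. Toy config under mktemp.
set -eu
root=$(mktemp -d)
mkdir -p "$root/keys"
touch "$root/keys/diun_key.der"            # present
cfg="$root/manufacturing-server.yml"
cat > "$cfg" <<EOF
key_path: $root/keys/diun_key.der
cert_path: $root/keys/diun_cert.pem
EOF

# list each referenced path and whether it exists on disk
grep -o "$root/keys/[^\" ]*" "$cfg" | while read -r p; do
  if [ -e "$p" ]; then echo "ok:      $p"; else echo "missing: $p"; fi
done
```

Running such a check in a systemd ExecStartPre, or by hand before systemctl enable --now, turns a cryptic service failure into an explicit "missing" line.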
3.6. Installing, configuring, and running the Rendezvous server
Install the fdo-rendezvous-server RPM package to enable the systems to receive the voucher generated by the Manufacturing server during the first device boot. The Rendezvous server then matches the device UUID with the target platform or cloud and informs the device about which Owner server endpoint the device must use.
Prerequisites
- You created a manufacturer_cert.pem certificate.
- You copied the manufacturer_cert.pem certificate to the /etc/fdo/keys directory on the Rendezvous server.
Procedure
Install the fdo-rendezvous-server RPM package:

# dnf install -y fdo-rendezvous-server

Create the rendezvous-server.yml configuration file, including the path to the manufacturer certificate. You can find an example in /usr/share/doc/fdo/rendezvous-server.yml. The following example shows a configuration file that is saved in /etc/fdo/rendezvous-server.yml.

storage_driver:
  Directory:
    path: /etc/fdo/stores/rendezvous_registered
session_store_driver:
  Directory:
    path: /etc/fdo/stores/rendezvous_sessions
trusted_manufacturer_keys_path: /etc/fdo/keys/manufacturer_cert.pem
max_wait_seconds: ~
bind: "0.0.0.0:8082"

Check the Rendezvous server service status:
# systemctl list-unit-files | grep fdo | grep rende
fdo-rendezvous-server.service disabled disabled

If the service is stopped and disabled, enable and start it:

# systemctl enable --now fdo-rendezvous-server.service

Check that the server is listening on the default configured port 8082:

# ss -ltn

Open the port if you have a firewall configured on this server:

# firewall-cmd --add-port=8082/tcp --permanent
# systemctl restart firewalld
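The ss -ltn verification above is done by eye; if you want to script it, the listening port can be checked programmatically. The sketch below parses a captured sample of ss output rather than live output, so the sample lines and ports are illustrative:

```shell
#!/bin/sh
# Sketch: check whether a port appears in (sample) `ss -ltn` output.
# Replace the sample with $(ss -ltn) on a real host.
sample='State  Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0      128    0.0.0.0:8082       0.0.0.0:*
LISTEN 0      128    0.0.0.0:22         0.0.0.0:*'

port_open() {
  printf '%s\n' "$sample" |
    awk -v p=":$1" '$1 == "LISTEN" && index($4, p) { found = 1 } END { exit !found }'
}

port_open 8082 && echo "8082 listening"
port_open 8081 || echo "8081 not listening"
```

The substring match on ":port" is deliberately crude; an exact-field comparison would be needed if ports with shared prefixes (for example 22 and 2222) were both in play.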
3.7. Installing, configuring, and running the Owner server
Install the fdo-owner-cli and fdo-owner-onboarding-server RPM packages to deploy the Owner server. The Owner server receives the ownership voucher from the Manufacturing server and becomes the owner of the associated device. After the device authenticates, the Owner server establishes a secure channel with it and sends the files, scripts, and other information required for the onboarding automation.
Prerequisites
- The device where the server will be deployed has a Trusted Platform Module (TPM) device to encrypt the disk. If not, you will get an error when booting the RHEL for Edge device.
- You created the device_ca_cert.pem, owner_key.der, and owner_cert.pem keys and certificates and copied them into the /etc/fdo/keys directory.
Procedure
Install the required RPMs on this server:

# dnf install -y fdo-owner-cli fdo-owner-onboarding-server

Prepare the owner-onboarding-server.yml configuration file and save it to the /etc/fdo/ directory. Include the path to the certificates you already copied and information about where to publish the Owner server service in this file.

The following is an example available in /usr/share/doc/fdo/owner-onboarding-server.yml. You can find references to the Service Info API, such as the URL or the authentication token.

---
ownership_voucher_store_driver:
  Directory:
    path: /etc/fdo/stores/owner_vouchers
session_store_driver:
  Directory:
    path: /etc/fdo/stores/owner_onboarding_sessions
trusted_device_keys_path: /etc/fdo/keys/device_ca_cert.pem
owner_private_key_path: /etc/fdo/keys/owner_key.der
owner_public_key_path: /etc/fdo/keys/owner_cert.pem
bind: "0.0.0.0:8081"
service_info_api_url: "http://localhost:8083/device_info"
service_info_api_authentication:
  BearerToken:
    token: Kpt5P/5flBkaiNSvDYS3cEdBQXJn2Zv9n1D50431/lo=
owner_addresses:
  - transport: http
    addresses:
      - ip_address: 192.168.122.149

Create and configure the Service Info API.
Add the automated information for onboarding, such as user creation, files to be copied or created, commands to be executed, disks to be encrypted, and so on. Use the Service Info API configuration file example in /usr/share/doc/fdo/serviceinfo-api-server.yml as a template to create the configuration file under /etc/fdo/.

---
service_info:
  initial_user:
    username: admin
    sshkeys:
      - "ssh-rsa AAAA...."
  files:
    - path: /root/resolv.conf
      source_path: /etc/resolv.conf
  commands:
    - command: touch
      args:
        - /root/test
      return_stdout: true
      return_stderr: true
  diskencryption_clevis:
    - disk_label: /dev/vda4
      binding:
        pin: tpm2
        config: "{}"
      reencrypt: true
  additional_serviceinfo: ~
bind: "0.0.0.0:8083"
device_specific_store_driver:
  Directory:
    path: /etc/fdo/stores/serviceinfo_api_devices
service_info_auth_token: Kpt5P/5flBkaiNSvDYS3cEdBQXJn2Zv9n1D50431/lo=
admin_auth_token: zJNoErq7aa0RusJ1w0tkTjdITdMCWYkndzVv7F0V42Q=
Check the status of the systemd units:
# systemctl list-unit-files | grep fdo
fdo-owner-onboarding-server.service disabled disabled
fdo-serviceinfo-api-server.service disabled disabled

If the services are stopped and disabled, enable and start them:

# systemctl enable --now fdo-owner-onboarding-server.service
# systemctl enable --now fdo-serviceinfo-api-server.service

Note: You must restart the systemd services every time you change the configuration files.
Check that the servers are listening on the default configured ports 8081 and 8083:

# ss -ltn

Open the ports if you have a firewall configured on this server:

# firewall-cmd --add-port=8081/tcp --permanent
# firewall-cmd --add-port=8083/tcp --permanent
# systemctl restart firewalld
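The BearerToken, service_info_auth_token, and admin_auth_token values in the configuration examples above are simply shared random secrets. Assuming openssl is available, you can mint your own 32-byte base64 tokens rather than reusing the example values:

```shell
#!/bin/sh
# Generate a fresh shared secret for the Owner / Service Info API
# configs. 32 random bytes encode to a 44-character base64 string.
set -eu
token=$(openssl rand -base64 32)
echo "service_info_auth_token: $token"
echo "token length: ${#token}"
```

The same token must appear both in service_info_api_authentication in owner-onboarding-server.yml and as service_info_auth_token in serviceinfo-api-server.yml, or the Owner server cannot authenticate to the Service Info API.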
3.8. Automating RHEL for Edge device onboarding with FDO
Configure the FIDO Device Onboarding (FDO) authentication process to automatically onboard and provision a RHEL for Edge device during installation.
Prerequisites
- You built a customized image by using podman build.
- Your device is assembled.
- You installed the fdo-manufacturing-server RPM package. See Installing the manufacturing server package.
Procedure
- Start the installation process by booting the RHEL for Edge image on your device. You can install it from a CD-ROM or from a USB flash drive, for example.
Verify through the terminal that the device has reached the manufacturing service to perform the initial device credential exchange and has produced an ownership voucher.
You can find the ownership voucher at the storage location configured by the ownership_voucher_store_driver: parameter in the manufacturing-server.yml file.

The directory should contain an ownership_voucher file with a name in GUID format, which indicates that the correct device credentials were added to the device.

The device client uses the device credentials to authenticate against the onboarding server, which then passes the configuration to the device. After the device receives the configuration from the onboarding server, including an SSH key, the operating system is installed on the device. Finally, the system automatically reboots and encrypts the disk with a strong key stored in the TPM.
Verification
After the device automatically reboots, you can log in to the device with the credentials that you created as part of the FDO process.
- Log in to the device by providing the username and password you created in the Service Info API.
3.9. Deploying image mode for RHEL systems by using FDO
You can automate the deployment of image mode for RHEL systems by using FIDO Device Onboarding (FDO) to deliver secure configuration. By embedding a Kickstart file within your ISO build, you can configure various parts of the installation process, such as user accounts, custom partitioning, and SSH keys.
If you use an ISO with a bootc container base image, bootc-image-builder automatically adds ostreecontainer, the command that installs the container image. You can configure everything else in the Kickstart file, but not the ostreecontainer command itself.
Prerequisites
- You have Podman installed on your host machine.
- You have root access to run the bootc-image-builder tool and to run the containers in --privileged mode.
Procedure
Create a Containerfile, for example:

FROM registry.redhat.io/rhel10/rhel-bootc:latest
RUN dnf install -y fdo-init fdo-client
RUN systemctl enable fdo-client-linuxapp.service

Create your Kickstart file. The following Kickstart file is an example of a fully unattended configuration that contains user creation and partition instructions.

[customizations.installer.kickstart]
contents = """
text --non-interactive
zerombr
clearpart --all --initlabel --disklabel=gpt
autopart --noswap --type=lvm
user --name=test --groups=wheel --plaintext --password=test
sshkey --username=test "ssh-ed25519 AAA..."
network --bootproto=dhcp --device=link --activate --onboot=on
poweroff
%post
export MANUFACTURING_SERVER_URL="http://192.168……"
export DIUN_PUB_KEY_INSECURE="true"
/usr/libexec/fdo/fdo-manufacturing-client
%end
"""

In the MANUFACTURING_SERVER_URL export, replace the URL with your own manufacturing server URL.
- Save the Kickstart configuration in the .toml format to inject the Kickstart content. For example, config.toml.

Create the following folder:
$ mkdir $(pwd)/output

Run bootc-image-builder, and include the Kickstart file configuration that you want to add to the ISO build. The bootc-image-builder tool automatically adds the ostreecontainer command that installs the container image.

$ sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    -v $(pwd)/config.toml:/config.toml \
    -v $(pwd)/output:/output \
    registry.redhat.io/rhel10/bootc-image-builder:latest \
    --type iso \
    --config /config.toml \
    quay.io/<namespace>/<image>:<tag>

You can find the resulting .iso image in the output folder.
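A follow-up script often needs the exact path of the generated ISO. The sketch below locates it under the output directory; the bootiso/install.iso layout is an assumption about how bootc-image-builder names its artifact, and a stand-in file is created here so the snippet is runnable without a real build:

```shell
#!/bin/sh
# Sketch: find the ISO under the build output directory. A stand-in
# file replaces the real bootc-image-builder artifact.
set -eu
out=$(mktemp -d)
mkdir -p "$out/bootiso"
: > "$out/bootiso/install.iso"   # stand-in for the real artifact
iso=$(find "$out" -name '*.iso' | head -n 1)
echo "found: $iso"
```

Searching with find rather than hard-coding the subdirectory keeps the script working if the builder's output layout changes between versions.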