Composing, installing, and managing RHEL for Edge images
Creating, deploying, and managing Edge systems with Red Hat Enterprise Linux 9
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Introduction to RHEL for Edge images
A RHEL for Edge image is an rpm-ostree image that includes system packages to remotely install RHEL on Edge servers.
The system packages include:
- Base OS package
- Podman as the container engine
- Additional RPM Package Manager (RPM) content
RHEL for Edge is an immutable operating system that contains a read-only root directory and has the following characteristics:
- The packages are isolated from the root directory.
- Each version of the operating system is a separate deployment. Therefore, you can roll back the system to a previous deployment when needed.
- The rpm-ostree image offers efficient updates over the network.
- RHEL for Edge supports multiple operating system branches and repositories.
- The image contains a hybrid rpm-ostree package system.
You can compose customized RHEL for Edge images by using the RHEL image builder tool. You can also create RHEL for Edge images by accessing the edge management application in the Red Hat Hybrid Cloud Console platform and configuring automated management.
Use the edge management application to simplify provisioning and registering your images. To learn more about edge management, see the Create RHEL for Edge images and configure automated management documentation.
The edge management application does not support customized RHEL for Edge images that were created by using the on-premise version of RHEL image builder. See Edge management supportability.
The edge management application supports building and managing only the edge-commit and edge-installer image types.
Additionally, you cannot use the FIDO Device Onboarding (FDO) process with images that you create by using the edge management application.
With a RHEL for Edge image, you can achieve the following benefits:
- Atomic upgrades
- You know the state of each update, and no changes are seen until you reboot your system.
- Custom health checks and intelligent rollbacks
- You can create custom health checks, and if a health check fails, the operating system rolls back to the previous stable state.
- Container-focused workflow
- The image updates are staged in the background, minimizing any workload interruptions to the system.
- Optimized Over-the-Air updates
- You can make sure that your systems are up-to-date, even with intermittent connectivity, thanks to efficient over-the-air (OTA) delta updates.
1.1. RHEL for Edge-supported architecture
Currently, you can deploy RHEL for Edge images on AMD and Intel 64-bit systems.
RHEL for Edge supports ARM systems on some devices. For more information about the supported devices, see Red Hat certified hardware.
1.2. RHEL for Edge image types and their deployments
Composing and deploying a RHEL for Edge image involves two phases:
- Composing a RHEL rpm-ostree image using the RHEL image builder tool. You can access RHEL image builder through a command-line interface in the composer-cli tool, or use a graphical user interface in the RHEL web console.
- Deploying the image by using RHEL installer.
The image types vary in terms of their contents, and are therefore suitable for different types of deployment environments. While composing a RHEL for Edge image, you can select any of the following image types:
- RHEL for Edge Commit
This image type delivers atomic and safe updates to a system. The edge-commit (.tar) image contains a full operating system, but it is not directly bootable. To boot the edge-commit image type, you must deploy it by using one of the other disk image types. You can also build edge-commit images on the edge management application.
- RHEL for Edge Container
This image type serves the OSTree commits by using an integrated HTTP server. The edge-container creates an OSTree commit and embeds it into an OCI container with a web server. When the edge-container image starts, the web server serves the commit as an OSTree repository.
- RHEL for Edge Installer
The edge-installer image type is an Anaconda-based installer image that deploys a RHEL for Edge OSTree commit that is embedded in the installer. Besides building .iso images by using the RHEL image builder tool, you can also build RHEL for Edge Installer (edge-installer) images on the edge management application.
- RHEL for Edge Raw Image
Use this image type for bare-metal platforms by flashing the Raw image onto a hard disk, or boot it on a virtual machine. The edge-raw-image is a compressed raw image that consists of a file containing a partition layout with an existing deployed OSTree commit in it.
- RHEL for Edge Simplified Installer
Use the edge-simplified-installer image type for unattended installations, where the user configuration is provided through FDO or Ignition. The image can use either Ignition or FDO to inject the user configuration at an early stage of the boot process. After booting, the Edge Simplified Installer provisions the RHEL for Edge image to a device with the injected user configuration.
- RHEL for Edge AMI
Use this image to launch an EC2 instance in the AWS cloud. The edge-ami image uses the Ignition tool to inject the user configuration into the image at an early stage of the boot process. You can upload the .ami image to AWS and boot an EC2 instance in AWS.
- RHEL for Edge VMDK
Use this image to load on vSphere and boot in a vSphere virtual machine. The edge-vsphere image uses the Ignition tool to inject the user configuration into the image at an early stage of the boot process.
Image type | File type | Suitable for network-based deployments | Suitable for non-network-based deployments |
---|---|---|---|
RHEL for Edge Commit | .tar | Yes | No |
RHEL for Edge Container | .tar | No | Yes |
RHEL for Edge Installer | .iso | No | Yes |
RHEL for Edge Raw Image | .raw.xz | Yes | Yes |
RHEL for Edge Simplified Installer | .iso | Yes | Yes |
RHEL for Edge AMI | .ami | Yes | Yes |
RHEL for Edge VMDK | .vmdk | Yes | Yes |
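If you are unsure which of these image types your installed RHEL image builder supports, you can list them from the command line. This is a quick check; the available types depend on the installed osbuild-composer version and the host architecture:
# composer-cli compose types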
Additional resources
1.3. Non-network-based deployments
Use RHEL image builder to create flexible RHEL rpm-ostree images to suit your requirements, and then use Anaconda to deploy them in your environment.
You can access RHEL image builder through a command-line interface in the composer-cli tool, or use a graphical user interface in the RHEL web console.
Composing and deploying a RHEL for Edge image in non-network-based deployments involves the following high-level steps:
- Install and register a RHEL system
- Install RHEL image builder
- Using RHEL image builder, create a blueprint with customizations for RHEL for Edge Container image
- Import the RHEL for Edge blueprint in RHEL image builder
- Create a RHEL for Edge image embedded in an OCI container with a web server ready to deploy the commit as an OSTree repository
- Download the RHEL for Edge Container image file
- Deploy the container serving a repository with the RHEL for Edge Container commit
- Using RHEL image builder, create another blueprint for RHEL for Edge Installer image
- Create a RHEL for Edge Installer image configured to pull the commit from the running container embedded with RHEL for Edge Container image
- Download the RHEL for Edge Installer image
- Run the installation
The following diagram represents the RHEL for Edge image non-network deployment workflow:
Figure 1.1. Deploying RHEL for Edge in a non-network environment
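As a rough sketch of the commands behind these steps, the non-network workflow typically looks like the following. The blueprint names, UUIDs, image ID, URL, and port are placeholders, and the full procedures are covered in the later chapters of this guide:
# composer-cli compose start-ostree --ref rhel/9/x86_64/edge example-blueprint edge-container
# composer-cli compose image <UUID>
# podman load -i <UUID>-container.tar
# podman run -d --rm -p 8080:8080 <image-id>
# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url http://10.0.2.2:8080/repo/ example-installer edge-installer
# composer-cli compose image <UUID>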
1.4. Network-based deployments
Use RHEL image builder to create flexible RHEL rpm-ostree images to suit your requirements, and then use Anaconda to deploy them in your environment. RHEL image builder automatically identifies the details of your deployment setup and generates the image output as an edge-commit as a .tar file.
You can access RHEL image builder through a command-line interface in the composer-cli tool, or use a graphical user interface in the RHEL web console.
You can compose and deploy the RHEL for Edge image by performing the following high-level steps:
For an attended installation
- Install and register a RHEL system
- Install RHEL image builder
- Using RHEL image builder, create a blueprint for RHEL for Edge image
- Import the RHEL for Edge blueprint in RHEL image builder
- Create a RHEL for Edge Commit (.tar) image
- Download the RHEL for Edge image file
- On the same system where you have installed RHEL image builder, install a web server that you want to use to serve the RHEL for Edge Commit content. For instructions, see Setting up and configuring NGINX
- Extract the RHEL for Edge Commit (.tar) content to the running web server
- Create a Kickstart file that pulls the OSTree content from the running web server. For details on how to modify the Kickstart to pull the OSTree content, see Extracting the RHEL for Edge image commit
- Boot the RHEL installer ISO on the edge device and provide the Kickstart to it.
For an unattended installation, you can customize the RHEL installation ISO and embed the Kickstart file into it.
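For example, a minimal Kickstart snippet that deploys the commit from the web server might look like the following sketch. The URL and ref are placeholders for your environment, and ostreesetup is the Kickstart command that deploys an OSTree commit:
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart
reboot
ostreesetup --nogpg --osname=rhel --remote=edge --url=http://10.0.2.2:8080/repo/ --ref=rhel/9/x86_64/edge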
The following diagram represents the RHEL for Edge network image deployment workflow:
Figure 1.2. Deploying RHEL for Edge in a network-based environment
1.5. Difference between RHEL RPM images and RHEL for Edge images
You can create RHEL system images in the traditional package-based RPM format and also as RHEL for Edge (rpm-ostree) images.
You can use the traditional package-based RPMs to deploy RHEL in traditional data centers. However, with RHEL for Edge images you can deploy RHEL on servers other than traditional data centers: the Edge servers, where large amounts of data are processed close to the source where the data is generated.
The RHEL for Edge (rpm-ostree) images are not a package manager. They only support complete bootable file system trees, not individual files. These images do not carry information about the individual files, such as how the files were generated or anything related to their origin.
The rpm-ostree images need a separate mechanism, the package manager, to install additional applications in the /var directory. With that, the rpm-ostree image keeps the operating system unchanged, while maintaining the state of the /var and /etc directories. The atomic updates enable rollbacks and background staging of updates.
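You can see this deployment model in practice on a running RHEL for Edge system with the rpm-ostree tool. For example, rpm-ostree status lists the booted and staged deployments, and rpm-ostree rollback switches the default boot entry back to the previous deployment; the exact output varies by system:
# rpm-ostree status
# rpm-ostree rollback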
Refer to the following table to know how RHEL for Edge images differ from the package-based RHEL RPM images.
Key attributes | RHEL RPM image | RHEL for Edge image |
---|---|---|
OS assembly | You can assemble the packages locally to form an image. | The packages are assembled in an OSTree which you can install on a system. |
OS updates | You can use dnf update to apply updates from the enabled repositories. | You can use rpm-ostree upgrade to stage an update from the OSTree remote repository. The update takes effect on reboot. |
Repository | The package contains DNF repositories | The package contains OSTree remote repository |
User access permissions | Read write | Read-only (/usr) |
Data persistence | You can mount the image to any non-tmpfs mount point | /etc and /var are read-write enabled and include persisting data |
Chapter 2. Setting up RHEL image builder
Use RHEL image builder to create your customized RHEL for Edge images. After you install RHEL image builder on a RHEL system, RHEL image builder is available as an application in RHEL web console. You can also access RHEL image builder through a command-line interface in the composer-cli tool.
It is recommended to install RHEL image builder on a virtual machine.
2.1. Image builder system requirements
The environment where RHEL image builder runs, for example a virtual machine, must meet the requirements that are listed in the following table.
Running RHEL image builder inside a container is not supported.
Parameter | Minimal Required Value |
System type | A dedicated virtual machine |
Processor | 2 cores |
Memory | 4 GiB |
Disk space | 20 GiB |
Access privileges | Administrator level (root) |
Network | Connectivity to Internet |
The 20 GiB disk space requirement is enough to install and run RHEL image builder in the host. To build and deploy image builds, you must allocate additional dedicated disk space.
2.2. Installing RHEL image builder
To install RHEL image builder on a dedicated virtual machine, follow these steps:
Prerequisites
- The virtual machine is created and is powered on.
- You have installed RHEL and you have subscribed to RHSM or Red Hat Satellite.
- You have enabled the BaseOS and AppStream repositories to be able to install the RHEL image builder packages.
Procedure
Install the following packages on the virtual machine.
- osbuild-composer
- composer-cli
- cockpit-composer
- bash-completion
- firewalld
# dnf install osbuild-composer composer-cli cockpit-composer bash-completion firewalld
RHEL image builder is installed as an application in RHEL web console.
- Reboot the virtual machine
Configure the system firewall to allow access to the web console:
# firewall-cmd --add-service=cockpit && firewall-cmd --add-service=cockpit --permanent
Enable RHEL image builder.
# systemctl enable osbuild-composer.socket cockpit.socket --now
The osbuild-composer and cockpit services start automatically on first access.
Load the shell configuration script so that the autocomplete feature for the composer-cli command starts working immediately without reboot:
$ source /etc/bash_completion.d/composer-cli
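Optionally, you can confirm that the osbuild-composer API socket is responding before you continue. This is a quick sanity check; the exact output differs between versions:
# composer-cli status show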
Additional resources
Chapter 3. Configuring RHEL image builder repositories
To use RHEL image builder, you must ensure that the repositories are configured. You can use the following types of repositories in RHEL image builder:
- Official repository overrides
- Use these if you want to download base system RPMs from elsewhere than the Red Hat Content Delivery Network (CDN) official repositories, for example, a custom mirror in your network. Using official repository overrides disables the default repositories, and your custom mirror must contain all the necessary packages.
- Custom third-party repositories
- Use these to include packages that are not available in the official RHEL repositories.
3.1. Adding custom third-party repositories to RHEL image builder
You can add custom third-party sources to your repositories and manage these repositories by using composer-cli.
Prerequisites
- You have the URL of the custom third-party repository.
Procedure
Create a repository source file, such as /root/repo.toml. For example:
id = "k8s"
name = "Kubernetes"
type = "yum-baseurl"
url = "https://server.example.com/repos/company_internal_packages/"
check_gpg = false
check_ssl = false
system = false
The type field accepts the following valid values: yum-baseurl, yum-mirrorlist, and yum-metalink.
- Save the file in the TOML format.
Add the new third-party source to RHEL image builder:
$ composer-cli sources add <file-name>.toml
Verification
Check if the new source was successfully added:
$ composer-cli sources list
Check the new source content:
$ composer-cli sources info <source_id>
3.2. Adding third-party repositories with specific distributions to RHEL image builder
You can specify a list of distributions in the custom third-party source file by using the optional field distro. The repository file uses the distribution string list while resolving dependencies during the image building.
Any request that specifies rhel-9 uses this source. For example, if you list packages and specify rhel-9, it includes this source. However, listing packages for the host distribution does not include this source.
Prerequisites
- You have the URL of the custom third-party repository.
- You have the list of distributions that you want to specify.
Procedure
Create a repository source file, such as /root/repo.toml. For example, to specify the distribution:
check_gpg = true
check_ssl = true
distros = ["rhel-9"]
id = "rh9-local"
name = "packages for RHEL"
system = false
type = "yum-baseurl"
url = "https://local/repos/rhel9/projectrepo/"
- Save the file in the TOML format.
Add the new third-party source to RHEL image builder:
$ composer-cli sources add <file-name>.toml
Verification
Check if the new source was successfully added:
$ composer-cli sources list
Check the new source content:
$ composer-cli sources info <source_id>
3.3. Checking repositories metadata with GPG
To detect and avoid corrupted packages, you can use the DNF package manager to check the GNU Privacy Guard (GPG) signature on RPM packages, and also to check if the repository metadata has been signed with a GPG key.
If the gpgkey that you want to use for the check is available over https, you can set the gpgkeys field to the key URL. Alternatively, to improve security, you can embed the whole key into the gpgkeys field to import it directly instead of fetching the key from the URL.
Prerequisites
- The directory that you want to use as a repository exists and contains packages.
Procedure
Access the folder where you want to create a repository:
$ cd repo/
Run createrepo_c to create a repository from RPM packages:
$ createrepo_c .
Access the directory where the repodata is:
$ cd repodata/
Sign your repomd.xml file:
$ gpg -u <_gpg-key-email_> --yes --detach-sign --armor /srv/repo/example/repomd.xml
To enable GPG signature checks in the repository:
- Set check_repogpg = true in the repository source.
- Enter the gpgkey that you want to use for the check. If your key is available over https, set the gpgkeys field with the key URL. You can add as many key URLs as you need.
The following is an example:
check_gpg = true
check_ssl = true
id = "signed local packages"
name = "repository_name"
type = "yum-baseurl"
url = "https://local/repos/projectrepo/"
check_repogpg = true
gpgkeys = ["https://local/keys/repokey.pub"]
As an alternative, add the GPG key directly in the gpgkeys field, for example:
check_gpg = true
check_ssl = true
check_repogpg = true
id = "custom-local"
name = "signed local packages"
type = "yum-baseurl"
url = "https://local/repos/projectrepo/"
gpgkeys = ["https://remote/keys/other-repokey.pub", '''-----BEGIN PGP PUBLIC KEY BLOCK-----
…
-----END PGP PUBLIC KEY BLOCK-----''']
If the test does not find the signature, the GPG tool shows an error similar to the following one:
$ GPG verification is enabled, but GPG signature is not available. This may be an error or the repository does not support GPG verification: Status code: 404 for http://repo-server/rhel/repodata/repomd.xml.asc (IP: 192.168.1.3)
If the signature is invalid, the GPG tool shows an error similar to the following one:
repomd.xml GPG signature verification error: Bad GPG signature
Verification
Test the signature of the repository manually:
$ gpg --verify /srv/repo/example/repomd.xml.asc
3.4. RHEL image builder official repository overrides
The RHEL image builder osbuild-composer back end does not inherit the system repositories located in the /etc/yum.repos.d/ directory. Instead, it has its own set of official repositories defined in the /usr/share/osbuild-composer/repositories directory. This includes the Red Hat official repository, which contains the base system RPMs to install additional software or update already installed programs to newer versions. If you want to override the official repositories, you must define overrides in /etc/osbuild-composer/repositories/. This directory is for user-defined overrides, and the files located there take precedence over those in the /usr/share/osbuild-composer/repositories/ directory.
The configuration files are not in the usual DNF repository format known from the files in /etc/yum.repos.d/. Instead, they are JSON files.
3.5. Overriding a system repository
You can configure your own repository override for RHEL image builder in the /etc/osbuild-composer/repositories directory.
Prerequisites
- You have a custom repository that is accessible from your host system.
Procedure
Create the /etc/osbuild-composer/repositories/ directory to store your repository overrides:
$ sudo mkdir -p /etc/osbuild-composer/repositories
Create a JSON file, using a name corresponding to your RHEL version. Alternatively, you can copy the file for your distribution from /usr/share/osbuild-composer/ and modify its content.
For RHEL 9.3, use /etc/osbuild-composer/repositories/rhel-93.json.
Add the following structure to your JSON file. Specify only one of the following attributes, in the string format:
- baseurl - The base URL of the repository.
- metalink - The URL of a metalink file that contains a list of valid mirror repositories.
- mirrorlist - The URL of a mirrorlist file that contains a list of valid mirror repositories.
The remaining fields, such as gpgkey and metadata_expire, are optional.
For example:
{ "x86_64": [ { "name": "baseos", "baseurl": "http://mirror.example.com/composes/released/RHEL-9/9.0/BaseOS/x86_64/os/", "gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\n (…)", "check_gpg": true } ] }
Alternatively, you can copy the JSON file for your distribution, by replacing rhel-version.json with your RHEL version, for example: rhel-9.json.
$ cp /usr/share/osbuild-composer/repositories/rhel-version.json /etc/osbuild-composer/repositories/
- Optional: Verify the JSON file:
$ json_verify < /etc/osbuild-composer/repositories/rhel-version.json
Edit the baseurl paths in the rhel-9.json file and save it. For example:
$ /etc/osbuild-composer/repositories/rhel-version.json
Restart the osbuild-composer.service:
$ sudo systemctl restart osbuild-composer.service
Verification
Check if the repository points to the correct URLs:
$ cat /etc/yum.repos.d/redhat.repo
You can see that the repository points to the correct URLs, which are copied from the /etc/yum.repos.d/redhat.repo file.
Additional resources
- The latest RPMs version available in repository not visible for osbuild-composer (Red Hat Knowledgebase)
3.6. Overriding a system repository that requires subscriptions
You can set up the osbuild-composer service to use system subscriptions that are defined in the /etc/yum.repos.d/redhat.repo file. To use a system subscription in osbuild-composer, define a repository override that has the following details:
- The same baseurl as the repository defined in /etc/yum.repos.d/redhat.repo.
- The value of "rhsm": true defined in the JSON object.
Note: osbuild-composer does not automatically use repositories defined in /etc/yum.repos.d/. You need to manually specify them either as a system repository override or as an additional source by using composer-cli. The "BaseOS" and "AppStream" repositories usually use system repository overrides, whereas all the other repositories use composer-cli sources.
Prerequisites
-
Your system has a subscription defined in
/etc/yum.repos.d/redhat.repo
- You have created a repository override. See Overriding a system repository.
Procedure
Get the baseurl from the /etc/yum.repos.d/redhat.repo file:
# cat /etc/yum.repos.d/redhat.repo
[AppStream]
name = AppStream mirror example
baseurl = https://mirror.example.com/RHEL-9/9.0/AppStream/x86_64/os/
enabled = 1
gpgcheck = 0
sslverify = 1
sslcacert = /etc/pki/ca1/ca.crt
sslclientkey = /etc/pki/ca1/client.key
sslclientcert = /etc/pki/ca1/client.crt
metadata_expire = 86400
enabled_metadata = 0
Configure the repository override to use the same baseurl and set rhsm to true:
{
  "x86_64": [
    {
      "name": "AppStream mirror example",
      "baseurl": "https://mirror.example.com/RHEL-9/9.0/AppStream/x86_64/os/",
      "gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\n (…)",
      "check_gpg": true,
      "rhsm": true
    }
  ]
}
Restart the osbuild-composer.service:
$ sudo systemctl restart osbuild-composer.service
Additional resources
- RHEL image builder uses CDN repositories when host is registered to Satellite 6 (Red Hat Knowledgebase)
3.7. Configuring and using Satellite CV as a content source
You can use Satellite's content views (CV) as repositories to build images with RHEL image builder. To do so, on your host registered to Satellite, manually configure the repository references so that content is retrieved from the Satellite repositories instead of the Red Hat Content Delivery Network (CDN) official repositories.
Prerequisites
- You are using RHEL image builder on a host registered to Satellite 6.
Procedure
Find the repository URL from your currently configured repositories:
$ sudo yum -v repolist rhel-8-for-x86_64-baseos-rpms | grep repo-baseurl
The following output is an example:
Repo-baseurl : https://satellite6.example.com/pulp/content/YourOrg/YourEnv/YourCV/content/dist/rhel8/8/x86_64/baseos/os
Modify the hard-coded repositories to a Satellite Server.
Create a repository directory with the 0755 permission:
$ sudo mkdir -pvm 0755 /etc/osbuild-composer/repositories
Copy the content from /usr/share/osbuild-composer/repositories/*.json to the directory that you created:
$ sudo cp /usr/share/osbuild-composer/repositories/*.json /etc/osbuild-composer/repositories/
Update the Satellite URL and the file contents through the /content/dist/* line:
$ sudo sed -i -e 's|cdn.redhat.com|satellite6.example.com/pulp/content/YourOrg/YourEnv/YourCV|' /etc/osbuild-composer/repositories/*.json
Verify that the configuration was correctly replaced:
$ sudo vi /etc/osbuild-composer/repositories/rhel-8.json
Restart the services:
$ sudo systemctl restart osbuild-worker@1.service osbuild-composer.service
- Override the required system repository in Red Hat image builder configuration and use the URL of your Satellite repository as a baseurl. See Overriding a system repository.
Additional resources
- Composer RHEL image builder fails when multiple custom repositories are defined on the Satellite (Red Hat Knowledgebase)
3.8. Using Satellite CV as repositories to build images in RHEL image builder
Configure RHEL image builder to use Satellite’s content views (CV) as repositories to build your custom images.
Prerequisites
- You have integrated Satellite with RHEL web console. See Enabling the RHEL web console on Satellite
Procedure
- In the Satellite web UI, navigate to Content > Products, select your Product and click the repository you want to use.
- Search for the secured URL (HTTPS) in the Published field and copy it.
- Use the URL that you copied as a baseurl for the Red Hat image builder repository. See Adding custom third-party repositories to RHEL image builder.
Next steps
- Build the image. See Creating a system image by using RHEL image builder in the web console interface.
Chapter 4. Composing a RHEL for Edge image using image builder in RHEL web console
Use RHEL image builder to create a custom RHEL for Edge image (OSTree commit).
To access RHEL image builder and to create your custom RHEL for Edge image, you can either use the RHEL web console interface or the command-line interface.
You can compose RHEL for Edge images using RHEL image builder in RHEL web console by performing the following high-level steps:
- Access RHEL image builder in RHEL web console
- Create a blueprint for RHEL for Edge image.
Create a RHEL for Edge image. You can create the following images:
- RHEL for Edge Commit image.
- RHEL for Edge Container image.
- RHEL for Edge Installer image.
- Download the RHEL for Edge image
4.1. Accessing RHEL image builder in the RHEL web console
To access RHEL image builder in RHEL web console, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- You have installed a RHEL system.
- You have administrative rights on the system.
- You have subscribed the RHEL system to Red Hat Subscription Manager (RHSM) or to Red Hat Satellite Server.
- The system is powered on and accessible over network.
- You have installed RHEL image builder on the system.
Procedure
- On your RHEL system, access https://localhost:9090/ in a web browser.
- For more information about how to remotely access RHEL image builder, see Managing systems using the RHEL 9 web console document.
- Log in to the web console using an administrative user account.
- On the web console, in the left hand menu, click .
Click
The RHEL image builder dashboard opens in the right pane. You can now proceed to create a blueprint for the RHEL for Edge images.
4.2. Creating a blueprint for a RHEL for Edge image using image builder in the web console
To create a blueprint for a RHEL for Edge image by using RHEL image builder in RHEL web console, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- On a RHEL system, you have opened the RHEL image builder dashboard.
Procedure
On the RHEL image builder dashboard, click .
The Create Blueprint dialogue box opens.
On the Details page:
- Enter the name of the blueprint and, optionally, its description. Click .
Optional: On the Packages page:
- On the Available packages search, enter the package name and click the button to move it to the Chosen packages field. Search and include as many packages as you want. Click .
Note: These customizations are all optional unless otherwise specified.
- On the Kernel page, enter a kernel name and the command-line arguments.
- On the File system page, select Use automatic partitioning. OSTree systems do not support filesystem customization, because OSTree images have their own mount rule, such as read-only. Click .
On the Services page, you can enable or disable services:
- Enter the service names you want to enable or disable, separating them by a comma, by space, or by pressing the key. Click .
On the Firewall page, set up your firewall setting:
- Enter the Ports, and the firewall services you want to enable or disable.
- Click the button to manage your firewall rules for each zone independently. Click .
On the Users page, add users by following the steps:
- Click .
- Enter a Username, a password, and an SSH key. You can also mark the user as a privileged user, by clicking the Server administrator checkbox. Click .
On the Groups page, add groups by completing the following steps:
- Click the button.
- Enter a Group name and a Group ID. You can add more groups. Click .
On the SSH keys page, add a key:
- Click the button.
- Enter the SSH key.
- Enter a User. Click .
On the Timezone page, set your timezone settings:
- On the Timezone field, enter the timezone you want to add to your system image. For example, add the following timezone format: "US/Eastern".
If you do not set a timezone, the system uses Coordinated Universal Time (UTC) as the default.
- Enter the NTP servers. Click .
On the Locale page, complete the following steps:
- On the Keyboard search field, enter the keyboard layout you want to add to your system image. For example: "us".
- On the Languages search field, enter the language you want to add to your system image. For example: ["en_US.UTF-8"]. Click .
On the Others page, complete the following steps:
- On the Hostname field, enter the hostname you want to add to your system image. If you do not add a hostname, the operating system determines the hostname.
- Mandatory only for the Simplified Installer image: On the Installation Devices field, enter a valid node for your system image. For example: /dev/sda. Click .
Mandatory only when building FIDO images: On the FIDO device onboarding page, complete the following steps:
- On the Manufacturing server URL field, enter the following information:
- On the DIUN public key insecure field, enter the insecure public key.
- On the DIUN public key hash field, enter the public key hash.
- On the DIUN public key root certs field, enter the public key root certs. Click .
On the OpenSCAP page, complete the following steps:
- On the Datastream field, enter the datastream remediation instructions you want to add to your system image.
- On the Profile ID field, enter the profile_id security profile you want to add to your system image. Click .
Mandatory only when building Ignition images: On the Ignition page, complete the following steps:
- On the Firstboot URL field, enter the URL of the Ignition configuration to fetch at first boot.
- On the Embedded Data field, drag or upload your file. Click .
On the Review page, review the details about the blueprint. Click .
The RHEL image builder view opens, listing existing blueprints.
4.3. Creating a RHEL for Edge Commit image by using image builder in web console
You can create a “RHEL for Edge Commit” image by using RHEL image builder in RHEL web console. The “RHEL for Edge Commit (.tar)” image type contains a full operating system, but it is not directly bootable. To boot the Commit image type, you must deploy it in a running container.
Prerequisites
- On a RHEL system, you have accessed the RHEL image builder dashboard.
Procedure
- On the RHEL image builder dashboard click .
On the Image output page, perform the following steps:
- From the Select a blueprint dropdown menu, select the blueprint you want to use.
- From the Image output type dropdown list, select “RHEL for Edge Commit (.tar)”.
- Click .
On the OSTree settings page, enter:
- Repository URL: specify the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/.
- Parent commit: specify a previous commit, or leave it empty if you do not have a commit at this time.
- In the Ref text box, specify a reference path for where your commit is going to be created. By default, the web console specifies rhel/9/$ARCH/edge. The "$ARCH" value is determined by the host machine. Click .
On the Review page, check the customizations and click .
RHEL image builder starts to create a RHEL for Edge Commit image for the blueprint that you created.
Note: The image creation process takes up to 20 minutes to complete.
Verification
To check the RHEL for Edge Commit image creation progress:
- Click the tab.
After the image creation process is complete, you can download the resulting “RHEL for Edge Commit (.tar)” image.
Additional resources
4.4. Creating a RHEL for Edge Container image by using RHEL image builder in RHEL web console
You can create RHEL for Edge images by selecting “RHEL for Edge Container (.tar)”. The RHEL for Edge Container (.tar) image type creates an OSTree commit and embeds it into an OCI container with a web server. When the container is started, the web server serves the commit as an OSTree repository.
Follow the steps in this procedure to create a RHEL for Edge Container image using image builder in RHEL web console.
Prerequisites
- On a RHEL system, you have accessed the RHEL image builder dashboard.
- You have created a blueprint.
Procedure
- On the RHEL image builder dashboard click .
- On the Image output page, perform the following steps:
From the Select a blueprint dropdown menu, select the blueprint you want to use.
- From the Image output type dropdown list, select “RHEL for Edge Container (.tar)”.
- Click .
On the OSTree page, enter:
Repository URL: specify the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. By default, the repository folder for a RHEL for Edge Container image is "/repo".
To find the correct URL to use, access the running container and check the nginx.conf file. Inside the nginx.conf file, find the root directory entry to search for the /repo/ folder information. Note that if you do not specify a repository URL when creating a RHEL for Edge Container image (.tar) by using RHEL image builder, the default /repo/ entry is created in the nginx.conf file.
- Parent commit: specify a previous commit, or leave it empty if you do not have a commit at this time.
- In the Ref text box, specify a reference path for where your commit is going to be created. By default, the web console specifies rhel/9/$ARCH/edge. The "$ARCH" value is determined by the host machine. Click .
- On the Review page, check the customizations. Click .
Click
RHEL image builder starts to create a RHEL for Edge Container image for the blueprint that you created.
Note: The image creation process takes up to 20 minutes to complete.
Verification
To check the RHEL for Edge Container image creation progress:
- Click the tab.
After the image creation process is complete, you can download the resulting “RHEL for Edge Container (.tar)” image.
Additional resources
4.5. Creating a RHEL for Edge Installer image by using image builder in RHEL web console
You can create RHEL for Edge Installer images by selecting RHEL for Edge Installer (.iso). The RHEL for Edge Installer (.iso) image type pulls the OSTree commit repository from the running container served by the RHEL for Edge Container (.tar) and creates an installable boot ISO image with a Kickstart file that is configured to use the embedded OSTree commit.
Follow the steps in this procedure to create a RHEL for Edge image using image builder in RHEL web console.
Prerequisites
- On a RHEL system, you have accessed the image builder dashboard.
- You created a blueprint.
- You created a RHEL for Edge Container image and loaded it into a running container. See Creating a RHEL for Edge Container image for non-network-based deployments.
Procedure
- On the RHEL image builder dashboard click .
On the Image output page, perform the following steps:
- From the Select a blueprint dropdown menu, select the blueprint you want to use.
- From the Image output type dropdown list, select RHEL for Edge Installer (.iso) image.
- Click .
On the OSTree settings page, enter:
- Repository URL: specify the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/.
- In the Ref text box, specify a reference path for where your commit is going to be created. By default, the web console specifies rhel/9/$ARCH/edge. The "$ARCH" value is determined by the host machine. Click .
- On the Review page, check the customizations. Click .
Click
RHEL image builder starts to create a RHEL for Edge Installer image for the blueprint that you created.
Note: The image creation process takes up to 20 minutes to complete.
Verification
To check the RHEL for Edge Installer image creation progress:
- Click the tab.
After the image creation process is complete, you can download the resulting RHEL for Edge Installer (.iso) image and boot the ISO image on a device.
Additional resources
4.6. Downloading a RHEL for Edge image
After you successfully create the RHEL for Edge image by using RHEL image builder, download the image on the local host.
Procedure
To download an image:
From the More Options menu, click .
The RHEL image builder tool downloads the file to your default download location.
The downloaded file consists of a .tar file with an OSTree repository for RHEL for Edge Commit and RHEL for Edge Container images, or an .iso file with an OSTree repository for RHEL for Edge Installer images. This repository contains the commit and a json file with metadata about the repository content.
4.7. Additional resources
Chapter 5. Composing a RHEL for Edge image using image builder command-line
You can use image builder to create a customized RHEL for Edge image (OSTree commit).
To access image builder and to create your custom RHEL for Edge image, you can either use the RHEL web console interface or the command-line interface.
For Network-based deployments, the workflow to compose RHEL for Edge images using the CLI, involves the following high-level steps:
- Create a blueprint for RHEL for Edge image
- Create a RHEL for Edge Commit image
- Download the RHEL for Edge Commit image
For Non-Network-based deployments, the workflow to compose RHEL for Edge images using the CLI, involves the following high-level steps:
- Create a blueprint for RHEL for Edge image
- Create a blueprint for the RHEL for Edge Installer image
- Create a RHEL for Edge Container image
- Create a RHEL for Edge Installer image
- Download the RHEL for Edge image
To perform the steps, use the composer-cli package.
To run the composer-cli commands as non-root, you must be part of the weldr group or you must have administrator access to the system.
5.1. Network-based deployments workflow
This section provides steps on how to build OSTree commits. These OSTree commits contain a full operating system, but are not directly bootable. To boot them, you need to deploy them by using a Kickstart file.
5.1.1. Creating a RHEL for Edge Commit image blueprint using image builder command-line interface
Create a blueprint for RHEL for Edge Commit image using the CLI.
Prerequisite
You do not have an existing blueprint. To verify that, list the existing blueprints:
$ sudo composer-cli blueprints list
Procedure
Create a plain text file in the TOML format, with the following content:
name = "blueprint-name" description = "blueprint-text-description" version = "0.0.1" modules = [ ] groups = [ ]
Where,
- blueprint-name is the name and blueprint-text-description is the description for your blueprint.
- 0.0.1 is the version number according to the Semantic Versioning scheme.
Modules describe the package name and matching version glob to be installed into the image, for example, the package name = "tmux" and the matching version glob is version = "2.9a".
Notice that currently there are no differences between packages and modules.
Groups are package groups to be installed into the image, for example, the anaconda-tools group package.
At this time, if you do not know the modules and groups, leave them empty.
Include the required packages and customize the other details in the blueprint to suit your requirements.
For every package that you want to include in the blueprint, add the following lines to the file:
[[packages]]
name = "package-name"
version = "package-version"
Where,
- package-name is the name of the package, such as httpd, gdb-doc, or coreutils.
package-version is the version number of the package that you want to use.
The package-version supports the following dnf version specifications:
- For a specific version, use the exact version number such as 9.0.
- For the latest available version, use the asterisk *.
- For the latest minor version, use formats such as 9.*.
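For example, a complete small blueprint that pulls in the tmux and vim-enhanced packages (the package choices here are only illustrative) could look like this:
name = "edge-example"
description = "Example RHEL for Edge blueprint"
version = "0.0.1"
modules = []
groups = []

[[packages]]
name = "tmux"
version = "*"

[[packages]]
name = "vim-enhanced"
version = "*"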
Push (import) the blueprint to the RHEL image builder server:
# composer-cli blueprints push blueprint-name.toml
List the existing blueprints to check whether the created blueprint is successfully pushed and exists.
# composer-cli blueprints show BLUEPRINT-NAME
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve blueprint-name
Additional resources
5.1.2. Creating a RHEL for Edge Commit image using image builder command-line interface
To create a RHEL for Edge Commit image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and follow the procedure.
Prerequisites
- You have created a blueprint for RHEL for Edge Commit image.
Procedure
Create the RHEL for Edge Commit image.
# composer-cli compose start blueprint-name image-type
Where,
- blueprint-name is the RHEL for Edge blueprint name.
image-type is edge-commit for network-based deployment.
A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks.
Check the image compose status.
# composer-cli compose status
The output displays the status in the following format:
<UUID> RUNNING date blueprint-name blueprint-version image-type
Note: The image creation process takes up to 20 minutes to complete.
To interrupt the image creation process, run:
# composer-cli compose cancel <UUID>
To delete an existing image, run:
# composer-cli compose delete <UUID>
After the image is ready, you can download it and use the image on your network deployments.
Additional resources
5.1.3. Creating a RHEL for Edge image update with a ref commit by using RHEL image builder CLI
If you performed a change in an existing blueprint, for example, you added a new package, and you want to update an existing RHEL for Edge image with this new package, you can use the --parent argument to generate an updated RHEL for Edge Commit (.tar) image. The --parent argument can be either a ref that exists in the repository specified by the URL argument, or you can use the Commit ID, which you can find in the extracted .tar image file. Both the ref and Commit ID arguments retrieve a parent for the new commit that you are building. RHEL image builder can read information from the parent commit that will affect parts of the new commit that you are building. As a result, RHEL image builder reads the parent commit's user database and preserves UIDs and GIDs for the package-created system users and groups.
Prerequisites
- You have updated an existing blueprint for RHEL for Edge image.
- You have an existing RHEL for Edge image (OSTree commit). See Extracting RHEL for Edge image commit.
- The ref being built is available at the OSTree repository specified by the URL.
Procedure
Create the RHEL for Edge commit image:
# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --parent parent-OSTree-REF --url URL blueprint-name image-type
For example:
To create a new RHEL for Edge commit based on a
parent
and with a newref
, run the following command:# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --parent rhel/9/x86_64/edge --url http://10.0.2.2:8080/repo rhel_update edge-commit
To create a new RHEL for Edge commit based on the same
ref
, run the following command:# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url http://10.0.2.2:8080/repo rhel_update edge-commit
Where:
- The --ref argument specifies the same path value that you used to build an OSTree repository.
-
The --parent argument specifies the parent commit. It can be ref to be resolved and pulled, for example
rhel/9/x86_64/edge
, or theCommit ID
that you can find in the extracted.tar
file. - blueprint-name is the RHEL for Edge blueprint name.
-
The
--url
argument specifies the URL to the OSTree repository of the commit to embed in the image, for example, http://10.0.2.2:8080/repo. image-type is
edge-commit
for network-based deployment.Note-
The
--parent
argument can only be used for theRHEL for Edge Commit (.tar)
image type. Using the--url
and--parent
arguments together results in errors with theRHEL for Edge Container (.tar)
image type. -
If you omit the
parent ref
argument, the system falls back to theref
specified by the--ref
argument.
A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks.
-
The
Check the image compose status.
# composer-cli compose status
The output displays the status in the following format:
<UUID> RUNNING date blueprint-name blueprint-version image-type
Note: The image creation process takes a few minutes to complete.
(Optional) To interrupt the image creation process, run:
# composer-cli compose cancel <UUID>
(Optional) To delete an existing image, run:
# composer-cli compose delete <UUID>
After the image creation is complete, to upgrade an existing OSTree deployment, you need:
- Set up a repository. See Deploying a RHEL for Edge image .
- Add this repository as a remote, that is, the http or https endpoint that hosts the OSTree content.
- Pull the new OSTree commit onto their existing running instance. See Deploying RHEL for Edge image updates manually .
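On the device itself, applying an update that you staged this way is typically a matter of pulling the new commit and rebooting. The following is a minimal sketch that assumes the remote is already configured on the running system:
# rpm-ostree upgrade
# systemctl reboot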
5.1.4. Downloading a RHEL for Edge image using the image builder command-line interface
To download a RHEL for Edge image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- You have created a RHEL for Edge image.
Procedure
Review the RHEL for Edge image status.
# composer-cli compose status
The output must display the following:
$ <UUID> FINISHED date blueprint-name blueprint-version image-type
Download the image.
# composer-cli compose image <UUID>
RHEL image builder downloads the image as a tar file to the current directory.
The UUID number and the image size are displayed alongside:
$ <UUID>-commit.tar: size MB
The image contains a commit and a json file with metadata about the repository content.
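If you want to inspect the commit before serving it, you can unpack the downloaded archive. The tarball contains the OSTree repository and the metadata file mentioned above; the file names shown here reflect a typical build and can vary by release:
$ tar -xvf <UUID>-commit.tar
$ ls
compose.json repo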
Additional resources
5.2. Non-network-based deployments workflow
To build a boot ISO image that installs an OSTree-based system by using the "RHEL for Edge Container" and the "RHEL for Edge Installer" images and that can later be deployed to a device in disconnected environments, follow these steps.
5.2.1. Creating a RHEL for Edge Container blueprint by using image builder CLI
To create a blueprint for RHEL for Edge Container image, perform the following steps:
Procedure
Create a plain text file in the TOML format, with the following content:
name = "blueprint-name" description = "blueprint-text-description" version = "0.0.1" modules = [ ] groups = [ ]
Where,
- blueprint-name is the name and blueprint-text-description is the description for your blueprint.
- 0.0.1 is the version number according to the Semantic Versioning scheme.
Modules describe the package name and matching version glob to be installed into the image, for example, the package name = "tmux" and the matching version glob is version = "2.9a".
Notice that currently there are no differences between packages and modules.
Groups are package groups to be installed into the image, for example, the anaconda-tools group package.
At this time, if you do not know the modules and groups, leave them empty.
Include the required packages and customize the other details in the blueprint to suit your requirements.
For every package that you want to include in the blueprint, add the following lines to the file:
[[packages]]
name = "package-name"
version = "package-version"
Where,
-
package-name is the name of the package, such as
httpd
,gdb-doc
, orcoreutils
. package-version is the version number of the package that you want to use.
The package-version supports the following
dnf
version specifications:- For a specific version, use the exact version number such as 9.0.
- For the latest available version, use the asterisk *.
- For the latest minor version, use formats such as 9.*.
-
package-name is the name of the package, such as
Push (import) the blueprint to the RHEL image builder server:
# composer-cli blueprints push blueprint-name.toml
List the existing blueprints to check whether the created blueprint is successfully pushed and exists.
# composer-cli blueprints show BLUEPRINT-NAME
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve blueprint-name
Additional resources
5.2.2. Creating a RHEL for Edge Installer blueprint using image builder CLI
You can create a blueprint to build a RHEL for Edge Installer (.iso) image, and specify user accounts to automatically create one or more users on the system at installation time.
When you create a user in the blueprint with the customizations.user customization, the blueprint creates the user under the /usr/lib/passwd directory and the password under the /usr/etc/shadow directory. Note that you cannot change the password in further versions of the image in a running system by using OSTree updates. The users you create with blueprints must be used only to gain access to the created system. After you access the system, you need to create users, for example, by using the useradd command.
To create a blueprint for RHEL for Edge Installer image, perform the following steps:
Procedure
Create a plain text file in the TOML format, with the following content:
name = "blueprint-installer" description = "blueprint-for-installer-image" version = "0.0.1" [[customizations.user]] name = "user" description = "account" password = "user-password" key = "user-ssh-key " home = "path" groups = ["user-groups"]
Where,
- blueprint-name is the name and blueprint-text-description is the description for your blueprint.
- 0.0.1 is the version number according to the Semantic Versioning scheme.
Push (import) the blueprint to the RHEL image builder server:
# composer-cli blueprints push blueprint-name.toml
List the existing blueprints to check whether the created blueprint is successfully pushed and exists.
# composer-cli blueprints show blueprint-name
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve blueprint-name
Additional resources
5.2.3. Creating a RHEL for Edge Container image by using image builder CLI
To create a RHEL for Edge Container image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and follow the procedure.
Prerequisites
- You have created a blueprint for RHEL for Edge Container image.
Procedure
Create the RHEL for Edge Container image.
# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url URL-OSTree-repository blueprint-name image-type
Where,
-
--ref
is the same value that customer used to build OSTree repository --url
is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. By default, the repository folder for a RHEL for Edge Container image is "/repo". See Setting up a web server to install RHEL for Edge image.To find the correct URL to use, access the running container and check the
nginx.conf
file. To find which URL to use, access the running container and check thenginx.conf
file. Inside thenginx.conf
file, find theroot
directory entry to search for the/repo/
folder information. Note that, if you do not specify a repository URL when creating a RHEL for Edge Container image(.tar)
by using RHEL image builder, the default/repo/
entry is created in thenginx.conf
file.- blueprint-name is the RHEL for Edge blueprint name.
image-type is edge-container for non-network-based deployment.
A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks.
-
Check the image compose status.
# composer-cli compose status
The output displays the status in the following format:
<UUID> RUNNING date blueprint-name blueprint-version image-type
Note: The image creation process takes up to 20 minutes to complete.
To interrupt the image creation process, run:
# composer-cli compose cancel <UUID>
To delete an existing image, run:
# composer-cli compose delete <UUID>
After the image is ready, it can be used for non-network deployments. See Creating a RHEL for Edge Container image for non-network-based deployments.
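As an example of serving the commit, after downloading the resulting .tar file you can load it into Podman and run it so that it serves the OSTree repository over HTTP on port 8080. This is a minimal sketch; the image ID is printed by the podman load command:
# podman load -i <UUID>-container.tar
# podman run -d --rm -p 8080:8080 <image-id>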
Additional resources
5.2.4. Creating a RHEL for Edge Installer image using command-line interface for non-network-based deployments
To create a RHEL for Edge Installer image that embeds the OSTree commit by using the RHEL image builder command-line interface, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- You have created a blueprint for RHEL for Edge Installer image.
- You have created a RHEL for Edge Container image and deployed it using a web server.
Procedure
Begin to create the RHEL for Edge Installer image.
# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url URL-OSTree-repository blueprint-name image-type
Where,
- ref is the same value that you used to build the OSTree repository
- URL-OSTree-repository is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo. See Creating a RHEL for Edge Container image for non-network-based deployments.
- blueprint-name is the RHEL for Edge Installer blueprint name.
image-type is edge-installer.
A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks.
Check the image compose status.
# composer-cli compose status
The command output displays the status in the following format:
<UUID> RUNNING date blueprint-name blueprint-version image-type
Note: The image creation process takes a few minutes to complete.
To interrupt the image creation process, run:
# composer-cli compose cancel <UUID>
To delete an existing image, run:
# composer-cli compose delete <UUID>
After the image is ready, you can use it for non-network deployments. See Installing the RHEL for Edge image for non-network-based deployments.
5.2.5. Downloading a RHEL for Edge Installer image using the image builder CLI
To download a RHEL for Edge Installer image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- You have created a RHEL for Edge Installer image.
Procedure
Review the RHEL for Edge image status.
# composer-cli compose status
The output must display the following:
$ <UUID> FINISHED date blueprint-name blueprint-version image-type
Download the image.
# composer-cli compose image <UUID>
RHEL image builder downloads the image as an .iso file to the current directory. The UUID number and the image size are displayed alongside.
$ <UUID>-boot.iso: size MB
The resulting image is a bootable ISO image.
Additional resources
5.3. Supported image customizations
You can customize your image by adding customizations to your blueprint, such as:
- Adding an additional RPM package
- Enabling a service
- Customizing a kernel command-line parameter
These are only a few of the available options. You can use several image customizations within blueprints. By using the customizations, you can add packages and groups to the image that are not available in the default packages. To use these options, configure the customizations in the blueprint and import (push) the blueprint to RHEL image builder.
Additional resources
- Blueprint import fails after adding filesystem customization "size" (Red Hat Knowledgebase)
5.3.1. Selecting a distribution
You can use the distro field to select the distribution to use when composing your images or solving dependencies in the blueprint. If you do not specify a distribution, the blueprint uses the host distribution. If you upgrade the host operating system, blueprints with no distribution set build images by using the new operating system version. You cannot build an operating system image that differs from the RHEL image builder host.
Customize the blueprint with the RHEL distribution to always build the specified RHEL image:
name = "blueprint_name" description = "blueprint_version" version = "0.1" distro = "different_minor_version"
For example:
name = "tmux" description = "tmux image with openssh" version = "1.2.16" distro = "rhel-9.5"
Replace "different_minor_version"
to build a different minor version, for example, if you want to build a RHEL 9.5 image, use distro
= "rhel-94". On RHEL 9.3 image, you can build minor versions such as RHEL 9.3, RHEL 8.10 and earlier releases.
5.3.2. Selecting a package group
Customize the blueprint with package groups. The groups
list describes the groups of packages that you want to install into the image. The package groups are defined in the repository metadata. Each group has a descriptive name that is used primarily for display in user interfaces, and an ID that is commonly used in Kickstart files. In this case, you must use the ID to list a group. Groups have three different ways of categorizing their packages: mandatory, default, and optional. Only mandatory and default packages are installed in the blueprints. It is not possible to select optional packages.
The name
attribute is a required string and must match exactly the package group id in the repositories.
Currently, there are no differences between packages and modules in osbuild-composer
. Both are treated as an RPM package dependency.
Customize your blueprint with a package:
[[groups]] name = "group_name"
Replace group_name with the name of the group. For example, anaconda-tools:
[[groups]] name = "anaconda-tools"
5.3.3. Setting the image hostname
The customizations.hostname
is an optional string that you can use to configure the final image hostname. This customization is optional, and if you do not set it, the blueprint uses the default hostname.
Customize the blueprint to configure the hostname:
[customizations] hostname = "baseimage"
5.3.4. Specifying additional users
Add a user to the image, and optionally, set their SSH key. All fields for this section are optional except for the name
.
Procedure
Customize the blueprint to add a user to the image:
[[customizations.user]] name = "USER-NAME" description = "USER-DESCRIPTION" password = "PASSWORD-HASH" key = "PUBLIC-SSH-KEY" home = "/home/USER-NAME/" shell = "/usr/bin/bash" groups = ["users", "wheel"] uid = NUMBER gid = NUMBER
[[customizations.user]] name = "admin" description = "Administrator account" password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..." key = "PUBLIC SSH KEY" home = "/srv/widget/" shell = "/usr/bin/bash" groups = ["widget", "users", "wheel"] uid = 1200 gid = 1200 expiredate = 12345
The GID is optional and must already exist in the image. Optionally, a package creates it, or the blueprint creates the GID by using the
[[customizations.group]]
entry.
Replace PASSWORD-HASH with the actual password hash. To generate the password hash, use a command such as:
$ python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass("Confirm: ")) else exit())'
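If the Python crypt module is not available on your system (it was removed in recent Python releases), an alternative, not part of the original procedure, is to generate a compatible SHA-512 password hash with OpenSSL and paste the output into the blueprint:
$ openssl passwd -6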
Replace the other placeholders with suitable values.
Enter the name value and omit any lines you do not need.
Repeat this block for every user to include.
5.3.5. Specifying additional groups
Specify a group for the resulting system image. Both the name
and the gid
attributes are mandatory.
Customize the blueprint with a group:
[[customizations.group]] name = "GROUP-NAME" gid = NUMBER
Repeat this block for every group to include. For example:
[[customizations.group]] name = "widget" gid = 1130
5.3.6. Setting SSH key for existing users
You can use customizations.sshkey
to set an SSH key for the existing users in the final image. Both user
and key
attributes are mandatory.
Customize the blueprint by setting an SSH key for existing users:
[[customizations.sshkey]] user = "root" key = "PUBLIC-SSH-KEY"
For example:
[[customizations.sshkey]] user = "root" key = "SSH key for root"
Note: You can only configure the customizations.sshkey customization for existing users. To create a user and set an SSH key, see the Specifying additional users customization.
5.3.7. Appending a kernel argument
You can append arguments to the boot loader kernel command line. By default, RHEL image builder builds a default kernel into the image. However, you can customize the kernel by configuring it in the blueprint.
Append a kernel boot parameter option to the defaults:
[customizations.kernel] append = "KERNEL-OPTION"
For example:
[customizations.kernel] name = "kernel-debug" append = "nosmt=force"
5.3.8. Setting time zone and NTP
You can customize your blueprint to configure the time zone and the Network Time Protocol (NTP). Both timezone
and ntpservers
attributes are optional strings. If you do not customize the time zone, the system uses Coordinated Universal Time (UTC). If you do not set NTP servers, the system uses the distribution defaults.
Customize the blueprint with the timezone and the ntpservers you want:
[customizations.timezone] timezone = "TIMEZONE" ntpservers = "NTP_SERVER"
For example:
[customizations.timezone] timezone = "US/Eastern" ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"]
Note: Some image types, such as Google Cloud, already have NTP servers set up. You cannot override them because the image requires the NTP servers to boot in the selected environment. However, you can customize the time zone in the blueprint.
5.3.9. Customizing the locale settings
You can customize the locale settings for your resulting system image. Both the languages and the keyboard attributes are mandatory. You can add many other languages. The first language you add is the primary language and the other languages are secondary.
Procedure
Set the locale settings:
[customizations.locale] languages = ["LANGUAGE"] keyboard = "KEYBOARD"
For example:
[customizations.locale] languages = ["en_US.UTF-8"] keyboard = "us"
To list the supported values for the languages attribute, run the following command:
$ localectl list-locales
To list the supported values for the keyboard attribute, run the following command:
$ localectl list-keymaps
5.3.10. Customizing firewall
Set the firewall for the resulting system image. By default, the firewall blocks incoming connections, except for services that enable their ports explicitly, such as sshd
.
If you do not want to use the [customizations.firewall]
or the [customizations.firewall.services]
, either remove the attributes, or set them to an empty list []. If you only want to use the default firewall setup, you can omit the customization from the blueprint.
The Google and OpenStack templates explicitly disable the firewall for their environment. You cannot override this behavior by setting the blueprint.
Procedure
Customize the blueprint with the following settings to open other ports and services:
[customizations.firewall] ports = ["PORTS"]
Where ports is an optional list of strings that contains ports, or a range of ports and protocols, to open. You can configure the ports by using the port:protocol format. You can configure port ranges by using the portA-portB:protocol format. For example:
[customizations.firewall] ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp", "30000-32767:tcp", "30000-32767:udp"]
You can use numeric ports, or their names from the /etc/services file, to enable or disable port lists.
Specify which firewall services to enable or disable in the customizations.firewall.services section:
[customizations.firewall.services] enabled = ["SERVICES"] disabled = ["SERVICES"]
You can check the available firewall services:
$ firewall-cmd --get-services
For example:
[customizations.firewall.services] enabled = ["ftp", "ntp", "dhcp"] disabled = ["telnet"]
Note: The services listed in firewall.services are different from the service-names available in the /etc/services file.
5.3.11. Enabling or disabling services
You can control which services to enable at boot time. Some image types already have services enabled or disabled to ensure that the image works correctly, and you cannot override this setup. The [customizations.services]
settings in the blueprint do not replace these services, but add the services to the list of services already present in the image templates.
Customize which services to enable at boot time:
[customizations.services] enabled = ["SERVICES"] disabled = ["SERVICES"]
For example:
[customizations.services] enabled = ["sshd", "cockpit.socket", "httpd"] disabled = ["postfix", "telnetd"]
5.3.12. Specifying a custom file system configuration
You can specify a custom file system configuration in your blueprints and therefore create images with a specific disk layout, instead of the default layout configuration. By using the non-default layout configuration in your blueprints, you can benefit from:
- Security benchmark compliance
- Protection against out-of-disk errors
- Improved performance
- Consistency with existing setups
The OSTree systems do not support the file system customizations, because OSTree images have their own mount rule, such as read-only. The following image types are not supported:
-
image-installer
-
edge-installer
-
edge-simplified-installer
Additionally, the following image types do not support file system customizations, because these image types do not create partitioned operating system images:
-
edge-commit
-
edge-container
-
tar
-
container
However, the following image types have support for file system customization:
-
simplified-installer
-
edge-raw-image
-
edge-ami
-
edge-vsphere
With some additional exceptions for OSTree systems, you can choose arbitrary directory names at the / (root) level of the file system, for example: /local, /mypartition, /$PARTITION. In logical volumes, these changes are made on top of the LVM partitioning system. The following directories are supported on a separate logical volume: /var, /var/log, and /var/lib/containers. The following are exceptions at root level:
- "/home": {Deny: true},
- "/mnt": {Deny: true},
- "/opt": {Deny: true},
- "/ostree": {Deny: true},
- "/root": {Deny: true},
- "/srv": {Deny: true},
- "/var/home": {Deny: true},
- "/var/mnt": {Deny: true},
- "/var/opt": {Deny: true},
- "/var/roothome": {Deny: true},
- "/var/srv": {Deny: true},
- "/var/usrlocal": {Deny: true},
For release distributions before RHEL 8.10 and 9.5, the blueprint supports the following mountpoints
and their sub-directories:
-
/
- the root mount point -
/var
-
/home
-
/opt
-
/srv
-
/usr
-
/app
-
/data
-
/tmp
From the RHEL 9.5 and 8.10 release distributions onward, you can specify arbitrary custom mountpoints, except for specific paths that are reserved for the operating system.
You cannot specify arbitrary custom mountpoints on the following mountpoints and their sub-directories:
-
/bin
-
/boot/efi
-
/dev
-
/etc
-
/lib
-
/lib64
-
/lost+found
-
/proc
-
/run
-
/sbin
-
/sys
-
/sysroot
-
/var/lock
-
/var/run
You can customize the file system in the blueprint for the /usr custom mountpoint, but its subdirectories are not allowed.
Customizing mount points is only supported from RHEL 9.0 distributions onward, by using the CLI. In earlier distributions, you can only specify the root
partition as a mount point and specify the size
argument as an alias for the image size.
If you have more than one partition in the customized image, you can create images with a customized file system partition on LVM and resize those partitions at runtime. To do this, specify a customized file system configuration in your blueprint to create images with the required disk layout. The default file system layout remains unchanged if you use plain images without file system customization, and cloud-init resizes the root partition.
The blueprint automatically converts the file system customization to an LVM partition.
You can use the custom file blueprint customization to create new files or to replace existing files. The parent directory of the file you specify must exist, otherwise, the image build fails. Ensure that the parent directory exists by specifying it in the [[customizations.directories]]
customization.
If you combine the files customizations with other blueprint customizations, it might affect the functioning of the other customizations, or it might override the current files customizations.
5.3.12.1. Specifying customized files in the blueprint
With the [[customizations.files]]
blueprint customization you can:
- Create new text files.
- Modify existing files. Warning: this can override the existing content.
- Set user and group ownership for the file you are creating.
- Set the mode permission in the octal format.
You cannot create or replace the following files:
-
/etc/fstab
-
/etc/shadow
-
/etc/passwd
-
/etc/group
You can create customized files and directories in your image, by using the [[customizations.files]]
and the [[customizations.directories]]
blueprint customizations. You can use these customizations only in the /etc
directory.
These blueprint customizations are supported by all image types, except the image types that deploy OSTree commits, such as edge-raw-image
, edge-installer
, and edge-simplified-installer
.
If you use the customizations.directories
with a directory path which already exists in the image with mode
, user
or group
already set, the image build fails to prevent changing the ownership or permissions of the existing directory.
5.3.12.2. Specifying customized directories in the blueprint
With the [[customizations.directories]]
blueprint customization you can:
- Create new directories.
- Set user and group ownership for the directory you are creating.
- Set the directory mode permission in the octal format.
- Ensure that parent directories are created as needed.
With the [[customizations.files]]
blueprint customization you can:
- Create new text files.
- Modify existing files. Warning: this can override the existing content.
- Set user and group ownership for the file you are creating.
- Set the mode permission in the octal format.
You cannot create or replace the following files:
-
/etc/fstab
-
/etc/shadow
-
/etc/passwd
-
/etc/group
The following customizations are available:
Customize the file system configuration in your blueprint:
[[customizations.filesystem]] mountpoint = "MOUNTPOINT" minsize = MINIMUM-PARTITION-SIZE
The
MINIMUM-PARTITION-SIZE
value has no default size format. The blueprint customization supports the following values and units: kB to TB and KiB to TiB. For example, you can define the mount point size in bytes:[[customizations.filesystem]] mountpoint = "/var" minsize = 1073741824
Define the mount point size by using units. For example:
[[customizations.filesystem]] mountpoint = "/opt" minsize = "20 GiB"
[[customizations.filesystem]] mountpoint = "/boot" minsize = "1 GiB"
Define the minimum partition size by setting minsize. For example:
[[customizations.filesystem]] mountpoint = "/var" minsize = 2147483648
Create customized directories under the
/etc
directory for your image by using[[customizations.directories]]
:[[customizations.directories]] path = "/etc/directory_name" mode = "octal_access_permission" user = "user_string_or_integer" group = "group_string_or_integer" ensure_parents = boolean
The blueprint entries are described as following:
-
path
- Mandatory - enter the path to the directory that you want to create. It must be an absolute path under the/etc
directory. -
mode
- Optional - set the access permission on the directory, in the octal format. If you do not specify a permission, it defaults to 0755. The leading zero is optional. -
user
- Optional - set a user as the owner of the directory. If you do not specify a user, it defaults toroot
. You can specify the user as a string or as an integer. -
group
- Optional - set a group as the owner of the directory. If you do not specify a group, it defaults toroot
. You can specify the group as a string or as an integer. -
ensure_parents
- Optional - Specify whether you want to create parent directories as needed. If you do not specify a value, it defaults tofalse
.
-
Create a customized file under the /etc directory for your image by using [[customizations.files]]:
[[customizations.files]] path = "/etc/file_name" mode = "octal_access_permission" user = "user_string_or_integer" group = "group_string_or_integer" data = "Hello world!"
The blueprint entries are described as following:
-
path
- Mandatory - enter the path to the file that you want to create. It must be an absolute path under the/etc
directory. -
mode
Optional - set the access permission on the file, in the octal format. If you do not specify a permission, it defaults to 0644. The leading zero is optional. -
user
- Optional - set a user as the owner of the file. If you do not specify a user, it defaults toroot
. You can specify the user as a string or as an integer. -
group
- Optional - set a group as the owner of the file. If you do not specify a group, it defaults toroot
. You can specify the group as a string or as an integer. -
data
- Optional - Specify the content of a plain text file. If you do not specify a content, it creates an empty file.
-
5.4. Packages installed by RHEL image builder
When you create a system image by using RHEL image builder, the system installs a set of base package groups.
When you add additional components to your blueprint, ensure that the packages in the components you added do not conflict with any other package components. Otherwise, the system fails to resolve dependencies and creating your customized image fails. You can verify that there are no conflicts between the packages by running the following command:
# composer-cli blueprints depsolve BLUEPRINT-NAME
[Table: default packages that RHEL image builder installs for each image type]
Additional resources
Chapter 6. Building simplified installer images to provision a RHEL for Edge image
You can build a RHEL for Edge Simplified Installer image, which is optimized for unattended installation to a device, and provision the image to a RHEL for Edge image.
6.1. Simplified installer image build and deployment
Build a RHEL for Edge Simplified Installer image by using the edge-simplified-installer image type.
To build a RHEL for Edge Simplified Installer image, provide an existing OSTree
commit. The resulting simplified image contains a raw image that has the OSTree commit deployed. After you boot the Simplified installer ISO image, it provisions a RHEL for Edge system that you can use on a hard disk or as a boot image in a virtual machine. You can log in to the deployed system with the user name and password that you specified in the blueprint that you used to create the Simplified Installer image.
The RHEL for Edge Simplified Installer image is optimized for unattended installation to a device and supports both network-based deployment and non-network-based deployments. However, for network-based deployment, it supports only UEFI HTTP boot.
Composing and deploying a simplified RHEL for Edge image involves the following high-level steps:
- Install and register a RHEL system
- Install RHEL image builder
- Using RHEL image builder, create a blueprint with customizations for RHEL for Edge Container image
- Import the RHEL for Edge blueprint in RHEL image builder
- Create a RHEL for Edge Container image embedded in an OCI container, with a web server ready to deploy the commit as an OSTree repository
-
Create a blueprint for the
edge-simplified-installer
image - Build a simplified RHEL for Edge image
- Download the RHEL for Edge simplified image
-
Install the raw image with the edge-simplified-installer by using virt-install
The following diagram represents the RHEL for Edge Simplified building and provisioning workflow:
Figure 6.1. Building and provisioning RHEL for Edge in network-based environment
6.2. Creating a blueprint for a Simplified image using RHEL image builder CLI
To create a blueprint for a simplified RHEL for Edge image, you must customize it with a device file
location to enable an unattended installation to a device and a URL
to perform the initial device credential exchange. You must also specify users and user groups in the blueprint. To do so, follow these steps:
Procedure
Create a plain text file in the Tom’s Obvious, Minimal Language (TOML) format, with the following content:
name = "simplified-installer-blueprint" description = "blueprint for the simplified installer image" version = "0.0.1" packages = [] modules = [] groups = [] distro = "" [customizations] installation_device = "/dev/vda" [[customizations.user]] name = "admin" password = "admin" groups = ["users", "wheel"] [customizations.fdo] manufacturing_server_url = "http://10.0.0.2:8080" diun_pub_key_insecure = "true"
Note: The FDO customization in the blueprint is optional. If you omit it, you can still build your RHEL for Edge Simplified Installer image with no errors.
- name is the name and description is the description for your blueprint.
- 0.0.1 is the version number according to the Semantic Versioning scheme.
- Modules describe the package name and matching version glob to be installed into the image, for example, the package name = "tmux" and the matching version glob is version = "2.9a". Notice that currently there are no differences between packages and modules.
-
Groups are package groups to be installed into the image, for example, the anaconda-tools group package. If you do not know the modules and groups, leave them empty.
- installation_device is the customization to enable an unattended installation to your device.
- manufacturing_server_url is the URL to perform the initial device credential exchange.
- name is the user name to log in to the image.
- password is a password of your choice.
- groups are any user groups, such as "widget".
Push (import) the blueprint to the RHEL image builder server:
# composer-cli blueprints push blueprint-name.toml
List the existing blueprints to check whether the created blueprint is successfully pushed and exists.
# composer-cli blueprints show blueprint-name
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve blueprint-name
Additional resources
6.3. Creating a RHEL for Edge Simplified Installer image using image builder CLI
To create a RHEL for Edge Simplified image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- You created a blueprint for the RHEL for Edge Simplified image.
- You served an OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo. See Setting up a web server to install RHEL for Edge image.
Procedure
Create the bootable ISO image.
# composer-cli compose start-ostree \ blueprint-name \ edge-simplified-installer \ --ref rhel/9/x86_64/edge \ --url URL-OSTree-repository \
Where,
-
blueprint-name
is the RHEL for Edge blueprint name. -
edge-simplified-installer
is the image type . -
--ref is the reference for where your commit is going to be created.
- --url is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. You can either start a RHEL for Edge Container or set up a web server. See Creating a RHEL for Edge Container image for non-network-based deployments and Setting up a web server to install RHEL for Edge image.
A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks.
-
Check the image compose status.
# composer-cli compose status
The output displays the status in the following format:
<UUID> RUNNING date blueprint-name blueprint-version image-type
Note: The image creation process can take up to ten minutes to complete.
To interrupt the image creation process, run:
# composer-cli compose cancel <UUID>
To delete an existing image, run:
# composer-cli compose delete <UUID>
Additional resources
6.4. Downloading a simplified RHEL for Edge image using the image builder command-line interface
To download a RHEL for Edge image by using RHEL image builder command-line interface, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- You have created a RHEL for Edge image.
Procedure
Review the RHEL for Edge image status.
# composer-cli compose status
The output must display the following:
$ <UUID> FINISHED date blueprint-name blueprint-version image-type
Download the image.
# composer-cli compose image <UUID>
RHEL image builder downloads the image as an .iso file to the current directory path where you run the command. The UUID number and the image size are displayed alongside.
$ <UUID>-simplified-installer.iso: size MB
As a result, you downloaded a RHEL for Edge Simplified Installer ISO image. You can use it directly as a boot ISO to install a RHEL for Edge system.
6.5. Creating a blueprint for a Simplified image using RHEL image builder GUI
To create a RHEL for Edge Simplified Installer image, you must create a blueprint and ensure that you customize it with:
- A device node location to enable an unattended installation to your device.
- A URL to perform the initial device credential exchange.
- A user or user group.
You can also add any other customizations that your image requires.
To create a blueprint for a simplified RHEL for Edge image in the RHEL image builder GUI, complete the following steps:
Prerequisites
- You have opened the image builder app from the web console in a browser. See Accessing the RHEL image builder GUI in the RHEL web console.
Procedure
Click Create Blueprint in the upper-right corner of the RHEL image builder app.
A dialog wizard with fields for the blueprint name and description opens.
On the
Details
page:- Enter the name of the blueprint and, optionally, its description. Click .
Optional: On the Packages page, complete the following steps:
In the
Available packages
search, enter the package name and click the button to move it to the Chosen packages field. Search and include as many packages as you want. Click .
Note: The customizations are all optional unless otherwise specified.
-
Optional: On the
Kernel
page, enter a kernel name and the command-line arguments. -
Optional: On the
File system
page, selectUse automatic partitioning
.The filesystem customization is not supported for OSTree systems, because OSTree images have their own mount rule, such as read-only. Click . Optional: On the
Services
page, you can enable or disable services:- Enter the service names you want to enable or disable, separating them by a comma, by space, or by pressing the key. Click .
Optional: On the
Firewall
page, set up your firewall setting:-
Enter the
Ports
, and the firewall services you want to enable or disable. - Click the button to manage your firewall rules for each zone independently. Click .
-
Enter the
On the
Users
page, add a users by following the steps:- Click .
Enter a
Username
, apassword
, and aSSH key
. You can also mark the user as a privileged user, by clicking theServer administrator
checkbox.NoteWhen you specify the user in the blueprint customization and then create an image from that blueprint, the blueprint creates the user under the
/usr/lib/passwd
directory and the password under the/usr/etc/shadow
during installation time. You can log in to the device with the username and password you created for the blueprint. After you access the system, you must create users, for example, using theuseradd
command.Click
.
Optional: On the
Groups
page, add groups by completing the following steps:Click the
button:-
Enter a
Group name
and aGroup ID
. You can add more groups. Click .
-
Enter a
Optional: On the
SSH keys
page, add a key:Click the
button.- Enter the SSH key.
-
Enter a
User
. Click .
Optional: On the
Timezone
page, set your timezone settings:On the
Timezone
field, enter the timezone you want to add to your system image. For example, add the following timezone format: "US/Eastern".If you do not set a timezone, the system uses Universal Time, Coordinated (UTC) as default.
-
Enter the
NTP
servers. Click .
Optional: On the
Locale
page, complete the following steps:-
On the
Keyboard
search field, enter the package name you want to add to your system image. For example: ["en_US.UTF-8"]. -
On the
Languages
search field, enter the package name you want to add to your system image. For example: "us". Click .
-
On the
Mandatory: On the
Others
page, complete the following steps:-
In the
Hostname
field, enter the hostname you want to add to your system image. If you do not add a hostname, the operating system determines the hostname. -
Mandatory: In the
Installation Devices
field, enter a valid node for your system image to enable an unattended installation to your device. For example:dev/sda1
. Click .
-
In the
Optional: On the
FIDO device onboarding
page, complete the following steps:-
On the
Manufacturing server URL
field, enter themanufacturing server URL
to perform the initial device credential exchange, for example: "http://10.0.0.2:8080". The FDO customization in the blueprints is optional, and you can build your RHEL for Edge Simplified Installer image with no errors. -
On the
DIUN public key insecure
field, enter the certification public key hash to perform the initial device credential exchange. This field accepts "true" as value, which means this is an insecure connection to the manufacturing server. For example:manufacturing_server_url="http://${FDO_SERVER}:8080" diun_pub_key_insecure="true"
. You must use only one of these three options: "key insecure", "key hash" and "key root certs". On the
DIUN public key hash
field, enter the hashed version of your public key. For example:17BD05952222C421D6F1BB1256E0C925310CED4CE1C4FFD6E5CB968F4B73BF73
. You can get the key hash by generating it based on the certificate of the manufacturing server. To generate the key hash, run the command:# openssl x509 -fingerprint -sha256 -noout -in /etc/fdo/aio/keys/diun_cert.pem | cut -d"=" -f2 | sed 's/://g'
The
/etc/fdo/aio/keys/diun_cert.pem
is the certificate that is stored in the manufacturing server.On the
DIUN public key root certs
field, enter the public key root certs. This field accepts the content of the certification file that is stored in the manufacturing server. To get the content of certificate file, run the command:$ cat /etc/fdo/aio/keys/diun_cert.pem.
-
On the
- Click .
-
On the
Review
page, review the details about the blueprint. Click .
The RHEL image builder view opens, listing existing blueprints.
6.6. Creating a RHEL for Edge Simplified Installer image using image builder GUI
To create a RHEL for Edge Simplified image by using RHEL image builder GUI, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- You opened the RHEL image builder app from the web console in a browser.
- You created a blueprint for the RHEL for Edge Simplified image.
-
You served an OSTree repository of the commit to embed in the image, for example,
http://10.0.2.2:8080/repo
. See Setting up a web server to install RHEL for Edge image. - The FDO manufacturing server is up and running.
Procedure
- Access the RHEL image builder dashboard.
- On the blueprint table, find the blueprint you want to build an image for.
-
Navigate to the
Images
tab and clickCreate Image
. TheCreate image
wizard opens. On the
Image output
page, complete the following steps:-
From the
Select a blueprint
list, select the blueprint you created for the RHEL for Edge Simplified image. -
From the
Image output type
list, selectRHEL for Edge Simplified Installer (.iso)
. -
In the
Image Size
field, enter the image size. Minimum image size required for Simplified Installer image is:
-
From the
- Click .
In the
OSTree settings
page, complete the following steps:-
In the
Repository URL
field, enter the repository URL to where the parent OSTree commit will be pulled from. -
In the
Ref
field, enter theref
branch name path. If you do not enter aref
, the defaultref
for the distro is used.
-
In the
-
On the
Review
page, review the image customization and click .
The image build starts and takes up to 20 minutes to complete. To stop the building, click
.
6.7. Downloading a simplified RHEL for Edge image using the image builder GUI
To download a RHEL for Edge image by using RHEL image builder GUI, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- You have successfully created a RHEL for Edge Simplified Installer image.
Procedure
- Access the RHEL image builder dashboard. The blueprint list dashboard opens.
- In the blueprint table, find the blueprint you built your RHEL for Edge Simplified Installer image for.
-
Navigate to the
Images
tab. Choose one of the options:
- Download the image.
- Download the image logs to inspect the build elements and verify whether any issues are found.
You can use the RHEL for Edge Simplified Installer ISO image that you downloaded directly as a boot ISO to install a RHEL for Edge system.
6.8. Setting up a UEFI HTTP Boot server
To set up a UEFI HTTP Boot server, so that you can provision a RHEL for Edge virtual machine over the network by connecting to this UEFI HTTP Boot server, follow these steps:
Prerequisites
- You have created the ISO simplified installer image.
- You have an HTTP server that serves the ISO content.
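If you do not already have an HTTP server, you can, for example, install and enable Apache httpd; this is a minimal sketch, and a later step in this procedure assumes the httpd service:
# dnf install -y httpd
# systemctl enable --now httpd.service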
Procedure
Mount the ISO image to the directory of your choice:
# mkdir /mnt/rhel9-install/ # mount -o loop,ro -t iso9660 /path_directory/installer.iso /mnt/rhel9-install/
Replace
/path_directory/installer.iso
with the path to the RHEL for Edge bootable ISO image.Copy the files from the mounted image to the HTTP server root. This command creates the
/var/www/html/rhel9-install/
directory with the contents of the image.# mkdir /var/www/html/httpboot/ # cp -R /mnt/rhel9-install/* /var/www/html/httpboot/ # chmod -R +r /var/www/html/httpboot/*
Note: Some copying methods can skip the
.treeinfo
file which is required for a valid installation source. Running thecp
command for whole directories as shown in this procedure will copy.treeinfo
correctly.Update the
/var/www/html/EFI/BOOT/grub.cfg
file, by replacing:-
coreos.inst.install_dev=/dev/sda
withcoreos.inst.install_dev=/dev/vda
-
linux /images/pxeboot/vmlinuz
withlinuxefi /images/pxeboot/vmlinuz
-
initrd /images/pxeboot/initrd.img
withinitrdefi /images/pxeboot/initrd.img
coreos.inst.image_file=/run/media/iso/disk.img.xz
withcoreos.inst.image_url=http://{IP-ADDRESS}/disk.img.xz
The IP-ADDRESS is the IP address of this machine, which serves as the HTTP boot server, as in the sed example below.
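For example, assuming the grub.cfg file is under the directory to which you copied the ISO contents (the path and the 192.168.122.1 address below are illustrative), you might apply these replacements with sed:
# GRUB_CFG=/var/www/html/httpboot/EFI/BOOT/grub.cfg   # adjust to your HTTP server root
# sed -i \
    -e 's|coreos.inst.install_dev=/dev/sda|coreos.inst.install_dev=/dev/vda|' \
    -e 's|linux /images/pxeboot/vmlinuz|linuxefi /images/pxeboot/vmlinuz|' \
    -e 's|initrd /images/pxeboot/initrd.img|initrdefi /images/pxeboot/initrd.img|' \
    -e 's|coreos.inst.image_file=/run/media/iso/disk.img.xz|coreos.inst.image_url=http://192.168.122.1/disk.img.xz|' \
    "$GRUB_CFG"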
-
Start the httpd service:
# systemctl start httpd.service
As a result, after you set up a UEFI HTTP Boot server, you can install your RHEL for Edge devices by using UEFI HTTP boot.
6.9. Deploying the Simplified ISO image in a Virtual Machine
Deploy the RHEL for Edge ISO image that you generated by creating a RHEL for Edge Simplified image, by using any of the following installation sources:
- UEFI HTTP Boot
- virt-install
This example shows how to create a virt-install installation source from your ISO image for a network-based installation.
Prerequisites
- You have created an ISO image.
- You set up a network configuration to support UEFI HTTP boot.
Procedure
- Set up a network configuration to support UEFI HTTP boot. See Setting up UEFI HTTP boot with libvirt.
Use the
virt-install
command to create a RHEL for Edge Virtual Machine from the UEFI HTTP Boot.# virt-install \ --name edge-install-image \ --disk path=” “, ,format=qcow2 --ram 3072 \ --memory 4096 \ --vcpus 2 \ --network network=integration,mac=mac_address \ --os-type linux --os-variant rhel9 \ --cdrom "/var/lib/libvirt/images/”ISO_FILENAME" --boot uefi,loader_ro=yes,loader_type=pflash,nvram_template=/usr/share/edk2/ovmf/OVMF_VARS.fd,loader_secure=no --virt-type kvm \ --graphics none \ --wait=-1 --noreboot
After you run the command, the Virtual Machine installation starts.
Verification
- Log in to the created Virtual Machine.
6.10. Deploying the Simplified ISO image from a USB flash drive
Deploy the RHEL for Edge ISO image that you generated by creating a RHEL for Edge Simplified image, by using a USB installation.
This example shows how to create a USB installation source from your ISO image.
Prerequisites
- You have created a simplified installer image, which is an ISO image.
- You have an 8 GB USB flash drive.
Procedure
- Copy the ISO image file to a USB flash drive.
- Connect the USB flash drive to the port of the computer you want to boot.
Boot the ISO image from the USB flash drive. The boot menu shows you the following options:
Install Red Hat Enterprise Linux 9 Test this media & install Red Hat Enterprise Linux 9
- Choose Install Red Hat Enterprise Linux 9. This starts the system installation.
Additional resources
6.11. Creating and booting a RHEL for Edge image in FIPS mode
You can create and boot a FIPS-enabled RHEL for Edge image. Before you compose the image, you must change the value of the fips
directive in your blueprint.
You can build the following image types in FIPS mode:
-
edge-installer
-
edge-simplified-installer
-
edge-raw-image
-
edge-ami
-
edge-vsphere
You can enable FIPS mode only during the image provisioning process. You cannot change to FIPS mode after the non-FIPS image build starts. If the host that builds the FIPS-enabled image is not FIPS-enabled, any keys generated by this host are not FIPS-compliant, but the resulting image is FIPS-compliant.
Prerequisites
- You created and downloaded a RHEL for Edge Container OSTree commit.
- You have installed Podman on your system. See the Red Hat Knowledgebase solution How to install Podman in RHEL.
-
You are logged in as the root user or a user who is a member of the
weldr
group.
Procedure
Create a plain text file in the Tom’s Obvious, Minimal Language (TOML) format with the following content:
name = "system-fips-mode-enabled" description = "blueprint with FIPS enabled " version = "0.0.1" [ostree] ref= "example/edge" url= "http://example.com/repo" [customizations] installation_device = "/dev/vda" fips = true [[customizations.user]] name = "admin" password = "admin" groups = ["users", "wheel"] [customizations.fdo] manufacturing_server_url = "https://fdo.example.com" diun_pub_key_insecure = true
Import the blueprint to the RHEL image builder server:
# composer-cli blueprints push <blueprint-name>.toml
List the existing blueprints to check whether the created blueprint is successfully imported and exists:
# composer-cli blueprints show <blueprint-name>
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve <blueprint-name>
-
Serve an OSTree repository of the commit to embed in the image, for example,
http://10.0.2.2:8080/repo
. For more information, see Setting up an UEFI HTTP Boot server. Create the bootable ISO image:
# composer-cli compose start-ostree \ <blueprint-name> \ edge-simplified-installer \ --ref rhel/9/x86_64/edge \ --url <URL-OSTree-repository>
Review the RHEL for Edge image status:
# composer-cli compose status … $ <UUID> FINISHED <date> <blueprint-name> <blueprint-version> <image-type> …
Download the image:
# composer-cli compose image <UUID>
RHEL image builder downloads the image as an
.iso
file to the current directory path. The UUID number and the image size are displayed alongside:$ <UUID>-simplified-installer.iso: <size> MB
Create a RHEL for Edge virtual machine from the UEFI HTTP Boot server, for example:
# virt-install \ --name edge-device --disk path="/var/lib/libvirt/images/edge-device.qcow2",size=5,format=qcow2 \ --memory 4096 \ --vcpus 2 \ --network network=default \ --os-type linux \ --os-variant rhel8.9 \ --cdrom /var/lib/libvirt/images/<UUID>-simplified-installer.iso \ --boot uefi,loader.secure=false \ --virt-type kvm \ --graphics none \ --wait=-1 \ --noreboot
After you enter the command, the virtual machine installation starts.
Verification
- Log in to the created virtual machine with the username and password that you configured in your blueprint.
Check if FIPS mode is enabled:
$ cat /proc/sys/crypto/fips_enabled 1
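Optionally, if the crypto-policies-scripts package is installed, you can also confirm the FIPS state with the following command; this extra check is not part of the original procedure:
$ fips-mode-setup --check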
Chapter 7. Building and provisioning a minimal raw image
The minimal-raw
image is a pre-packaged, bootable, minimal RPM image, compressed in the xz
format. The image consists of a file containing a partition layout with an existing deployed OSTree commit in it. You can build a RHEL for Edge Minimal Raw image type by using RHEL image builder and deploy the Minimal Raw image to the aarch64
and x86
architectures.
7.1. The minimal raw image build and deployment
Build a RHEL for Edge Minimal Raw image by using the minimal-raw
image type. To boot the image, you must decompress it and copy to any bootable device, such as an SD card or a USB flash drive. You can log in to the deployed system with the user name and password that you specified in the blueprint that you used to create the RHEL for Edge Minimal Raw image.
Composing and deploying a RHEL for Edge Minimal Raw image involves the following high-level steps:
- Install and register a RHEL system
- Install RHEL image builder
- Using RHEL image builder, create a blueprint with your customizations for RHEL for Edge Minimal Raw image
- Import the RHEL for Edge blueprint in RHEL image builder
- Create a RHEL for Edge Minimal Raw image
- Download and decompress the RHEL for Edge Minimal Raw image
- Create a bootable USB drive from the decompressed Raw image
- Deploy the RHEL for Edge Minimal Raw image
7.2. Creating the blueprint for a Minimal Raw image by using RHEL image builder CLI
Create a blueprint, and customize it with a username and a password. You can use the resulting blueprint to create a Minimal Raw image and log in to it by using the credentials that you configured in the blueprint.
Procedure
Create a plain text file in the Tom’s Obvious, Minimal Language (TOML) format, with the following content:
name = "minimal-raw-blueprint" description = "blueprint for the Minimal Raw image" version = "0.0.1" packages = [] modules = [] groups = [] distro = "" [[customizations.user]] name = "admin" password = "admin" groups = ["users", "wheel"]
- name is the name and description is the description for your blueprint.
- 0.0.1 is the version number according to the Semantic Versioning scheme.
- Modules describe the package name and matching version glob to be installed into the image, for example, the package name = "tmux" and the matching version glob is version = "2.9a". Currently there are no differences between packages and modules.
- Groups are packages groups to be installed into the image, for example the anaconda-tools group package. If you do not know the modules and groups, leave them empty.
Under
customizations.user
:-
name
is the username to login to the image -
password
is a password of your choice -
groups
are any user groups, such as "widget"
-
Import the blueprint to the RHEL image builder server:
# composer-cli blueprints push <blueprint_name>.toml
Check if the blueprint is available on the system:
# composer-cli blueprints list
Check the validity of components, versions, and their dependencies in the blueprint:
# composer-cli blueprints depsolve <blueprint_name>
Additional resources
7.3. Creating a Minimal Raw image by using RHEL image builder CLI
Create a RHEL for Edge Minimal Raw image with the RHEL image builder command-line interface.
Prerequisites
- You created a blueprint for the RHEL for Edge Minimal Raw image.
Procedure
Build the image.
# composer-cli compose start <blueprint_name> minimal-raw
-
<blueprint_name>
is the RHEL for Edge blueprint name -
minimal-raw
is the image type
-
Check the image compose status.
# composer-cli compose status
The output displays the status in the following format:
# <UUID> RUNNING date <blueprint_name> blueprint-version minimal-raw
Additional resources
7.4. Downloading and decompressing the Minimal Raw image
Download the RHEL for Edge Minimal Raw image by using RHEL image builder command-line interface, and then decompress the image to be able to boot it.
Prerequisites
- You have created a RHEL for Edge Minimal Raw image.
Procedure
Review the RHEL for Edge Minimal Raw image compose status.
# composer-cli compose status
The output must display the following details:
$ <UUID> FINISHED date <blueprint_name> <blueprint_version> minimal-raw
Download the image:
# composer-cli compose image <UUID>
Image builder downloads the image into your working directory. The following output is an example:
3f9223c1-6ddb-4915-92fe-9e0869b8e209-raw.img.xz
Decompress the image:
$ xz -d <UUID>-raw.img.xz
Next
Use the decompressed bootable RHEL for Edge Minimal Raw image to create a bootable installation medium and use it as a boot device. The following documentation describes the procedure for creating a bootable USB device from an ISO image. However, the same steps apply to Raw images, because the Raw image is equivalent to the ISO image.
See Creating a bootable USB device on Linux for more details.
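For example, a minimal sketch of writing the decompressed Raw image to a USB device with dd; replace /dev/sdX with your USB device (this overwrites all data on that device), and the image file name follows the pattern shown earlier:
# dd if=<UUID>-raw.img of=/dev/sdX bs=8M status=progress conv=fsync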
7.5. Deploying the Minimal Raw image from a USB flash drive
After you created a bootable USB installation medium from the customized RHEL for Edge Minimal Raw image, you can continue the installation process by deploying the Minimal Raw image from the USB flash drive and booting your customized image.
Prerequisites
- You have an 8 GB USB flash drive.
- You have created a bootable installation medium from the RHEL for Edge Minimal Raw image to the USB drive.
Procedure
- Connect the USB flash drive to the computer where you want to boot your customized image.
- Power on the system.
Boot the RHEL for Edge Minimal Raw image from the USB flash drive. The boot menu shows you the following options:
Install Red Hat Enterprise Linux 9 Test this media & install Red Hat Enterprise Linux 9
- Choose Install Red Hat Enterprise Linux 9. This starts the system installation.
Verification
Boot into the image by using the username and password you configured in the blueprint.
Check the release:
$ cat /etc/os-release
List the block devices in the system:
$ lsblk
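Because the Minimal Raw image contains a deployed OSTree commit, you can optionally inspect the deployment as well; this additional check is not part of the original procedure:
$ rpm-ostree status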
7.6. Serving a RHEL for Edge Container image to build a RHEL for Edge Raw image
Create a RHEL for Edge Container image and serve it from a running container so that you can build a RHEL for Edge Raw image from it.
Prerequisites
- You have created a RHEL for Edge Minimal Raw image and downloaded it.
Procedure
Create a blueprint for the
rhel-edge-container
image type, for example:name = "rhel-edge-container-no-users" description = "" version = "0.0.1"
Build a
rhel-edge-container
image:# composer-cli compose start-ostree <rhel-edge-container-no-users> rhel-edge-container
Check if the image is ready:
# composer-cli compose status
Download the
rhel-edge-container
image as a.tar
file:# composer-cli compose image <UUID>
Import the RHEL for Edge Container into Podman:
$ skopeo copy oci-archive:<UUID>-container.tar \ containers-storage:localhost/rfe-93-mirror:latest
Start the container and make it available by using the port 8080:
$ podman run -d --rm --name <rfe-93-mirror> -p 8080:8080 localhost/<rfe-93-mirror>
Create a blueprint for the
edge-raw-image
image type, for example:name = "<edge-raw>" description = "" version = "0.0.1" [[customizations.user]] name = "admin" password = "admin" groups = ["wheel"]
Build a RHEL for Edge Raw image by serving the RHEL Edge Container to it:
# composer-cli compose start-ostree edge-raw edge-raw-image \ --url http://10.88.0.1:8080/repo
Download the RHEL for Edge Raw image as a
.raw
file:# composer-cli compose image <UUID>
Decompress the RHEL for Edge Raw image:
# xz --decompress <UUID>-image.raw.xz
Chapter 8. Using the Ignition tool for the RHEL for Edge Simplified Installer images
RHEL for Edge uses the Ignition tool to inject the user configuration into the images at an early stage of the boot process. The configuration that the Ignition tool injects includes:
- The user configuration.
-
Writing files, such as regular files, and
systemd
units.
On the first boot, Ignition reads its configuration either from a remote URL or a file embedded in the simplified installer ISO. Then, Ignition applies that configuration into the image.
8.1. Creating an Ignition configuration file
The Butane
tool is the preferred option to create an Ignition configuration file. Butane
consumes a Butane Config YAML
file and produces an Ignition Config
in the JSON format. The JSON file is used by a system on its first boot. The Ignition Config
applies the configuration in the image, such as user creation, and systemd
units installation.
Prerequisites
You have installed the Butane tool version v0.17.0:
$ sudo dnf install -y butane
Procedure
Create a
Butane Config
file and save it in the.bu
format. You must specify thevariant
entry asr4e
, and theversion
entry as1.0.0
, for RHEL for Edge images. The butaner4e
variant on version 1.0.0 targets Ignition spec version3.3.0
. The following is a Butane Config YAML file example:variant: r4e version: 1.0.0 ignition: config: merge: - source: http://192.168.122.1:8000/sample.ign passwd: users: - name: core groups: - wheel password_hash: password_hash_here ssh_authorized_keys: - ssh-ed25519 some-ssh-key-here storage: files: - path: /etc/NetworkManager/system-connections/enp1s0.nmconnection contents: inline: | [connection] id=enp1s0 type=ethernet interface-name=enp1s0 [ipv4] address1=192.168.122.42/24,192.168.122.1 dns=8.8.8.8; dns-search= may-fail=false method=manual mode: 0600 - path: /usr/local/bin/startup.sh contents: inline: | #!/bin/bash echo "Hello, World!" mode: 0755 systemd: units: - name: hello.service contents: | [Unit] Description=A hello world [Install] WantedBy=multi-user.target enabled: true - name: fdo-client-linuxapp.service dropins: - name: log_trace.conf contents: | [Service] Environment=LOG_LEVEL=trace
Run the following command to consume the
Butane Config YAML
file and generate an Ignition Config in the JSON format:$ ./path/butane example.bu {"ignition":{"config":{"merge":[{"source":"http://192.168.122.1:8000/sample.ign"}]},"timeouts":{"httpTotal":30},"version":"3.3.0"},"passwd":{"users":[{"groups":["wheel"],"name":"core","passwordHash":"password_hash_here","sshAuthorizedKeys":["ssh-ed25519 some-ssh-key-here"]}]},"storage":{"files":[{"path":"/etc/NetworkManager/system-connections/enp1s0.nmconnection","contents":{"compression":"gzip","source":"data:;base64,H4sIAAAAAAAC/0yKUcrCMBAG3/csf/ObUKQie5LShyX5SgPNNiSr0NuLgiDzNMPM8VBFtHzoQjkxtPp+ITsrGLahKYyyGtoqEYNKwfeZc32OC0lKDb179rfg/HVyPgQ3hv8w/v0WT0k7T+7D/S1Dh7S4MRU5h1XyzqvsHVRg25G4iD5kp1cAAAD//6Cvq2ihAAAA"},"mode":384},{"path":"/usr/local/bin/startup.sh","contents":{"source":"data:;base64,IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8sIFdvcmxkISIK"},"mode":493}]},"systemd":{"units":[{"contents":"[Unit]\nDescription=A hello world\n[Install]\nWantedBy=multi-user.target","enabled":true,"name":"hello.service"},{"dropins":[{"contents":"[Service]\nEnvironment=LOG_LEVEL=trace\n","name":"log_trace.conf"}],"name":"fdo-client-linuxapp.service"}]}}
After you run the
Butane Config YAML
file to check and generate theIgnition Config JSON
file, you might get warnings when using unsupported fields, like partitions, for example. You can fix those fields and rerun the check.
You now have an Ignition JSON configuration file that you can use to customize your blueprint.
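If you intend to have Ignition fetch the configuration from a URL at first boot, you can serve the generated JSON over HTTP. For example, a quick test setup that matches the sample source URL above; the output file name and port 8000 are illustrative and not part of the original procedure:
$ ./path/butane example.bu > sample.ign
$ python3 -m http.server 8000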
Additional resources
8.2. Creating a blueprint in the GUI with support to Ignition
When building a Simplified Installer image, you can customize your blueprint by entering the following details in the Ignition page of the blueprint:
-
Firstboot URL
- You must enter a URL that points to the Ignition configuration that will be fetched during the first boot. It can be used for both the raw image and simplified installer image. -
Embedded Data
- You must provide thebase64
encodedIgnition Configuration
file. It can be used only for the Simplified Installer image.
To customize your blueprint for a simplified RHEL for Edge image with support to Ignition configuration using the Ignition blueprint customization, follow the steps:
Prerequisites
- You have opened the image builder app from web console in a browser. See Accessing the image builder GUI in the RHEL web console.
-
To fully support the embedded section,
coreos-installer-dracut
has to be able to define-ignition-url
|-ignition-file
based on the presence of the OSBuild’s file.
Procedure
Click
in the upper-right corner.A dialog wizard with fields for the blueprint name and description opens.
On the
Details
page:- Enter the name of the blueprint and, optionally, its description. Click .
On the
Ignition
page, complete the following steps:-
On the
Firstboot URL
field, enter the URL that points to the Ignition configuration to be fetched during the first boot. -
On the
Embedded Data
field, drag or upload thebase64
encodedIgnition Configuration
file. Click .
-
On the
- Review the image details and click .
The image builder dashboard view opens, listing the existing blueprints.
Next
- You can use the blueprint you created to build your Simplified Installer image. See Creating a RHEL for Edge Simplified Installer image using image builder CLI.
8.3. Creating a blueprint with support to Ignition using the CLI
When building a simplified installer image, you can customize your blueprint by adding the customizations.ignition
section to it. With that, you can create either a simplified installer image or a raw image that you can use for the bare metal platforms. The customizations.ignition
customization in the blueprint enables the configuration files to be used in edge-simplified-installer
ISO and edge-raw-image
images.
For the
edge-simplified-installer
ISO image, you can customize the blueprint to embed an Ignition configuration file that will be included in the ISO image. For example:[customizations.ignition.embedded] config = "eyJ --- BASE64 STRING TRIMMED --- 19fQo="
You must provide a
base64
encoded Ignition configuration file.For both the
edge-simplified-installer
ISO image and also theedge-raw-image
, you can customize the blueprint, by defining a URL that will be fetched to obtain the Ignition configuration at the first boot. For example:[customizations.ignition.firstboot] url = "http://your_server/ignition_configuration.ig"
You must enter a URL that points to the Ignition configuration that will be fetched during the first boot.
To customize your blueprint for a Simplified RHEL for Edge image with support to Ignition configuration, follow the steps:
Prerequisites
- If using the [customizations.ignition.embedded] customization, you must create an Ignition configuration file.
- If using the [customizations.ignition.firstboot] customization, you must have created a container whose URL points to the Ignition configuration that will be fetched during the first boot.
- The blueprint customization [customizations.ignition.embedded] section enables coreos-installer-dracut to define -ignition-url | -ignition-file based on the presence of the OSBuild file.
Procedure
Create a plain text file in the Tom’s Obvious, Minimal Language (TOML) format, with the following content:
name = "simplified-installer-blueprint" description = "Blueprint with Ignition for the simplified installer image" version = "0.0.1" packages = [] modules = [] groups = [] distro = "" [customizations.ignition.embedded] config = "eyJ --- BASE64 STRING TRIMMED --- 19fQo="
Where:
- The name is the name and description is the description for your blueprint.
- The version is the version number according to the Semantic Versioning scheme.
- The modules and packages describe the package name and matching version glob to be installed into the image. For example, the package name = "tmux" and the matching version glob is version = "3.3a". Notice that currently there are no differences between packages and modules.
- The groups are package groups to be installed into the image. For example, the groups = "anaconda-tools" group package. If you do not know the modules and groups, leave them empty.

Warning: If you want to create a user with Ignition, you cannot use the FDO customizations to create a user at the same time. You can create users by using Ignition and copy configuration files by using FDO. But if you are creating users, create them by using either Ignition or FDO, not both at the same time.
Push (import) the blueprint to the image builder server:
# composer-cli blueprints push blueprint-name.toml
List the existing blueprints to check whether the created blueprint is successfully pushed and exists.
# composer-cli blueprints show blueprint-name
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve blueprint-name
Next
- You can use the blueprint you created to build your Simplified Installer image. See Creating a RHEL for Edge Simplified Installer image using image builder CLI.
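For reference, the following is a minimal sketch of that compose on the command line; the blueprint name matches the example above, and the OSTree repository URL is an assumption:

# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url http://10.0.2.2:8080/repo/ simplified-installer-blueprint edge-simplified-installer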
Additional resources
Chapter 9. Creating VMDK images for RHEL for Edge
You can create a .vmdk
image for RHEL for Edge by using the RHEL image builder. You can create an edge-vsphere
image type with Ignition support to inject the user configuration into the image at an early stage of the boot process. Then, you can load the image on vSphere and boot the image in a vSphere VM. The image is compatible with ESXi 7.0 U2, ESXi 8.0, and later. The vSphere VM is compatible with versions 19 and 20.
9.1. Creating a blueprint with the Ignition configuration
Create a blueprint for the .vmdk
image and customize it with the customizations.ignition
section. With that, you can create your image and, at boot time, the operating system injects the user configuration into the image.
Prerequisites
You have created an Ignition configuration file. For example:
{ "ignition":{ "version":"3.3.0" }, "passwd":{ "users":[ { "groups":[ "wheel" ], "name":"core", "passwordHash":"$6$jfuNnO9t1Bv7N" } ] } }
Procedure
Create a blueprint in the Tom’s Obvious, Minimal Language (TOML) format, with the following content:
name = "vmdk-image" description = "Blueprint with Ignition for the vmdk image" version = "0.0.1" packages = ["open-vm-tools"] modules = [] groups = [] distro = "" [[customizations.user]] name = "admin" password = "admin" groups = ["wheel"] [customizations.ignition.firstboot] url = http://<IP_address>:8080/config.ig
Where:
- The name is the name and description is the description for your blueprint.
- The version is the version number according to the Semantic Versioning scheme.
- The modules and packages describe the package name and matching version glob to be installed into the image. For example, the package name = "open-vm-tools". Notice that currently there are no differences between packages and modules.
- The groups are package groups to be installed into the image. For example, the groups = "anaconda-tools" group package. If you do not know the modules and groups, leave them empty.
- The customizations.user creates a username and password to log in to the VM.
- The customizations.ignition.firstboot contains the URL where the Ignition configuration file is being served.

Note: By default, the open-vm-tools package is not included in the edge-vsphere image. If you need this package, you must include it in the blueprint customization.
Import the blueprint to the image builder server:
# composer-cli blueprints push <blueprint-name>.toml
List the existing blueprints to check whether the created blueprint is successfully pushed and exists:
# composer-cli blueprints show <blueprint-name>
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve <blueprint-name>
Next steps
-
Use the blueprint you created to build your
.vmdk
image.
9.2. Creating a VMDK image for RHEL for Edge
To create a RHEL for Edge .vmdk
image, use the 'edge-vsphere' image type in the RHEL image builder command-line interface.
Prerequisites
-
You created a blueprint for the
.vmdk
image. -
You served an OSTree repository of the commit to embed it in the image. For example,
http://10.0.2.2:8080/repo
. For more details, see Setting up a web server to install RHEL for Edge image.
Procedure
Start the compose of a
.vmdk
image:

# composer-cli compose start-ostree <blueprint-name> edge-vsphere --url <url>

The --url <url> option is the URL of your OSTree repository, for example http://10.88.0.1:8080/repo.

A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also, keep the UUID number handy for further tasks.
Check the image compose status:
# composer-cli compose status
The output displays the status in the following format:
$ <UUID> RUNNING date <blueprint-name> <blueprint-version> edge-vsphere
After the compose process finishes, download the resulting image file:
# composer-cli compose image <UUID>
Next steps
-
Upload the
.vmdk
image to vSphere.
9.3. Uploading VMDK images and creating a RHEL virtual machine in vSphere
Upload the .vmdk
image to VMware vSphere by using the govc CLI tool and boot the image in a VM.
Prerequisites
-
You created an
.vmdk
image by using RHEL image builder and downloaded it to your host system. -
You installed the govc CLI tool.
- You configured the govc client. You must set the following values in the environment:
GOVC_URL
GOVC_DATACENTER
GOVC_FOLDER
GOVC_DATASTORE
GOVC_RESOURCE_POOL
GOVC_NETWORK
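For example, a minimal sketch of exporting these variables for a hypothetical vCenter environment; every value below is a placeholder that you must replace with your own settings:

$ export GOVC_URL="vcenter.example.com"
$ export GOVC_DATACENTER="Datacenter"
$ export GOVC_FOLDER="edge-images"
$ export GOVC_DATASTORE="datastore1"
$ export GOVC_RESOURCE_POOL="*/Resources"
$ export GOVC_NETWORK="VM Network"

Depending on how you authenticate to vCenter, govc might also require credentials, for example through the GOVC_USERNAME and GOVC_PASSWORD variables.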
Procedure
-
Navigate to the directory where you downloaded your
.vmdk
image. Launch the image on vSphere by executing the following steps:
Import the
.vmdk
image into vSphere:

$ govc import.vmdk ./composer-api.vmdk foldername
Create the VM in vSphere without powering it on:
govc vm.create \ -net="VM Network" -net.adapter=vmxnet3 \ -disk.controller=pvscsi -on=false \ -m=4096 -c=2 -g=rhel9_64Guest \ -firmware=efi vm_name govc vm.disk.attach \ -disk=”foldername/composer-api.vmdk” govc vm.power -on\ -vm vm_name -link=false \ vm_name
Power-on the VM:
govc vm.power -on vmname
Retrieve the VM IP address:
HOST=$(govc vm.ip vmname)
Use SSH to log in to the VM, using the username and password you specified in your blueprint:
$ ssh admin@HOST
Chapter 10. Creating RHEL for Edge AMI images
You can create a RHEL for Edge edge-ami
customized image by using RHEL image builder. The RHEL for Edge edge-ami
has Ignition support to inject the user configuration into the images at an early stage of the boot process. Then, you can upload the image to AWS cloud and launch an EC2 instance in AWS. You can use the AMI image type on AMD or Intel 64-bit architectures.
10.1. Creating a blueprint for Edge AMI images
Create a blueprint for the edge-ami
image and customize it with the customizations.ignition
section. With that, you can create your image and, when booting the image, the operating system injects the user configuration.
Prerequisites
You have created an Ignition configuration file. For example:
{ "ignition":{ "version":"3.3.0" }, "passwd":{ "users":[ { "groups":[ "wheel" ], "name":"core", "passwordHash":"$6$jfuNnO9t1Bv7N" } ] } }
For more details, see Creating an Ignition configuration file.
Procedure
Create a blueprint in the Tom’s Obvious, Minimal Language (TOML) format, with the following content:
name = "ami-edge-image" description = "Blueprint for Edge AMI image" version = "0.0.1" packages = ["cloud-init"] modules = [] groups = [] distro = "" [[customizations.user]] name = "admin" password = "admin" groups = ["wheel"] [customizations.ignition.firstboot] url = http://<IP_address>:8080/config.ig
Where:
- The name is the name and description is the description for your blueprint.
- The version is the version number according to the Semantic Versioning scheme.
- The modules and packages describe the package name and matching version glob to be installed into the image. For example, the package name = "cloud-init". Notice that currently there are no differences between packages and modules.
- The groups are package groups to be installed into the image. For example, groups = "wheel". If you do not know the modules and groups, leave them empty.
- The customizations.user creates a username and password to log in to the VM.
- The customizations.ignition.firstboot contains the URL where the Ignition configuration file is being served.

Note: By default, the cloud-init package is not included in the edge-ami image. If you need this package, you must include it in the blueprint customization.
Import the blueprint to the image builder server:
# composer-cli blueprints push <blueprint-name>.toml
List the existing blueprints to check whether the created blueprint is successfully pushed and exists:
# composer-cli blueprints show <blueprint-name>
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve <blueprint-name>
Next steps
-
Use the blueprint you created to build your
edge-ami
image.
10.2. Creating a RHEL for Edge AMI image
Create a RHEL for Edge edge-ami
image in the RHEL image builder command-line interface.
Prerequisites
-
You created a blueprint for the
edge-ami
image. -
You served an OSTree repository of the commit to embed it in the image. For example,
http://10.0.2.2:8080/repo
. For more details, see Setting up a web server to install RHEL for Edge image.
Procedure
Start the compose of an edge-ami image:

# composer-cli compose start-ostree <blueprint-name> edge-ami --ref rhel/9/x86_64/edge --url <ostree_repo_url>

The --url <ostree_repo_url> option is the URL of your OSTree repository, for example http://10.88.0.1:8080/<blueprint-name>/repo.

A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also, keep the UUID number handy for further tasks.
Check the image compose status:
# composer-cli compose status
The output displays the status in the following format:
$ <UUID> RUNNING date <blueprint-name> <blueprint-version> edge-ami
After the compose process finishes, download the resulting image file:
# composer-cli compose image <UUID>
Next steps
-
Upload the
edge-ami
image to AWS
10.3. Uploading a RHEL Edge AMI image to AWS
Upload the edge-ami
image to Amazon AWS Cloud service provider by using the CLI.
Prerequisites
-
You have an
Access Key ID
configured in the AWS IAM account manager. - You have a writable S3 bucket prepared.
- You have created the required roles for your AWS bucket.
-
You have the
aws-cli
tool installed.
Procedure
Configure the
aws-cli
tool:$ aws configure
Configure your profile. Run the command and enter your Access key ID credential, Secret access key, Default region name, and default output name:
$ aws configure --profile <profile_name>
List the existing buckets:
$ aws s3 ls
Upload your image to S3:
$ aws s3 cp <path_to_image/image> s3://<your_bucket_name>
List the image in the S3 bucket:
$ aws s3 ls s3://<your_bucket_name>
Create a
container-simple.json
file. Replace the "URL" content with the S3 bucket. For example:s3://rhel-edge-ami-us-west-2/2ba3c125-cc58-4cc0-861a-4cc78e892df6-image.raw
.{ "Description": "RHEL for Edge image", "Format": "edge-ami", "Url": "s3://rhel-edge-ami-us-west-2/UUID-image.raw" }
Import the
edge-ami image from the S3 bucket as an EC2 snapshot.

Note: The EC2 snapshot must be created in the same region in which you created the S3 bucket.
$ aws ec2 import-snapshot --description "RHEL edge" \ --disk-container file://container-simple.json --region us-west-2
The command returns a .json output that describes the import task.
- Take note of the "ImportTaskId" value from this output and use it to check the import status. In this example, the "ImportTaskId" is import-snap-0f3055c4b7a454c85.

Check the import status of the snapshot by using the "ImportTaskId" value from the output .json
file from the previous step:

$ aws ec2 describe-import-snapshot-tasks \
    --import-task-ids import-snap-0f3055c4b7a454c85

{
    "ImportSnapshotTasks": [
        {
            "Description": "RHEL edge",
            "ImportTaskId": "import-snap-0f3055c4b7a454c85",
            "SnapshotTaskDetail": {
                "Description": "RHEL edge",
                "DiskImageSize": 10737418240.0,
                "Format": "RAW",
                "SnapshotId": "snap-001b267e752039eab",
                "Status": "completed",
                "Url": "s3://rhel-edge-ami-us-west-2/2ba3c125-cc58-4cc0-861a-4cc78e892df6-image.raw",
                "UserBucket": {
                    "S3Bucket": "rhel-edge-ami-us-west-2",
                    "S3Key": "2ba3c125-cc58-4cc0-861a-4cc78e892df6-image.raw"
                }
            },
            "Tags": []
        }
    ]
}
Run this command until the "Status" is marked as "completed". After that, you can access EC2 to create the AMI image from the snapshot, and launch it.
Verification
To confirm that the image upload was successful:
- Access EC2 in the menu and select the correct region in the AWS console. The image must have the available status, to indicate that it was successfully uploaded.
On the dashboard, select your image and click Launch.
When launching the new instance, you must select UEFI as the boot mode, and choose at least 4GB of RAM for the EC2 image.
-
You can log in to the edge-ami system on AWS by using the username and password you created with the Ignition configuration.
Additional resources
Chapter 11. Automatically provisioning and onboarding RHEL for Edge devices with FDO
You can build a RHEL for Edge Simplified Installer image, and provision it to a RHEL for Edge image. The FIDO Device Onboarding (FDO) process automatically provisions and onboards your Edge devices, and exchanges data with other devices and systems connected on the networks.
Red Hat provides the FDO
process as a Technology Preview feature, and you should run it only on secure networks. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
11.1. The FIDO Device Onboarding (FDO) process
The FIDO Device Onboarding (FDO) is the process that:
- Provisions and onboards a device.
- Automatically configures credentials for this device.
- Enables this device to securely connect and interact on the network.
With FIDO Device Onboarding (FDO), you can perform a secure device onboarding by adding new devices into your IoT architecture. This includes the specified device configuration that needs to be trusted and integrated with the rest of the running systems. The FDO process is an automatic onboarding mechanism that is triggered by the installation of a new device.
The FDO protocol performs the following tasks:
- Solves the trust and chain of ownership along with the automation needed to securely onboard a device at scale.
- Performs device initialization at the manufacturing stage and late device binding for its actual use. This means that actual binding of the device to a management system happens on the first boot of the device without requiring manual configuration on the device.
- Supports automated secure devices onboarding, that is, zero touch installation and onboarding that does not need any specialized person at the edge location. After the device is onboarded, the management platform can connect to it and apply patches, updates, and rollbacks.
With FDO, you can benefit from the following:
- FDO is a secure and simple way to enroll a device to a management platform. Instead of embedding a Kickstart configuration to the image, FDO applies the device credentials during the device first boot directly to the ISO image.
- FDO solves the issue of late binding to a device, enabling any sensitive data to be shared over a secure FDO channel.
- FDO cryptographically identifies the system identity and ownership before enrolling and passing the configuration and other secrets to the system. That enables non-technical users to power-on the system.
To build a RHEL for Edge Simplified Installer image and automatically onboard it, provide an existing OSTree commit. The resulting simplified image contains a raw image that has the OSTree commit deployed. After you boot the Simplified installer ISO image, it provisions a RHEL for Edge system that you can use on a hard disk or as a boot image in a virtual machine.
The RHEL for Edge Simplified Installer image is optimized for unattended installation to a device and supports both network-based deployment and non-network-based deployments. However, for network-based deployment, it supports only UEFI HTTP boot.
The FDO protocol is based on the following servers:
- Manufacturing server
- Generates the device credentials.
- Creates an Ownership voucher that is used to set the ownership of the device, later in the process.
- Binds the device to a specific management platform.
- Owner management system
- Receives the Ownership voucher from the Manufacturing server and becomes the owner of the associated device.
- Later in the process, it creates a secure channel between the device and the Owner onboarding server after the device authentication.
- Uses the secure channel to send the required information, such as files and scripts for the onboarding automation to the device.
- Service-info API server
- Based on the Service-info API server's configuration and the modules available on the client, it performs the final steps of onboarding on target client devices, such as copying SSH keys and files, executing commands, creating users, encrypting disks, and so on.
- Rendezvous server
- Gets the Ownership voucher from the Owner management system and makes a mapping of the device UUID to the Owner server IP. Then, the Rendezvous server matches the device UUID with a target platform and informs the device about which Owner onboarding server endpoint this device must use.
- During the first boot, the Rendezvous server will be the contact point for the device and it will direct the device to the owner, so that the device and the owner can establish a secure channel.
- Device client
This is installed on the device. The Device client performs the following actions:
- Starts the queries to the multiple servers where the onboarding automation will be executed.
- Uses TCP/IP protocols to communicate with the servers.
The following diagram represents the FIDO device onboarding workflow:
Figure 11.1. Deploying RHEL for Edge in non-network environment
At the Device Initialization, the device contacts the Manufacturing server to get the FDO credentials, a set of certificates and keys to be installed on the operating system with the Rendezvous server endpoint (URL). It also gets the Ownership Voucher, that is maintained separately in case you need to change the owner assignment.
- The Device contacts the Manufacturing server
- The Manufacturing server generates an Ownership Voucher and the Device Credentials for the Device.
- The Ownership Voucher is transferred to the Owner onboarding server.
At the On-site onboarding, the Device gets the Rendezvous server endpoint (URL) from its device credentials and contacts Rendezvous server endpoint to start the onboarding process, which will redirect it to the Owner management system, that is formed by the Owner onboarding server and the Service Info API server.
- The Owner onboarding server transfers the Ownership Voucher to the Rendezvous server, which makes a mapping of the Ownership Voucher to the Owner.
- The device client reads device credentials.
- The device client connects to the network.
- After connecting to the network, the Device client contacts the Rendezvous server.
- The Rendezvous server sends the owner endpoint URL to the Device Client, and registers the device.
- The Device client connects to the Owner onboarding server shared by the Rendezvous server.
- The Device proves that it is the correct device by signing a statement with a device key.
- The Owner onboarding server proves itself correct by signing a statement with the last key of the Owner Voucher.
- The Owner onboarding server transfers the information of the Device to the Service Info API server.
- The Service info API server sends the configuration for the Device.
- The Device is onboarded.
11.2. Automatically provisioning and onboarding RHEL for Edge devices
To build a RHEL for Edge Simplified Installer image and automatically onboard it, provide an existing OSTree commit. The resulting simplified image contains a raw image that has the OSTree commit deployed. After you boot the Simplified installer ISO image, it provisions a RHEL for Edge system that you can use on a hard disk or as a boot image in a virtual machine.
The RHEL for Edge Simplified Installer image is optimized for unattended installation to a device and supports both network-based deployment and non-network-based deployments. However, for network-based deployment, it supports only UEFI HTTP boot.
Automatically provisioning and onboarding a RHEL for Edge device involves the following high-level steps:
- Install and register a RHEL system
- Install RHEL image builder
By using RHEL image builder, create a blueprint with customizations for a rhel-edge-container image type:

name = "rhel-edge-container"
description = "Minimal RHEL for Edge Container blueprint"
version = "0.0.1"
- Import the RHEL for Edge Container blueprint in RHEL image builder
- Create a RHEL for Edge Container image
- Use the RHEL for Edge Container image to serve the OSTree commit, which will be later used when building the RHEL for Edge Simplified Installer image type
Create a blueprint for an edge-simplified-installer image type with customizations for the storage device path and FDO:

name = "rhel-edge-simplified-installer-with-fdo"
description = "Minimal RHEL for Edge Simplified Installer with FDO blueprint"
version = "0.0.1"
packages = []
modules = []
groups = []
distro = ""

[customizations]
installation_device = "/dev/vda"

[customizations.fdo]
manufacturing_server_url = "http://10.0.0.2:8080"
diun_pub_key_insecure = "true"
- Build a simplified installer RHEL for Edge image
- Download the RHEL for Edge simplified installer image
-
At this point, the FDO server infrastructure should be up and running, and the specific onboarding details handled by the
service-info API
server, which is part of the owner's infrastructure, are configured
- The network configuration enables the device to reach out to the manufacturing server to perform the initial device credential exchange.
- After the system reaches the endpoint, the device credentials are created for the device.
- The device uses the device credentials to reach the Rendezvous server, where it checks the cryptographic credentials based on the vouchers that the Rendezvous server has, and then the Rendezvous server redirects the device to the Owner server.
- The device contacts the Owner server. They establish mutual trust, and the final steps of onboarding happen based on the configuration of the Service-info API server. For example, it installs the SSH keys on the device, transfers the files, creates the users, runs the commands, encrypts the file system, and so on.
Additional resources
11.3. Generating key and certificates
To run the FIDO Device Onboarding (FDO) infrastructure, you need to generate keys and certificates. FDO generates these keys and certificates to configure the manufacturing server. FDO automatically generates the certificates and .yaml
configuration files when you install the services, so re-creating them is optional. After you install and start the services, they run with the default settings.
Red Hat provides the fdo-admin-tool
tool as a Technology Preview feature, and you should run it only on secure networks. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
Prerequisites
-
You installed the
fdo-admin-cli
RPM package
Procedure
Generate the keys and certificates in the
/etc/fdo
directory:$ for i in "diun" "manufacturer" "device-ca" "owner"; do fdo-admin-tool generate-key-and-cert $i; done $ ls keys device_ca_cert.pem device_ca_key.der diun_cert.pem diun_key.der manufacturer_cert.pem manufacturer_key.der owner_cert.pem owner_key.der
Check the key and certificates that were created in the
/etc/fdo/keys
directory:$ tree keys
You can see the following output:
keys
├── device_ca_cert.pem
├── device_ca_key.der
├── diun_cert.pem
├── diun_key.der
├── manufacturer_cert.pem
├── manufacturer_key.der
├── owner_cert.pem
└── owner_key.der
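Optionally, you can inspect any of the generated certificates with openssl to confirm the subject and validity dates, for example:

$ openssl x509 -in /etc/fdo/keys/manufacturer_cert.pem -noout -subject -dates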
Additional resources
-
See the
fdo-admin-tool generate-key-and-cert --help
manual page
11.4. Installing and running the manufacturing server
The fdo-manufacturing-server
RPM package enables you to run the Manufacturing Server component of the FDO protocol. It also stores other components, such as the owner vouchers, the manufacturer keys, and information about the manufacturing sessions. During the device installation, the Manufacturing server generates the device credentials for the specific device, including its GUID, rendezvous information and other metadata. Later on in the process, the device uses this rendezvous information to contact the Rendezvous server.
Red Hat provides the fdo-manufacturing-server
tool as a Technology Preview feature, and you should run it only on secure networks. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
To install the manufacturing server
RPM package, complete the following steps:
Procedure
Install the
fdo-admin-cli
package:# dnf install -y fdo-admin-cli
Check if the
fdo-manufacturing-server
RPM package is installed:

$ rpm -qa | grep fdo-manufacturing-server
Check if the files were correctly installed:
$ ls /usr/share/doc/fdo
You can see the following output:
Output: manufacturing-server.yml owner-onboarding-server.yml rendezvous-info.yml rendezvous-server.yml serviceinfo-api-server.yml
Optional: Check the content of each file, for example:
$ cat /usr/share/doc/fdo/manufacturing-server.yml
Configure the Manufacturing server. You must provide the following information:
- The Manufacturing server URL
- The IP address or DNS name for the Rendezvous server
The path to the keys and certificates you generated. See Generating key and certificates.
You can find an example Manufacturing server configuration file in /usr/share/doc/fdo/manufacturing-server.yml. The following is a manufacturing-server.yml example that is created and saved in the /etc/fdo directory. It contains paths to the directories, certificates, and keys that you created, the Rendezvous server IP address, and the default port.

session_store_driver:
  Directory:
    path: /etc/fdo/stores/manufacturing_sessions/
ownership_voucher_store_driver:
  Directory:
    path: /etc/fdo/stores/owner_vouchers
public_key_store_driver:
  Directory:
    path: /etc/fdo/stores/manufacturer_keys
bind: "0.0.0.0:8080"
protocols:
  plain_di: false
  diun:
    mfg_string_type: SerialNumber
    key_type: SECP384R1
    allowed_key_storage_types:
      - Tpm
      - FileSystem
    key_path: /etc/fdo/keys/diun_key.der
    cert_path: /etc/fdo/keys/diun_cert.pem
rendezvous_info:
  - deviceport: 8082
    ip_address: 192.168.122.99
    ownerport: 8082
    protocol: http
manufacturing:
  manufacturer_cert_path: /etc/fdo/keys/manufacturer_cert.pem
  device_cert_ca_private_key: /etc/fdo/keys/device_ca_key.der
  device_cert_ca_chain: /etc/fdo/keys/device_ca_cert.pem
  owner_cert_path: /etc/fdo/keys/owner_cert.pem
  manufacturer_private_key: /etc/fdo/keys/manufacturer_key.der
Start the Manufacturing server.
Check whether the systemd unit file is on the server:

# systemctl list-unit-files | grep fdo | grep manufacturing
fdo-manufacturing-server.service disabled disabled
Enable and start the manufacturing server.
# systemctl enable --now fdo-manufacturing-server.service
Open the default ports in your firewall:
# firewall-cmd --add-port=8080/tcp --permanent
# systemctl restart firewalld
Ensure that the service is listening on the port 8080:
# ss -ltn
- Install RHEL for Edge onto your system using the simplified installer. See Building simplified installer images to provision a RHEL for Edge image.
Additional resources
11.5. Installing, configuring, and running the Rendezvous server
Install the fdo-rendezvous-server
RPM package to enable the systems to receive the voucher generated by the Manufacturing server during the first device boot. The Rendezvous server then matches the device UUID with the target platform or cloud and informs the device about which Owner server endpoint the device must use.
Prerequisites
-
You created a
manufacturer_cert.pem
certificate. See Generating key and certificates. -
You copied the
manufacturer_cert.pem
certificate to the/etc/fdo/keys
directory in the Rendezvous server.
Procedure
Install the
fdo-rendezvous-server
RPM packages:# dnf install -y fdo-rendezvous-server
Create the
rendezvous-server.yml
configuration file, including the path to the manufacturer certificate. You can find an example in /usr/share/doc/fdo/rendezvous-server.yml. The following example shows a configuration file that is saved in /etc/fdo/rendezvous-server.yml.

storage_driver:
  Directory:
    path: /etc/fdo/stores/rendezvous_registered
session_store_driver:
  Directory:
    path: /etc/fdo/stores/rendezvous_sessions
trusted_manufacturer_keys_path: /etc/fdo/keys/manufacturer_cert.pem
max_wait_seconds: ~
bind: "0.0.0.0:8082"
Check the Rendezvous server service status:
# systemctl list-unit-files | grep fdo | grep rende
fdo-rendezvous-server.service disabled disabled
If the service is stopped and disabled, enable and start it:
# systemctl enable --now fdo-rendezvous-server.service
Check that the server is listening on the default configured port 8082:
# ss -ltn
Open the port if you have a firewall configured on this server:
# firewall-cmd --add-port=8082/tcp --permanent
# systemctl restart firewalld
11.6. Installing, configuring, and running the Owner server
Install the fdo-owner-cli
and fdo-owner-onboarding-server
RPM packages to set up the Owner server. The Owner server receives the Ownership Voucher from the Manufacturing server and becomes the owner of the associated device. Later in the process, together with the Service Info API server, it establishes a secure channel with the device and performs the final onboarding steps.
Prerequisites
- The device where the server will be deployed has a Trusted Platform Module (TPM) device to encrypt the disk. If not, you will get an error when booting the RHEL for Edge device.
-
You created the
device_ca_cert.pem
,owner_key.der
, andowner_cert.pem
with keys and certificates and copied them into the/etc/fdo/keys
directory.
Procedure
Install the required RPMs in this server:
# dnf install -y fdo-owner-cli fdo-owner-onboarding-server
Prepare the
owner-onboarding-server.yml
configuration file and save it to the/etc/fdo/
directory. Include the path to the certificates you already copied and information about where to publish the Owner server service in this file. The following is an example available in
/usr/share/doc/fdo/owner-onboarding-server.yml
. You can find references to the Service Info API, such as the URL or the authentication token.

---
ownership_voucher_store_driver:
  Directory:
    path: /etc/fdo/stores/owner_vouchers
session_store_driver:
  Directory:
    path: /etc/fdo/stores/owner_onboarding_sessions
trusted_device_keys_path: /etc/fdo/keys/device_ca_cert.pem
owner_private_key_path: /etc/fdo/keys/owner_key.der
owner_public_key_path: /etc/fdo/keys/owner_cert.pem
bind: "0.0.0.0:8081"
service_info_api_url: "http://localhost:8083/device_info"
service_info_api_authentication:
  BearerToken:
    token: Kpt5P/5flBkaiNSvDYS3cEdBQXJn2Zv9n1D50431/lo=
owner_addresses:
  - transport: http
    addresses:
      - ip_address: 192.168.122.149
Create and configure the Service Info API.
Add the automated information for onboarding, such as user creation, files to be copied or created, commands to be executed, disk to be encrypted, and so on. Use the Service Info API configuration file example in
/usr/share/doc/fdo/serviceinfo-api-server.yml
as a template to create the configuration file under/etc/fdo/
.

---
service_info:
  initial_user:
    username: admin
    sshkeys:
      - "ssh-rsa AAAA...."
  files:
    - path: /root/resolv.conf
      source_path: /etc/resolv.conf
  commands:
    - command: touch
      args:
        - /root/test
      return_stdout: true
      return_stderr: true
  diskencryption_clevis:
    - disk_label: /dev/vda4
      binding:
        pin: tpm2
        config: "{}"
      reencrypt: true
  additional_serviceinfo: ~
bind: "0.0.0.0:8083"
device_specific_store_driver:
  Directory:
    path: /etc/fdo/stores/serviceinfo_api_devices
service_info_auth_token: Kpt5P/5flBkaiNSvDYS3cEdBQXJn2Zv9n1D50431/lo=
admin_auth_token: zJNoErq7aa0RusJ1w0tkTjdITdMCWYkndzVv7F0V42Q=
Check the status of the systemd units:
# systemctl list-unit-files | grep fdo
fdo-owner-onboarding-server.service disabled disabled
fdo-serviceinfo-api-server.service disabled disabled
If the service is stopped and disabled, enable and start it:
# systemctl enable --now fdo-owner-onboarding-server.service
# systemctl enable --now fdo-serviceinfo-api-server.service
Note: You must restart the
systemd
services every time you change the configuration files.
Check that the servers are listening on the default configured ports 8081 and 8083:
# ss -ltn
Open the port if you have a firewall configured on this server:
# firewall-cmd --add-port=8081/tcp --permanent
# firewall-cmd --add-port=8083/tcp --permanent
# systemctl restart firewalld
11.7. Automatically onboarding a RHEL for Edge device by using FDO authentication
To prepare your device to automatically onboard a RHEL for Edge device and provision it as part of the installation process, complete the following steps:
Prerequisites
-
You built an OSTree commit for RHEL for Edge and used that to generate an
edge-simplified-installer
artifact. - Your device is assembled.
-
You installed the
fdo-manufacturing-server
RPM package. See Installing the manufacturing server package.
Procedure
- Start the installation process by booting the RHEL for Edge simplified installer image on your device. You can install it from a CD-ROM or from a USB flash drive, for example.
Verify through the terminal that the device has reached the manufacturing service to perform the initial device credential exchange and has produced an ownership voucher.
You can find the ownership voucher at the storage location configured by the ownership_voucher_store_driver: parameter in the manufacturing-server.yml file. The directory should contain an ownership voucher file with a name in the GUID format, which indicates that the correct device credentials were added to the device.

The device uses the device credentials to authenticate against the onboarding server, which then passes the configuration to the device. After the device receives the configuration from the onboarding server, it receives an SSH key and installs the operating system on the device. Finally, the system automatically reboots and encrypts the disk with a strong key stored in the TPM.
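For example, assuming the default store path from the manufacturing-server.yml example shown earlier, you can list the generated voucher on the Manufacturing server; the GUID-style file name below is hypothetical:

# ls /etc/fdo/stores/owner_vouchers
18907279-a41d-049a-ae3c-4da4ce61c14b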
Verification
After the device automatically reboots, you can log in to the device with the credentials that you created as part of the FDO process.
- Log in to the device by providing the username and password you created in the Service Info API.
Additional resources
Chapter 12. Using FDO to onboard RHEL for Edge devices with a database backend
You can use the FDO servers - manufacturer-server
, onboarding-server
, and rendezvous
- to support storing and querying Owner Vouchers from an SQL backend such as the SQLite or the PostgreSQL databases, instead of a file. This way, you can select an SQL datastore in the FDO server options, along with credentials and other parameters, to store the Owner Vouchers in the SQL database and thus onboard a RHEL for Edge device. The SQL files are packaged in the RPMs.
The SQL backend does not support all FDO features at this point.
12.1. Onboarding devices with an FDO database
Use an SQL database to onboard your Edge device. The following example uses the diesel
tool, but you can also use the SQLite or the PostgreSQL databases.
You can use different database storages in some servers with filesystem storage in other servers, for example, using the filesystem storage onboarding for the Manufacturing server and Postgres for Rendezvous and Owner servers.
Prerequisites
- You used FDO to generate the key and certificates to configure the manufacturing server. See Generating key and certificates.
- You installed and configured the manufacturing server. See Installing and running the manufacturing server
- You installed and configured the rendezvous server. See Installing, configuring, and running the Rendezvous server
- You installed and configured the owner server. See Installing, configuring, and running the Owner server
-
You have your server configuration files in
/etc/fdo
-
You built an OSTree commit for RHEL for Edge and used that to generate an
edge-simplified-installer
artifact - Your device is assembled
-
You installed the
diesel
tool or a SQL database to your host. - You have configured your database system and have permissions to create tables.
Procedure
Install the following packages:
$ dnf install -y sqlite sqlite-devel libpq libpq-devel
-
Access the
/usr/share/doc/fdo/migrations/*
directory. It contains.sql
files that you need to create the database for each of the server and type combinations, after you have installed the manufacturing, rendezvous, and owner server’s RPMs. Initialize the database content. You can use either an SQL database, such as SQLite or PostgreSQL, or the
diesel
tool to run the SQL for you.
- If you are using an SQL database, configure the database server, for example by creating users and managing access. After you configure the database server, you can run the database.
-
If you do not want to run the .sql files in your database system, you can use the diesel tool to run the SQL for you. If you are using the diesel tool, after you have configured the database, use the diesel migration run command to create the database, as shown in the following sketch.
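The following is a minimal sketch of that step for the Owner Onboarding server with a PostgreSQL backend; the connection URL, database name, and credentials are assumptions for illustration only:

$ export DATABASE_URL=postgres://fdo_user:fdo_password@localhost/fdo_owner_onboarding
$ diesel migration run --migration-dir /usr/share/doc/fdo/migrations/migrations_owner_onboarding_server_postgres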
After you configure the DB system, you can use the .sql files installed at
/usr/share/doc/fdo/migrations/*
to create the database for each server type. You must use the .sql file that matches the server type and database type that you are initializing. For example, when initializing the Owner Onboarding server in a PostgreSQL database, you must use the files in the /usr/share/doc/fdo/migrations/migrations_owner_onboarding_server_postgres directory; the up.sql file in the migration folder creates the database, and the down.sql file destroys it.

After you create the database, change the configuration file of the specific server to make it use the database.
Each server has a storage configuration section.
- Manufacturer server: ownership_voucher_store_driver
- Owner server: ownership_voucher_store_driver
- Rendezvous server: storage_driver
For the
manufacturing-server.yml
file, open it in your editor and change the store database:

$ sudo editor manufacturing-server.yml
Change the
ownership_voucher_store_driver
configuration under the Directory section:

/home/rhel/fido-device-onboard-rs/aio-dir/stores/owner_vouchers
- Specify the following details:
- The database type that you are using: SQLite or PostgreSQL
The server type: for example, when using PostgreSQL, set the following configuration:
ownership_voucher_store_driver:
  Postgres: Manufacturer
- Repeat the steps to configure the Owner server and Rendezvous server.
Run the FDO onboarding service. See Automatically provisioning and onboarding RHEL for Edge devices with FDO for more details. Start the FDO onboarding process device initialization by running the manufacturing server:
$ sudo LOG_LEVEL=debug SQLITE_MANUFACTURER_DATABASE_URL=./manufacturer-db.sqlite /usr/libexec/fdo/fdo-manufacturing-server
The onboarding process happens in two phases:
- The device initialization phase that usually happens in the manufacturing site.
The device onboarding process that happens at the final destination of the device.
As a result, the Ownership Vouchers stored in the Manufacturing Server’s Database must be exported and transferred to the final Owner Database.
To continue the FDO onboarding protocol, export the Ownership Vouchers from the manufacturing database and copy them to the owner.
Create a folder
export
:$ mkdir export
Export the Owner Vouchers present in the Manufacturing Database by providing all the variables required for the command; a concrete example follows the parameter list below.
$ fdo-owner-tool export-manufacturer-vouchers DB_TYPE DB_URL PATH [GUID]
- DB_TYPE
- The type of the Manufacturing DB holding the Owner Vouchers: sqlite, postgres
- DB_URL
- The database connection URL, or path to the database file
- PATH
- The path to the directory where the Owner Vouchers will be exported
- GUID
- GUID of the owner voucher to be exported. If you do not provide a GUID, all the Owner Vouchers will be exported.
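For example, a minimal sketch that exports all Owner Vouchers from a hypothetical PostgreSQL manufacturing database into the export directory created above; the connection URL and credentials are assumptions:

$ fdo-owner-tool export-manufacturer-vouchers postgres "postgresql://fdo_user:fdo_password@localhost/fdo_manufacturing" ./export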
The OVs must be delivered to the final Owner's database. For that, use the fdo-owner-tool to import the Owner Vouchers into the Owner database by running the following command; a concrete example follows the parameter list below:

$ fdo-owner-tool import-ownership-vouchers DB_TYPE DB_URL SOURCE_PATH
- DB_TYPE
- Type of the Owner DB to import the OVs. Possible values: sqlite, postgres
- DB_URL
- DB connection URL or path to DB file
- SOURCE_PATH
- Path to the OV to be imported, or path to a directory where all the OVs to be imported are located
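For example, a minimal sketch that imports the previously exported vouchers into a hypothetical PostgreSQL owner database; the connection URL and credentials are assumptions:

$ fdo-owner-tool import-ownership-vouchers postgres "postgresql://fdo_user:fdo_password@localhost/fdo_owner_onboarding" ./export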
The command reads each OV specified in <SOURCE_PATH> one by one and tries to import it into the database. If the command finds errors, it returns the GUIDs of the faulty OVs together with information about what caused the error. The non-faulty OVs are imported into the database. The device receives the configuration from the onboarding server. Then, the device receives an SSH key and starts to install the operating system on the device. Finally, the operating system automatically reboots and encrypts the device with a strong key stored in the TPM.
Chapter 13. Deploying a RHEL for Edge image in a network-based environment
You can deploy a RHEL for Edge image using the RHEL installer graphical user interface or a Kickstart file. The overall process for deploying a RHEL for Edge image depends on whether your deployment environment is network-based or non-network-based.
To deploy the images on bare metal, use a Kickstart file.
Network-based deployments
Deploying a RHEL for Edge image in a network-based environment involves the following high-level steps:
- Extract the image file contents.
- Set up a web server
- Install the image
13.1. Extracting the RHEL for Edge image commit
After you download the commit, extract the .tar
file and note the ref name and the commit ID.
The downloaded commit file consists of a .tar
file with an OSTree
repository. The OSTree
repository has a commit and a compose.json
file.
The compose.json
file provides metadata about the commit, such as the "Ref" (reference ID) and the commit ID. The commit contains the RPM packages.
To extract the package contents, perform the following steps:
Prerequisites
- Create a Kickstart file or use an existing one.
Procedure
Extract the downloaded image
.tar
file:# tar xvf <UUID>-commit.tar
Go to the directory where you have extracted the
.tar
file.It has a
compose.json
file and an OSTree directory. Thecompose.json
file has the commit number and theOSTree
directory has the RPM packages.Open the
compose.json
file and note the commit ID number. You need this number handy when you proceed to set up a web server.If you have the
jq
JSON processor installed, you can also retrieve the commit ID by using thejq
tool:# jq '.["ostree-commit"]' < compose.json
List the RPM packages in the commit.
# rpm-ostree db list rhel/9/x86_64/edge --repo=repo
Use a Kickstart file to run the RHEL installer. Optionally, you can use any existing file or can create one by using the Kickstart Generator tool.
In the Kickstart file, ensure that you include the details about how to provision the file system, create a user, and how to fetch and deploy the RHEL for Edge image. The RHEL installer uses this information during the installation process.
The following is a Kickstart file example:
lang en_US.UTF-8
keyboard us
timezone Etc/UTC --isUtc
text
zerombr
clearpart --all --initlabel
autopart
reboot
user --name=core --group=wheel
sshkey --username=core "ssh-rsa AAAA3Nza…."
rootpw --lock
network --bootproto=dhcp
ostreesetup --nogpg --osname=rhel --remote=edge --url=http://mirror.example.com/repo/ --ref=rhel/9/x86_64/edge
The OStree-based installation uses the
ostreesetup
command to set up the configuration. It fetches the OSTree commit, by using the following flags:-
--nogpg
- Disable GNU Privacy Guard (GPG) key verification. -
--osname
- Management root for the operating system installation. -
--remote
- Management root for the operating system installation -
--url
- URL of the repository to install from. -
--ref
- Name of the branch from the repository that the installation uses. --url=http://mirror.example.com/repo/
- is the address of the host system where you extracted the edge commit and served it over nginx. You can use the address to reach the host system from the guest computer. For example, if you extract the commit image in the /var/www/html directory and serve the commit over nginx on a computer whose hostname is www.example.com, the value of the --url parameter is http://www.example.com/repo.

Note: Use the http protocol to start a service to serve the commit, because https is not enabled on the Apache HTTP Server.
Additional resources
13.2. Setting up a web server to install RHEL for Edge images
After you have extracted the RHEL for Edge image contents, set up a web server to provide the image commit details to the RHEL installer by using HTTP.
The following example provides the steps to set up a web server by using a container.
Prerequisites
- You have installed Podman on your system. See the Red Hat Knowledgebase solution How do I install Podman in RHEL.
Procedure
Create the
nginx
configuration file with the following instructions:

events {
}

http {
    server{
        listen 8080;
        root /usr/share/nginx/html;
    }
}

pid /run/nginx.pid;
daemon off;
Create a Dockerfile with the following instructions:
FROM registry.access.redhat.com/ubi8/ubi
RUN dnf -y install nginx && dnf clean all
COPY kickstart.ks /usr/share/nginx/html/
COPY repo /usr/share/nginx/html/
COPY nginx /etc/nginx.conf
EXPOSE 8080
CMD ["/usr/sbin/nginx", "-c", "/etc/nginx.conf"]
ARG commit
ADD ${commit} /usr/share/nginx/html/
Where,
kickstart.ks
is the name of the Kickstart file from the RHEL for Edge image. The Kickstart file includes directive information. To help you manage the images later, it is advisable to include the checks and settings for greenboot checks. For that, you can update the Kickstart file to include the following settings:

lang en_US.UTF-8
keyboard us
timezone Etc/UTC --isUtc
text
zerombr
clearpart --all --initlabel
autopart
reboot
user --name=core --group=wheel
sshkey --username=core "ssh-rsa AAAA3Nza…."
ostreesetup --nogpg --osname=rhel --remote=edge --url=http://mirror.example.com/repo/ --ref=rhel/9/x86_64/edge

%post
cat << 'EOF' > /etc/greenboot/check/required.d/check-dns.sh
#!/bin/bash

DNS_SERVER=$(grep nameserver /etc/resolv.conf | cut -f2 -d" ")
COUNT=0

# check DNS server is available
ping -c1 $DNS_SERVER
while [ $? != '0' ] && [ $COUNT -lt 10 ]; do
  ((COUNT++))
  echo "Checking for DNS: Attempt $COUNT ."
  sleep 10
  ping -c 1 $DNS_SERVER
done
EOF
%end
Any HTTP service can host the OSTree repository, and the example, which uses a container, is just an option for how to do this. The Dockerfile performs the following tasks:
- Uses the latest Universal Base Image (UBI)
- Installs the web server (nginx)
- Adds the Kickstart file to the server
- Adds the RHEL for Edge image commit to the server
Build a Docker container
# podman build -t name-of-container-image --build-arg commit=uuid-commit.tar .
Run the container
# podman run --rm -d -p port:8080 localhost/name-of-container-image
As a result, the server is set up and ready to start the RHEL Installer by using the
commit.tar
repository and the Kickstart file.
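As a quick check, you can fetch the Kickstart file from the running container; the port below is whichever host port you published with the podman run command and is only an example:

$ curl http://localhost:port/kickstart.ks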
13.3. Performing an attended installation to an edge device by using Kickstart
For an attended installation in a network-based environment, you can install the RHEL for Edge image to a device by using the RHEL Installer ISO, a Kickstart file, and a web server. The web server serves the RHEL for Edge Commit and the Kickstart file to boot the RHEL Installer ISO image.
Prerequisites
- You have made the RHEL for Edge Commit available by running a web server. See Setting up a web server to install RHEL for Edge images.
-
You have created a
.qcow2
disk image to be used as the target of the attended installation. See Creating a virtual disk image by using qemu-img.
Procedure
Create a Kickstart file. The following is an example in which the
ostreesetup
directive instructs the Anaconda Installer to fetch and deploy the commit. Additionally, it creates a user and a password.

lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart --type=plain --fstype=xfs --nohome
reboot
text
network --bootproto=dhcp
user --name=core --groups=wheel --password=edge
services --enabled=ostree-remount
ostreesetup --nogpg --url=http://edge_device_ip:port/repo/ --osname=rhel --remote=edge --ref=rhel/9/x86_64/edge
Run the RHEL Anaconda Installer by using the
libvirt virt-install
utility to create a virtual machine (VM) with a RHEL operating system. Use the.qcow2
disk image as the target disk in the attended installation:

virt-install \
    --name rhel-edge-test-1 \
    --memory 2048 \
    --vcpus 2 \
    --disk path=prepared_disk_image.qcow2,format=qcow2,size=8 \
    --os-variant rhel9 \
    --cdrom /home/username/Downloads/rhel-9-x86_64-boot.iso
On the installation screen:
Figure 13.1. Red Hat Enterprise Linux boot menu
Press the
key to add an additional kernel parameter:

inst.ks=http://web-server_device_ip:port/kickstart.ks
The kernel parameter specifies that you want to install RHEL by using the Kickstart file and not the RHEL image contained in the RHEL Installer.
After adding the kernel parameters, press Ctrl+X to boot the RHEL installation by using the Kickstart file.
The RHEL Installer starts, fetches the Kickstart file from the server (HTTP) endpoint and executes the commands, including the command to install the RHEL for Edge image commit from the HTTP endpoint. After the installation completes, the RHEL Installer prompts you for login details.
Verification
- On the Login screen, enter your user account credentials and click .
Verify whether the RHEL for Edge image is successfully installed.
$ rpm-ostree status
The command output provides the image commit ID and shows that the installation is successful.
Following is a sample output:
State: idle
Deployments:
* ostree://edge:rhel/9/x86_64/edge
                  Timestamp: 2020-09-18T20:06:54Z
                     Commit: 836e637095554e0b634a0a48ea05c75280519dd6576a392635e6fa7d4d5e96
Additional resources
- How to embed a Kickstart file into an ISO image (Red Hat Knowledgebase)
- Booting the installation
13.4. Performing an unattended installation to an edge device by using Kickstart
For an unattended installation in a network-based environment, you can install the RHEL for Edge image to an Edge device by using a Kickstart file and a web server. The web server serves the RHEL for Edge Commit and the Kickstart file, and both artifacts are used to start the RHEL Installer ISO image.
Prerequisites
-
You have the
qemu-img
utility installed on your host system. -
You have created a
.qcow2
disk image to install the commit you created. See Creating a system image with RHEL image builder in the CLI. - You have a running web server. See Creating a RHEL for Edge Container image for non-network-based deployments.
Procedure
- Run a RHEL for Edge Container image to start a web server. The server fetches the commit in the RHEL for Edge Container image and becomes available and running.
Run the RHEL Anaconda Installer, passing the customized
.qcow2
disk image, by usinglibvirt virt-install
:

virt-install \
    --name rhel-edge-test-1 \
    --memory 2048 \
    --vcpus 2 \
    --disk path=prepared_disk_image.qcow2,format=qcow2,size=8 \
    --os-variant rhel9 \
    --cdrom /home/username/Downloads/rhel-9-x86_64-boot.iso
On the installation screen:
Figure 13.2. Red Hat Enterprise Linux boot menu
Press the
key and add the Kickstart kernel argument:

inst.ks=http://web-server_device_ip:port/kickstart.ks
The kernel parameter specifies that you want to install RHEL by using the Kickstart file and not the RHEL image contained in the RHEL Installer.
After adding the kernel parameters, press Ctrl+X to boot the RHEL installation by using the Kickstart file.
The RHEL Installer starts, fetches the Kickstart file from the server (HTTP) endpoint, and executes the commands, including the command to install the RHEL for Edge image commit from the HTTP endpoint. After the installation completes, the RHEL Installer prompts you for login details.
Verification
- On the Login screen, enter your user account credentials and click .
Verify whether the RHEL for Edge image is successfully installed.
$ rpm-ostree status
The command output provides the image commit ID and shows that the installation is successful.
The following is a sample output:
State: idle
Deployments:
* ostree://edge:rhel/9/x86_64/edge
                  Timestamp: 2020-09-18T20:06:54Z
                     Commit: 836e637095554e0b634a0a48ea05c75280519dd6576a392635e6fa7d4d5e96
Additional resources
- How to embed a Kickstart file into an ISO image (Red Hat Knowledgebase)
- Booting the installation
Chapter 14. Deploying a RHEL for Edge image in a non-network-based environment
The RHEL for Edge Container (.tar
) in combination with the RHEL for Edge Installer (.iso
) image type results in an ISO image. The ISO image can be used in disconnected environments during the image deployment to a device. However, building the different artifacts might require network access.
Deploying a RHEL for Edge image in a non-network-based environment involves the following high-level steps:
- Download the RHEL for Edge Container. See Downloading a RHEL for Edge image for information about how to download the RHEL for Edge image.
- Load the RHEL for Edge Container image into Podman
- Run the RHEL for Edge Container image into Podman
- Load the RHEL for Edge Installer blueprint
- Build the RHEL for Edge Installer image
-
Prepare a
.qcow2
disk - Boot the Virtual Machine (VM)
- Install the image
14.1. Creating a RHEL for Edge Container image for non-network-based deployments
You can build a running container by loading the downloaded RHEL for Edge Container OSTree commit into Podman. For that, follow the steps:
Prerequisites
- You created and downloaded a RHEL for Edge Container OSTree commit.
-
You have installed
Podman
on your system. See the Red Hat Knowledgebase solution How do I install Podman in RHEL.
Procedure
- Navigate to the directory where you have downloaded the RHEL for Edge Container OSTree commit.
Load the RHEL for Edge Container OSTree commit into
Podman
.$ sudo podman load -i UUID-container.tar
The command output provides the image ID, for example:
@8e0d51f061ff1a51d157804362bc875b649b27f2ae1e66566a15e7e6530cec63
Tag the new RHEL for Edge Container image, using the image ID generated by the previous step.
$ sudo podman tag image-ID localhost/edge-container
The
podman tag
command assigns an additional name to the local image.Run the container named
edge-container
.$ sudo podman run -d --name=edge-container -p 8080:8080 localhost/edge-container
The
podman run -d --name=edge-container
command assigns a name to your container, based on the localhost/edge-container
image.List containers:
$ sudo podman ps -a
CONTAINER ID  IMAGE                          COMMAND    CREATED        STATUS            PORTS  NAMES
2988198c4c4b  …./localhost/edge-container    /bin/bash  3 seconds ago  Up 2 seconds ago         edge-container
As a result, Podman
runs a container that serves an OSTree repository with the RHEL for Edge Container commit.
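Optionally, you can check that the commit is reachable before you continue; a minimal sketch, assuming the container publishes the repository on port 8080 of the local host:
$ curl http://localhost:8080/repo/config
If the web server is serving the commit, the command returns the contents of the OSTree repository configuration file.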
14.2. Creating a RHEL for Edge Installer image for non-network-based deployments
After you have built a running container to serve a repository with the RHEL for Edge Container commit, create a RHEL for Edge Installer (.iso) image. The RHEL for Edge Installer (.iso) image pulls the commit served by the RHEL for Edge Container (.tar). After the RHEL for Edge Container commit is loaded in Podman, it serves the OSTree repository at a URL.
To create the RHEL for Edge Installer image in the CLI, follow the steps:
Prerequisites
- You created a blueprint for RHEL for Edge image.
- You created a RHEL for Edge Container image and deployed it using a web server.
Procedure
Begin to create the RHEL for Edge Installer image.
# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url URL-OSTree-repository blueprint-name image-type
Where,
- ref is the same value that you used to build the OSTree repository
- URL-OSTree-repository is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. See Creating a RHEL for Edge Container image for non-network-based deployments.
- blueprint-name is the RHEL for Edge Installer blueprint name.
image-type is
edge-installer
.A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also keep the UUID number handy for further tasks.
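For example, a complete command might look like the following, where edge-installer-blueprint is an example blueprint name and the URL points to the web server started in the previous section:
# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url http://10.0.2.2:8080/repo/ edge-installer-blueprint edge-installer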
Check the image compose status.
# composer-cli compose status
The command output displays the status in the following format:
<UUID> RUNNING date blueprint-name blueprint-version image-type
Note: The image creation process takes a few minutes to complete.
To interrupt the image creation process, run:
# composer-cli compose cancel <UUID>
To delete an existing image, run:
# composer-cli compose delete <UUID>
RHEL image builder pulls the commit that is being served by the running container during the image build.
After the image build is complete, you can download the resulting ISO image.
Download the image. See Downloading a RHEL for Edge image.
After the image is ready, you can use it for non-network deployments. See Installing the RHEL for Edge image for non-network-based deployments.
14.3. Installing the RHEL for Edge image for non-network-based deployments
To install the RHEL for Edge image, follow the steps:
Prerequisites
- You created a RHEL for Edge Installer ISO image.
- You stopped the running container.
- A disk image to install the commit you created.
-
You installed the
edk2-ovmf
package. -
You installed the
virt-viewer
package. You customized your blueprint with a user account. See Creating a blueprint for a RHEL for Edge image using RHEL image builder in RHEL web console.
Warning: If you do not define a user account customization in your blueprint, you will not be able to log in to the installed system.
Procedure
Create a
qcow
VM disk file to install the (.iso
) image. That is an image of a hard disk drive for the virtual machine (VM). For example:$ qemu-img create -f qcow2 diskfile.qcow2 20G
Use the
virt-install
command to boot the VM using the disk as a drive and the installer ISO image as a CD-ROM. For example:$ virt-install \
    --boot uefi \
    --name VM_NAME \
    --memory 2048 \
    --vcpus 2 \
    --disk path=diskfile.qcow2 \
    --cdrom /var/lib/libvirt/images/UUID-installer.iso \
    --os-variant rhel9.0
This command instructs
virt-install
to:- Boot the VM by using UEFI instead of BIOS.
- Mount the installation ISO.
- Use the hard disk drive image created in the first step.
The Anaconda installer starts, loads the Kickstart file from the ISO, and executes its commands, including the command to install the RHEL for Edge image commit. After the installation completes, the RHEL installer prompts you for login details.
Note: Anaconda is preconfigured to use the Container commit during the installation. However, you must set up system configurations, such as the disk partition and time zone, among others.
Connect to Anaconda GUI with
virt-viewer
to set up the system configuration:$ virt-viewer --connect qemu:///system --wait VM_NAME
- Reboot the system to finish the installation.
- On the login screen, specify your user account credentials and click .
Verification
Verify whether the RHEL for Edge image is successfully installed.
$ rpm-ostree status
The command output provides the image commit ID and shows that the installation is successful.
Chapter 15. Managing RHEL for Edge images
To manage the RHEL for Edge images, you can perform any of the following administrative tasks:
- Edit the RHEL for Edge image blueprint by using image builder in RHEL web console or in the command-line
- Build a commit update by using image builder command-line
- Update the RHEL for Edge images
-
Configure
rpm-ostree
remotes on nodes, to update node policy - Restore RHEL for Edge images manually or automatically by using greenboot
15.1. Editing a RHEL for Edge image blueprint by using image builder
You can edit the RHEL for Edge image blueprint to:
- Add additional components that you might require
- Modify the version of any existing component
- Remove any existing component
15.1.1. Adding a component to RHEL for Edge blueprint using image builder in RHEL web console
To add a component to a RHEL for Edge image blueprint, ensure that you have met the following prerequisites and then follow the procedure to edit the corresponding blueprint.
Prerequisites
- On a RHEL system, you have accessed the RHEL image builder dashboard.
- You have created a blueprint for RHEL for Edge image.
Procedure
On the RHEL image builder dashboard, click the blueprint that you want to edit.
To search for a specific blueprint, enter the blueprint name in the filter text box, and click
.On the upper right side of the blueprint, click
.The Edit blueprints wizard opens.
- On the Details page, update the blueprint name and click .
On the Packages page, follow the steps:
In the Available Packages, enter the package name that you want to add in the filter text box, and click .
A list with the component name appears.
- Click to add the component to the blueprint.
On the Review page, click .
The blueprint is now updated with the new package.
15.1.2. Removing a component from a blueprint using RHEL image builder in the web console
To remove one or more unwanted components from a blueprint that you created by using RHEL image builder, ensure that you have met the following prerequisites and then follow the procedure.
Prerequisites
- On a RHEL system, you have accessed the RHEL image builder dashboard.
- You have created a blueprint for RHEL for Edge image.
- You have added at least one component to the RHEL for Edge blueprint.
Procedure
On the RHEL image builder dashboard, click the blueprint that you want to edit.
To search for a specific blueprint, enter the blueprint name in the filter text box, and click
.On the upper right side of the blueprint, click
.The Edit blueprints wizard opens.
- On the Details page, update the blueprint name and click .
On the Packages page, follow the steps:
- From the Chosen packages, click to remove the chosen component. You can also click to remove all the packages at once.
On the Review page, click .
The blueprint is now updated.
15.1.3. Editing a RHEL for Edge image blueprint using command-line interface
You can change the specifications for your RHEL for Edge image blueprint by using RHEL image builder command-line. To do so, ensure that you have met the following prerequisites and then follow the procedure to edit the corresponding blueprint.
Prerequisites
- You have access to the RHEL image builder command-line.
- You have created a RHEL for Edge image blueprint.
Procedure
Save (export) the blueprint to a local text file:
# composer-cli blueprints save BLUEPRINT-NAME
Edit the
BLUEPRINT-NAME.toml
file with a text editor of your choice and make your changes.Before finishing with the edits, verify that the file is a valid blueprint:
Increase the version number.
Ensure that you use a Semantic Versioning scheme.
Note: If you do not change the version, the patch component of the version is increased automatically.
Check if the contents are valid TOML specifications. See the TOML documentation for more information.
Note: TOML documentation is a community product and is not supported by Red Hat. You can report any issues with the tool at https://github.com/toml-lang/toml/issues.
- Save the file and close the editor.
Push (import) the blueprint back into RHEL image builder server:
# composer-cli blueprints push BLUEPRINT-NAME.toml
Note: When pushing the blueprint back into the RHEL image builder server, provide the file name including the
.toml
extension.Verify that the contents uploaded to RHEL image builder match your edits:
# composer-cli blueprints show BLUEPRINT-NAME
Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve BLUEPRINT-NAME
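For reference, an edited blueprint might look like the following sketch; the blueprint name edge-example and the added strace package are examples only, and the version has been increased to reflect the change:
name = "edge-example"
description = "Example RHEL for Edge blueprint"
version = "0.0.2"
modules = []
groups = []

[[packages]]
name = "strace"
version = "*"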
15.2. Updating RHEL for Edge images
15.2.1. How RHEL for Edge image updates are deployed
With RHEL for Edge images, you can either deploy the updates manually or can automate the deployment process. The updates are applied in an atomic manner, where the state of each update is known, and the updates are staged and applied only upon reboot. Because no changes are seen until you reboot the device, you can schedule a reboot to ensure the highest possible uptime.
During the image update, only the updated operating system content is transferred over the network. This makes the deployment process more efficient compared to transferring the entire image. The operating system binaries and libraries in /usr
are read-only
, and the read and write
state is maintained in /var
and /etc
directories.
When moving to a new deployment, the /etc
and the /var
directories are copied to the new deployment with read and write
permissions. The /usr
directory is copied as a soft link to the new deployment directory, with read-only
permissions.
The following diagram illustrates the RHEL for Edge image update deployment process:
By default, the new system is booted using a procedure similar to a chroot
operation, that is, the system controls access to the file system while limiting exposure to the underlying host environment. The new /sysroot
directory mainly has the following parts:
-
Repository database at the
/sysroot/ostree/repo
directory. -
File system revisions at the
/sysroot/ostree/deploy/rhel/deploy
directory, which are created by each operation in the system update. -
The
/sysroot/ostree/boot
directory, which links to deployments on the previous point. Note that/ostree
is a soft link to/sysroot/ostree
. The files from the/sysroot/ostree/boot
directory are not duplicated. The same file is used if it is not changed during the deployment. The files are hard-links to another file stored in the/sysroot/ostree/repo/objects
directory.
The operating system selects the deployment in the following way:
-
The
dracut
tool parses theostree
kernel argument in theinitramfs root
file system and sets up the/usr
directory as aread-only
bind mount. -
Bind the deployment directory in
/sysroot
to/
directory. -
Re-mount the operating system already mounted
dirs
using theMS_MOVE
mount flag
If anything goes wrong, you can perform a deployment rollback by removing the old deployments with the rpm-ostree
cleanup command. Each client machine contains an OSTree
repository stored in /ostree/repo
, and a set of deployments stored in /ostree/deploy/$STATEROOT/$CHECKSUM
.
With the deployment updates in RHEL for Edge images, you benefit from better system consistency across multiple devices, easier reproducibility, and better isolation between the system states before and after a change.
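If you want to inspect or prune deployments manually on a client device, the following commands are a minimal sketch: rpm-ostree status lists the deployments, cleanup -p removes a staged (pending) deployment, and cleanup -r removes the rollback deployment to reclaim disk space.
$ rpm-ostree status
$ sudo rpm-ostree cleanup -p
$ sudo rpm-ostree cleanup -r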
15.2.2. Building a commit update
You can build a commit update after making a change in the blueprint, such as:
- Adding an additional package that your system requires
- Modifying the package version of any existing component
- Removing any existing package.
Prerequisites
- You have updated a system which is running RHEL image builder.
- You created a blueprint update.
- You have previously created an OSTree repository and served it through HTTP. See Setting up a web server to install RHEL for Edge images.
Procedure
Start the compose of the new commit image, with the following arguments:
--url
,--ref
,blueprint-name
,edge-commit
.# composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url http://localhost:8080/repo <blueprint-name> edge-commit
The command instructs the compose process to fetch the metadata from the OStree repo before starting the compose. The resulting new OSTree commit contains a reference of the original OSTree commit as a parent image.
After the compose process finishes, fetch the
.tar
file.# composer-cli compose image <UUID>
Extract the commit to a temporary directory, so that you can store the commit history in the OSTree repo.
$ tar -xf UUID.tar -C /var/tmp
Inspect the resulting OSTree repo in the extracted directory by using the ostree log command:
$ ostree --repo=/var/tmp/repo log rhel/9/x86_64/edge
commit d523ef801e8b1df69ddbf73ce810521b5c44e9127a379a4e3bba5889829546fa
Parent:  f47842de7e6859cee07d743d3c67949420874727883fa9dbbaeb5824ad949d91
ContentChecksum:  f0f6703696331b661fa22d97358db48ba5f8b62711d9db83a00a79b3ae0dfe16
Date:  2023-06-04 20:22:28 +0000
Version: 9
In the output example, there is a single OSTree commit in the repo that references a parent commit. The parent commit is the same checksum from the original OSTree commit that you previously made.
Merge the two commits by using the
ostree pull-local
command:
$ sudo ostree --repo=/var/srv/httpd/repo pull-local /var/tmp/repo
20 metadata, 22 content objects imported; 0 bytes content written
This command copies any new metadata and content from the location on the disk, for example,
/var/tmp
, to a destination OSTree repo in/var/srv/httpd
.
Verification
Inspect the target OSTree repo:
$ ostree --repo=/var/srv/httpd/repo log rhel/9/x86_64/edge
commit d523ef801e8b1df69ddbf73ce810521b5c44e9127a379a4e3bba5889829546fa
Parent:  f47842de7e6859cee07d743d3c67949420874727883fa9dbbaeb5824ad949d91
ContentChecksum:  f0f6703696331b661fa22d97358db48ba5f8b62711d9db83a00a79b3ae0dfe16
Date:  2023-06-04 20:22:28 +0000
Version: 9
(no subject)

commit f47842de7e6859cee07d743d3c67949420874727883fa9dbbaeb5824ad949d91
ContentChecksum:  9054de3fe5f1210e3e52b38955bea0510915f89971e3b1ba121e15559d5f3a63
Date:  2023-06-04 20:01:08 +0000
Version: 9
(no subject)
You can see that the target OSTree repo now contains two commits in the repository, in a logical order. After successful verification, you can update your RHEL for Edge system.
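If your clients rely on the repository summary file, regenerate it after pulling new commits so that the new content is discoverable; a minimal sketch, assuming the repo path used in this procedure:
$ sudo ostree --repo=/var/srv/httpd/repo summary -u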
15.2.3. Deploying RHEL for Edge image updates manually
After you have edited a RHEL for Edge blueprint, you can update the image commit. RHEL image builder generates a new commit for the updated RHEL for Edge image. Use this new commit to deploy the image with latest package versions or with additional packages.
To deploy RHEL for Edge images updates, ensure that you meet the prerequisites and then follow the procedure.
Prerequisites
- On a RHEL system, you have accessed the RHEL image builder dashboard.
- You have created a RHEL for Edge image blueprint.
- You have edited the RHEL for Edge image blueprint.
Procedure
- On the RHEL image builder dashboard click .
On the Create Image window, perform the following steps:
In the Image output page:
- From the Select a blueprint dropdown list, select the blueprint that you edited.
-
From the Image output type dropdown list, select
RHEL for Edge Commit (.tar)
. Click .
In the OSTree settings page, enter:
- In the Repository URL field, enter the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. See Setting up a web server to install RHEL for Edge image.
- In the Parent commit field, specify the parent commit ID that was previously generated. See Extracting RHEL for Edge image commit.
-
In the Ref field, you can either specify a name for your commit or leave it empty. By default, the web console specifies the
Ref
asrhel/9/arch_name/edge
. Click .
In the Review page, check the customizations and click . RHEL image builder starts to create a RHEL for Edge image for the updated blueprint. The image creation process takes a few minutes to complete.
To view the RHEL for Edge image creation progress, click the blueprint name from the breadcrumbs, and then click the Images tab.
The resulting image includes the latest packages that you have added, if any, and has the original
commit ID
as a parent.
Download the resulting RHEL for Edge Commit (
.tar
) image.-
From the Images tab, click to save the RHEL for Edge Commit (
.tar
) image to your system.
Extract the OSTree commit (
.tar
) file.# tar -xf UUID-commit.tar -C UPGRADE_FOLDER
Upgrade the OSTree repo:
# ostree --repo=/usr/share/nginx/html/repo pull-local UPGRADE_FOLDER
# ostree --repo=/usr/share/nginx/html/repo summary -u
On the RHEL system provisioned, from the original edge image, verify the current status.
$ rpm-ostree status
If there is no new commit ID, run the following command to verify if there is any upgrade available:
$ rpm-ostree upgrade --check
The command output provides the current active OSTree commit ID.
Update OSTree to make the new OSTree commit ID available.
$ rpm-ostree upgrade
OSTree verifies if there is an update on the repository. If yes, it fetches the update and requests you to reboot your system so that you can activate the deployment of this new commit update.
Check the current status again:
$ rpm-ostree status
You can now see that there are 2 commits available:
- The active parent commit.
- A new commit that is not active and contains 1 added difference.
To activate the new deployment and to make the new commit active, reboot your system.
# systemctl reboot
The Anaconda Installer reboots into the new deployment. On the login screen, you can see a new deployment available for you to boot.
-
If you want to boot into the newest deployment (commit), the
rpm-ostree
upgrade command automatically orders the boot entries so that the new deployment is first in the list. Optionally, you can use the arrow key on your keyboard to select the GRUB menu entry and press . - Provide your login user account credentials.
Verify the OSTree status:
$ rpm-ostree status
The command output provides the active commit ID.
To view the changed packages, if any, run a diff between the parent commit and the new commit:
$ rpm-ostree db diff parent_commit new_commit
The update shows that the package you have installed is available and ready for use.
15.2.4. Deploying RHEL for Edge image updates manually using the command-line
After you have edited a RHEL for Edge blueprint, you can update the image commit. RHEL image builder generates a new commit for the updated RHEL for Edge image. Use the new commit to deploy the image with latest package versions or with additional packages using the CLI.
To deploy RHEL for Edge image updates using the CLI, ensure that you meet the prerequisites, and then follow the procedure.
Prerequisites
- You created the RHEL for Edge image blueprint.
- You edited the RHEL for Edge image blueprint. See Editing a RHEL for Edge image blueprint using command-line interface.
Procedure
Create the RHEL for Edge Commit (
.tar
) image with the following arguments:# composer-cli compose start-ostree --ref ostree_ref --url URL-OSTree-repository blueprint-name image-type
where
-
ref
is the reference you provided during the creation of the RHEL for Edge Container commit. For example,rhel/9/x86_64/edge
. -
URL-OSTree-repository
is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. See Setting up a web server to install RHEL for Edge image. image-type
isedge-commit
.RHEL image builder creates a RHEL for Edge image for the updated blueprint.
-
Check the RHEL for Edge image creation progress:
# composer-cli compose status
Note: The image creation process can take from ten to thirty minutes to complete.
The resulting image includes the latest packages that you have added, if any, and has the original
commit ID
as a parent.- Download the resulting RHEL for Edge image. For more information, see Downloading a RHEL for Edge image using the RHEL image builder command-line interface.
Extract the OSTree commit.
# tar -xf UUID-commit.tar -C upgrade_folder
- Serve the OSTree commit by using httpd. See Setting up a web server to install RHEL for Edge image.
Upgrade the OSTree repo:
# ostree --repo=/var/www/html/repo pull-local /tmp/ostree-commit/repo
# ostree --repo=/var/www/html/repo summary -u
On the RHEL system provisioned from the original edge image, verify the current status:
$ rpm-ostree status
If there is no new commit ID, run the following command to verify if there is any upgrade available:
$ rpm-ostree upgrade --check
The command output provides the current active OSTree commit ID.
Update OSTree to make the new OSTree commit ID available:
$ rpm-ostree upgrade
OSTree verifies if there is an update on the repository. If yes, it fetches the update and requests you to reboot your system so that you can activate the deployment of the new commit update.
Check the current status again:
$ rpm-ostree status
You should now see that there are 2 commits available:
- The active parent commit
- A new commit that is not active and contains 1 added difference
To activate the new deployment and make the new commit active, reboot your system:
# systemctl reboot
The Anaconda Installer reboots into the new deployment. On the login screen, you can see a new deployment available for you to boot.
-
If you want to boot into the newest deployment, the
rpm-ostree upgrade
command automatically orders the boot entries so that the new deployment is first in the list. Optionally, you can use the arrow key on your keyboard to select the GRUB menu entry and press . - Log in using your account credentials.
Verify the OSTree status:
$ rpm-ostree status
The command output provides the active commit ID.
To view the changed packages, if any, run a diff between the parent commit and the new commit:
$ rpm-ostree db diff parent_commit new_commit
The update shows that the package you have installed is available and ready for use.
15.2.5. Deploying RHEL for Edge image updates manually for non-network-base deployments
After editing a RHEL for Edge blueprint, you can update your RHEL for Edge Commit image with those updates. Use RHEL image builder to generate a new commit to update your RHEL for Edge image that is already deployed in a VM, for example. Use this new commit to deploy the image with latest package versions or with additional packages.
To deploy RHEL for Edge images updates, ensure that you meet the prerequisites and then follow the procedure.
Prerequisites
- On your host, you have opened the RHEL image builder app from the web console in a browser.
- You have a RHEL for Edge system provisioned that is up and running.
- You have an OSTree repository that is being served over HTTP.
- You have edited a previously created RHEL for Edge image blueprint.
Procedure
- On your system host, on the RHEL image builder dashboard, click .
On the Create Image window, perform the following steps:
In the Image output page:
- From the Select a blueprint dropdown list, select the blueprint that you edited.
-
From the Image output type dropdown list, select
RHEL for Edge Container (.tar)
. - Click .
In the OSTree settings page, enter:
- In the Repository URL field, enter the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. See Setting up a web server to install RHEL for Edge image.
- In the Parent commit field, specify the parent commit ID that was previously generated. See Extracting RHEL for Edge image commit.
-
In the Ref field, you can either specify a name for your commit or leave it empty. By default, the web console specifies the
Ref
asrhel/9/arch_name/edge
. - Click .
In the Review page, check the customizations and click .
RHEL image builder creates a RHEL for Edge image for the updated blueprint.
Click the Images tab to view the progress of RHEL for Edge image creation.
Note: The image creation process takes a few minutes to complete.
The resulting image includes the latest packages that you have added, if any, and has the original
commit ID
as a parent.
Download the resulting RHEL for Edge image on your host:
-
From the Images tab, click to save the RHEL for Edge Container (
.tar
) image to your host system.
On the RHEL system provisioned from the original edge image, perform the following steps:
Load the RHEL for Edge Container image into Podman, serving the child commit ID this time.
$ cat ./child-commit_ID-container.tar | sudo podman load
Run
Podman
.# sudo podman run -p 8080:8080 localhost/edge-test
Upgrade the OSTree repo:
# ostree --repo=/var/www/html/repo pull-local /tmp/ostree-commit/repo
# ostree --repo=/var/www/html/repo summary -u
On the RHEL system provisioned, from the original edge image, verify the current status.
$ rpm-ostree status
If there is no new commit ID, run the following command to verify if there is any upgrade available:
$ rpm-ostree upgrade --check
If updates are available, the command output provides information about them in the OSTree repository, such as the current active OSTree commit ID. Otherwise, it displays a message informing you that no updates are available.
Update OSTree to make the new OSTree commit ID available.
$ rpm-ostree upgrade
OSTree verifies if there is an update on the repository. If yes, it fetches the update and requests you to reboot your system so that you can activate the deployment of this new commit update.
Check the current system status:
$ rpm-ostree status
You can now see that there are 2 commits available:
- The active parent commit.
- A new commit that is not active and contains 1 added difference.
To activate the new deployment and to make the new commit active, reboot your system.
# systemctl reboot
The Anaconda Installer reboots into the new deployment. On the login screen, you can see a new deployment available for you to boot.
To boot into the newest commit, run the following command to automatically order the boot entries so that the new deployment is first in the list:
$ rpm-ostree upgrade
Optionally, you can use the arrow key on your keyboard to select the GRUB menu entry and press
.
- Provide your login user account credentials.
Verify the OSTree status:
$ rpm-ostree status
The command output provides the active commit ID.
To view the changed packages, if any, run a diff between the parent commit and the new commit:
$ rpm-ostree db diff parent_commit new_commit
The update shows that the package you have installed is available and ready for use.
15.3. Upgrading RHEL for Edge systems
15.3.1. Upgrading your RHEL 8 system to RHEL 9
You can upgrade your RHEL 8 system to RHEL 9 by using the rpm-ostree rebase
command. The command fully supports the default package set of RHEL for Edge upgrades from the most recent updates of RHEL 8 to the most recent updates of RHEL 9. The upgrade downloads and installs the RHEL 9 image in the background. After the upgrade finishes, you must reboot your system to use the new RHEL 9 image.
The upgrade does not support every possible rpm
package version and combination of packages. You must test your package additions to ensure that these packages work as expected.
Prerequisites
- You have a running RHEL for Edge 8 system
- You have an OSTree repository served over HTTP
- You created a blueprint for RHEL for Edge 9 image that you will upgrade
Procedure
On the system where RHEL image builder runs, create a RHEL for Edge 9 image:
Start the image compose:
$ sudo composer-cli compose start blueprint-name edge-commit
Optionally, you can also create the new RHEL for Edge 9 image by using a pre-existing OSTree repository, by using the following command:
$ sudo composer-cli compose start-ostree --ref rhel/8/x86_64/edge --parent parent-OSTree-REF --url URL blueprint-name edge-commit
- After the compose finishes, download the image.
Extract the downloaded image to
/var/www/html/
folder:$ sudo tar -xf image_file -C /var/www/html
Start the
httpd
service:$ systemctl start httpd.service
On the RHEL for Edge device, check the current remote repository configuration:
$ sudo cat /etc/ostree/remotes.d/edge.conf
Note: Depending on how your Kickstart file is configured, the
/etc/ostree/remotes.d
repository can be empty. If you configured your remote repository, you can see its configuration. For theedge-installer
,raw-image
, andsimplified-installer
images, the remote is configured by default.Check the current URL repository:
$ sudo ostree remote show-url edge
edge is the name of the OSTree repository remote.
List the remote reference branches:
$ ostree remote refs edge
You can see the following output:
Error: Remote refs not available; server has no summary file
To add the new repository:
Configure the URL key to add a remote repository. For example:
$ sudo ostree remote add \ --no-gpg-verify rhel9 http://192.168.122.1/repo/
Configure the URL key to point to the RHEL 9 commit for the upgrade. For example:
$ sudo cat /etc/ostree/remotes.d/edge.conf
[remote "edge"]
url=http://192.168.122.1/ostree/repo/
gpg-verify=false
Confirm if the URL has been set to the new remote repository:
$ sudo cat /etc/ostree/remotes.d/rhel9.conf
[remote "edge"]
url=http://192.168.122.1/repo/
gpg-verify=false
See the new URL repository:
$ sudo ostree remote show-url rhel9 http://192.168.122.1/ostree-rhel9/repo/
List the current remote list options:
$ sudo ostree remote list output: edge rhel9
Rebase your system to the RHEL version, providing the reference path for the RHEL 9 version:
$ rpm-ostree rebase rhel9:rhel/9/x86_64/edge
Reboot your system.
$ systemctl reboot
- Enter your username and password.
Check the current system status:
$ rpm-ostree status
Verification
Check the current status of the currently running deployment:
$ rpm-ostree status
Optional: List the processor and tasks managed by the kernel in real-time.
$ top
If the upgrade does not meet your requirements, you can manually roll back to the previous stable RHEL 8 deployment:
$ sudo rpm-ostree rollback
Reboot your system. Enter your username and password:
$ systemctl reboot
After rebooting, your system runs RHEL 9 successfully.
Note: If your upgrade succeeds and you do not want to use the previous RHEL 8 deployment, you can delete the old repository:
$ sudo ostree remote delete edge
Additional resources
- rpm-ostree update and rebase fails with failed to find kernel error (Red Hat Knowledgebase)
15.4. Deploying RHEL for Edge automatic image updates
After you install a RHEL for Edge image on an Edge device, you can check whether image updates are available and automatically apply them.
The rpm-ostreed-automatic.service
(systemd service) and rpm-ostreed-automatic.timer
(systemd timer) control the frequency of checks and upgrades. The available updates, if any, appear as staged deployments.
Deploying automatic image updates involves the following high-level steps:
- Update the image update policy
- Enable automatic download and staging of updates
15.4.1. Updating the RHEL for Edge image update policy
To update the image update policy, use the AutomaticUpdatePolicy
and an IdleExitTimeout
setting from the rpm-ostreed.conf
file at /etc/rpm-ostreed.conf
location on an Edge device.
The AutomaticUpdatePolicy
setting controls the automatic update policy and has the following update check options:
-
none
: Disables automatic updates. By default, theAutomaticUpdatePolicy
setting is set tonone
. -
check
: Downloads enough metadata to display available updates withrpm-ostree
status. -
stage
: Downloads and unpacks the updates that are applied on a reboot.
The IdleExitTimeout
setting controls the time of inactivity, in seconds, before the daemon exits, and has the following options:
- 0: Disables auto-exit.
-
60: By default, the
IdleExitTimeout
setting is set to60
.
To enable automatic updates, perform the following steps:
Procedure
In the
/etc/rpm-ostreed.conf
file, update the following:-
Change the value of
AutomaticUpdatePolicy
tocheck
. -
To run the update checks, specify a value in seconds for
IdleExitTimeout
.
-
Change the value of
Reload the
rpm-ostreed
service and enable thesystemd
timer.# systemctl reload rpm-ostreed # systemctl enable rpm-ostreed-automatic.timer --now
Verify the
rpm-ostree
status to ensure that the automatic update policy is configured and the timer is active.# rpm-ostree status
The command output shows the following:
State: idle; auto updates enabled (check; last run <minutes> ago)
Additionally, the output also displays information about the available updates.
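For reference, the relevant part of the /etc/rpm-ostreed.conf file might look like the following sketch after you enable update checks; any other settings in the file remain unchanged:
[Daemon]
AutomaticUpdatePolicy=check
IdleExitTimeout=60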
15.4.2. Enabling RHEL for Edge automatic download and staging of updates
After you update the image update policy to check for image updates, any available updates are displayed along with the update details. If you decide to apply the updates, enable the policy to automatically download and stage the updates. The available image updates are then downloaded and staged for deployment. The updates are applied and take effect when you reboot the Edge device.
To enable the policy for automatic download and staging of updates, perform the following steps:
Procedure
-
In the
/etc/rpm-ostreed.conf
file, update "AutomaticUpdatePolicy" tostage
. Reload the
rpm-ostreed
service.# systemctl enable rpm-ostreed-automatic.timer --now
Verify the
rpm-ostree
status# rpm-ostree status
The command output shows the following:
State: idle AutomaticUpdates: stage; rpm-ostreed-automatic.timer: last run <time> ago
To initiate the updates, you can either wait for the timer to initiate the updates, or can manually start the service.
# systemctl start rpm-ostreed-automatic.service
After the updates are initiated, the
rpm-ostree
status shows the following:
# rpm-ostree status
State: busy
AutomaticUpdates: stage; rpm-ostreed-automatic.service: running
Transaction: automatic (stage)
When the update is complete, a new deployment is staged in the list of deployments, and the original booted deployment is left untouched. You can decide if you want to boot the system using the new deployment or can wait for the next update.
To view the list of deployments, run the
rpm-ostree status
command. The following is a sample output.
# rpm-ostree status
State: idle
AutomaticUpdates: stage; rpm-ostreed-automatic.timer: last run <time> ago
Deployments:
To view the list of deployments with the updated package details, run the
rpm-ostree status -v
command.
15.5. Rolling back RHEL for Edge images
Because RHEL for Edge applies transactional updates to the operating system, you can either manually or automatically roll back the unsuccessful updates to the last known good state, which prevents system failure during updates. You can automate the verification and rollback process by using the greenboot
framework.
The greenboot
health check framework leverages rpm-ostree
to run custom health checks on system startup. In case of an issue, the system rolls back to the last working state. When you deploy a rpm-ostree
update, it runs scripts to check that critical services can still work after the update. If the system does not work, for example, due to some failed package, you can roll back the system to a previous stable version of the system. This process ensures that your RHEL for Edge device is in an operational state.
After you update an image, it creates a new image deployment while preserving the previous image deployment. You can verify whether the update was successful. If the update is unsuccessful, for example, due to a failed package, you can roll back the system to a previous stable version.
15.5.1. Introducing the greenboot checks
Greenboot
is a Generic Health Check Framework for systemd
available on rpm-ostree
based systems. It contains the following RPM packages that you can install on your system:
greenboot
- a package that contains the following functionalities:- Checking provided scripts
- Rebooting the system if the check fails
- Rolling back to a previous deployment if the reboot did not solve the issue.
-
greenboot-default-health-checks
- a set of optional and selected health checks provided by yourgreenboot
system maintainers.
Greenboot
works in a RHEL for Edge system by using health check scripts that run on the system to assess the system health and automate a rollback to the last healthy state in case some software fails. These health check scripts are available in the /etc/greenboot/check/required.d
directory. Greenboot
supports shell scripts for the health checks. Having a health check framework is especially useful when you need to check for software problems and perform system rollbacks on edge devices where direct serviceability is either limited or nonexistent. When you install and configure health check scripts, the health checks run every time the system starts.
You can create your own health check scripts to assess the health of your workloads and applications. These additional health check scripts are useful components of software problem checks and automatic system rollbacks.
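For example, a minimal custom health check sketch that fails when a critical workload service is not running might look like the following; myapp.service is a placeholder for your own workload, and the script belongs in the /etc/greenboot/check/required.d directory if its failure must trigger a rollback:
#!/bin/bash
# Fail the health check if the workload service is not active (placeholder service name)
if ! systemctl is-active --quiet myapp.service; then
    echo "myapp.service is not running" >&2
    exit 1
fi
echo "myapp.service is running"
exit 0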
You cannot use rollback in case of any health check failure on a system that is not using OSTree.
15.5.2. RHEL for Edge images roll back with greenboot
With RHEL for Edge images, only transactional updates are applied to the operating system. The transactional updates are atomic, which means that the updates are applied only if all the updates are successful, and there is support for rollbacks. With the transactional updates, you can easily rollback the unsuccessful updates to the last known good state, preventing system failure during updates.
Performing health checks is especially useful when you need to check for software problems and perform system rollbacks on edge devices where direct serviceability is limited or non-existent.
You cannot use rollback in case of an update failure on a system that is not using OSTree, even if health checks might run.
You can use intelligent rollbacks with the greenboot
health check framework to automatically assess system health every time the system starts. You can obtain pre-configured health checks from the greenboot-default-health-checks
subpackage. These checks are located in the /usr/lib/greenboot/check
read-only directory in rpm-ostree
systems.
Greenboot
leverages rpm-ostree
and runs custom health checks that run on system startup. In case of an issue, the system rolls back the changes and preserves the last working state. When you deploy an rpm-ostree
update, it runs scripts to check that critical services can still work after the update. If the system does not work, the update rolls back to the last known working version of the system. This process ensures that your RHEL for Edge device is in an operational state.
You can also configure shell scripts as the following types of checks:
Example 15.1. The greenboot directory structure
etc
└─ greenboot
   ├─ check
   │  └─ required.d
   │     └─ init.py
   ├─ green.d
   └─ red.d
- Required
-
Contains the health checks that must not fail. Place required shell scripts in the
/etc/greenboot/check/required.d
directory. If the scripts fail, greenboot retries them three times by default. You can configure the number of retries in the/etc/greenboot/greenboot.conf
file by setting theGREENBOOT_MAX_BOOTS
parameter to the number of retries you want.
After all retries fail, greenboot
automatically initiates a rollback if one is available. If a rollback is not available, the system log output shows that you need to perform a manual intervention.
- Wanted
-
Contains the health checks that might fail without causing the system to roll back. Place wanted shell scripts in the
/etc/greenboot/check/wanted.d
directory.Greenboot
informs you that the script failed, but the system health status remains unaffected, and greenboot performs neither a rollback nor a reboot.
You can also specify shell scripts that will run after a check:
- Green
-
Contains the scripts to run after a successful boot. Place these scripts into the
/etc/greenboot/green.d directory
.Greenboot
informs that the boot was successful. - Red
-
Contains the scripts to run after a failed boot. Place these scripts into the
/etc/greenboot/red.d
directory. The system attempts to boot three times and in case of failure, it executes the scripts.Greenboot
informs that the boot failed.
The following diagram illustrates the RHEL for Edge image roll back process.
After booting the updated operating system, greenboot
runs the scripts in the required.d
and wanted.d
directories. If any of the scripts fail in the required.d
directory, greenboot
runs any scripts in the red.d
directory, and then reboots the system.
Greenboot
makes two more attempts to boot the upgraded system. If the scripts in the required.d directory are still failing during the third boot attempt, greenboot
runs the red.d scripts one last time, so that the scripts in the red.d directory have a final chance to take a corrective action before the rollback. Then, greenboot
rolls back the system from the current rpm-ostree
deployment to the previous stable deployment.
15.5.3. Greenboot health check status
When deploying your updated system, wait until the greenboot health checks have finished before you make configuration changes or deploy applications. This ensures that your changes are not lost if greenboot rolls your rpm-ostree
system back to an earlier state.
The greenboot-healthcheck
service runs once and then exits. You can check the status of the service to know if it is done, and to know the outcome, by using the following commands:
systemctl is-active greenboot-healthcheck.service
-
This command reports
active
when the service has exited. If the service did not run at all, it shows inactive
. systemctl show --property=SubState --value greenboot-healthcheck.service
-
Reports
exited
when done,running
while still running. systemctl show --property=Result --value greenboot-healthcheck.service
-
Reports
success
when the checks passed. systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service
- Reports the numerical exit code of the service, 0 means success and nonzero values mean a failure occurred.
cat /run/motd.d/boot-status
- Shows a message, such as "Boot Status is GREEN - Health Check SUCCESS".
15.5.4. Checking greenboot health checks statuses
Check the status of greenboot health checks before making changes to the system or during troubleshooting. You can use any of the following commands to help you ensure that greenboot scripts have finished running.
Use one of the following options to check the statuses:
To see a report of health check status, enter:
$ systemctl show --property=SubState --value greenboot-healthcheck.service
The following outputs are possible:
-
start
means that greenboot checks are still running. -
exited
means that checks have passed and greenboot has exited. Greenboot runs the scripts in thegreen.d
directory when the system is in a healthy state. -
failed
means that checks have not passed. Greenboot runs the scripts inred.d
directory when the system is in this state and might restart the system.
-
To see a report showing the numerical exit code of the service, where
0
means success and nonzero values mean a failure occurred, use the following command:$ systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service
To see a report showing a message about boot status, such as
Boot Status is GREEN - Health Check SUCCESS
, enter:$ cat /run/motd.d/boot-status
15.5.5. Manually rolling back RHEL for Edge images
When you upgrade your operating system, a new deployment is created, and the rpm-ostree
package also keeps the previous deployment. If there are issues on the updated version of the operating system, you can manually roll back to the previous deployment with a single rpm-ostree
command, or by selecting the previous deployment in the GRUB boot loader.
To manually roll back to a previous version, perform the following steps.
Prerequisite
- You updated your system and it failed.
Procedure
Optional: Check for the fail error message:
$ journalctl -u greenboot-healthcheck.service
Run the
rollback
command:# rpm-ostree rollback
The command output provides details about the commit ID that is being moved and indicates a completed transaction with the details of the package being removed.
Reboot the system.
# systemctl reboot
The command activates the previous commit with the stable content. The changes are applied and the previous version is restored.
15.5.6. Rolling back RHEL for Edge images using an automated process
Greenboot
checks provide a framework that is integrated into the boot process and can trigger rpm-ostree
rollbacks when a health check fails. For the health checks, you can create a custom script that indicates whether a health check passed or failed. Based on the result, you can decide when a rollback should be triggered. The following procedure shows how to create an example health check script:
Procedure
Create a script that returns a standard exit code
0
.For example, the following script ensures that the configured DNS server is available:
#!/bin/bash
DNS_SERVER=$(grep ^nameserver /etc/resolv.conf | head -n 1 | cut -f2 -d" ")
COUNT=0
# check DNS server is available
ping -c1 $DNS_SERVER
while [ $? != '0' ] && [ $COUNT -lt 10 ]; do
  ((COUNT++))
  echo "Checking for DNS: Attempt $COUNT ."
  sleep 10
  ping -c 1 $DNS_SERVER
done
Include an executable file for the health checks at
/etc/greenboot/check/required.d/
.chmod +x check-dns.sh
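For example, to install the script from your working directory into the required checks directory, you might run the following; the file name check-dns.sh matches the example script above:
$ sudo install -m 0755 check-dns.sh /etc/greenboot/check/required.d/check-dns.sh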
During the next reboot, the script is executed as part of the boot process, before the system enters the
boot-complete.target
unit. If the health checks are successful, no action is taken. If the health checks fail, the system will reboot several times, before marking the update as failed and rolling back to the previous update.
Verification
To check if the default gateway is reachable, run the following health check script:
Create a script that returns a standard exit code
0
.
#!/bin/bash
DEF_GW=$(ip r | awk '/^default/ {print $3}')
SCRIPT=$(basename $0)

count=10
connected=0
ping_timeout=5
interval=5

while [ $count -gt 0 -a $connected -eq 0 ]; do
  echo "$SCRIPT: Pinging default gateway $DEF_GW"
  ping -c 1 -q -W $ping_timeout $DEF_GW > /dev/null 2>&1 && connected=1 || sleep $interval
  ((--count))
done

if [ $connected -eq 1 ]; then
  echo "$SCRIPT: Default gateway $DEF_GW is reachable."
  exit 0
else
  echo "$SCRIPT: Failed to ping default gateway $DEF_GW!" 1>&2
  exit 1
fi
Include an executable file for the health checks at
/etc/greenboot/check/required.d/
directory.chmod +x check-gw.sh
Chapter 16. Creating and managing OSTree image updates
You can easily create and manage OSTree image updates for your RHEL for Edge systems and make them immediately available to RHEL for Edge devices. With OSTree, you can use image builder to create RHEL for Edge Commit or RHEL for Edge Container images as .tar
files that contain OSTree commits. The OSTree update versioning system works as a “Git repository” that stores and versions the OSTree commits. The rpm-ostree
image and package system then assembles the commits on the client device. When you create a new image with RHEL image builder to perform an update, RHEL image builder pulls updates from these repositories.
16.1. Basic concepts for OSTree
Basic terms that OSTree and rpm-ostree
use during image updates.
rpm-ostree
-
The technology on the edge device that handles how the OSTree commits are assembled on the device. It works as a hybrid between an image and a package system. With the
rpm-ostree
technology, you can make atomic upgrades and rollbacks to your system. - OSTree
- OSTree is a technology that enables you to create commits and download bootable file system trees. You can also use it to deploy the trees and manage the bootloader configuration.
- Commit
- An OSTree commit contains a full operating system that is not directly bootable. To boot the system, you must deploy it, for example, with a RHEL Installable image.
- Reference
It is also known as
ref
. An OSTree ref is similar to a Git branch and it is a name. The following reference names examples are valid:-
rhel/9/x86_64/edge
-
ref-name
-
app/org.gnome.Calculator/x86_64/stable
-
ref-name-2
-
By default, image builder specifies rhel/9/$ARCH/edge
as a path. The "$ARCH" value is determined by the host machine.
- Parent
-
The
parent
argument is an OSTree commit that you can provide to build a new commit with image builder. You can use theparent
argument to specify an existingref
that retrieves a parent commit for the new commit that you are building. You must specify the parent commit as a ref value to be resolved and pulled, for examplerhel/9/x86_64/edge
. You can use the--parent
commit for the RHEL for Edge Commit (.tar
) and RHEL for Edge Container (.tar
) image types. - Remote
- The http or https endpoint that hosts the OSTree content. This is analogous to the baseurl for a yum repository.
- Static delta
- Static deltas are a collection of updates generated between two OSTree commits. They enable the system client to fetch a smaller number of files, each larger in size. Static delta updates are more network efficient because, when updating an ostree-based host, the system client fetches only the objects from the new OSTree commit that do not exist on the system. Typically, the new OSTree commit contains many small files, which require multiple TCP connections.
- Summary
- The summary file is a concise way of enumerating refs, checksums, and available static deltas in an OSTree repo. You can check the state of all the refs and static deltas available in an OSTree repo. However, you must generate the summary file every time a new ref, commit, or static-delta is added to the OSTree repo.
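For example, after you add a new ref or commit to a repo, you can regenerate the summary file and, optionally, generate a static delta between two commits; a minimal sketch, assuming a repo served from /var/www/html/repo (check ostree static-delta generate --help for the exact options available in your version):
$ sudo ostree --repo=/var/www/html/repo summary -u
$ sudo ostree --repo=/var/www/html/repo static-delta generate --from=<parent-commit> rhel/9/x86_64/edge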
16.2. Creating OSTree repositories
You can create OSTree repos with RHEL image builder by using either RHEL for Edge Commit (.tar)
or RHEL for Edge Container (.tar)
image types. These image types contain an OSTree repo that contains a single OSTree commit.
-
You can extract the
RHEL for Edge Commit (.tar)
on a web server and it is ready to be served. -
You must import the
RHEL for Edge Container (.tar)
to a local container image storage or push the image to a container registry. After you start the container, it serves the commit over an integratednginx
web server.
Use the RHEL for Edge Container (.tar)
on a RHEL server with Podman to create an OSTree repo:
Prerequisite
-
You created a
RHEL for Edge Container (.tar)
image.
Procedure
Download the container image from image builder:
$ composer-cli compose image <UUID>
Import the container into Podman:
$ skopeo copy oci-archive:<UUID>-container.tar containers-storage:localhost/ostree
Start the container and make it available by using the port
8080
:$ podman run --rm -p 8080:8080 ostree
Verification
Check that the container is running:
$ podman ps -a
16.3. Managing a centralized OSTree mirror
For production environments, having a central OSTree mirror that serves all the commits has several advantages, including:
- Deduplicating and minimizing disk storage
- Optimizing the updates to clients by using static delta updates
- Pointing clients to a single OSTree mirror for the life of their deployment.
To manage a centralized OSTree mirror, you must pull each commit from image builder into the centralized repository where it will be available to your users.
You can also automate managing an OSTree mirror by using the infra.osbuild
Ansible collection. See osbuild.infra Ansible.
To create a centralized repository you can run the following commands directly on a web server:
Procedure
Create an empty blueprint, customizing it to use "rhel-92" as the distro:
name = "minimal-rhel92" description = "minimal blueprint for ostree commit" version = "1.0.0" modules = [] groups = [] distro = "rhel-92"
Push the blueprint to the server:
# composer-cli blueprints push minimal-rhel92.toml
Build a RHEL for Edge Commit (
.tar
) image from the blueprint you created:# composer-cli compose start-ostree minimal-rhel92 edge-commit
Retrieve the
.tar file
and decompress it to the disk:
# composer-cli compose image <rhel-92-uuid>
$ tar -xf <rhel-92-uuid>.tar -C /usr/share/nginx/html/
The
/usr/share/nginx/html/repo
location on disk will become the single OSTree repo for all refs and commits.Create another empty blueprint, customizing it to use "rhel-87" as the distro:
name = "minimal-rhel87" description = "minimal blueprint for ostree commit" version = "1.0.0" modules = [] groups = [] distro = "rhel-87"
Push the blueprint and create another RHEL for Edge Commit (
.tar
) image:
# composer-cli blueprints push minimal-rhel87.toml
# composer-cli compose start-ostree minimal-rhel87 edge-commit
Retrieve the
.tar file
and decompress it to the disk:
# composer-cli compose image <rhel-87-uuid>
$ tar -xf <rhel-87-uuid>.tar
Pull the commit to the local repo. By using
ostree pull-local
, you can copy the commit data from one local repo to another local repo.# ostree --repo=/usr/share/nginx/html/repo pull-local repo
Optional: Inspect the status of the OSTree repo. The following is an output example:
$ ostree --repo=/usr/share/nginx/html/repo refs
rhel/8/x86_64/edge
rhel/9/x86_64/edge

$ ostree --repo=/usr/share/nginx/html/repo show rhel/8/x86_64/edge
commit f7d4d95465fbd875f6358141f39d0c573df6a321627bafde68c73850667e5443
ContentChecksum:  41bf2f8b442a770e9bf03e096a46a286f5836e0a0702b7c3516ef4e0acec2dea
Date:  2023-09-15 16:17:04 +0000
Version: 8.7
(no subject)

$ ostree --repo=/usr/share/nginx/html/repo show rhel/9/x86_64/edge
commit 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9
ContentChecksum:  70235bfb9cae82c53f856183750e809becf0b9b076122b19c40fec92fc6d74c1
Date:  2023-09-15 15:30:24 +0000
Version: 9.2
(no subject)
Update the RHEL 9.2 blueprint to include a new package and build a new commit, for example:
name = "minimal-rhel92" description = "minimal blueprint for ostree commit" version = "1.1.0" modules = [] groups = [] distro = "rhel-92" [[packages]] name = "strace" version = "*"
Push the updated blueprint and create a new RHEL for Edge Commit (
.tar
) image, pointing the compose to the existing OSTree repo:
# composer-cli blueprints push minimal-rhel92.toml
# composer-cli compose start-ostree minimal-rhel92 edge-commit --url http://localhost/repo --ref rhel/9/x86_64/edge
Retrieve the
.tar
file and decompress it to the disk:
# rm -rf repo
# composer-cli compose image <rhel-92-uuid>
# tar -xf <rhel-92-uuid>.tar
Pull the commit to repo:
# ostree --repo=/usr/share/nginx/html/repo pull-local repo
Optional: Inspect the OSTree repo status again:
$ ostree --repo=/usr/share/nginx/html/repo refs rhel/8/x86_64/edge rhel/9/x86_64/edge $ ostree --repo=/usr/share/nginx/html/repo show rhel/8/x86_64/edge commit f7d4d95465fbd875f6358141f39d0c573df6a321627bafde68c73850667e5443 ContentChecksum: 41bf2f8b442a770e9bf03e096a46a286f5836e0a0702b7c3516ef4e0acec2dea Date: 2023-09-15 16:17:04 +0000 Version: 8.7 (no subject) $ ostree --repo=/usr/share/nginx/html/repo show rhel/9/x86_64/edge commit a35c3b1a9e731622f32396bb1aa84c73b16bd9b9b423e09d72efaca11b0411c9 Parent: 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ContentChecksum: 2335930df6551bf7808e49f8b35c45e3aa2a11a6c84d988623fd3f36df42a1f1 Date: 2023-09-15 18:21:31 +0000 Version: 9.2 (no subject) $ ostree --repo=/usr/share/nginx/html/repo log rhel/9/x86_64/edge commit a35c3b1a9e731622f32396bb1aa84c73b16bd9b9b423e09d72efaca11b0411c9 Parent: 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ContentChecksum: 2335930df6551bf7808e49f8b35c45e3aa2a11a6c84d988623fd3f36df42a1f1 Date: 2023-09-15 18:21:31 +0000 Version: 9.2 (no subject) commit 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ContentChecksum: 70235bfb9cae82c53f856183750e809becf0b9b076122b19c40fec92fc6d74c1 Date: 2023-09-15 15:30:24 +0000 Version: 9.2 (no subject) $ rpm-ostree db diff --repo=/usr/share/nginx/html/repo 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 a35c3b1a9e731622f32396bb1aa84c73b16bd9b9b423e09d72efaca11b0411c9 ostree diff commit from: 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ostree diff commit to: a35c3b1a9e731622f32396bb1aa84c73b16bd9b9b423e09d72efaca11b0411c9 Added: elfutils-default-yama-scope-0.188-3.el9.noarch elfutils-libs-0.188-3.el9.x86_64 strace-5.18-2.el9.x86_64
Appendix A. Terminology and commands
Learn more about the rpm-ostree terminology and commands.
A.1. OSTree and rpm-ostree terminology
The following terms are helpful in the context of OSTree and rpm-ostree images.
Term | Definition
---|---|
OSTree | A tool used for managing Linux-based operating system versions. The OSTree tree view is similar to Git and is based on similar concepts.
rpm-ostree | A hybrid image or system package that hosts operating system updates.
Commit | A release or image version of the operating system. RHEL image builder generates an OSTree commit for RHEL for Edge images. You can use these images to install or update RHEL on Edge servers.
Refs | Represents a branch in OSTree. Refs always resolve to the latest commit. For example, rhel/9/x86_64/edge.
Revision (Rev) | SHA-256 checksum of a specific commit.
Remote | The http or https endpoint that hosts the OSTree content. This is analogous to the baseurl for a dnf repository.
static-delta updates | Updates to OSTree images are always delta updates. In the case of RHEL for Edge images, the TCP overhead can be higher than expected because of the number of files being updated. To avoid this overhead, you can generate a static-delta between specific commits and send the update in a single connection. This optimization helps large deployments with constrained connectivity.
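For example, a static-delta for the updates described in the last row can be generated directly in the repo that serves them. This is a sketch that reuses the repo path and ref from the earlier procedure; <older-commit-id> stands for whichever older commit your devices are updating from:

# ostree --repo=/usr/share/nginx/html/repo static-delta generate --from=<older-commit-id> rhel/9/x86_64/edge
# ostree --repo=/usr/share/nginx/html/repo summary -u

Regenerating the summary file with summary -u lets clients discover the new delta.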
A.2. OSTree commands
The following table provides a few OSTree commands that you can use when installing or managing OSTree images.
Task | Command
---|---|
ostree pull | ostree --repo=<path-to-repo> pull-local <path-to-source-repo> or ostree --repo=<path-to-repo> pull <remote> <ref>
ostree summary | ostree --repo=<path-to-repo> summary -u
View refs | ostree --repo=<path-to-repo> refs
View commits in repo | ostree --repo=<path-to-repo> log <ref>
Inspect a commit | ostree --repo=<path-to-repo> show <ref-or-commit>
List remotes of a repo | ostree --repo=<path-to-repo> remote list
Resolve a REV | ostree --repo=<path-to-repo> rev-parse <ref>
Create static-delta | ostree --repo=<path-to-repo> static-delta generate <ref-or-commit>
Sign an existing commit | ostree --repo=<path-to-repo> gpg-sign <commit> <gpg-key-id>
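As a quick illustration of how a ref and a rev relate, you can resolve the ref used earlier in this document to its commit ID and then inspect that commit. This is a sketch that reuses the repo path from the earlier procedure:

$ ostree --repo=/usr/share/nginx/html/repo rev-parse rhel/9/x86_64/edge
$ ostree --repo=/usr/share/nginx/html/repo show rhel/9/x86_64/edge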
A.3. rpm-ostree commands
The following table provides a few rpm-ostree commands that you can use when installing or managing OSTree images.
Commands | Description
---|---|
rpm-ostree db list <REV> --repo=<path-to-repo> | This command lists the packages existing in the <REV> commit in the repository.
rpm-ostree rollback | OSTree manages an ordered list of boot loader entries, called deployments. This command makes the previous deployment the default boot entry again, so that the system boots into it after the next reboot.
rpm-ostree status | This command gives information about the current deployment in use. It lists the names and refspecs of all possible deployments in order, with the default deployment for the next boot listed first.
rpm-ostree db list | Use this command to see which packages are within the commit or commits. You must specify at least one commit, but more than one or a range of commits also work.
rpm-ostree db diff | Use this command to show how the packages are different between the trees in two revs (revisions). If no revs are provided, the booted commit is compared to the pending commit. If only a single rev is provided, the booted commit is compared to that rev.
rpm-ostree upgrade | This command downloads the latest version of the current tree, and deploys it, setting up the current tree as the default for the next boot. This has no effect on your running filesystem tree. You must reboot for any changes to take effect.
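For example, a routine update cycle on a RHEL for Edge device combines several of these commands. This is a minimal sketch, assuming the device already tracks a remote that serves the updated commit:

# rpm-ostree status
# rpm-ostree upgrade
# systemctl reboot

If the new deployment misbehaves after the reboot, you can return to the previous deployment with rpm-ostree rollback followed by another reboot.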
Additional resources
- rpm-ostree man page on your system
A.4. FDO automatic onboarding terminology
Learn more about the FDO terminology.
Term | Definition
---|---|
FDO | FIDO Device Onboarding. |
Device | Any hardware, device, or computer. |
Owner | The final owner of the device - a company or an IT department. |
Manufacturer | The device manufacturer. |
Manufacturer server | Creates the device credentials for the device. |
Manufacturer client | Informs the location of the manufacturing server. |
Ownership Voucher (OV) | Record of ownership of an individual device. Contains the following information: the Owner, the Rendezvous Server (FIDO server), and the Device (at least one combination).
Device Credential (DC) | Key credential and rendezvous stored in the device at manufacture. |
Keys | Keys to configure the manufacturing server: key_path, cert_path, key_type, mfg_string_type (device serial number), and allowed_key_storage_types (Filesystem and Trusted Platform Module (TPM), which protects the data used to authenticate the device you are using).
Rendezvous server | Link to a server that the device uses, and that is later used in the process to find out who the owner of the device is.
Additional resources
A.5. FDO automatic onboarding technologies
The following technologies are used in the context of FDO automatic onboarding.
Technology | Definition |
---|---|
UEFI | Unified Extensible Firmware Interface. |
RHEL | Red Hat® Enterprise Linux® operating system |
rpm-ostree | Background image-based upgrades.
Greenboot | Health check framework for systemd on rpm-ostree based systems. If a health check fails, the system rolls back to the previous working deployment (see the example after this table).
Osbuild | Pipeline-based build system for operating system artifacts. |
Container | A Linux® container is a set of 1 or more processes that are isolated from the rest of the system. |
Coreos-installer | Assists with the installation of RHEL images and boots systems with UEFI.
FIDO FDO | Specification protocol for provisioning device configuration and onboarding devices.
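As an example of how Greenboot ties into the rollback behavior mentioned above, a custom health check is an executable script placed under /etc/greenboot/check/required.d/ on the device; if any required check exits with a non-zero status, the boot is considered unhealthy and the system rolls back to the previous deployment. The following is a minimal sketch of such a check; the script name and the endpoint it probes are illustrative assumptions, not part of the original procedure:

# cat > /etc/greenboot/check/required.d/01-check-workload.sh << 'EOF'
#!/bin/bash
# Illustrative required health check: report failure if the local
# workload endpoint does not respond.
curl --fail --silent http://localhost:8080/health > /dev/null || exit 1
exit 0
EOF
# chmod +x /etc/greenboot/check/required.d/01-check-workload.sh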