Installing with RHEL image mode
Embedding MicroShift in a bootc image
Abstract
Chapter 1. Using image mode for RHEL with MicroShift
You can embed MicroShift into an operating system image using image mode for Red Hat Enterprise Linux (RHEL).
Image mode for RHEL is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.1. Image mode for Red Hat Enterprise Linux (RHEL)
Image mode for Red Hat Enterprise Linux (RHEL) is a Technology Preview deployment method that uses a container-native approach to build, deploy, and manage the operating system as a bootc image. By using bootc, you can build, deploy, and manage the operating system as if it were any other container.
- This container image uses standard OCI or Docker containers as a transport and delivery format for base operating system updates.
- A bootc image includes a Linux kernel that is used to start the operating system.
- By using bootc containers, developers, operations administrators, and solution providers can all use the same container-native tools and techniques.
Image mode splits the creation and installation of software changes into two steps: one on a build system and one on a running target system.
- In the build-system step, a Podman build inspects the RPM files available for installation, determines any dependencies, and creates an ordered list of chained steps to complete, with the end result being a new operating system available to install.
- In the running-target-system step, a bootc update downloads, unpacks, and makes the new operating system bootable alongside the currently running system. Local configuration changes are carried forward to the new operating system, but do not take effect until the system is rebooted and the new operating system image replaces the running image.
1.1.1. Using image mode for RHEL with MicroShift
To use image mode for RHEL, ensure that the following resources are available:
- A RHEL 9.4 host with an active Red Hat subscription for building MicroShift bootc images.
- A remote registry for storing and accessing bootc images.
- You can use image mode for RHEL with a MicroShift cluster on AArch64 or x86_64 system architectures.
The workflow for using image mode with MicroShift includes the following steps:
- Build the MicroShift bootc image.
- Publish the image.
- Run the image. This step includes configuring MicroShift networking and storage.
The rpm-ostree file system is not supported in image mode and must not be used to make changes to deployments that use image mode.
1.2. Building the bootc image
Build your Red Hat Enterprise Linux (RHEL) image that contains MicroShift as a bootable container image by using a Containerfile.
Image mode for RHEL is Technology Preview. Using a bootc image in production environments is not supported.
Prerequisites
- A Red Hat Enterprise Linux (RHEL) 9.4 host with an active Red Hat subscription for building MicroShift bootc images and running containers.
- You are logged in to the RHEL 9.4 host using the user credentials that have sudo permissions.
- The rhocp and fast-datapath repositories are accessible in the host subscription. The repositories do not necessarily need to be enabled on the host.
- You have a remote registry, such as Red Hat Quay, for storing and accessing bootc images.
Procedure
Create a Containerfile that includes the following instructions:
Example Containerfile for RHEL image mode
FROM registry.redhat.io/rhel9/rhel-bootc:9.4

ARG USHIFT_VER=4.17
RUN dnf config-manager \
        --set-enabled rhocp-${USHIFT_VER}-for-rhel-9-$(uname -m)-rpms \
        --set-enabled fast-datapath-for-rhel-9-$(uname -m)-rpms
RUN dnf install -y firewalld microshift && \
    systemctl enable microshift && \
    dnf clean all

# Create a default 'redhat' user with the specified password.
# Add it to the 'wheel' group to allow for running sudo commands.
ARG USER_PASSWD
RUN if [ -z "${USER_PASSWD}" ] ; then \
        echo USER_PASSWD is a mandatory build argument && exit 1 ; \
    fi
RUN useradd -m -d /var/home/redhat -G wheel redhat && \
    echo "redhat:${USER_PASSWD}" | chpasswd

# Mandatory firewall configuration
RUN firewall-offline-cmd --zone=public --add-port=22/tcp && \
    firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 && \
    firewall-offline-cmd --zone=trusted --add-source=169.254.169.1

# Create a systemd unit to recursively make the root filesystem subtree
# shared as required by OVN images
RUN cat > /etc/systemd/system/microshift-make-rshared.service <<'EOF'
[Unit]
Description=Make root filesystem shared
Before=microshift.service
ConditionVirtualization=container
[Service]
Type=oneshot
ExecStart=/usr/bin/mount --make-rshared /
[Install]
WantedBy=multi-user.target
EOF
RUN systemctl enable microshift-make-rshared.service
Important
Podman uses the host subscription information and repositories inside the container when building the container image. If the rhocp and fast-datapath repositories are not available on the host, the build fails.

Create a local bootc image by running the following image build command:
PULL_SECRET=~/.pull-secret.json
USER_PASSWD=<your_redhat_user_password> 1
IMAGE_NAME=microshift-4.17-bootc

$ sudo podman build --authfile "${PULL_SECRET}" -t "${IMAGE_NAME}" \
    --build-arg USER_PASSWD="${USER_PASSWD}" \
    -f Containerfile
- 1
- Replace <your_redhat_user_password> with your password.
Note
How secrets are used during the image build:
- The podman --authfile argument is required to pull the base rhel-bootc:9.4 image from the registry.redhat.io registry.
- The build USER_PASSWD argument is used to set a password for the redhat user.
Verification
Verify that the local bootc image was created by running the following command:
$ sudo podman images "${IMAGE_NAME}"
Example output
REPOSITORY                       TAG     IMAGE ID      CREATED        SIZE
localhost/microshift-4.17-bootc  latest  193425283c00  2 minutes ago  2.31 GB
1.3. Publishing the bootc image to the remote registry
Publish your bootc image to the remote registry so that the image can be used for running the container on another host, or when you want to install a new operating system with the bootc image layer.
Prerequisites
- You are logged in to the RHEL 9.4 host where the image was built, using the user credentials that have sudo permissions.
- You have a remote registry, such as Red Hat Quay, for storing and accessing bootc images.
- You created the Containerfile and built the image.
Procedure
Log in to your remote registry by running the following command:
REGISTRY_URL=quay.io

$ sudo podman login "${REGISTRY_URL}" 1
- 1
- Replace REGISTRY_URL with the URL for your registry.
Publish the image by running the following command:
IMAGE_NAME=<microshift-4.17-bootc> 1
REGISTRY_IMG=<myorg/mypath>/"${IMAGE_NAME}" 2

$ sudo podman push localhost/"${IMAGE_NAME}" "${REGISTRY_URL}/${REGISTRY_IMG}"

- 1
- Replace <microshift-4.17-bootc> with the name of your local bootc image.
- 2
- Replace <myorg/mypath> with your remote registry organization and path.
Verification
- Run the container using the image you pushed to your registry as described in the "Running the MicroShift bootc container" section.
1.4. Configuring networking and storage to run MicroShift in a bootc container
1.4.1. Configuring the MicroShift bootc image for the CNI
Configure a workaround for a kernel mismatch to enable the MicroShift OVN-Kubernetes (OVN-K) Container Network Interface (CNI).
The MicroShift OVN-K CNI requires the openvswitch kernel module to be available in the bootc image. A bootc image is started as a container that uses the host kernel. The host kernel might be different from the one used for building the image, resulting in a mismatch. A kernel version mismatch with the modules present in the /lib/modules directory means that the openvswitch module cannot be loaded in the container and the container fails to run. You can work around this problem by preloading the openvswitch module on the host.
Prerequisites
- A Red Hat Enterprise Linux (RHEL) 9.4 host with an active Red Hat subscription for building MicroShift bootc images and running containers.
- You are logged in to the RHEL 9.4 host using the user credentials that have sudo permissions.
Procedure
Check if the openvswitch module is available for the current host kernel version by running the following command:

$ find /lib/modules/$(uname -r) -name "openvswitch*"
Example output
/lib/modules/5.14.0-427.28.1.el9_4.x86_64/kernel/net/openvswitch
/lib/modules/5.14.0-427.28.1.el9_4.x86_64/kernel/net/openvswitch/openvswitch.ko.xz
Set the IMAGE_NAME environment variable by running the following command:

$ IMAGE_NAME=microshift-4.17-bootc
Check the version of the kernel-core package used in the bootc image by running the following command:

$ sudo podman inspect "${IMAGE_NAME}" | grep kernel-core
Example output
"created_by": "kernel-core-5.14.0-427.26.1.el9_4.x86_64" 1
- 1
- This kernel version does not match the host kernel version shown in the /lib/modules output above.
Preinstall the openvswitch module on the host as a workaround by running the following command:

$ sudo modprobe openvswitch
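The version comparison that this procedure performs by eye can be sketched as a small shell helper. This is a hypothetical sketch that reuses the example version strings shown above; in practice you would take the host version from uname -r and the image version from the podman inspect output.

```shell
# Hypothetical sketch: compare the host kernel with the kernel-core version
# baked into the bootc image. The version strings below are the examples from
# this procedure; substitute live values from 'uname -r' and 'podman inspect'.
host_kernel="5.14.0-427.28.1.el9_4.x86_64"
image_pkg="kernel-core-5.14.0-427.26.1.el9_4.x86_64"

# Strip the package-name prefix to get the bare kernel version.
image_kernel="${image_pkg#kernel-core-}"

if [ "${host_kernel}" = "${image_kernel}" ]; then
  echo "kernel versions match: no workaround needed"
else
  # Mismatch: preload the module on the host, for example with
  # 'sudo modprobe openvswitch', before running the container.
  echo "kernel mismatch: ${host_kernel} (host) vs ${image_kernel} (image)"
fi
```

With the example versions above, the helper reports a mismatch, which is the case where preloading the module on the host is required.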
1.4.2. Checking the host configuration for the MicroShift CSI
If the host is already configured to have a volume group (VG) with free space, that configuration is inherited by the container. The VG can be used by the MicroShift Logical Volume Manager (LVM) Container Storage Interface (CSI) to allocate storage.
Prerequisites
- A Red Hat Enterprise Linux (RHEL) 9.4 host with an active Red Hat subscription for building MicroShift bootc images and running containers.
- You are logged in to the RHEL 9.4 host using the user credentials that have sudo permissions.
Procedure
Determine whether a volume group exists and has the necessary free space by running the following command:
$ sudo vgs
Example output for existing volume group
VG   #PV #LV #SN Attr   VSize   VFree 1
rhel   1   1   0 wz--n- <91.02g <2.02g
- 1
- If no free volume group is present, the output is either empty or the VFree value is 0.
Note
If the host has a volume group with the needed storage available, you can run your container now. You do not need to configure the MicroShift CSI further.
Important
If you do not have a volume group, you must set one up for the LVM CSI to allocate storage in bootc MicroShift containers. See the additional resources for more information.
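The free-space decision described above can be sketched as a small parsing helper. This is a hypothetical sketch that parses a captured vgs data row matching the example output; the field positions assume the default vgs column layout.

```shell
# Hypothetical sketch: parse a captured 'vgs' data row to decide whether a
# volume group with free space exists. The sample row matches the example
# output in this section; field positions assume the default vgs columns.
vgs_row='  rhel   1   1   0 wz--n- <91.02g <2.02g'

# VFree is the seventh column; vgs prefixes rounded values with '<'.
vfree=$(echo "${vgs_row}" | awk '{print $7}' | tr -d '<')

if [ -z "${vfree}" ] || [ "${vfree}" = "0" ] || [ "${vfree}" = "0g" ]; then
  echo "no usable volume group: create a loop-device volume group"
else
  echo "volume group has free space: ${vfree}"
fi
```

If the helper reports no usable volume group, continue with the loop-device procedure in the next section.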
Additional resources
1.4.3. Configuring container storage for the bootc image and the MicroShift CSI
Use this procedure when a volume group with the necessary storage is not available on the host. Create and allocate an additional LVM volume group loop device to be shared with the container and used by the MicroShift Logical Volume Manager (LVM) Container Storage Interface (CSI).
If you found a volume group with adequate space when you checked the host configuration for the MicroShift CSI, you do not need to complete this step. Proceed to running your bootc image.
Prerequisites
- A Red Hat Enterprise Linux (RHEL) 9.4 host with an active Red Hat subscription for building MicroShift bootc images and running containers.
- You are logged in to the RHEL 9.4 host using the user credentials that have sudo permissions.
Procedure
Create a file to be used for LVM partitioning and size it as needed by running the following commands:

VGFILE=/var/lib/microshift-lvm-storage.img
VGSIZE=1G 1

$ sudo truncate --size="${VGSIZE}" "${VGFILE}"

- 1
- Specify the size for the file.

Attach the file as a loop device by running the following command:

$ sudo losetup -f "${VGFILE}"

Query the loop device name by running the following command:

$ VGLOOP=$(sudo losetup -j "${VGFILE}" | cut -d: -f1)
Create a free volume group on the loop device by running the following command:
$ sudo vgcreate -f -y <my-volume-group> "${VGLOOP}" 1
- 1
- Replace <my-volume-group> with the name of the volume group you are creating.
1.5. Running the MicroShift bootc container
Run your MicroShift bootc container to explore its reduced complexity and experiment with new capabilities and dependencies.
Prerequisites
- A Red Hat Enterprise Linux (RHEL) 9.4 host with an active Red Hat subscription for building MicroShift bootc images and running containers.
- You are logged in to the RHEL 9.4 host using the user credentials that have sudo permissions.
- You have a pull secret file for downloading the required MicroShift container images.
Procedure
Run the container for the MicroShift service by entering the following command:
PULL_SECRET=~/.pull-secret.json
IMAGE_NAME=microshift-4.17-bootc

$ sudo podman run --rm -it --privileged \
    -v "${PULL_SECRET}":/etc/crio/openshift-pull-secret:ro \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    --name "${IMAGE_NAME}" \
    "${IMAGE_NAME}"
Note
The systemd-modules-load service fails to start in the container if the host kernel version is different from the bootc image kernel version. This failure can be safely ignored because all the necessary kernel modules have already been loaded by the host.

- A login prompt is presented in the terminal after MicroShift has started.
- Log in to the running container using your user credentials.
Verify that all the MicroShift pods are up and running without errors by running the following command:
$ watch sudo oc get pods -A \
    --kubeconfig /var/lib/microshift/resources/kubeadmin/kubeconfig
Example output
NAMESPACE                  NAME                                       READY   STATUS    RESTARTS      AGE
kube-system                csi-snapshot-controller-7cfb9df49c-kc9dx   1/1     Running   0             31s
kube-system                csi-snapshot-webhook-5c6b978878-jzk5r      1/1     Running   0             28s
openshift-dns              dns-default-rpnlt                          2/2     Running   0             14s
openshift-dns              node-resolver-rxvdk                        1/1     Running   0             31s
openshift-ingress          router-default-69cd7b5545-7zcw7            1/1     Running   0             29s
openshift-ovn-kubernetes   ovnkube-master-c7hlh                       4/4     Running   1 (16s ago)   31s
openshift-ovn-kubernetes   ovnkube-node-mkpht                         1/1     Running   1 (17s ago)   31s
openshift-service-ca       service-ca-5d5d96459d-5pd5s                1/1     Running   0             28s
openshift-storage          topolvm-controller-677cbfcdb9-28dqr        5/5     Running   0             31s
openshift-storage          topolvm-node-6fzbl                         3/3     Running   0             14s
- You can now use the MicroShift cluster running in the container in the same way you use any other MicroShift cluster.
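The readiness check above can be sketched as a small filter over the pod listing. This is a hypothetical sketch that works on captured sample lines mirroring the example output; in practice you would pipe the live oc command output instead.

```shell
# Hypothetical sketch: check that every pod in a captured 'oc get pods -A'
# listing reports a Running status. The sample lines mirror the example
# output in this section.
pods='kube-system    csi-snapshot-controller-7cfb9df49c-kc9dx 1/1 Running 0 31s
openshift-dns  dns-default-rpnlt                        2/2 Running 0 14s'

# Column 4 is STATUS; collect the names of any pods that are not Running.
not_running=$(echo "${pods}" | awk '$4 != "Running" {print $2}')

if [ -z "${not_running}" ]; then
  echo "all pods are Running"
else
  echo "pods not ready: ${not_running}"
fi
```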
1.5.1. Accessing the MicroShift cluster remotely
Use the following procedure to access the MicroShift cluster from a remote location by using a kubeconfig file.

The user@workstation login is used to access the host machine remotely. The <user> value in the procedure is the name of the user that user@workstation logs in with to the MicroShift host.
Prerequisites
- You have installed the oc binary.
- The user@microshift user has opened the firewall from the local host.
Procedure
As user@workstation, create a ~/.kube/ folder if your Red Hat Enterprise Linux (RHEL) machine does not have one by running the following command:

[user@workstation]$ mkdir -p ~/.kube/
As user@workstation, set a variable for the hostname of your MicroShift host by running the following command:

[user@workstation]$ MICROSHIFT_MACHINE=<name or IP address of MicroShift machine>
As user@workstation, copy the generated kubeconfig file that contains the host name or IP address you want to connect with from the RHEL machine running MicroShift to your local machine by running the following command:

[user@workstation]$ ssh <user>@$MICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/$MICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config
Note
To generate the kubeconfig files for this step, see Generating additional kubeconfig files for remote access.

As user@workstation, update the permissions on your ~/.kube/config file by running the following command:

$ chmod go-r ~/.kube/config
Verification
As user@workstation, verify that MicroShift is running by entering the following command:

[user@workstation]$ oc get all -A
1.6. Cleaning up container storage for the bootc image and the MicroShift CSI
You can remove an additional LVM volume group loop device that was shared with the container and used by the MicroShift Logical Volume Manager (LVM) Container Storage Interface (CSI).
Prerequisites
- A Red Hat Enterprise Linux (RHEL) 9.4 host with an active Red Hat subscription for building MicroShift bootc images and running containers.
- You are logged in to the RHEL 9.4 host using the user credentials that have sudo permissions.
Procedure
Clean up the loop device and additional volume group by using the following commands:
Detach the loop device by entering the following command:
$ sudo losetup -d "${VGLOOP}"
Delete the LVM volume group file by entering the following command:
$ sudo rm -f "${VGFILE}"