Chapter 4. Embedding in a RHEL for Edge image
You can embed MicroShift into a Red Hat Enterprise Linux for Edge (RHEL for Edge) image. Use this guide to build a RHEL image containing MicroShift.
4.1. System requirements for installing MicroShift
The following conditions must be met prior to installing MicroShift:
- A compatible version of Red Hat Enterprise Linux (RHEL). For more information, see the "Compatibility table" section.
- AArch64 or x86_64 system architecture.
- 2 CPU cores.
- 2 GB RAM. Installing from a network source (UEFI HTTP or PXE boot) requires 3 GB RAM for RHEL.
- 10 GB of storage.
- You have an active MicroShift subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.
- If your workload requires Persistent Volumes (PVs), you have a Logical Volume Manager (LVM) Volume Group (VG) with sufficient free capacity for the workloads.
These requirements are the minimum system requirements for MicroShift and Red Hat Enterprise Linux (RHEL). Add the system requirements for the workload you plan to run.
For example, if an IoT gateway solution requires 4 GB of RAM, your system needs at least 2 GB for Red Hat Enterprise Linux (RHEL) and MicroShift plus 4 GB for the workload, for a total of 6 GB of RAM.
If you are deploying physical devices in remote locations, allow extra capacity for future needs. If you are uncertain of the required RAM and your budget permits, use the maximum RAM capacity that the device can support.
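If your workload requires persistent volumes, you can confirm in advance that the host has an LVM volume group with sufficient free capacity. The following is a minimal check, assuming you have root access to the host:

$ sudo vgs -o vg_name,vg_size,vg_free

A volume group that shows enough free space in the VFree column can back the persistent volumes for your workloads.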
Ensure that you configure secure access to the system so that you can manage it. For more information, see Using secure communications between two systems with OpenSSH.
Plan to use a supported version of RHEL paired with your version of MicroShift as described in the following table.
4.2. Compatibility table
Plan to pair a supported version of RHEL for Edge with the MicroShift version you are using as described in the following compatibility table:
Red Hat Device Edge release compatibility matrix
Red Hat Enterprise Linux (RHEL) and MicroShift work together as a single solution for device-edge computing. You can update each component separately, but the product versions must be compatible. For example, an update of MicroShift from 4.14 to 4.16 requires a RHEL update. Supported configurations of Red Hat Device Edge use verified releases of each component together, as listed in the following table:
| RHEL for Edge Version(s) | MicroShift Version | MicroShift Release Status | Supported MicroShift Version→MicroShift Version Updates |
|---|---|---|---|
| 9.4 | 4.16 | Generally Available | 4.16.0→4.16.z, 4.14→4.16, and 4.15→4.16 |
| 9.2, 9.3 | 4.15 | Generally Available | 4.15.0→4.15.z, 4.14→4.15, and 4.15→4.16 |
| 9.2, 9.3 | 4.14 | Generally Available | 4.14.0→4.14.z, 4.14→4.15, and 4.14→4.16 |
| 9.2 | 4.13 | Technology Preview | None |
| 8.7 | 4.12 | Developer Preview | None |
4.3. Preparing for image building
Read Composing, installing, and managing RHEL for Edge images.
To build a Red Hat Enterprise Linux for Edge (RHEL for Edge) 9.4 image for a given CPU architecture, you need a RHEL 9.4 build host of the same CPU architecture that meets the Image Builder system requirements.
Follow the instructions in Installing Image Builder to install Image Builder and the composer-cli tool.
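For reference, the following is a minimal sketch of installing those tools on a registered RHEL 9 build host; the linked Installing Image Builder documentation remains the authoritative procedure:

$ sudo dnf install -y osbuild-composer composer-cli
$ sudo systemctl enable --now osbuild-composer.socket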
4.4. Adding MicroShift repositories to Image Builder
Use the following procedure to add the MicroShift repositories to Image Builder on your build host.
Prerequisites
- Your build host meets the Image Builder system requirements.
- You have installed and set up Image Builder and the composer-cli tool.
- You have root-user access to your build host.
Procedure
Create an Image Builder configuration file for adding the rhocp-4.16 RPM repository source required to pull MicroShift RPMs by running the following command:

cat > rhocp-4.16.toml <<EOF
id = "rhocp-4.16"
name = "Red Hat OpenShift Container Platform 4.16 for RHEL 9"
type = "yum-baseurl"
url = "https://cdn.redhat.com/content/dist/layered/rhel9/$(uname -m)/rhocp/4.16/os"
check_gpg = true
check_ssl = true
system = false
rhsm = true
EOF
Create an Image Builder configuration file for adding the fast-datapath RPM repository by running the following command:

cat > fast-datapath.toml <<EOF
id = "fast-datapath"
name = "Fast Datapath for RHEL 9"
type = "yum-baseurl"
url = "https://cdn.redhat.com/content/dist/layered/rhel9/$(uname -m)/fast-datapath/os"
check_gpg = true
check_ssl = true
system = false
rhsm = true
EOF
Add the sources to the Image Builder by running the following commands:
$ sudo composer-cli sources add rhocp-4.16.toml
$ sudo composer-cli sources add fast-datapath.toml
Verification
Confirm that the sources were added properly by running the following command:
$ sudo composer-cli sources list
Example output
appstream baseos fast-datapath rhocp-4.16
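Optionally, you can inspect an individual source to confirm its URL and settings, for example:

$ sudo composer-cli sources info rhocp-4.16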
4.5. Adding the MicroShift service to a blueprint
Adding the MicroShift RPM package to an Image Builder blueprint enables the build of a RHEL for Edge image with MicroShift embedded.
- Start with step 1 to create your own minimal blueprint file, which results in a faster MicroShift installation.
- Start with step 2 to use the generated blueprint for installation, which includes all of the RPM packages and container images. This is a longer installation process, but results in a faster startup because container references are accessed locally.
Important:
- Replace <microshift_blueprint.toml> in the following procedures with the name of the TOML file you are using.
- Replace <microshift_blueprint> in the following procedures with the name you want to use for your blueprint.
Procedure
Use the following example to create your own blueprint file:
Custom Image Builder blueprint example
cat > <microshift_blueprint.toml> <<EOF
name = "<microshift_blueprint>"
description = ""
version = "0.0.1"
modules = []
groups = []

[[packages]]
name = "microshift"
version = "*"

[customizations.services]
enabled = ["microshift"]
EOF
Note: The wildcard * in the commands uses the latest MicroShift RPMs. If you need a specific version, substitute the wildcard with the version you want. For example, insert 4.16.0 to download the MicroShift 4.16.0 RPMs.

Optional: Use the blueprint installed in the /usr/share/microshift/blueprint directory that is specific to your platform architecture. See the following example snippet for an explanation of the blueprint sections:

Generated Image Builder blueprint example snippet
name = "microshift_blueprint" description = "MicroShift 4.16.1 on x86_64 platform" version = "0.0.1" modules = [] groups = [] [[packages]] 1 name = "microshift" version = "4.16.1" ... ... [customizations.services] 2 enabled = ["microshift"] [customizations.firewall] ports = ["22:tcp", "80:tcp", "443:tcp", "5353:udp", "6443:tcp", "30000-32767:tcp", "30000-32767:udp"] ... ... [[containers]] 3 source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f41e79c17e8b41f1b0a5a32c3e2dd7cd15b8274554d3f1ba12b2598a347475f4" [[containers]] source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbc65f1fba7d92b36cf7514cd130fe83a9bd211005ddb23a8dc479e0eea645fd" ... … EOF
1 References for all non-optional MicroShift RPM packages using the same version compatible with the microshift-release-info RPM.
2 References for automatically enabling MicroShift on system startup and applying default networking settings.
3 References for all non-optional MicroShift container images necessary for an offline deployment.
Add the blueprint to the Image Builder by running the following command:
$ sudo composer-cli blueprints push <microshift_blueprint.toml> 1
1 Replace <microshift_blueprint.toml> with the name of your TOML file.
Verification
Verify the Image Builder configuration listing only MicroShift packages by running the following command:
$ sudo composer-cli blueprints depsolve <microshift_blueprint> | grep microshift 1
1 Replace <microshift_blueprint> with the name of your blueprint.
Example output
blueprint: microshift_blueprint v0.0.1
microshift-greenboot-4.16.1-202305250827.p0.g4105d3b.assembly.4.16.1.el9.noarch
microshift-networking-4.16.1-202305250827.p0.g4105d3b.assembly.4.16.1.el9.x86_64
microshift-release-info-4.16.1-202305250827.p0.g4105d3b.assembly.4.16.1.el9.noarch
microshift-4.16.1-202305250827.p0.g4105d3b.assembly.4.16.1.el9.x86_64
microshift-selinux-4.16.1-202305250827.p0.g4105d3b.assembly.4.16.1.el9.noarch
Optional: Verify the Image Builder configuration listing all components to be installed by running the following command:
$ sudo composer-cli blueprints depsolve <microshift_blueprint> 1
1 Replace <microshift_blueprint> with the name of your blueprint.
4.6. Adding other packages to a blueprint
Add the references for optional RPM packages to your OSTree blueprint to enable them.
Prerequisites
- You created an Image Builder blueprint file.
Procedure
Edit your OSTree blueprint by running the following command:
$ vi <microshift_blueprint.toml> 1
1 Replace <microshift_blueprint.toml> with the name of the blueprint file used for the MicroShift service.
Add the following example text to your blueprint:
[[packages]]
name = "<microshift-additional-package-name>"
version = "*"
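After you edit the blueprint, push the updated file and confirm that the additional package resolves against your configured repositories. The following is a minimal check that reuses the commands from the previous section:

$ sudo composer-cli blueprints push <microshift_blueprint.toml>
$ sudo composer-cli blueprints depsolve <microshift_blueprint> | grep <microshift-additional-package-name>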
Next steps
- Add custom certificate authorities to the blueprint as needed.
After you are done adding to your blueprint, you can apply your changes by building a new OSTree system and deploying it on the client:
- Create the ISO.
- Add the blueprint and build the ISO.
- Download the ISO and prepare it for use.
- Do any provisioning that is needed.
4.7. Adding a certificate authority bundle
MicroShift uses the host trust bundle when clients evaluate server certificates. You can also use a customized security certificate chain to improve the compatibility of your endpoint certificates with clients specific to your deployments. To do this, you can add a certificate authority (CA) bundle with root and intermediate certificates to the Red Hat Enterprise Linux for Edge (RHEL for Edge) system-wide trust store.
4.7.1. Adding a certificate authority bundle to an rpm-ostree image
You can include additional trusted certificate authorities (CAs) in the Red Hat Enterprise Linux for Edge (RHEL for Edge) rpm-ostree image by adding them to the blueprint that you use to create the image. Using the following procedure sets up additional CAs to be trusted by the operating system when pulling images from an image registry.
This procedure requires you to configure the CA bundle customizations in the blueprint, and then add steps to your kickstart file to enable the bundle. In the following steps, data is the key, and <value> represents the PEM-encoded certificate.
Prerequisites
- You have root user access to your build host.
- Your build host meets the Image Builder system requirements.
- You have installed and set up Image Builder and the composer-cli tool.
Procedure
Add the following custom values to your blueprint to add a directory.

Add instructions to your blueprint on the host where the image is built to create the directory, for example, /etc/pki/ca-trust/source/anchors/, for your certificate bundles:

[[customizations.directories]]
path = "/etc/pki/ca-trust/source/anchors"
After the image has booted, create the certificate bundles, for example, /etc/pki/ca-trust/source/anchors/cert1.pem:

[[customizations.files]]
path = "/etc/pki/ca-trust/source/anchors/cert1.pem"
data = "<value>"
To enable the certificate bundle in the system-wide trust store configuration, use the update-ca-trust command on the host where the image you are using has booted, for example:

$ sudo update-ca-trust
The update-ca-trust command might be included in the %post section of a kickstart file used for MicroShift host installation so that all the necessary certificate trust is enabled on the first boot. You must configure the CA bundle customizations in the blueprint before adding steps to your kickstart file to enable the bundle.

%post
# Update certificate trust storage in case new certificates were
# installed at /etc/pki/ca-trust/source/anchors directory
update-ca-trust
%end
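After the image boots with the updated kickstart, you can optionally confirm that the certificate was added to the system-wide trust store. The following is a quick check, where <your_ca_subject_name> is a placeholder for a recognizable part of your CA certificate subject:

$ trust list | grep -i "<your_ca_subject_name>"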
4.8. Creating the RHEL for Edge image
Use the following procedure to create the ISO. The RHEL for Edge Installer image pulls the commit from the running container and creates an installable boot ISO with a Kickstart file configured to use the embedded rpm-ostree commit.
Prerequisites
- Your build host meets the Image Builder system requirements.
- You have installed and set up Image Builder and the composer-cli tool.
- You have root-user access to your build host.
- You have installed the podman tool.
Procedure
Start an ostree container image build by running the following command:

$ BUILDID=$(sudo composer-cli compose start-ostree --ref "rhel/9/$(uname -m)/edge" <microshift_blueprint> edge-container | awk '/^Compose/ {print $2}') 1

1 Replace <microshift_blueprint> with the name of your blueprint.
This command also returns the identification (ID) of the build for monitoring.
You can check the status of the build periodically by running the following command:
$ sudo composer-cli compose status
Example output of a running build
ID                                     Status     Time                      Blueprint              Version   Type             Size
cc3377ec-4643-4483-b0e7-6b0ad0ae6332   RUNNING    Wed Jun 7 12:26:23 2023   microshift_blueprint   0.0.1     edge-container
Example output of a completed build
ID                                     Status     Time                      Blueprint              Version   Type             Size
cc3377ec-4643-4483-b0e7-6b0ad0ae6332   FINISHED   Wed Jun 7 12:32:37 2023   microshift_blueprint   0.0.1     edge-container
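If you prefer the status to refresh automatically instead of rerunning the command, one option is to wrap it in the watch command. The following is a minimal sketch that uses an arbitrary 30-second interval:

$ sudo watch -n 30 composer-cli compose status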
Note: You can use the watch command to monitor your build if you are familiar with how to start and stop it.

Download the container image using the ID and get the image ready for use by running the following command:
$ sudo composer-cli compose image ${BUILDID}
Change the ownership of the downloaded container image to the current user by running the following command:
$ sudo chown $(whoami). ${BUILDID}-container.tar
Add read permissions for the current user to the image by running the following command:
$ sudo chmod a+r ${BUILDID}-container.tar
Bootstrap a server on port 8085 for the ostree container image to be consumed by the ISO build by completing the following steps:

Get the IMAGEID variable result by running the following command:

$ IMAGEID=$(cat < "./${BUILDID}-container.tar" | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*')
Use the IMAGEID variable result to run the container by entering the following command:

$ sudo podman run -d --name=minimal-microshift-server -p 8085:8080 ${IMAGEID}
This command also returns the ID of the container saved in the IMAGEID variable for monitoring.
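Optionally, you can confirm that the container is serving the ostree repository before starting the ISO build. The following is a quick check, assuming curl is installed on the build host; the second command should return the ostree repository configuration:

$ sudo podman ps --filter name=minimal-microshift-server
$ curl -s http://localhost:8085/repo/config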
Generate the installer blueprint file by running the following command:

cat > microshift-installer.toml <<EOF
name = "microshift-installer"
description = ""
version = "0.0.0"
modules = []
groups = []
packages = []
EOF
4.9. Adding the blueprint to Image Builder and building the ISO
Add the blueprint to the Image Builder by running the following command:
$ sudo composer-cli blueprints push microshift-installer.toml
Start the ostree ISO build by running the following command:

$ BUILDID=$(sudo composer-cli compose start-ostree --url http://localhost:8085/repo/ --ref "rhel/9/$(uname -m)/edge" microshift-installer edge-installer | awk '{print $2}')
This command also returns the identification (ID) of the build for monitoring.
You can check the status of the build periodically by running the following command:
$ sudo composer-cli compose status
Example output for a running build
ID                                     Status     Time                      Blueprint              Version   Type             Size
c793c24f-ca2c-4c79-b5b7-ba36f5078e8d   RUNNING    Wed Jun 7 13:22:20 2023   microshift-installer   0.0.0     edge-installer
Example output for a completed build
ID                                     Status     Time                      Blueprint              Version   Type             Size
c793c24f-ca2c-4c79-b5b7-ba36f5078e8d   FINISHED   Wed Jun 7 13:34:49 2023   microshift-installer   0.0.0     edge-installer
4.10. Downloading the ISO and preparing it for use
Download the ISO using the ID by running the following command:
$ sudo composer-cli compose image ${BUILDID}
Change the ownership of the downloaded ISO to the current user by running the following command:
$ sudo chown $(whoami). ${BUILDID}-installer.iso
Add read permissions for the current user to the image by running the following command:
$ sudo chmod a+r ${BUILDID}-installer.iso
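If you plan to install from physical media, one common way to prepare the ISO is to write it to a USB drive. The following sketch assumes /dev/sdX is your target device; double-check the device name before running it, because dd overwrites the target:

$ sudo dd if=${BUILDID}-installer.iso of=/dev/sdX bs=4M status=progress conv=fsync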
4.11. Provisioning a machine for MicroShift
Provision a machine with your RHEL for Edge image by using the procedures from the RHEL for Edge documentation.
To use MicroShift, you must provision the system so that it meets the following requirements:
- The machine you are provisioning must meet the system requirements for installing MicroShift.
- The file system must have a logical volume manager (LVM) volume group (VG) with sufficient capacity for the persistent volumes (PVs) of your workload.
- A pull secret from the Red Hat Hybrid Cloud Console must be present as /etc/crio/openshift-pull-secret and have root user-only read/write permissions.
- The firewall must be configured with the required settings.
If you are using a Kickstart such as the RHEL for Edge Installer (ISO) image, you can update your Kickstart file to meet the provisioning requirements.
Prerequisites
You have created a RHEL for Edge Installer (ISO) image containing your RHEL for Edge commit with MicroShift.
- This requirement includes the steps of composing the RHEL for Edge (RFE) container image, creating the RFE installer blueprint, starting the RFE container, and composing the RFE installer image.
Create a Kickstart file or use an existing one. In the Kickstart file, you must include:
- Detailed instructions about how to create a user.
- How to fetch and deploy the RHEL for Edge image.
For more information, read "Additional resources."
Procedure
In the main section of the Kickstart file, update the setup of the file system so that it contains an LVM volume group called rhel with a system root of at least 10 GB. Leave free space for the LVMS CSI driver to use for storing the data for your workloads.

Example kickstart snippet for configuring the file system

# Partition disk such that it contains an LVM volume group called `rhel` with a
# 10GB+ system root but leaving free space for the LVMS CSI driver for storing data.
#
# For example, a 20GB disk would be partitioned in the following way:
#
# NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
# sda             8:0    0  20G  0 disk
# ├─sda1          8:1    0 200M  0 part /boot/efi
# ├─sda2          8:2    0 800M  0 part /boot
# └─sda3          8:3    0  19G  0 part
#   └─rhel-root 253:0    0  10G  0 lvm  /sysroot

ostreesetup --nogpg --osname=rhel --remote=edge \
    --url=file:///run/install/repo/ostree/repo --ref=rhel/<RHEL VERSION NUMBER>/x86_64/edge

zerombr
clearpart --all --initlabel
part /boot/efi --fstype=efi --size=200
part /boot --fstype=xfs --asprimary --size=800
# Uncomment this line to add a SWAP partition of the recommended size
#part swap --fstype=swap --recommended
part pv.01 --grow
volgroup rhel pv.01
logvol / --vgname=rhel --fstype=xfs --size=10000 --name=root

# To add users, use a line such as the following
user --name=<YOUR_USER_NAME> \
    --password=<YOUR_HASHED_PASSWORD> \
    --iscrypted --groups=<YOUR_USER_GROUPS>
In the %post section of the Kickstart file, add your pull secret and the mandatory firewall rules.

Example Kickstart snippet for adding the pull secret and firewall rules

%post --log=/var/log/anaconda/post-install.log --erroronfail

# Add the pull secret to CRI-O and set root user-only read/write permissions
cat > /etc/crio/openshift-pull-secret << EOF
YOUR_OPENSHIFT_PULL_SECRET_HERE
EOF
chmod 600 /etc/crio/openshift-pull-secret

# Configure the firewall with the mandatory rules for MicroShift
firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16
firewall-offline-cmd --zone=trusted --add-source=169.254.169.1

%end
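Optionally, you can check the Kickstart file for syntax errors before embedding it in the ISO. One way is the ksvalidator tool from the pykickstart package; the following is a minimal sketch:

$ sudo yum install -y pykickstart
$ ksvalidator <your_kickstart>.ks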
Install the mkksiso tool by running the following command:

$ sudo yum install -y lorax
Update the Kickstart file in the ISO with your new Kickstart file by running the following command:
$ sudo mkksiso <your_kickstart>.ks <your_installer>.iso <updated_installer>.iso
4.12. How to access the MicroShift cluster
Use the procedures in this section to access the MicroShift cluster by using the OpenShift CLI (oc).
- You can access the cluster from either the same machine running the MicroShift service or from a remote location.
- You can use this access to observe and administer workloads.
- When using the following steps, choose the kubeconfig file that contains the host name or IP address you want to connect to and place it in the relevant directory.
4.12.1. Accessing the MicroShift cluster locally
Use the following procedure to access the MicroShift cluster locally by using a kubeconfig file.
Prerequisites
- You have installed the oc binary.
Procedure
Optional: To create a ~/.kube/ folder if your Red Hat Enterprise Linux (RHEL) machine does not have one, run the following command:

$ mkdir -p ~/.kube/
Copy the generated local access kubeconfig file to the ~/.kube/ directory by running the following command:

$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config
Update the permissions on your ~/.kube/config file by running the following command:

$ chmod go-r ~/.kube/config
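Optionally, you can confirm which API server endpoint the copied kubeconfig file points to by running the following command:

$ oc config view --minify | grep server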
Verification
Verify that MicroShift is running by entering the following command:
$ oc get all -A
4.12.2. Opening the firewall for remote access to the MicroShift cluster
Use the following procedure to open the firewall so that a remote user can access the MicroShift cluster. This procedure must be completed before a workstation user can access the cluster remotely.
For this procedure, user@microshift is the user on the MicroShift host machine and is responsible for setting up that machine so that it can be accessed by a remote user on a separate workstation.
Prerequisites
- You have installed the oc binary.
- Your account has cluster administration privileges.
Procedure
As user@microshift on the MicroShift host, open the firewall port for the Kubernetes API server (6443/tcp) by running the following command:

[user@microshift]$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload
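Optionally, confirm that the port is now open by listing the ports in the public zone:

[user@microshift]$ sudo firewall-cmd --zone=public --list-ports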
Verification
As user@microshift, verify that MicroShift is running by entering the following command:

[user@microshift]$ oc get all -A
4.12.3. Accessing the MicroShift cluster remotely
Use the following procedure to access the MicroShift cluster from a remote location by using a kubeconfig file.
The user@workstation login is used to access the host machine remotely. The <user> value in the procedure is the name of the user that user@workstation logs in with to the MicroShift host.
Prerequisites
- You have installed the oc binary.
- The user@microshift has opened the firewall from the local host.
Procedure
As user@workstation, create a ~/.kube/ folder if your Red Hat Enterprise Linux (RHEL) machine does not have one by running the following command:

[user@workstation]$ mkdir -p ~/.kube/
As user@workstation, set a variable for the hostname of your MicroShift host by running the following command:

[user@workstation]$ MICROSHIFT_MACHINE=<name or IP address of MicroShift machine>
As user@workstation, copy the generated kubeconfig file that contains the host name or IP address you want to connect with from the RHEL machine running MicroShift to your local machine by running the following command:

[user@workstation]$ ssh <user>@$MICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/$MICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config
Note: To generate the kubeconfig files for this step, see Generating additional kubeconfig files for remote access.

As user@workstation, update the permissions on your ~/.kube/config file by running the following command:

$ chmod go-r ~/.kube/config
Verification
As user@workstation, verify that MicroShift is running by entering the following command:

[user@workstation]$ oc get all -A