MicroShift is Technology Preview software only.
For more information about the support scope of Red Hat Technology Preview software, see Technology Preview Support Scope.
Installing and configuring MicroShift clusters
Chapter 1. Installing MicroShift from an RPM package
You can install MicroShift from an RPM package on a machine with Red Hat Enterprise Linux (RHEL) 9.2.
MicroShift is Technology Preview only. This Technology Preview software is not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using MicroShift in production. Technology Preview provides early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Red Hat does not support an update path from the Technology Preview version to later versions of MicroShift. A new installation is necessary.
For more information about the support scope of Red Hat Technology Preview features, read Technology Preview Features Support Scope.
1.1. System requirements for installing MicroShift
The following conditions must be met prior to installing MicroShift:
- RHEL 9.2
- 2 CPU cores
- 2 GB RAM for MicroShift, or 3 GB RAM if you install RHEL by using a network-based HTTP or FTP installation
- 10 GB of storage
- You have an active MicroShift subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.
- You have a subscription that includes MicroShift RPMs.
- You have a Logical Volume Manager (LVM) Volume Group (VG) with sufficient capacity for the Persistent Volumes (PVs) of your workload.
1.2. Before installing MicroShift from an RPM package
MicroShift uses the logical volume manager storage (LVMS) Container Storage Interface (CSI) plugin for providing storage to persistent volumes (PVs). LVMS relies on the Linux logical volume manager (LVM) to dynamically manage the backing logical volumes (LVs) for PVs. For this reason, your machine must have an LVM volume group (VG) with unused space in which LVMS can create the LVs for your workload’s PVs.
To configure a volume group (VG) that allows LVMS to create the LVs for your workload’s PVs, lower the Desired Size of your root volume during the installation of RHEL. Lowering the size of your root volume allows unallocated space on the disk for additional LVs created by LVMS at runtime.
1.3. Preparing to install MicroShift from an RPM package
Configure your RHEL machine to have a logical volume manager (LVM) volume group (VG) with sufficient capacity for the persistent volumes (PVs) of your workload.
Prerequisites
- The system requirements for installing MicroShift have been met.
- You have root user access to your machine.
- You have configured your LVM VG with the capacity needed for the PVs of your workload.
Procedure
- In the graphical installer under Installation Destination in the Storage Configuration subsection, select Custom → Done to open the dialog for configuring partitions and volumes. The Manual Partitioning window is displayed.
- Under New Red Hat Enterprise Linux 9.x Installation, select Click here to create them automatically.
- Select the root partition, /, reduce Desired Capacity so that the VG has sufficient capacity for your PVs, and then click Update Settings.
Complete your installation.
Note: For more options on partition configuration, read the guide linked in the Additional information section for Configuring Manual Partitioning.
As a root user, verify the VG capacity available on your system by running the following command:
$ sudo vgs
Example output:
VG   #PV #LV #SN Attr   VSize    VFree
rhel   1   2   0 wz--n- <127.00g 54.94g
1.4. Installing MicroShift from an RPM package
Use the following procedure to install MicroShift from an RPM package.
Prerequisites
- The system requirements for installing MicroShift have been met.
- You have completed the steps of preparing to install MicroShift from an RPM package.
Procedure
As a root user, enable the MicroShift repositories by running the following command:
$ sudo subscription-manager repos \
    --enable rhocp-4.13-for-rhel-9-$(uname -m)-rpms \
    --enable fast-datapath-for-rhel-9-$(uname -m)-rpms
Install MicroShift by running the following command:
$ sudo dnf install -y microshift
Optional: Install greenboot for MicroShift by running the following command:
$ sudo dnf install -y microshift-greenboot
Download your installation pull secret from the Red Hat Hybrid Cloud Console to a temporary folder, for example, $HOME/openshift-pull-secret. This pull secret allows you to authenticate with the container registries that serve the container images used by MicroShift. To copy the pull secret to the /etc/crio folder of your RHEL machine, run the following command:
$ sudo cp $HOME/openshift-pull-secret /etc/crio/openshift-pull-secret
Make the root user the owner of the /etc/crio/openshift-pull-secret file by running the following command:
$ sudo chown root:root /etc/crio/openshift-pull-secret
Make the /etc/crio/openshift-pull-secret file readable and writeable by the root user only by running the following command:
$ sudo chmod 600 /etc/crio/openshift-pull-secret
If your RHEL machine has a firewall enabled, you must configure a few mandatory firewall rules. For firewalld, run the following commands:
$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
$ sudo firewall-cmd --reload
If the volume group (VG) that you have prepared for MicroShift uses the default name rhel, no further configuration is necessary. If you have used a different name, or if you want to change more configuration settings, see the Configuring MicroShift section.
1.5. Starting the MicroShift service
Use the following procedure to start the MicroShift service.
Prerequisites
- You have installed MicroShift from an RPM package.
Procedure
As a root user, start the MicroShift service by entering the following command:
$ sudo systemctl start microshift
Optional: To configure your RHEL machine to start MicroShift when your machine starts, enter the following command:
$ sudo systemctl enable microshift
Optional: To disable MicroShift from automatically starting when your machine starts, enter the following command:
$ sudo systemctl disable microshift
Note: The first time that the MicroShift service starts, it downloads and initializes MicroShift’s container images. As a result, it can take several minutes for MicroShift to start the first time that the service is deployed. Boot time is reduced for subsequent starts of the MicroShift service.
1.6. Stopping the MicroShift service
Use the following procedure to stop the MicroShift service.
Prerequisites
- The MicroShift service is running.
Procedure
Enter the following command to stop the MicroShift service:
$ sudo systemctl stop microshift
Workloads deployed on MicroShift continue running even after the MicroShift service has been stopped. Enter the following command to display running workloads:
$ sudo crictl ps -a
Enter the following command to stop the deployed workloads:
$ sudo systemctl stop kubepods.slice
1.7. How to access the MicroShift cluster
Use the procedures in this section to access the MicroShift cluster, either from the same machine running the MicroShift service or remotely from a workstation. You can use this access to observe and administrate workloads. When using these steps, choose the kubeconfig file that contains the host name or IP address you want to connect with and place it in the relevant directory. As listed in each procedure, you use the OpenShift Container Platform CLI tool (oc) for cluster activities.
1.7.1. Accessing the MicroShift cluster locally
Use the following procedure to access the MicroShift cluster locally by using a kubeconfig file.
Prerequisites
- You have installed the oc binary.
Procedure
Optional: To create a ~/.kube/ folder if your RHEL machine does not have one, run the following command:
$ mkdir -p ~/.kube/
Copy the generated local access kubeconfig file to the ~/.kube/ directory by running the following command:
$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config
Update the permissions on your ~/.kube/config file by running the following command:
$ chmod go-r ~/.kube/config
Verification
Verify that MicroShift is running by entering the following command:
$ oc get all -A
1.7.2. Opening the firewall for remote access to the MicroShift cluster
Use the following procedure to open the firewall so that a remote user can access the MicroShift cluster. This procedure must be completed before a workstation user can access the cluster remotely.
For this procedure, user@microshift is the user on the MicroShift host machine and is responsible for setting up that machine so that it can be accessed by a remote user on a separate workstation.
Prerequisites
- You have installed the oc binary.
- Your account has cluster administration privileges.
Procedure
As user@microshift on the MicroShift host, open the firewall port for the Kubernetes API server (6443/tcp) by running the following command:
[user@microshift]$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload
Verification
As user@microshift, verify that MicroShift is running by entering the following command:
[user@microshift]$ oc get all -A
1.7.3. Accessing the MicroShift cluster remotely
Use the following procedure to access the MicroShift cluster from a remote workstation by using a kubeconfig file.
The user@workstation login is used to access the host machine remotely. The <user> value in the procedure is the name of the user that user@workstation logs in with to the MicroShift host.
Prerequisites
- You have installed the oc binary.
- The user@microshift user has opened the firewall from the local host.
Procedure
As user@workstation, create a ~/.kube/ folder if your RHEL machine does not have one by running the following command:
[user@workstation]$ mkdir -p ~/.kube/
As user@workstation, set a variable for the hostname of your MicroShift host by running the following command:
[user@workstation]$ MICROSHIFT_MACHINE=<name or IP address of MicroShift machine>
As user@workstation, copy the generated kubeconfig file that contains the host name or IP address you want to connect with from the RHEL machine running MicroShift to your local machine by running the following command:
[user@workstation]$ ssh <user>@$MICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/$MICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config
As user@workstation, update the permissions on your ~/.kube/config file by running the following command:
$ chmod go-r ~/.kube/config
Verification
As user@workstation, verify that MicroShift is running by entering the following command:
[user@workstation]$ oc get all -A
Chapter 2. Embedding MicroShift in a RHEL for Edge image
You can embed MicroShift into a Red Hat Enterprise Linux (RHEL) for Edge 9.2 image. Use this guide to build a RHEL image containing MicroShift.
MicroShift is Technology Preview only. This Technology Preview software is not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using MicroShift in production. Technology Preview provides early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Red Hat does not support an update path from the Technology Preview version to later versions of MicroShift. A new installation is necessary.
For more information about the support scope of Red Hat Technology Preview features, read Technology Preview Features Support Scope.
2.1. Preparing for image building
Read Composing, installing, and managing RHEL for Edge images.
MicroShift 4.13 deployments have only been tested with Red Hat Enterprise Linux (RHEL) for Edge 9.2. Other versions of RHEL are not supported.
To build a Red Hat Enterprise Linux (RHEL) for Edge 9.2 image for a given CPU architecture, you need a RHEL 9.2 build host of the same CPU architecture that meets the Image Builder system requirements.
Follow the instructions in Installing Image Builder to install Image Builder and the composer-cli tool.
2.2. Adding MicroShift repositories to Image Builder
Use the following procedure to add the MicroShift repositories to Image Builder on your build host.
Prerequisites
- Your build host meets the Image Builder system requirements.
- You have installed and set up Image Builder and the composer-cli tool.
- You have root-user access to your build host.
Procedure
Create an Image Builder configuration file for adding the rhocp-4.13 RPM repository source required to pull MicroShift RPMs by running the following command:
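A minimal sketch of such a command, writing a rhocp-4.13.toml source definition; the url is an assumption based on the standard Red Hat CDN layout for your architecture, so verify it and the other settings against your subscription before use:
cat > rhocp-4.13.toml <<EOF
id = "rhocp-4.13"
name = "Red Hat OpenShift Container Platform 4.13 for RHEL 9"
type = "yum-baseurl"
url = "https://cdn.redhat.com/content/dist/layered/rhel9/$(uname -m)/rhocp/4.13/os"
check_gpg = true
check_ssl = true
system = false
rhsm = true
EOF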
Create an Image Builder configuration file for adding the fast-datapath RPM repository by running the following command:
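A minimal sketch of a matching fast-datapath.toml definition; as above, the url is an assumption that you must verify for your environment:
cat > fast-datapath.toml <<EOF
id = "fast-datapath"
name = "Fast Datapath for RHEL 9"
type = "yum-baseurl"
url = "https://cdn.redhat.com/content/dist/layered/rhel9/$(uname -m)/fast-datapath/os"
check_gpg = true
check_ssl = true
system = false
rhsm = true
EOF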
Add the sources to the Image Builder by running the following commands:
$ sudo composer-cli sources add rhocp-4.13.toml
$ sudo composer-cli sources add fast-datapath.toml
Verification
Confirm that the sources were added properly by running the following command:
$ sudo composer-cli sources list
Example output
appstream
baseos
fast-datapath
rhocp-4.13
2.3. Adding the MicroShift service to a blueprint
Adding the MicroShift RPM package to an Image Builder blueprint enables the build of a RHEL for Edge image with MicroShift embedded.
Procedure
Use the following example to create your blueprint:
Image Builder blueprint example
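A minimal sketch of such a blueprint, written to minimal-microshift.toml; the description and the [customizations.services] entry are assumptions, and the version wildcards follow the note below:
name = "minimal-microshift"
description = "MicroShift on RHEL for Edge"
version = "0.0.1"
modules = []
groups = []

[[packages]]
name = "microshift"
version = "4.13.*"

# 1: optional greenboot package
[[packages]]
name = "microshift-greenboot"
version = "4.13.*"

[customizations.services]
enabled = ["microshift"]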
1. Optional microshift-greenboot RPM. For more information, read the "Greenboot health check" guide in the "Running Applications" section.
Note: The wildcard * in the commands uses the latest MicroShift RPMs. If you need a specific version, substitute the wildcard for the version you want. For example, insert 4.13.1 to download the MicroShift 4.13.1 RPMs.
Add the blueprint to the Image Builder by running the following command:
$ sudo composer-cli blueprints push minimal-microshift.toml
Verification
Verify the Image Builder configuration listing only MicroShift packages by running the following command:
$ sudo composer-cli blueprints depsolve minimal-microshift | grep microshift
Example output
Optional: Verify the Image Builder configuration listing all components to be installed by running the following command:
$ sudo composer-cli blueprints depsolve minimal-microshift
2.4. Creating the Red Hat Enterprise Linux (RHEL) for Edge image
Use the following procedure to create the ISO. The RHEL for Edge Installer image pulls the commit from the running container and creates an installable boot ISO with a Kickstart file configured to use the embedded OSTree commit.
Prerequisites
- Your build host meets the Image Builder system requirements.
- You have installed and set up Image Builder and the composer-cli tool.
- You have root-user access to your build host.
- You have the podman tool.
Procedure
Start an ostree container image build by running the following command:
$ BUILDID=$(sudo composer-cli compose start-ostree --ref "rhel/9/$(uname -m)/edge" minimal-microshift edge-container | awk '{print $2}')
This command also returns the identification (ID) of the build for monitoring.
You can check the status of the build periodically by running the following command:
$ sudo composer-cli compose status
Example output of a running build
ID                                   Status    Time                      Blueprint           Version  Type            Size
cc3377ec-4643-4483-b0e7-6b0ad0ae6332 RUNNING   Wed Jun 7 12:26:23 2023   minimal-microshift  0.0.1    edge-container
Example output of a completed build
ID                                   Status    Time                      Blueprint           Version  Type            Size
cc3377ec-4643-4483-b0e7-6b0ad0ae6332 FINISHED  Wed Jun 7 12:32:37 2023   minimal-microshift  0.0.1    edge-container
Note: You can use the watch command to monitor your build if you are familiar with how to start and stop it.
Download the container image using the ID and get the image ready for use by running the following command:
$ sudo composer-cli compose image ${BUILDID}
Change the ownership of the downloaded container image to the current user by running the following command:
$ sudo chown $(whoami). ${BUILDID}-container.tar
Add read permissions for the current user to the image by running the following command:
$ sudo chmod a+r ${BUILDID}-container.tar
Bootstrap a server on port 8085 for the ostree container image to be consumed by the ISO build by completing the following steps:
Get the IMAGEID variable result by running the following command:
$ IMAGEID=$(cat < "./${BUILDID}-container.tar" | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*')
Use the IMAGEID variable result to execute the podman command step by running the following command:
$ sudo podman run -d --name=minimal-microshift-server -p 8085:8080 ${IMAGEID}
This command also returns the ID of the container saved in the IMAGEID variable for monitoring.
Generate the installer blueprint file by running the following command:
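A minimal sketch of such a command, writing an essentially empty installer blueprint to microshift-installer.toml; an empty package list is assumed to be sufficient because the ISO content comes from the embedded OSTree commit:
cat > microshift-installer.toml <<EOF
name = "microshift-installer"
description = ""
version = "0.0.0"
modules = []
groups = []
packages = []
EOF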
2.5. Add the blueprint to Image Builder and build the ISO
Add the blueprint to the Image Builder by running the following command:
$ sudo composer-cli blueprints push microshift-installer.toml
Start the ostree ISO build by running the following command:
$ BUILDID=$(sudo composer-cli compose start-ostree --url http://localhost:8085/repo/ --ref "rhel/9/$(uname -m)/edge" microshift-installer edge-installer | awk '{print $2}')
This command also returns the identification (ID) of the build for monitoring.
You can check the status of the build periodically by running the following command:
$ sudo composer-cli compose status
Example output for a running build
ID                                   Status    Time                      Blueprint             Version  Type            Size
c793c24f-ca2c-4c79-b5b7-ba36f5078e8d RUNNING   Wed Jun 7 13:22:20 2023   microshift-installer  0.0.0    edge-installer
Example output for a completed build
ID                                   Status    Time                      Blueprint             Version  Type            Size
c793c24f-ca2c-4c79-b5b7-ba36f5078e8d FINISHED  Wed Jun 7 13:34:49 2023   microshift-installer  0.0.0    edge-installer
2.6. Download the ISO and prepare it for use
Download the ISO using the ID by running the following command:
$ sudo composer-cli compose image ${BUILDID}
Change the ownership of the downloaded ISO to the current user by running the following command:
$ sudo chown $(whoami). ${BUILDID}-installer.iso
Add read permissions for the current user to the image by running the following command:
$ sudo chmod a+r ${BUILDID}-installer.iso
2.7. Provisioning a machine for MicroShift
Provision a machine with your RHEL for Edge image by using the procedures from the RHEL for Edge documentation.
To use MicroShift, you must provision the system so that it meets the following requirements:
- The machine you are provisioning must meet the system requirements for installing MicroShift.
- The file system must have a logical volume manager (LVM) volume group (VG) with sufficient capacity for the persistent volumes (PVs) of your workload.
- A pull secret from the Red Hat Hybrid Cloud Console must be present as /etc/crio/openshift-pull-secret and have root user-only read/write permissions.
- The firewall must be configured with MicroShift’s required firewall settings.
If you are using a Kickstart such as the RHEL for Edge Installer (ISO) image, you can update your Kickstart file to meet the provisioning requirements.
Prerequisites
You have created a RHEL for Edge Installer (ISO) image containing your RHEL for Edge commit with MicroShift.
- This requirement includes the steps of composing an RFE Container image, creating the RFE Installer blueprint, starting the RFE container, and composing the RFE Installer image.
Create a Kickstart file or use an existing one. In the Kickstart file, you must include:
- Detailed instructions about how to create a user.
- How to fetch and deploy the RHEL for Edge image.
For more information, read "Additional resources."
Procedure
In the main section of the Kickstart file, update the setup of the filesystem such that it contains an LVM volume group called rhel with at least a 10 GB system root. Leave free space for the LVMS CSI driver to use for storing the data for your workloads.
Example Kickstart snippet for configuring the filesystem
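A minimal sketch of such a partitioning setup; the boot partition sizes are assumptions and should be adjusted to your disk and firmware:
zerombr
clearpart --all --initlabel
part /boot/efi --fstype=efi --size=200
part /boot --fstype=xfs --asprimary --size=800
part pv.01 --grow
volgroup rhel pv.01
# 10 GB root logical volume; the remaining space in the rhel volume group
# stays free for the LVMS CSI driver
logvol / --vgname=rhel --fstype=xfs --size=10240 --name=root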
In the %post section of the Kickstart file, add your pull secret and the mandatory firewall rules.
Example Kickstart snippet for adding the pull secret and firewall rules
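A minimal sketch of such a %post section; the pull secret value is a placeholder that you must replace with the secret downloaded from the Red Hat Hybrid Cloud Console:
%post --log=/var/log/anaconda/post-install.log --erroronfail

# Write the pull secret with root user-only read/write permissions
cat > /etc/crio/openshift-pull-secret <<'EOF'
<your_openshift_pull_secret_json>
EOF
chmod 600 /etc/crio/openshift-pull-secret

# Add the mandatory MicroShift firewall rules to the offline firewall configuration
firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16
firewall-offline-cmd --zone=trusted --add-source=169.254.169.1

%end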
Install the mkksiso tool by running the following command:
$ sudo yum install -y lorax
Update the Kickstart file in the ISO with your new Kickstart file by running the following command:
$ sudo mkksiso <your_kickstart>.ks <your_installer>.iso <updated_installer>.iso
2.8. How to access the MicroShift cluster
Use the procedures in this section to access the MicroShift cluster, either from the same machine running the MicroShift service or remotely from a workstation. You can use this access to observe and administrate workloads. When using these steps, choose the kubeconfig file that contains the host name or IP address you want to connect with and place it in the relevant directory. As listed in each procedure, you use the OpenShift Container Platform CLI tool (oc) for cluster activities.
2.8.1. Accessing the MicroShift cluster locally
Use the following procedure to access the MicroShift cluster locally by using a kubeconfig file.
Prerequisites
- You have installed the oc binary.
Procedure
Optional: To create a ~/.kube/ folder if your RHEL machine does not have one, run the following command:
$ mkdir -p ~/.kube/
Copy the generated local access kubeconfig file to the ~/.kube/ directory by running the following command:
$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config
Update the permissions on your ~/.kube/config file by running the following command:
$ chmod go-r ~/.kube/config
Verification
Verify that MicroShift is running by entering the following command:
$ oc get all -A
2.8.2. Opening the firewall for remote access to the MicroShift cluster
Use the following procedure to open the firewall so that a remote user can access the MicroShift cluster. This procedure must be completed before a workstation user can access the cluster remotely.
For this procedure, user@microshift is the user on the MicroShift host machine and is responsible for setting up that machine so that it can be accessed by a remote user on a separate workstation.
Prerequisites
- You have installed the oc binary.
- Your account has cluster administration privileges.
Procedure
As user@microshift on the MicroShift host, open the firewall port for the Kubernetes API server (6443/tcp) by running the following command:
[user@microshift]$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload
Verification
As user@microshift, verify that MicroShift is running by entering the following command:
[user@microshift]$ oc get all -A
2.8.3. Accessing the MicroShift cluster remotely
Use the following procedure to access the MicroShift cluster from a remote workstation by using a kubeconfig file.
The user@workstation login is used to access the host machine remotely. The <user> value in the procedure is the name of the user that user@workstation logs in with to the MicroShift host.
Prerequisites
- You have installed the oc binary.
- The user@microshift user has opened the firewall from the local host.
Procedure
As user@workstation, create a ~/.kube/ folder if your RHEL machine does not have one by running the following command:
[user@workstation]$ mkdir -p ~/.kube/
As user@workstation, set a variable for the hostname of your MicroShift host by running the following command:
[user@workstation]$ MICROSHIFT_MACHINE=<name or IP address of MicroShift machine>
As user@workstation, copy the generated kubeconfig file that contains the host name or IP address you want to connect with from the RHEL machine running MicroShift to your local machine by running the following command:
[user@workstation]$ ssh <user>@$MICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/$MICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config
As user@workstation, update the permissions on your ~/.kube/config file by running the following command:
$ chmod go-r ~/.kube/config
Verification
As user@workstation, verify that MicroShift is running by entering the following command:
[user@workstation]$ oc get all -A
Chapter 3. The greenboot health check
Greenboot is the generic health check framework for the systemd service on RPM-OSTree-based systems. The microshift-greenboot RPM and greenboot-default-health-checks are optional RPM packages you can install. Greenboot is used to assess system health and automate a rollback to the last healthy state in the event of software trouble.
This health check framework is especially useful when you need to check for software problems and perform system rollbacks on edge devices where direct serviceability is either limited or non-existent. When health check scripts are installed and configured, health checks run every time the system starts.
Using greenboot can reduce your risk of being locked out of edge devices during updates and prevent a significant interruption of service if an update fails. When a failure is detected, the system boots into the last known working configuration using the rpm-ostree rollback capability.
A MicroShift health check script is included in the microshift-greenboot RPM. The greenboot-default-health-checks RPM includes health check scripts verifying that DNS and ostree services are accessible. You can also create your own health check scripts based on the workloads you are running. You can write one that verifies that an application has started, for example.
Rollback is not possible in the case of an update failure on a system not using OSTree. This is true even though health checks might run.
3.1. How greenboot uses directories to run scripts
Health check scripts run from four /etc/greenboot directories. These scripts run in alphabetical order. Keep this in mind when you configure the scripts for your workloads.
When the system starts, greenboot runs the scripts in the required.d and wanted.d directories. Depending on the outcome of those scripts, greenboot continues the startup or attempts a rollback as follows:
- System as expected: When all of the scripts in the required.d directory are successful, greenboot runs any scripts present in the /etc/greenboot/green.d directory.
- System trouble: If any of the scripts in the required.d directory fail, greenboot runs any prerollback scripts present in the red.d directory, then restarts the system.
Greenboot redirects script and health check output to the system log. When you are logged in, a daily message provides the overall system health output.
3.1.1. Greenboot directories details
Returning a nonzero exit code from any script means that script has failed. Greenboot restarts the system a few times to retry the scripts before attempting to roll back to the previous version.
- /etc/greenboot/check/required.d contains the health checks that must not fail.
  - If the scripts fail, greenboot retries them three times by default. You can configure the number of retries in the /etc/greenboot/greenboot.conf file by setting the GREENBOOT_MAX_BOOTS parameter to the desired number of retries, as shown in the example after this list.
  - After all retries fail, greenboot automatically initiates a rollback if one is available. If a rollback is not available, the system log output shows that manual intervention is required.
  - The 40_microshift_running_check.sh health check script for MicroShift is installed into this directory.
- /etc/greenboot/check/wanted.d contains health scripts that are allowed to fail without causing the system to be rolled back.
  - If any of these scripts fail, greenboot logs the failure but does not initiate a rollback.
- /etc/greenboot/green.d contains scripts that run after greenboot has declared the start successful.
- /etc/greenboot/red.d contains scripts that run after greenboot has declared the startup as failed, including the 40_microshift_pre_rollback.sh prerollback script. This script is executed right before a system rollback. The script performs MicroShift pod and OVN-Kubernetes cleanup to avoid potential conflicts after the system is rolled back to a previous version.
3.2. The MicroShift health script
The 40_microshift_running_check.sh health check script only performs validation of core MicroShift services. Install your customized workload validation scripts in the greenboot directories to ensure successful application operations after system updates. Scripts run in alphabetical order.
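For example, a minimal sketch of a customized workload health check, assuming a hypothetical deployment named my-app in the my-namespace namespace; install it as an executable file such as /etc/greenboot/check/required.d/50_my_app_check.sh:
#!/bin/bash
# Fail the health check (nonzero exit) if the hypothetical my-app deployment
# does not become available within five minutes.
set -e
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig

oc wait deployment/my-app -n my-namespace --for=condition=Available --timeout=300s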
MicroShift health checks are listed in the following table:
Validation | Pass | Fail
---|---|---
Check that the script runs with root permissions | Next | exit 0
Check that the microshift.service is enabled | Next | exit 1
Wait for the microshift.service to be active | Next | exit 1
Wait for Kubernetes API health endpoints to be working and receiving traffic | Next | exit 1
Wait for any pod to start | Next | exit 1
For each core namespace, wait for images to be pulled | Next | exit 1
For each core namespace, wait for pods to be ready | Next | exit 1
For each core namespace, check if pods are not restarting | exit 0 | exit 1
3.2.1. Validation wait period
The wait period in each validation is five minutes by default. After the wait period, if the validation has not succeeded, it is declared a failure. This wait period is incrementally increased by the base wait period after each boot in the verification loop.
- You can override the base-time wait period by setting the MICROSHIFT_WAIT_TIMEOUT_SEC environment variable in the /etc/greenboot/greenboot.conf configuration file. For example, you can change the wait time to three minutes by resetting the value to 180 seconds, such as MICROSHIFT_WAIT_TIMEOUT_SEC=180.
3.3. Enabling systemd journal service data persistency
The default configuration of the systemd journal service stores the data in the volatile /run/log/journal directory. To persist system logs across system starts and restarts, you must enable log persistence and set limits on the maximal journal data size.
Procedure
Make the directory by running the following command:
$ sudo mkdir -p /etc/systemd/journald.conf.d
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the configuration file by running the following command:
Edit the configuration file values for your size requirements.
3.4. Updates and third-party workloads
Health checks are especially useful after an update. You can examine the output of greenboot health checks and determine whether the update was declared valid. This health check can help you determine if the system is working properly.
Health check scripts for updates are installed into the /etc/greenboot/check/required.d directory and are automatically executed during each system start. Exiting scripts with a nonzero status means the system start is declared as failed.
Wait until after an update is declared valid before starting third-party workloads. If a rollback is performed after workloads start, you can lose data. Some third-party workloads create or update data on a device before an update is complete. Upon rollback, the file system reverts to its state before the update.
3.5. Checking the results of an update
After a successful start, greenboot sets the boot_success variable to 1 in GRUB. You can view the overall status of system health checks after an update in the system log by using the following procedure.
Procedure
To access the overall status of system health checks, run the following command:
$ sudo grub2-editenv - list | grep ^boot_success
Example output for a successful system start
boot_success=1
3.6. Accessing health check output in the system log
You can manually access the output of health checks in the system log by using the following procedure.
Procedure
To access the results of a health check, run the following command:
$ sudo journalctl -o cat -u greenboot-healthcheck.service
Example output of a failed health check
3.7. Accessing prerollback health check output in the system log
You can access the output of health check scripts in the system log. For example, check the results of a prerollback script using the following procedure.
Procedure
To access the results of a prerollback script, run the following command:
$ sudo journalctl -o cat -u redboot-task-runner.service
Example output of a prerollback script
3.8. Checking updates with a health script
Access the output of health check scripts in the system log after an update by using the following procedure.
Procedure
To access the result of update checks, run the following command:
$ sudo grub2-editenv - list | grep ^boot_success
Example output for a successful update
boot_success=1
If your command returns boot_success=0, either the greenboot health check is still running or the update is a failure.