Getting ready to install MicroShift
Plan for your MicroShift installation and learn about important configurations
Chapter 1. Getting ready to install MicroShift
To use Red Hat Device Edge to compute at the edge, plan your Red Hat Enterprise Linux (RHEL) installation type and your MicroShift configuration.
1.1. System requirements for installing MicroShift
These requirements are the minimum system requirements for MicroShift and Red Hat Enterprise Linux (RHEL). Add the system requirements for the workload you plan to run.
For example, if an IoT gateway solution requires 4 GB of RAM, your system needs to have at least 2 GB for RHEL and MicroShift, plus 4 GB for the workloads. Thus, this example deployment requires 6 GB of RAM in total.
Allow for extra capacity for future needs if you are deploying physical devices in remote locations. If you are uncertain of the RAM required, use the maximum RAM capacity that the device can support.
The following conditions must be met before installing MicroShift:
- A compatible version of RHEL. For more information, see the compatibility table in the following section.
- Hardware or hypervisors that are certified for your RHEL version are strongly recommended. For more information, see the following links:
  - Red Hat certified hardware
  - Certified hypervisors
  For information about the support policy for non-certified hardware or hypervisors, see the following link:
- AArch64 or x86_64 system architecture.
- 2 CPU cores.
- 2 GB RAM. Installing from the network (UEFI HTTPS or PXE boot) requires 3 GB of RAM for RHEL.
- 10 GB of storage.
- You have an active MicroShift subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.
- If your workload requires persistent volumes (PVs), you have a Logical Volume Manager (LVM) volume group (VG) with enough free capacity for the workloads.
- You have configured secure access to the system so that you can manage it. For more information, see the following link:
1.2. Compatibility table
You must pair a supported version of Red Hat Enterprise Linux (RHEL) with the MicroShift version you are using as described in the following compatibility table.
Red Hat Device Edge release compatibility matrix
Red Hat Enterprise Linux (RHEL) and MicroShift work together as a single solution for device-edge computing. You can update each component separately, but the product versions must be compatible. Supported configurations of Red Hat Device Edge use verified release combinations, as listed in the following table:
RHEL Version(s) | MicroShift Version | Supported MicroShift Version → Version Updates |
---|---|---|
9.4 | 4.17 | 4.17.1 → 4.17.z |
9.4 | 4.16 | 4.16.0 → 4.16.z, 4.16 → 4.17 |
9.2, 9.3 | 4.15 | 4.15.0 → 4.15.z, 4.15 → 4.16 on RHEL 9.4 |
9.2, 9.3 | 4.14 | 4.14.0 → 4.14.z, 4.14 → 4.15 or 4.14 → 4.16 on RHEL 9.4 |
1.3. MicroShift installation tools
To use MicroShift, you must already have or plan to install Red Hat Enterprise Linux (RHEL), either on bare metal or as a virtual machine (VM) that you provision. Although each use case has different details, every installation of Red Hat Device Edge uses RHEL tools and the OpenShift CLI (oc).
You can use RPMs to install MicroShift on an existing RHEL machine. You do not need other tools unless you are also installing an image-based RHEL system or VM at the same time.
1.4. RHEL installation types
Choose the best Red Hat Enterprise Linux (RHEL) installation type based on where you want to run your cluster and what your applications need to do. For the best results, apply the following principles:
- For every installation target, you must configure both the operating system and MicroShift.
- Consider your application storage needs, networking for cluster or application access, and your authentication and security requirements.
- Understand the differences between the RHEL installation types, including the support scope of each, and the tools used.
1.4.1. Using RPMs, or package-based installation
This simple installation type uses a basic command to install MicroShift on an existing RHEL machine. Basic CLI tools are required for this installation type.
1.4.2. RHEL image-based installations
Image-based installation types involve creating an rpm-ostree-based, immutable version of RHEL that is optimized for edge deployment.
- RHEL for Edge can be deployed to the edge in production environments. You can use this installation type where network connections are present, restricted, or completely offline, depending on the local environment.
Image mode for RHEL is based on OCI container images and bootable containers. See the following link for an introduction to bootc technology:
When choosing an image-based installation, consider whether the installation target is intended to be in an offline or networked state, where you plan to build system images, and how you plan to load your Red Hat Device Edge. Use the following scenarios as general guidance:
- If you build either a fully self-contained RHEL for Edge or an image mode for RHEL ISO outside a disconnected environment, and then install the ISO locally on your edge devices, you likely do not need an RPM repository or a mirror registry.
- If you build an ISO outside a disconnected environment that does not include the container images, but consists of only the RPMs, you need a mirror registry inside your disconnected environment. You use your mirror registry to pull container images.
- If you build images inside a disconnected environment, or use package-based installations, you need both a mirror registry and a local RPM mirror repository. You can use either the RHEL reposync utility or Red Hat Satellite for advanced use cases. See the following links for more information:
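For instance, you can populate a local RPM mirror with the dnf reposync plug-in. The repository ID below is an assumption based on the MicroShift channel naming and varies by version and architecture; this is a sketch, not a definitive procedure:

# Mirror the MicroShift RPM channel to a local directory
# (the repo ID is illustrative; list your enabled repos with 'dnf repolist')
$ sudo dnf reposync \
    --repo=rhocp-4.17-for-rhel-9-x86_64-rpms \
    --download-path=/var/local/rpm-mirror \
    --download-metadata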
1.5. RHEL installation tools and concepts
Familiarize yourself with the following RHEL tools and concepts:
A Kickstart file, which contains the configuration and instructions used during the installation of your specific operating system. For more information, see the following link:
RHEL image builder is a tool for creating deployment-ready customized system images. RHEL image builder uses a blueprint that you create to make the ISO. RHEL image builder is best installed on a RHEL VM and is used with the composer-cli tool. To set up these tools and review the workflow, see the following RHEL documentation links:

A blueprint file directs RHEL image builder to the items to include in the ISO. An image blueprint provides a persistent definition of image customizations. You can create multiple builds from a single blueprint. You can also edit an existing blueprint to build a new ISO as requirements change. See the following link for more information:
An ISO, which is the bootable operating system on which MicroShift runs. See the following links for more information:
1.6. Red Hat Device Edge installation steps
For most installation types, you must also take the following steps:
Download the pull secret from the Red Hat Hybrid Cloud Console using the following link:
Be ready to configure MicroShift by adding parameters and values to the MicroShift YAML configuration file. For more information, see the following link:
- Decide whether you need to configure storage for the applications and tasks you run in your MicroShift cluster, or disable the MicroShift storage plug-in completely. For more information about creating volume groups and persistent volumes on RHEL, see the following link. A quick way to check existing volume group capacity is sketched at the end of this section.
Configure networking settings according to the access needs you plan for your MicroShift cluster and applications. Consider whether you want to use single or dual-stack networks, configure a firewall, or configure routes.
Note: You can use Red Hat Enterprise Linux for Real Time (real-time kernel) where predictable latency is critical. Workload partitioning is also required for low-latency applications. For more information about low latency and the real-time kernel, see the following link:
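For the storage step above, you can verify that a host has an LVM volume group with free capacity for persistent volumes by using the standard LVM tooling. The volume group name and sizes below are illustrative and vary by system:

$ sudo vgs

Example output (values vary by system):

  VG   #PV #LV #SN Attr   VSize   VFree
  rhel   1   2   0 wz--n- <99.00g <20.00g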
1.7. Encrypt etcd data
Kubernetes objects are stored in an etcd database and might contain sensitive data. The etcd data is not encrypted by default. You can encrypt the disk that contains the etcd database by using the Linux Unified Key Setup-on-disk-format (LUKS) management tool for block device encryption.
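MicroShift does not manage this encryption itself. As a hedged sketch, you can request LUKS encryption for the LVM-backed storage at RHEL installation time through a Kickstart directive; replace the placeholder passphrase with your own secret, or use the --passphrase-file option instead:

# Kickstart storage directive (illustrative): create an LVM layout and
# encrypt it with LUKS, covering the volume that holds the etcd database
autopart --type=lvm --encrypted --passphrase=<passphrase>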
Chapter 2. Using FIPS mode with MicroShift
You can use FIPS mode with RPM-based installations of MicroShift on Red Hat Enterprise Linux (RHEL) 9.
- To enable FIPS mode in MicroShift containers, the worker machine kernel must be enabled to run in FIPS mode before the machine starts.
- Using FIPS with Red Hat Enterprise Linux for Edge (RHEL for Edge) images is not supported.
- Using FIPS with image mode for RHEL is not supported.
2.1. FIPS mode with RHEL RPM-based installations
Using FIPS with MicroShift requires enabling the cryptographic module self-checks in your Red Hat Enterprise Linux (RHEL) installation. After the host operating system has been configured to start with the FIPS modules, MicroShift containers are automatically enabled to run in FIPS mode.
- When RHEL is started in FIPS mode, MicroShift core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 validation, only on the x86_64 architecture.
You must enable FIPS mode when you install RHEL 9 on the machines that you plan to use as worker machines.
Important: Because FIPS must be enabled before the operating system that your cluster uses starts for the first time, you cannot enable FIPS after you deploy a cluster.
- MicroShift uses a FIPS-compatible Golang compiler.
- FIPS is supported in the CRI-O container runtime.
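For example, following the standard RHEL workflow, you add the fips=1 kernel argument when booting the RHEL installer, then verify the mode after installation. The commands below are RHEL tooling, not MicroShift-specific:

# Verify FIPS mode after RHEL installation:
$ fips-mode-setup --check
FIPS mode is enabled.

# The kernel flag can also be confirmed directly:
$ cat /proc/sys/crypto/fips_enabled
1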
2.1.1. Limitations
- TLS implementation FIPS support is not complete.
- The FIPS implementation does not offer a single function that both computes hash functions and validates the keys that are based on that hash. This limitation continues to be evaluated for improvement in future MicroShift releases.
Chapter 3. The greenboot health check framework
Greenboot is the generic health check framework for the systemd service on rpm-ostree systems such as Red Hat Enterprise Linux for Edge (RHEL for Edge). This framework is included in MicroShift installations with the microshift-greenboot and greenboot-default-health-checks RPM packages.
Greenboot health checks run at various times to assess system health and automate a rollback on rpm-ostree systems to the last healthy state in cases of software trouble, for example:
- Default health check scripts run each time the system starts.
- In addition to the default health checks, you can write, install, and configure application health check scripts to also run every time the system starts.
- Greenboot can reduce your risk of being locked out of edge devices during updates and prevent a significant interruption of service if an update fails.
- When a failure is detected, the system boots into the last known working configuration by using the rpm-ostree rollback capability. This feature is especially useful for edge devices where direct serviceability is either limited or non-existent.
A MicroShift application health check script is included in the microshift-greenboot RPM. The greenboot-default-health-checks RPM includes health check scripts that verify that the DNS and ostree services are accessible. You can create your own health check scripts for the workloads you are running. For example, you can write one that verifies that an application has started.
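As a hedged sketch, a minimal workload health check might wait for an application endpoint to respond and fail the boot otherwise. The script name, endpoint URL, and timeout below are illustrative assumptions, not MicroShift defaults:

#!/bin/bash
# /etc/greenboot/check/required.d/50_myapp_running_check.sh (example name)
# Wait up to TIMEOUT seconds for the application to answer its health
# endpoint; a nonzero exit marks this boot as failed.

APP_URL="http://localhost:8080/healthz"   # hypothetical endpoint
TIMEOUT=300
INTERVAL=5
elapsed=0

until curl --fail --silent --output /dev/null "${APP_URL}"; do
    if [ "${elapsed}" -ge "${TIMEOUT}" ]; then
        echo "Application did not become healthy within ${TIMEOUT}s"
        exit 1
    fi
    sleep "${INTERVAL}"
    elapsed=$((elapsed + INTERVAL))
done

echo "Application health check passed"
exit 0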
3.1. How greenboot uses directories to run scripts
Health check scripts run from four /etc/greenboot directories. These scripts run in alphabetical order. Keep this in mind when you configure the scripts for your workloads.

When the system starts, greenboot runs the scripts in the required.d and wanted.d directories. Depending on the outcome of those scripts, greenboot continues the startup or attempts a rollback as follows:
- System as expected: When all of the scripts in the required.d directory run successfully, greenboot runs any scripts present in the /etc/greenboot/green.d directory.
- System trouble: If any of the scripts in the required.d directory fail, greenboot runs any prerollback scripts present in the red.d directory, then restarts the system.
Greenboot redirects script and health check output to the system log. When you are logged in, a daily message provides the overall system health output.
3.1.1. Greenboot directories details
Returning a nonzero exit code from any script means that script has failed. Greenboot restarts the system a few times to retry the scripts before attempting to roll back to the previous version.
- /etc/greenboot/check/required.d contains the health checks that must not fail.
  - If the scripts fail, greenboot retries them three times by default. You can configure the number of retries in the /etc/greenboot/greenboot.conf file by setting the GREENBOOT_MAX_BOOTS parameter to the desired number of retries.
  - After all retries fail, greenboot automatically initiates a rollback if one is available. If a rollback is not available, the system log output shows that manual intervention is required.
  - The 40_microshift_running_check.sh health check script for MicroShift is installed into this directory.
- /etc/greenboot/check/wanted.d contains health scripts that are allowed to fail without causing the system to be rolled back.
  - If any of these scripts fail, greenboot logs the failure but does not initiate a rollback.
- /etc/greenboot/green.d contains scripts that run after greenboot has declared the startup successful.
- /etc/greenboot/red.d contains scripts that run after greenboot has declared the startup failed, including the 40_microshift_pre_rollback.sh prerollback script. This script is executed right before a system rollback. The script performs MicroShift pod and OVN-Kubernetes cleanup to avoid potential conflicts after the system is rolled back to a previous version.
If you customize the values of any environment variable in the /etc/greenboot/greenboot.conf file, these changes can be lost when the greenboot RPM package is updated or downgraded.

- To retain customizations when building system images with MicroShift, add the greenboot.conf file to a blueprint.
- To retain customizations when using an RPM installation, apply changes to the greenboot.conf file after you install the MicroShift and greenboot RPMs.
3.2. The MicroShift health check script
The 40_microshift_running_check.sh health check script performs validation of core MicroShift services only. Install your customized workload health check scripts in the greenboot directories to ensure successful application operations after system updates. Scripts run in alphabetical order.
MicroShift health checks are listed in the following table:
Validation | Pass | Fail |
---|---|---|
Check that the script runs with root permissions | Next | exit 1 |
Check that the microshift.service is enabled | Next | exit 0 |
Wait for the microshift.service to be active | Next | exit 1 |
Wait for Kubernetes API health endpoints to be working and receiving traffic | Next | exit 1 |
Wait for any pod to start | Next | exit 1 |
For each core namespace, wait for images to be pulled | Next | exit 1 |
For each core namespace, wait for pods to be ready | Next | exit 1 |
For each core namespace, check if pods are not restarting | exit 0 | exit 1 |
3.2.1. Validation wait period
The wait period in each validation is five minutes by default. After the wait period, if the validation has not succeeded, it is declared a failure. This wait period is incrementally increased by the base wait period after each boot in the verification loop.
You can override the base wait period by setting the MICROSHIFT_WAIT_TIMEOUT_SEC environment variable in the /etc/greenboot/greenboot.conf configuration file. For example, you can change the wait time to three minutes by setting MICROSHIFT_WAIT_TIMEOUT_SEC=180.
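For example, a /etc/greenboot/greenboot.conf that allows five boot attempts and shortens the base wait period to three minutes could look like the following sketch; both variables are described in this chapter:

# /etc/greenboot/greenboot.conf
GREENBOOT_MAX_BOOTS=5            # boot attempts before rollback (default: 3)
MICROSHIFT_WAIT_TIMEOUT_SEC=180  # base wait period in seconds (default: 300)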
3.3. Enabling systemd journal service data persistency
The default configuration of the systemd journal service stores the data in the volatile /run/log/journal directory. To view system logs across system starts and restarts, you must enable log persistence and set limits on the maximum journal data size.
Procedure
Make the directory by running the following command:
$ sudo mkdir -p /etc/systemd/journald.conf.d

Create the configuration file by running the following command:
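A minimal configuration, assuming a 1 GB cap for persistent journal data; the file name microshift.conf is an example:

$ cat <<EOF | sudo tee /etc/systemd/journald.conf.d/microshift.conf
[Journal]
Storage=persistent
SystemMaxUse=1G
RuntimeMaxUse=1G
EOF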
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Edit the configuration file values for your size requirements.
3.4. Updates and third-party workloads
Health checks are especially useful after an update. You can examine the output of greenboot health checks to determine whether the update was declared valid and whether the system is working properly.
Health check scripts for updates are installed into the /etc/greenboot/check/required.d directory and are automatically executed during each system start. Exiting scripts with a nonzero status means the system start is declared as failed.
Wait until after an update is declared valid before starting third-party workloads. If a rollback is performed after workloads start, you can lose data. Some third-party workloads create or update data on a device before an update is complete. Upon rollback, the file system reverts to its state before the update.
3.5. Checking the results of an update
After a successful start, greenboot sets the boot_success variable to 1 in GRUB. You can view the overall status of system health checks after an update in the system log by using the following procedure.
Procedure
To access the overall status of system health checks, run the following command:
$ sudo grub2-editenv - list | grep ^boot_success
Example output for a successful system start
boot_success=1
3.6. Accessing health check output in the system log
You can manually access the output of health checks in the system log by using the following procedure.
Procedure
To access the results of a health check, run the following command:
$ sudo journalctl -o cat -u greenboot-healthcheck.service
Example output of a failed health check
3.7. Accessing prerollback health check output in the system log
You can access the output of health check scripts in the system log. For example, check the results of a prerollback script using the following procedure.
Procedure
To access the results of a prerollback script, run the following command:
$ sudo journalctl -o cat -u redboot-task-runner.service
Example output of a prerollback script
3.8. Checking updates with a health check script
Access the output of greenboot health check scripts in the system log after an update by using the following procedure.
Procedure
To access the result of update checks, run the following command:
$ sudo grub2-editenv - list | grep ^boot_success
Example output for a successful update
boot_success=1
If your command returns boot_success=0, either the greenboot health check is still running or the update failed.
Chapter 4. Mirroring container images for disconnected installations
You can use a custom container registry when you deploy MicroShift in a disconnected network. Running your cluster in a restricted network without direct internet connectivity is possible by installing the cluster from a mirrored set of container images in a private registry.
4.1. Mirror container images into an existing registry
Using a custom air-gapped container registry, or mirror, is necessary with certain user environments and workload requirements. Mirroring allows for the transfer of container images and updates to air-gapped environments where they can be installed on a MicroShift instance.
To create an air-gapped mirror registry for MicroShift containers, you must complete the following steps:
- Get the container image list to be mirrored.
- Configure the mirroring prerequisites.
- Download images on a host with internet access.
- Copy the downloaded image directory to an air-gapped site.
- Upload images to a mirror registry in an air-gapped site.
- Configure your MicroShift hosts to use the mirror registry.
4.2. Getting the mirror registry container image list
To use a mirror registry, you must know which container image references are used by a specific version of MicroShift. These references are provided in the release-<arch>.json files that are part of the microshift-release-info RPM package.

To mirror the Operator Lifecycle Manager (OLM) in disconnected environments, add the references provided in the release-olm-$ARCH.json file that is included in the microshift-olm RPM and follow the same procedure. Use oc-mirror for mirroring Operator catalogs and Operators.
Prerequisites
- You have installed jq.
Procedure
Access the list of container image references by using one of the following methods:
If the package is installed on the MicroShift host, get the location of the files by running the following command:
$ rpm -ql microshift-release-info

Example output
/usr/share/microshift/release/release-x86_64.json

If the package is not installed on a MicroShift host, download and unpack the RPM package without installing it by running the following command:
$ rpm2cpio microshift-release-info*.noarch.rpm | cpio -idmv

Example output
/usr/share/microshift/release/release-x86_64.json
Extract the list of container images into the microshift-container-refs.txt file by running the following commands:

$ RELEASE_FILE=/usr/share/microshift/release/release-$(uname -m).json

$ jq -r '.images | .[]' ${RELEASE_FILE} > microshift-container-refs.txt
After the microshift-container-refs.txt file is created with the MicroShift container image list, you can append the file with other user-specific image references before running the mirroring procedure.
4.3. Configuring mirroring prerequisites
You must create a container image registry credentials file that allows the mirroring of images from your internet-connected mirror host to your air-gapped mirror. Follow the instructions in the "Configuring credentials that allow images to be mirrored" link provided in the "Additional resources" section. These instructions guide you to create a ~/.pull-secret-mirror.json file on the mirror registry host that includes the user credentials for accessing the mirror.
4.3.1. Example mirror registry pull secret entry
For example, the following section is added to the pull secret file for the microshift_quay:8443 mirror registry, using microshift:microshift as the user name and password.
Example mirror registry section for pull secret file
"<microshift_quay:8443>": { "auth": "<microshift_auth>", "email": "<microshift_quay@example.com>" },
"<microshift_quay:8443>": {
"auth": "<microshift_auth>",
"email": "<microshift_quay@example.com>"
},
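The auth value is the base64 encoding of the <user>:<password> pair. For the microshift:microshift example credentials, you can generate it by running the following command:

$ echo -n 'microshift:microshift' | base64
bWljcm9zaGlmdDptaWNyb3NoaWZ0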
4.4. Downloading container images
After you have located the container list and completed the mirroring prerequisites, download the container images to a host with internet access.
Prerequisites
- You are logged into a host with access to the internet.
- The .pull-secret-mirror.json file and the microshift-containers directory contents are available locally.
Procedure
Install the skopeo tool used for copying the container images by running the following command:

$ sudo dnf install -y skopeo

Set the environment variable that points to the pull secret file:
$ PULL_SECRET_FILE=~/.pull-secret-mirror.json

Set the environment variable that points to the list of container images:
$ IMAGE_LIST_FILE=~/microshift-container-refs.txt

Set the environment variable that points to the destination directory for storing the downloaded data:
$ IMAGE_LOCAL_DIR=~/microshift-containers

Run the following script to download the container images to the ${IMAGE_LOCAL_DIR} directory:
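A minimal sketch of such a script, assuming the three environment variables set in this procedure; it copies each listed image into a per-image directory with skopeo:

#!/bin/bash
# Download every image listed in IMAGE_LIST_FILE into IMAGE_LOCAL_DIR,
# preserving digests so the mirror serves identical content.
while read -r src_img ; do
    dst_dir="${IMAGE_LOCAL_DIR}/${src_img#*/}"   # drop the registry host prefix
    echo "Downloading '${src_img}' to '${dst_dir}'"
    mkdir -p "${dst_dir}"
    skopeo copy --all --preserve-digests \
        --authfile "${PULL_SECRET_FILE}" \
        "docker://${src_img}" "dir:${dst_dir}"
done < "${IMAGE_LIST_FILE}"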
4.5. Uploading container images to a mirror registry
To use your container images at an air-gapped site, upload them to the mirror registry using the following procedure.
Prerequisites
- You are logged into a host with access to microshift-quay.
- The .pull-secret-mirror.json file is available locally.
- The microshift-containers directory contents are available locally.
Procedure
Install the skopeo tool used for copying the container images by running the following command:

$ sudo dnf install -y skopeo

Set the environment variable that points to the pull secret file:
$ IMAGE_PULL_FILE=~/.pull-secret-mirror.json

Set the environment variable that points to the local container image directory:
$ IMAGE_LOCAL_DIR=~/microshift-containers

Set the environment variable that points to the mirror registry URL for uploading the container images:
$ TARGET_REGISTRY=<registry_host>:<port>

Replace <registry_host>:<port> with the host name and port of your mirror registry server.
Run the following script to upload the container images to the ${TARGET_REGISTRY} mirror registry:
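A minimal sketch of the upload step, assuming the directory layout produced by the download script in the previous section and pushing each image back under its original name:

#!/bin/bash
# Upload every image stored under IMAGE_LOCAL_DIR to TARGET_REGISTRY.
cd "${IMAGE_LOCAL_DIR}"
while read -r img_dir ; do
    img_ref="${img_dir#./}"          # e.g. namespace/repo@sha256:<digest>
    echo "Uploading '${img_ref}' to '${TARGET_REGISTRY}'"
    skopeo copy --all --preserve-digests \
        --authfile "${IMAGE_PULL_FILE}" \
        "dir:${img_ref}" "docker://${TARGET_REGISTRY}/${img_ref}"
done < <(find . -type d -name '*:*')   # leaf directories named after tag or digest references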
4.6. Configuring hosts for mirror registry access
To configure a MicroShift host to use a mirror registry, you must give the MicroShift host access to the registry by creating a configuration file that maps the Red Hat registry host names to the mirror.
Prerequisites
- Your mirror host has access to the internet.
- The mirror host can access the mirror registry.
- You configured the mirror registry for use in your restricted network.
- You downloaded the pull secret and modified it to include authentication to your mirror repository.
Procedure
- Log into your MicroShift host.
Enable the SSL certificate trust on any host accessing the mirror registry by completing the following steps:
- Copy the rootCA.pem file from the mirror registry, for example, <registry_path>/quay-rootCA, to the MicroShift host at the /etc/pki/ca-trust/source/anchors directory.
- Enable the certificate in the system-wide trust store configuration by running the following command:

  $ sudo update-ca-trust
Create the /etc/containers/registries.conf.d/999-microshift-mirror.conf configuration file that maps the Red Hat registry host names to the mirror registry. Replace <registry_host>:<port> with the host name and port of your mirror registry server, for example, <microshift-quay:8443>.

Example mirror configuration file
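A minimal sketch in the containers-registries.conf TOML format, assuming quay.io, registry.redhat.io, and registry.access.redhat.com are the Red Hat registry hosts being mapped:

[[registry]]
    prefix = ""
    location = "quay.io"
    mirror-by-digest-only = true
    [[registry.mirror]]
        location = "<registry_host>:<port>"

[[registry]]
    prefix = ""
    location = "registry.redhat.io"
    mirror-by-digest-only = true
    [[registry.mirror]]
        location = "<registry_host>:<port>"

[[registry]]
    prefix = ""
    location = "registry.access.redhat.com"
    mirror-by-digest-only = true
    [[registry.mirror]]
        location = "<registry_host>:<port>"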
Enable the MicroShift service by running the following command:
$ sudo systemctl enable microshift

Reboot the host by running the following command:
$ sudo reboot