Running applications
Running applications in MicroShift
Abstract
Chapter 1. Using Kustomize manifests to deploy applications
You can use the kustomize configuration management tool with application manifests to deploy applications. Read through the following procedures for an example of how Kustomize works in MicroShift.
1.1. How Kustomize works with manifests to deploy applications
The kustomize configuration management tool is integrated with MicroShift. You can use Kustomize and the OpenShift CLI (oc) together to apply customizations to your application manifests and deploy those applications to a MicroShift node.
- A kustomization.yaml file is a specification of resources plus customizations.
- Kustomize uses a kustomization.yaml file to load a resource, such as an application, then applies any changes you want to that application manifest and produces a copy of the manifest with the changes overlaid.
- Using a manifest copy with an overlay keeps the original configuration file for your application intact, while enabling you to deploy iterations and customizations of your applications efficiently.
- You can then deploy the application in your MicroShift node with an oc command.
At each system start, MicroShift deletes the resources defined in the manifests found in the delete subdirectories and then applies the manifest files found in the manifest directories to the node.
1.1.1. How MicroShift uses manifests
At every start, MicroShift searches the following manifest directories for Kustomize manifest files:
- /etc/microshift/manifests
- /etc/microshift/manifests.d/*
- /usr/lib/microshift/manifests
- /usr/lib/microshift/manifests.d/*
MicroShift automatically runs the equivalent of the kubectl apply -k command to apply the manifests to the node if any of the following file types exists in the searched directories:
- kustomization.yaml
- kustomization.yml
- Kustomization
Because manifests are loaded automatically from multiple directories, you can manage different MicroShift workloads flexibly and run them independently of each other.
| Location | Intent |
|---|---|
| /etc/microshift/manifests | Read-write location for configuration management systems or development. |
| /etc/microshift/manifests.d/* | Read-write location for configuration management systems or development. |
| /usr/lib/microshift/manifests | Read-only location for embedding configuration manifests on OSTree-based systems. |
| /usr/lib/microshift/manifests.d/* | Read-only location for embedding configuration manifests on OSTree-based systems. |
1.2. Override the list of manifest paths
You can override the list of default manifest paths by using a new single path, or by using a new glob pattern for multiple files. Use the following procedure to customize your manifest paths.
Procedure
Override the list of default paths by inserting your own values and running one of the following commands:
- Set manifests.kustomizePaths to "/opt/alternate/path" in the configuration file for a single path.
- Set manifests.kustomizePaths to "/opt/alternative/path.d/*" in the configuration file for a glob pattern.

  manifests:
    kustomizePaths:
      - <location>

  Set each <location> entry to an exact path by using "/opt/alternate/path", or to a glob pattern by using "/opt/alternative/path.d/*".
- To disable loading manifests, set the configuration option to an empty list:

  manifests:
    kustomizePaths: []

  Note: The configuration file overrides the defaults entirely. If the kustomizePaths value is set, only the values in the configuration file are used. Setting the value to an empty list disables manifest loading.
1.3. Using manifests example
This example demonstrates automatic deployment of a BusyBox container by using kustomize manifests in the /etc/microshift/manifests directory.
Procedure
Create the BusyBox manifest files by running the following commands:
Define the directory location:
$ MANIFEST_DIR=/etc/microshift/manifests

Make the directory:
$ sudo mkdir -p ${MANIFEST_DIR}

Place the YAML file in the directory:
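A minimal sketch of a BusyBox namespace and Deployment manifest for this step follows; the file name, image tag, and replica count are illustrative assumptions rather than the exact file from the original example.

# Hypothetical ${MANIFEST_DIR}/busybox.yaml; values are illustrative
apiVersion: v1
kind: Namespace
metadata:
  name: busybox
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-deployment
  namespace: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox:1.36        # assumed image tag
        command: ["sleep", "3600"]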
Next, create the kustomize manifest file by placing the YAML file in the directory:
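A minimal sketch of a ${MANIFEST_DIR}/kustomization.yaml that ties the example together follows; the namespace and resource file name are assumptions based on the surrounding example.

# Hypothetical ${MANIFEST_DIR}/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: busybox
resources:
  - busybox.yaml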
Restart MicroShift to apply the manifests by running the following command:
$ sudo systemctl restart microshift

Verify that the manifests were applied and that the busybox pod is running by entering the following command:

$ oc get pods -n busybox
Chapter 2. Deleting or updating Kustomize manifest resources
MicroShift supports the deletion of manifest resources in the following situations:
- Manifest removal: Manifests can be removed when you need to completely remove a resource from the node.
- Manifest upgrade: During an application upgrade, some resources might need to be removed while others are retained to preserve data.
When creating new manifests, you can use manifest resource deletion to remove or update old objects, ensuring there are no conflicts or issues.
Manifest files placed in the delete subdirectories are not automatically removed and require manual deletion. Only the resources listed in the manifest files placed in the delete subdirectories are deleted.
2.1. How manifest deletion works
By default, MicroShift searches for deletion manifests in the delete subdirectories within the manifests path. When a user places a manifest in these subdirectories, MicroShift removes the resources defined in that manifest when the system is started. Read through the following to understand how manifest deletion works in MicroShift.
- Each time the system starts, before applying the manifests, MicroShift scans the following delete subdirectories within the configured manifests directories to identify the manifests that need to be deleted:
  - /usr/lib/microshift/manifests/delete
  - /usr/lib/microshift/manifests.d/delete/*
  - /etc/microshift/manifests/delete
  - /etc/microshift/manifests.d/delete/*
- MicroShift deletes the resources defined in the manifests found in the delete directories by running the equivalent of the kubectl delete --ignore-not-found -k command.
2.2. Use cases for manifest resource deletion
The following sections explain the use cases in which manifest resource deletion is used.
2.2.1. Removing manifests for RPM systems
Use the following procedure in the data removal scenario for RPM systems to completely delete the resources defined in the manifests.
Procedure
- Identify the manifest that needs to be placed in the delete subdirectories.
- Create the delete subdirectory in which the manifest will be placed by running the following command:

  $ sudo mkdir -p <path_of_delete_directory>

  Replace <path_of_delete_directory> with one of the following valid directory paths: /etc/microshift/manifests.d/delete, /etc/microshift/manifests/delete, /usr/lib/microshift/manifests.d/delete, or /usr/lib/microshift/manifests/delete.
- Move the manifest file into one of the delete subdirectories under the configured manifests directory by running the following command:

  $ [sudo] mv <path_of_manifests> <path_of_delete_directory>

  where:
  <path_of_manifests>: Specifies the path of the manifest to be deleted, for example /etc/microshift/manifests.d/010-SOME-MANIFEST.
  <path_of_delete_directory>: Specifies one of the following valid directory paths: /etc/microshift/manifests.d/delete, /etc/microshift/manifests/delete, /usr/lib/microshift/manifests.d/delete, or /usr/lib/microshift/manifests/delete.
- Restart MicroShift by running the following command:
  $ sudo systemctl restart microshift

  MicroShift detects and removes the resource after the manifest file is placed in the delete subdirectories.
2.2.2. Removing manifests for OSTree systems
Use the following procedure to completely delete the resources defined in the manifests.
For OSTree installations, the delete subdirectories are read-only.
Procedure
- Identify the manifest that needs to be placed in the delete subdirectories.
- Package the manifest into an RPM. See "Building the RPM package for the application manifests" for the procedure to package the manifest into an RPM.
- Add the packaged RPM to the blueprint file to install it into the correct location. See "Adding application RPMs to a blueprint" for the procedure to add an RPM to a blueprint.
2.2.3. Upgrading manifests for RPM systems
Use the following procedure to remove some resources while retaining others to preserve data.
Procedure
- Identify the manifest that requires updating.
- Create new manifests to be applied in the manifest directories.
- Create new manifests for resource deletion; a deletion manifest needs only enough information to identify the resource, as shown in the sketch after this list. It is not necessary to include the spec in these manifests. See Using manifests example to create new manifests using the example.
- Use the procedure in "Removing manifests for RPM systems" to create delete subdirectories and place the manifests created for resource deletion in this path.
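A minimal sketch of such a deletion manifest follows; the resource kind, name, and namespace are placeholders for illustration, and the delete subdirectory also needs a kustomization.yaml that lists the file.

# Hypothetical deletion manifest: identifies the resource to delete; no spec is required
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app             # placeholder name of the resource to remove
  namespace: my-namespace  # placeholder namespace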
2.2.4. Upgrading manifests for OSTree systems
Use the following procedure to remove some resources while retaining others to preserve data.
For OSTree systems, the delete subdirectories are read-only.
Procedure
- Identify the manifest that needs updating.
- Create a new manifest to apply in the manifest directories. See Using manifests example to create new manifests using the example.
-
Create a new manifest for resource deletion to be placed in the
deletesubdirectories. - Use the procedure in "Removing manifests for OSTree systems" to remove the manifests.
Chapter 3. Using certificate manager on a MicroShift node
The MicroShift certificate manager supports managing TLS certificates. This integration enables the issuance, renewal, and management of certificates from certificate authorities.
3.1. MicroShift certificate manager functions
With the MicroShift certificate manager, you can complete the following tasks:
- Automates certificate management: cert-manager creates or updates certificates and detects Kubernetes resources that are annotated with cert-manager.io/kind.
- Supports multiple CAs: provides the flexibility to select a certificate authority that fits your security and operational needs.
- Simplifies ingress certificates: cert-manager handles certificates for an ingress controller, which simplifies the configuration and management of secure communication channels.
- Enhances security: certificate management is automated and the risk of error is reduced. Certificates are current and valid, which contributes to a secure environment.
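For example, a typical cert-manager workflow defines an Issuer and a Certificate resource. The following sketch uses the standard cert-manager.io/v1 API; the names, namespace, and DNS name are placeholder assumptions and are not values taken from this guide.

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: my-app
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-cert
  namespace: my-app
spec:
  secretName: my-app-tls          # Secret in which cert-manager stores the signed certificate
  dnsNames:
    - my-app.example.com          # placeholder DNS name
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer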
3.2. Installing and enabling the cert-manager Operator using RPM
The microshift-cert-manager RPM is an optional component that can be installed at any time. Follow these steps to install and verify the certificate manager:
Procedure
Install the cert-manager-operator using the microshift-cert-manager RPM by running the following command:

$ sudo dnf install microshift-cert-manager

Verify the certificate manager versions that are used by running the following command:

$ rpm -qi microshift-cert-manager

Restart MicroShift by running the following command:
$ sudo systemctl restart microshift

Verify that the microshift-cert-manager RPM is installed by running the following command:

$ oc get deployment -n cert-manager-operator

Example output
NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager-operator-controller-manager   1/1     1            1           2d22h

Verify that the cert-manager deployments are in a ready state and are up to date in the cert-manager namespace by running the following command:
$ oc get deployment -n cert-manager

Example output

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager              1/1     1            1           2d22h
cert-manager-cainjector   1/1     1            1           2d22h
cert-manager-webhook      1/1     1            1           2d22h

Verify that the pods are running in the cert-manager namespace by running the following command:

$ oc get pods -n cert-manager

Example output
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7cfb4fbb84-qdmk8              1/1     Running   2          2d22h
cert-manager-cainjector-854f669657-xzs8b   1/1     Running   2          2d22h
cert-manager-webhook-68fd6d5f5c-j942h      1/1     Running   2          2d22h
3.3. Installing and enabling the cert-manager Operator using OLM
You can install the optional microshift-cert-manager by using OLM at any time. For more information, see Using Operator Lifecycle Manager with MicroShift and Installing the cert-manager Operator for Red Hat OpenShift.
Chapter 4. Using MicroShift Observability
MicroShift Observability collects and transmits system data for monitoring and analysis. The data includes performance and usage metrics, and error reporting.
4.1. Installing and enabling MicroShift Observability
You can install MicroShift Observability at any time, including during the initial MicroShift installation. Observability collects and transmits system data for monitoring and analysis, such as performance and usage metrics and error reporting.
Procedure
Install the microshift-observability RPM by entering the following command:

$ sudo dnf install microshift-observability

Enable the microshift-observability system service by entering the following command:

$ sudo systemctl enable microshift-observability

Start the microshift-observability system service by entering the following command:

$ sudo systemctl start microshift-observability

Restart MicroShift after the initial installation.

$ sudo systemctl restart microshift-observability
The installation is successful if there is no output after you start the microshift-observability service.
4.2. Configuring MicroShift Observability
You must configure MicroShift Observability after it is installed by specifying a valid endpoint. If an endpoint is not specified, MicroShift Observability does not start. You can specify any OpenTelemetry Protocol (OTLP)-compatible endpoint for each configuration before starting MicroShift.
Procedure
Update the /etc/microshift/observability/opentelemetry-collector.yaml file to specify an OTLP-compatible endpoint. The endpoint must point to the IP address or host name, and the port number, of an OTLP service.

Replace ${env:OTEL_BACKEND} in the configuration with the IP address or host name of the remote back end. This IP address resolves to the local node's host name. An unreachable endpoint is reported in the MicroShift service logs.
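As an illustration only, an OTLP exporter endpoint in an OpenTelemetry Collector configuration generally has the following shape; the receiver, pipeline layout, and port 4317 are assumptions, and this is not the MicroShift-provided file.

# Illustrative OpenTelemetry Collector fragment, not the MicroShift-provided configuration
receivers:
  hostmetrics:
    scrapers:
      cpu:
      memory:
exporters:
  otlp:
    endpoint: ${env:OTEL_BACKEND}:4317   # replace ${env:OTEL_BACKEND} with the remote back end
    tls:
      insecure: true                     # assumption: no TLS verification toward a lab back end
service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [otlp]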
Each time that you update the opentelemetry-collector.yaml file, you must restart MicroShift Observability to apply the updates. Restart MicroShift Observability by entering the following command:

$ sudo systemctl restart microshift-observability
4.3. Selecting a MicroShift Observability configuration
The amount and complexity of the collected data depends on which predefined configuration you select. These configurations determine the number of data sources and the amount of collected data that is transmitted. The configurations are defined as small, medium, and large (default).
The opentelemetry-collector.yaml file includes specific parameters that are used to collect data for monitoring the system resources. All warnings for node events are included in the collected data. MicroShift Observability collects and transmits data for the following resources:
- CPU, memory, disk, and network metrics of containers, pods, and nodes
- Kubernetes events
- Host CPU, memory, disk, and network metrics
- System journals for certain MicroShift services, and dependencies
- Metrics exposed by pods that have the prometheus.io/scrape: true annotation
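For reference, the scrape annotation is set in the pod metadata. The following is a minimal sketch; the pod name, image, and metrics port are placeholder assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: my-metrics-pod                 # placeholder name
  annotations:
    prometheus.io/scrape: "true"       # marks the pod metrics for collection
    prometheus.io/port: "8080"         # assumed metrics port; adjust for your workload
spec:
  containers:
  - name: app
    image: quay.io/example/app:latest  # placeholder image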
Replace the values of the exporters.otlp.endpoint and services.telemetry.metrics.readers[0].endpoint fields with the IP address or hostname of the remote back end. This IP address resolves to the local node’s host name. Any unreachable endpoint is reported in the MicroShift observability service logs.
4.4. Selecting a small configuration
You can configure MicroShift Observability to collect the smallest amount of performance and resource information from various sources by updating the YAML file.
Procedure
Select a small configuration by adding the small configuration settings to the /etc/microshift/observability/opentelemetry-collector.yaml file.

Replace the variable ${env:OTEL_BACKEND} with the IP address or host name of the remote back end. This IP address resolves to the local node's host name. Any unreachable endpoint is reported in the MicroShift service logs.
Restart MicroShift Observability to complete the configuration selection by entering the following command:
$ sudo systemctl restart microshift-observability
4.5. Selecting a medium configuration
You can configure MicroShift Observability to collect performance and resource information from various sources by updating the YAML file.
Procedure
Select a medium configuration by adding the medium configuration settings to the /etc/microshift/observability/opentelemetry-collector.yaml file.

Replace the variable ${env:OTEL_BACKEND} with the IP address or host name of the remote back end. This IP address resolves to the local node's host name. Any unreachable endpoint is reported in the microshift-observability service logs.
Restart MicroShift Observability to complete the configuration selection by entering the following command:
$ sudo systemctl restart microshift-observability
4.6. Selecting a large configuration
You can configure MicroShift Observability to collect the maximum amount of performance and resource information, from the maximum number of sources, by updating the YAML file.
Procedure
Select a large configuration by adding the large configuration settings to the /etc/microshift/observability/opentelemetry-collector.yaml file. Large is the default configuration.

Replace the variable ${env:OTEL_BACKEND} with the IP address or host name of the remote back end. This IP address resolves to the local node's host name. Any unreachable endpoint is reported in the microshift-observability service logs.
Restart MicroShift Observability to complete the configuration selection by entering the following command:
$ sudo systemctl restart microshift-observability
4.7. Verifying the MicroShift Observability state
After MicroShift Observability starts, you can verify the state by using a systemd service. The MicroShift Observability service logs are available as journald logs.
Procedure
Check the MicroShift Observability status by entering the following command:
$ sudo systemctl status microshift-observability

Check the MicroShift Observability logs by entering the following command:

$ sudo journalctl -u microshift-observability
Chapter 5. Options for embedding applications in a RHEL for Edge image
You can embed microservices-based workloads and applications in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image to run in a MicroShift node. Embedded applications can be installed directly on edge devices to run in disconnected or offline environments.
5.1. Adding application RPMs to an rpm-ostree image
If you have an application that includes APIs, container images, and configuration files for deployment such as manifests, you can build application RPMs. You can then add the RPMs to your RHEL for Edge system image.
The following is an outline of the procedures to embed applications or workloads in a fully self-contained operating system image:
- Build your own RPM that includes your application manifest.
- Add the RPM to the blueprint you used to install Red Hat build of MicroShift.
- Add the workload container images to the same blueprint.
- Create a bootable ISO.
For a step-by-step tutorial about preparing and embedding applications in a RHEL for Edge image, see "Embedding MicroShift applications tutorial".
5.2. Adding application manifests to an image for offline use
If you have a simple application that includes a few files for deployment such as manifests, you can add those manifests directly to a RHEL for Edge system image.
See the "Create a custom file blueprint customization" section of the following RHEL for Edge documentation for an example:
5.3. Embedding applications for offline use
If you have an application that includes more than a few files, you can embed the application for offline use. See the following chapter for the procedure.
Chapter 6. Embedding applications for offline use
You can embed microservices-based workloads and applications in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image. Embedding means you can run a MicroShift node in air-gapped, disconnected, or offline environments.
6.1. Embedding workload container images for offline use
To embed container images in devices at the edge that do not have any network connection, you must create a new container, mount the ISO, and then copy the contents into the file system.
Prerequisites
- You have root access to the host.
- Application RPMs have been added to a blueprint.
- You installed the OpenShift CLI (oc).
Procedure
Render the manifests, extract all of the container image references, and translate the application images to blueprint container sources by running the following command:

$ oc kustomize ~/manifests | grep "image:" | grep -oE '[^ ]+$' | while read line; do echo -e "[[containers]]\nsource = \"${line}\"\n"; done >> <my_blueprint>.toml

Push the updated blueprint to image builder by running the following command:
$ sudo composer-cli blueprints push <my_blueprint>.toml

If your workload containers are located in a private repository, you must provide image builder with the necessary pull secrets:
- Set the auth_file_path in the [containers] section of the /etc/osbuild-worker/osbuild-worker.toml configuration file to point to the pull secret. If needed, create a directory and file for the pull secret, for example:

  Example directory and file

  [containers]
  auth_file_path = "/<path>/pull-secret.json"

  Use the custom location previously set for copying and retrieving images.
Build the container image by running the following command:
$ sudo composer-cli compose start-ostree <my_blueprint> edge-commit
- Proceed with your preferred rpm-ostree image flow, such as waiting for the build to complete, exporting the image and integrating it into your rpm-ostree repository, or creating a bootable ISO.
Chapter 7. Embedding MicroShift applications tutorial
The following tutorial gives a detailed example of how to embed applications in a RHEL for Edge image for use in a MicroShift node in various environments.
7.1. Embed application RPMs tutorial
The following tutorial reviews the MicroShift installation steps and adds a description of the workflow for embedding applications. If you are already familiar with rpm-ostree systems such as Red Hat Enterprise Linux for Edge (RHEL for Edge) and MicroShift, you can go straight to the procedures.
7.1.1. Installation workflow review
Embedding applications requires a similar workflow to embedding MicroShift into a RHEL for Edge image.
- The following image shows how system artifacts such as RPMs, containers, and files are added to a blueprint and used by the image composer to create an ostree commit.
- The ostree commit then can follow either the ISO path or the repository path to edge devices.
- The ISO path can be used for disconnected environments, while the repository path is often used in places where the network is usually connected.
Embedding MicroShift workflow
Reviewing these steps can help you understand the steps needed to embed an application:
- To embed MicroShift on RHEL for Edge, you added the MicroShift repositories to image builder.
- You created a blueprint that declared all the RPMs, container images, files and customizations you needed, including the addition of MicroShift.
- You added the blueprint to image builder and ran a build with the image builder CLI tool (composer-cli). This step created rpm-ostree commits, which were used to create the container image. This image contained RHEL for Edge.
- You added the installer blueprint to image builder to create an rpm-ostree image (ISO) to boot from. This build contained both RHEL for Edge and MicroShift.
- You downloaded the ISO with MicroShift embedded, prepared it for use, provisioned it, then installed it onto your edge devices.
7.1.2. Embed application RPMs workflow
After you have set up a build host that meets the image builder requirements, you can add your application in the form of a directory of manifests to the image. After those steps, the simplest way to embed your application or workload into a new ISO is to create your own RPMs that include the manifests. Your application RPMs contain all of the configuration files describing your deployment.
The following "Embedding applications workflow" image shows how Kubernetes application manifests and RPM spec files are combined in a single application RPM build. This build becomes the RPM artifact included in the workflow for embedding MicroShift in an ostree commit.
Embedding applications workflow
The following procedures use the rpmbuild tool to create a specification file and local repository. The specification file defines how the package is built, moving your application manifests to the correct location inside the RPM package for MicroShift to pick them up. That RPM package is then embedded in the ISO.
7.1.3. Preparing to make application RPMs
To build your own RPMs, choose a tool of your choice, such as the rpmbuild tool, and initialize the RPM build tree in your home directory. The following is an example procedure. If your RPMs are accessible to image builder, you can use the method you prefer to build the application RPMs.
Prerequisites
- You have set up a Red Hat Enterprise Linux for Edge (RHEL for Edge) 9.6 build host that meets the image builder system requirements.
- You have root access to the host.
Procedure
Install the rpmbuild tool and the tools for creating a yum repository by running the following command:

$ sudo dnf install rpmdevtools rpmlint yum-utils createrepo

Create the file tree you need to build RPM packages by running the following command:

$ rpmdev-setuptree
Verification
List the directories to confirm creation by running the following command:
$ ls ~/rpmbuild/

Example output

BUILD RPMS SOURCES SPECS SRPMS
7.1.4. Building the RPM package for the application manifests
To build your own RPMs, you must create a spec file that adds the application manifests to the RPM package. The following is an example procedure. As long as the application RPMs and other elements needed for image building are accessible to image builder, you can use the method that you prefer.
Prerequisites
- You have set up a Red Hat Enterprise Linux for Edge (RHEL for Edge) 9.6 build host that meets the image builder system requirements.
- You have root access to the host.
- The file tree required to build RPM packages was created.
Procedure
In the ~/rpmbuild/SPECS directory, create a file such as <application_workload_manifests.spec> by using the following template.
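A minimal sketch of what such a spec file can look like follows; the package name, version, and license are placeholder assumptions, and only the %install and %files sections are essential for placing the manifests.

Name: application-workload-manifests
Version: 0.1
Release: 1
Summary: Kubernetes manifests for an example MicroShift workload
License: Apache-2.0
BuildArch: noarch

%description
Installs the application manifests into the MicroShift manifests directory.

%install
# Create the target directory inside the RPM package and copy the manifests into it
mkdir -p %{buildroot}/usr/lib/microshift/manifests
cp -pr ~/manifests/* %{buildroot}/usr/lib/microshift/manifests/

%files
/usr/lib/microshift/manifests/*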
The %install section creates the target directory inside the RPM package, /usr/lib/microshift/manifests/, and copies the manifests from the source home directory, ~/manifests.
Important: All of the required YAML files must be in the source home directory ~/manifests, including a kustomization.yaml file if you are using kustomize.

Build your RPM package in the ~/rpmbuild/RPMS directory by running the following command:

$ rpmbuild -bb ~/rpmbuild/SPECS/<application_workload_manifests.spec>
7.1.5. Adding application RPMs to a blueprint
To add application RPMs to a blueprint, you must create a local repository that image builder can use to create the ISO. With this procedure, the required container images for your workload can be pulled over the network.
Prerequisites
- You have root access to the host.
- Workload or application RPMs exist in the ~/rpmbuild/RPMS directory.
Procedure
Create a local RPM repository by running the following command:
$ createrepo ~/rpmbuild/RPMS/

Give image builder access to the RPM repository by running the following command:

$ sudo chmod a+rx ~

Note: You must ensure that image builder has all of the necessary permissions to access all of the files needed for image building, or the build cannot proceed.
Create the blueprint file, repo-local-rpmbuild.toml, using the following template. In the source path, specify part of the path to create a location that you choose; use this path in the later commands to set up the repository and copy the RPMs.
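A sketch of a composer-cli source definition for the local repository follows; the id, name, and path are placeholders, and the exact field set can vary with the osbuild-composer version.

# Hypothetical repo-local-rpmbuild.toml; values are placeholders
id = "repo-local-rpmbuild"
name = "Locally built application RPMs"
type = "yum-baseurl"
url = "file:///home/<user>/rpmbuild/RPMS"
check_gpg = false
check_ssl = false
system = false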
Add the repository as a source for image builder by running the following command:
$ sudo composer-cli sources add repo-local-rpmbuild.toml

Add the RPM to your blueprint by adding the following lines:
…
[[packages]]
name = "<application_workload_manifests>"
version = "*"
…

Replace <application_workload_manifests> with the name of your workload.
Push the updated blueprint to image builder by running the following command:
$ sudo composer-cli blueprints push repo-local-rpmbuild.toml

At this point, you can either run image builder to create the ISO, or embed the container images for offline use.
To create the ISO, start image builder by running the following command:
$ sudo composer-cli compose start-ostree repo-local-rpmbuild edge-commit
In this scenario, the container images are pulled over the network by the edge device during startup.
Chapter 8. Using greenboot for application and workload health checks
You can use greenboot health checks to assess the health of your workloads and applications.
8.1. How workload health checks work
Greenboot health checks are helpful on edge devices where direct serviceability is either limited or non-existent. You can use greenboot health checks to assess the health of your workloads and applications. These additional health checks are useful for software problem detection and automatic system rollbacks.
Workload or application health checks can use the MicroShift basic health check functions already implemented for the MicroShift core services. Creating your own comprehensive scripts for your applications is recommended. For example, you can write one that verifies that a service has started.
You can also use the microshift healthcheck command, which checks that the basic functions of the workload are operating as expected.
The following functions related to checking workload health in /usr/share/microshift/functions/greenboot.sh are deprecated and planned for removal in a future release:
- wait_for
- namespace_images_downloaded
- namespace_deployment_ready
- namespace_daemonset_ready
- namespace_pods_ready
- namespace_pods_not_restarting
- print_failure_logs
- log_failure_cmd
- log_script_exit
- lvmsDriverShouldExist
- csiComponentShouldBeDeploy
8.2. How to use the MicroShift health check command
The microshift healthcheck command checks whether a workload of the provided type exists and verifies its status for the specified timeout duration. The number of ready replicas, that is, pods, must match the expected amount.
To run the microshift healthcheck command successfully, meet the following prerequisites:
- Execute commands from a root user account.
- Enable the MicroShift service.
You can add the following options to the microshift healthcheck command:
- -v=2 to increase the verbosity of the output
- --timeout="${WAIT_TIMEOUT_SECS}s" to override the default 600s timeout value
- --namespace <namespace> to specify the namespace of the workloads
- --deployments <application-deployment> to check the readiness of a specific deployment

Example command

$ sudo microshift healthcheck -v=2 --timeout="300s" --namespace busybox --deployments busybox-deployment
The microshift healthcheck command also accepts the following additional parameters to specify other kinds of workloads:
- --daemonsets
- --statefulsets

These options take a comma-delimited list of resources, for example, --daemonsets ovnkube-master,ovnkube-node.
Alternatively, a --custom option can be used with a JSON string, for example:
$ sudo microshift healthcheck --custom '{"openshift-storage":{"deployments": ["lvms-operator"], "daemonsets": ["vg-manager"]}, "openshift-ovn-kubernetes": {"daemonsets": ["ovnkube-master", "ovnkube-node"]}}'
8.3. How to create a health check script for your application
You can create workload or application health check scripts in the text editor of your choice. Save the scripts in the /etc/greenboot/check/required.d directory. When a script in the /etc/greenboot/check/required.d directory exits with an error, greenboot triggers a reboot in an attempt to heal the system.
If your health check logic requires any post-check steps, you can also create additional scripts and save them in the relevant greenboot directories. For example:
- You can place shell scripts you want to run after a boot has been declared successful in /etc/greenboot/green.d.
- You can place shell scripts you want to run after a boot has been declared failed in /etc/greenboot/red.d. For example, if you have steps to heal the system before restarting, you can create scripts for your use case and place them in the /etc/greenboot/red.d directory.
8.3.1. Workload max duration or timeout script example
The following example uses the MicroShift core services health check script as a template.
8.3.1.1. Basic prerequisites for creating a health check script
- The workload must be installed.
- You must have root access.
8.3.1.2. Example and functional requirements
You can start with the following example health check script. Add to it for your use case. In your custom workload health check script, you must define the relevant namespace, deployment, daemonset, and statefulset.
Choose a name prefix for your application that ensures it runs after the 40_microshift_running_check.sh script, which implements the MicroShift health check procedure for its core services.
Example greenboot health check script
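A minimal sketch of a workload health check script that delegates to the microshift healthcheck command follows; the file name, namespace, and deployment name are assumptions carried over from the BusyBox example earlier in this guide.

#!/bin/bash
# Hypothetical /etc/greenboot/check/required.d/50_busybox_running_check.sh
set -eu

SCRIPT_NAME=$(basename "$0")
echo "STARTED ${SCRIPT_NAME}"

# A non-zero exit causes greenboot to treat the boot as failed and trigger a reboot
microshift healthcheck -v=2 --timeout="300s" --namespace busybox --deployments busybox-deployment

echo "FINISHED ${SCRIPT_NAME}"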
Functions related to checking workload health previously included in the /usr/share/microshift/functions/greenboot.sh script file are deprecated. You can write a custom script, or use the microshift healthcheck command with various options instead. See "How workload health checks work" for more information.
8.3.2. Testing a workload health check script
The output of the greenboot workload health check script varies with the host system type. Example outputs for Red Hat Enterprise Linux (RHEL) system types are included for reference only.
Prerequisites
- You have root access.
- You installed a workload.
- You created a health check script for the workload.
- The MicroShift service is enabled.
Procedure
To test that greenboot is running a health check script file, reboot the host by running the following command:
$ sudo reboot

Examine the output of greenboot health checks by running the following command:

$ sudo journalctl -o cat -u greenboot-healthcheck.service

Note: MicroShift core service health checks run before the workload health checks.
Example output for an image mode for RHEL system
Example partial output for a RHEL for Edge system
Example partial output for an RPM system
Chapter 9. Automating application management with the GitOps controller
GitOps with Argo CD for MicroShift is a lightweight, optional add-on controller derived from the Red Hat OpenShift GitOps Operator. GitOps for MicroShift uses the command-line interface (CLI) of Argo CD to interact with the GitOps controller that acts as the declarative GitOps engine. You can consistently configure and deploy Kubernetes-based infrastructure and applications across node and development lifecycles.
9.1. What you can do with the GitOps agent
By using the GitOps with Argo CD agent with MicroShift, you can apply the following principles:
Implement application lifecycle management.
- Create and manage your node and application configuration files using the core principles of developing and maintaining software in a Git repository.
- You can update the single repository and GitOps automates the deployment of new applications or updates to existing ones.
- For example, if you have 1,000 edge devices, each using MicroShift and a local GitOps agent, you can easily add or update an application on all 1,000 devices with just one change in your central Git repository.
- The Git repository contains a declarative description of the infrastructure you need in your specified environment and contains an automated process to make your environment match the described state.
- You can also use the Git repository as an audit trail of changes so that you can create processes based on Git flows such as review and approval for merging pull requests that implement configuration changes.
9.2. Creating GitOps applications on MicroShift
You can create a custom YAML configuration to deploy and manage applications in your MicroShift service. To install the necessary packages to run GitOps applications, follow the documentation in "Installing the GitOps Argo CD manifests from an RPM package".
Prerequisites
- You installed the microshift-gitops packages.
- The Argo CD pods are running in the openshift-gitops namespace.
Procedure
Create a YAML file and add your customized configurations for the application:
Example YAML for a spring-petclinic application
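The following sketch shows the general shape of an Argo CD Application resource using the standard argoproj.io/v1alpha1 API; the repository URL, path, and destination namespace are placeholder assumptions rather than the original example values.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: spring-petclinic
  namespace: openshift-gitops                # namespace where the GitOps (Argo CD) controller runs
spec:
  project: default
  source:
    repoURL: https://github.com/<your_org>/<your_repo>.git   # placeholder Git repository
    targetRevision: main
    path: app                                                # placeholder path to the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: spring-petclinic
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true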
To deploy the applications defined in the YAML file, run the following command:

$ oc apply -f <my_app.yaml>

Replace <my_app.yaml> with the name of your application YAML file.
Verification
To verify your application is deployed and synced, run the following command:
$ oc get applications -A

It might take a few minutes for the application to show the Healthy status.

Example output

NAMESPACE          NAME               SYNC STATUS   HEALTH STATUS
openshift-gitops   spring-petclinic   Synced        Healthy
9.3. Limitations of using the GitOps agent with MicroShift
GitOps with Argo CD for MicroShift has the following differences from the Red Hat OpenShift GitOps Operator:
- The gitops-operator component is not used with MicroShift.
- To maintain the small resource use of MicroShift, the Argo CD web console is not available. You can use the Argo CD CLI.
- Because MicroShift is single-node, there is no multi-node support. Each instance of MicroShift is paired with a local GitOps agent.
- The oc adm must-gather command is not available in MicroShift.
9.4. Troubleshooting GitOps
If you have problems with your GitOps controller, you can use the OpenShift CLI (oc) tool.
9.4.1. Debugging GitOps with oc adm inspect
You can debug GitOps by using the OpenShift CLI (oc).
Prerequisites
- The oc command-line tool is installed.
Procedure
Run the oc adm inspect command against the GitOps namespace:

$ oc adm inspect ns/openshift-gitops

Example output

Gathering data for ns/openshift-gitops...
W0501 20:34:35.978508   57625 util.go:118] the server doesn't have a resource type egressfirewalls, skipping the inspection
W0501 20:34:35.980881   57625 util.go:118] the server doesn't have a resource type egressqoses, skipping the inspection
W0501 20:34:36.040664   57625 util.go:118] the server doesn't have a resource type servicemonitors, skipping the inspection
Wrote inspect data to inspect.local.2673575938140296280.
Next steps
- If oc adm inspect did not provide the information you need, you can run an sos report.
Chapter 10. Pod security authentication and authorization with SCC
Pod security admission is an implementation of the Kubernetes pod security standards. Use security context constraints (SCC) for pod security admission to restrict pod behavior.
10.1. Security context constraint synchronization with pod security standards
MicroShift includes Kubernetes pod security admission.
In addition to the global pod security admission control configuration, a controller exists that applies pod security admission control warn and audit labels to namespaces according to the security context constraint (SCC) permissions of the service accounts that are in a given namespace.
Namespaces that are defined as part of the node payload have pod security admission synchronization disabled permanently. You can enable pod security admission synchronization on other namespaces as necessary. If an Operator is installed in a user-created openshift-* namespace, synchronization is turned on by default after a cluster service version (CSV) is created in the namespace.
The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile found in the namespace to prevent warnings and audit logging as pods are created.
Namespace labeling is based on consideration of namespace-local service account privileges.
Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling.
10.1.1. Viewing security context constraints in a namespace
You can view the security context constraints (SCC) permissions in a given namespace.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
To view the security context constraints in your namespace, run the following command:
$ oc get --show-labels namespace <namespace>
10.2. Controlling pod security admission synchronization
You can enable automatic pod security admission synchronization for most namespaces.
System defaults are not enforced when the security.openshift.io/scc.podSecurityLabelSync field is empty or set to false. You must set the label to true for synchronization to occur.
Namespaces that are defined as part of the node payload have pod security admission synchronization disabled permanently. These namespaces include:
- default
- kube-node-lease
- kube-system
- kube-public
- openshift
- All system-created namespaces that are prefixed with openshift-, except for openshift-operators

By default, all namespaces that have an openshift- prefix are not synchronized. You can enable synchronization for any user-created openshift-* namespaces. You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators.
If an Operator is installed in a user-created openshift-* namespace, synchronization is turned on by default after a cluster service version (CSV) is created in the namespace. The synchronized label inherits the permissions of the service accounts in the namespace.
Procedure
To enable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to true by running the following command:

$ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true
You can use the --overwrite flag to reverse the effects of the pod security label synchronization in a namespace.
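For example, a sketch of reversing the synchronization label (the namespace name is a placeholder):

$ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false --overwrite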
Chapter 11. Operators
11.1. Using Operators with MicroShift
You can use Operators with MicroShift to create applications that monitor the running services in your node. Operators can manage applications and their resources, such as deploying a database or message bus. As customized software running inside your node, Operators can be used to implement and automate common operations.
Operators offer a more localized configuration experience and integrate with Kubernetes APIs and CLI tools such as kubectl and oc. Operators are designed specifically for your applications. Operators enable you to configure components instead of modifying a global configuration file.
MicroShift applications are generally expected to be deployed in static environments. However, Operators are available if helpful in your use case. To determine the compatibility of an Operator with MicroShift, check the Operator documentation.
11.1.1. How to use Operators with a MicroShift node
There are two ways to use Operators for your MicroShift node:
11.1.1.1. Manifests for Operators
Operators can be installed and managed directly by using manifests. You can use the kustomize configuration management tool with MicroShift to deploy an application. Use the same steps to install Operators with manifests.
- See Using Kustomize manifests to deploy applications and Using manifests example for details.
11.1.1.2. Operator Lifecycle Manager for Operators
You can also install add-on Operators to a MicroShift node by using Operator Lifecycle Manager (OLM). OLM can be used to manage both custom Operators and Operators that are widely available. Building catalogs is required to use OLM with MicroShift.
- For details, see Using Operator Lifecycle Manager with MicroShift.
11.2. Using Operator Lifecycle Manager with MicroShift
Operator Lifecycle Manager (OLM) is used in MicroShift for installing and running optional add-on Operators. See the following link for more information:
11.2.1. Considerations for using OLM with MicroShift
- Cluster Operators as applied in OpenShift Container Platform are not used in MicroShift.
- You must create your own catalogs for the add-on Operators you want to use with your applications. Catalogs are not provided by default.
  - Each catalog must have an accessible CatalogSource added to a node, so that the OLM catalog Operator can use the catalog for content.
- You must use the CLI to conduct OLM activities with MicroShift. The console and OperatorHub GUIs are not available.
  - Use the Operator Package Manager opm CLI with a network-connected node, or for building catalogs for custom Operators that use an internal registry.
  - To mirror your catalogs and Operators for disconnected or offline nodes, install the oc-mirror OpenShift CLI plugin.
- Before using an Operator, verify with the provider that the Operator is supported on Red Hat build of MicroShift.
11.2.2. Determining your OLM installation type
You can install the OLM package manager for use with MicroShift 4.15 or newer versions. There are different ways to install OLM for a MicroShift node, depending on your use case.
- You can install the microshift-olm RPM at the same time you install the MicroShift RPM on Red Hat Enterprise Linux (RHEL).
- You can install the microshift-olm RPM on an existing MicroShift 4.20 installation. Restart the MicroShift service after installing OLM for the changes to apply. See Installing the Operator Lifecycle Manager (OLM) from an RPM package.
- You can embed OLM in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image. See Adding the Operator Lifecycle Manager (OLM) service to a blueprint.
11.2.3. Namespace use in MicroShift
The microshift-olm RPM creates three default namespaces: one for running OLM, and two for catalog and Operator installation. You can create additional namespaces as needed for your use case.
11.2.3.1. Default namespaces
The following table lists the default namespaces and a brief description of how each namespace works.

| Default Namespace | Details |
|---|---|
| openshift-operator-lifecycle-manager | The OLM package manager runs in this namespace. |
| openshift-marketplace | The global namespace. Empty by default. To make the catalog source available globally to users in all namespaces, set the openshift-marketplace namespace in the catalogSource.yaml file. |
| openshift-operators | The default namespace where Operators run in MicroShift. Operators that reference catalogs in the openshift-marketplace namespace can run here. |
11.2.3.2. Custom namespaces
If you want to use a catalog and Operator together in a single namespace, then you must create a custom namespace. After you create the namespace, you must create the catalog in that namespace. All Operators running in the custom namespace must have the same single-namespace watch scope.
11.2.4. About building Operator catalogs Copy linkLink copied to clipboard!
To use Operator Lifecycle Manager (OLM) with MicroShift, you must build custom Operator catalogs that you can then manage with OLM. The standard catalogs that are included with OpenShift Container Platform are not included with MicroShift.
11.2.4.1. File-based Operator catalogs Copy linkLink copied to clipboard!
You can create catalogs for your custom Operators or filter catalogs of widely available Operators. You can combine both methods to create the catalogs needed for your specific use case. To run MicroShift with your own Operators and OLM, make a catalog by using the file-based catalog structure.
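As a quick orientation, the following is a minimal sketch of a file-based catalog layout with a validation step; the directory and package names are assumptions for illustration only:
catalog/                      # file-based catalog root directory
└── my-operator/              # one directory per Operator package (hypothetical name)
    └── catalog.yaml          # olm.package, olm.channel, and olm.bundle entries

$ opm validate catalog/       # check the declarative config before building a catalog image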
- For details, see Managing custom catalogs and Example catalog.
- See also the opm CLI reference.
- When adding a catalog source to a cluster, set the securityContextConfig value to restricted in the catalogSource.yaml file. Ensure that your catalog can run with restricted permissions.
11.2.5. How to deploy Operators using OLM Copy linkLink copied to clipboard!
After you create and deploy your custom catalog, you must create a Subscription custom resource (CR) that can access the catalog and install the Operators you choose. Where Operators run depends on the namespace in which you create the Subscription CR.
Operators in OLM have a watch scope. For example, some Operators only support watching their own namespace, while others support watching every namespace in the node. All Operators installed in a given namespace must have the same watch scope.
11.2.5.1. Connectivity and OLM Operator deployment Copy linkLink copied to clipboard!
Operators can be deployed anywhere a catalog is running.
- For a node that is connected to the internet, mirroring images is not required. Images can be pulled over the network.
- For restricted networks in which MicroShift has access to an internal network only, images must be mirrored to an internal registry.
-
For use cases in which a MicroShift node is completely offline, all images must be embedded into an
osbuildblueprint.
11.2.5.2. Adding OLM-based Operators to a networked node using the global namespace Copy linkLink copied to clipboard!
To deploy different Operators to different namespaces, use this procedure. For a MicroShift node that has network connectivity, Operator Lifecycle Manager (OLM) can access sources hosted on remote registries. The following procedure lists the basic steps of using configuration files to install an Operator that uses the global namespace.
To use an Operator installed in a different namespace, or in more than one namespace, make sure that the catalog source and the Subscription CR that references the Operator are created in the openshift-marketplace namespace.
Prerequisites
- The OpenShift CLI (oc) is installed.
- Operator Lifecycle Manager (OLM) is installed.
- You have created a custom catalog in the global namespace.
Procedure
Confirm that OLM is running by using the following command:
oc -n openshift-operator-lifecycle-manager get pod -l app=olm-operator
$ oc -n openshift-operator-lifecycle-manager get pod -l app=olm-operatorCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE olm-operator-85b5c6786-n6kbc 1/1 Running 0 2m24s
NAME READY STATUS RESTARTS AGE olm-operator-85b5c6786-n6kbc 1/1 Running 0 2m24sCopy to Clipboard Copied! Toggle word wrap Toggle overflow Confirm that the OLM catalog Operator is running by using the following command:
oc -n openshift-operator-lifecycle-manager get pod -l app=catalog-operator
$ oc -n openshift-operator-lifecycle-manager get pod -l app=catalog-operatorCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE catalog-operator-5fc7f857b6-tj8cf 1/1 Running 0 2m33s
NAME READY STATUS RESTARTS AGE catalog-operator-5fc7f857b6-tj8cf 1/1 Running 0 2m33sCopy to Clipboard Copied! Toggle word wrap Toggle overflow
The following steps assume you are using the global namespace, openshift-marketplace. The catalog must run in the same namespace as the Operator. The Operator must support the AllNamespaces mode.
Create the
CatalogSourceobject by using the following example YAML:Example catalog source YAML
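A minimal sketch of such a catalog source, consistent with the callouts that follow; the display name, publisher, and poll interval are assumptions for illustration:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: operatorhubio-catalog
  namespace: openshift-marketplace            # callout 1: the global namespace
spec:
  sourceType: grpc
  image: quay.io/operatorhubio/catalog:latest # callout 2: community Operators, for example only
  displayName: Community Operators
  publisher: OperatorHub.io
  grpcPodConfig:
    securityContextConfig: restricted         # callout 3: required value for MicroShift
  updateStrategy:
    registryPoll:
      interval: 60m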
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The global namespace. Setting the
metadata.namespacetoopenshift-marketplaceenables the catalog to run in all namespaces. Subscriptions in any namespace can reference catalogs created in theopenshift-marketplacenamespace. - 2
- Community Operators are not installed by default with OLM for MicroShift. Listed here for example only.
- 3
- The value of
securityContextConfigmust be set torestrictedfor MicroShift.
Apply the
CatalogSourceconfiguration by running the following command:oc apply -f <catalog_source.yaml>
$ oc apply -f <catalog_source.yaml>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace
<catalog_source.yaml> with your catalog source configuration file name. In this example, catalogsource.yaml is used.
Example output
catalogsource.operators.coreos.com/operatorhubio-catalog created
catalogsource.operators.coreos.com/operatorhubio-catalog createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow To verify that the catalog source is applied, check for the
READYstate by using the following command:oc describe catalogsources.operators.coreos.com -n openshift-marketplace operatorhubio-catalog
$ oc describe catalogsources.operators.coreos.com -n openshift-marketplace operatorhubio-catalogCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The status is reported as
READY.
Confirm that the catalog source is running by using the following command:
oc get pods -n openshift-marketplace -l olm.catalogSource=operatorhubio-catalog
$ oc get pods -n openshift-marketplace -l olm.catalogSource=operatorhubio-catalogCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE operatorhubio-catalog-x24nh 1/1 Running 0 59s
NAME READY STATUS RESTARTS AGE operatorhubio-catalog-x24nh 1/1 Running 0 59sCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create a Subscription CR configuration file by using the following example YAML:
Example Subscription custom resource YAML
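A sketch of a Subscription CR that matches the outputs shown later in this procedure; the channel name is an assumption:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-cert-manager
  namespace: openshift-operators
spec:
  channel: stable                          # assumed channel name
  name: cert-manager
  source: operatorhubio-catalog
  sourceNamespace: openshift-marketplace   # callout 1: the global namespace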
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The global namespace. Setting the
sourceNamespacevalue toopenshift-marketplaceenables Operators to run in multiple namespaces if the catalog also runs in theopenshift-marketplacenamespace.
Apply the Subscription CR configuration by running the following command:
oc apply -f <subscription_cr.yaml>
$ oc apply -f <subscription_cr.yaml>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace
<subscription_cr.yaml>with your Subscription CR filename.
Example output
subscription.operators.coreos.com/my-cert-manager created
subscription.operators.coreos.com/my-cert-manager createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow - You can create a configuration file for the specific Operand you want to use and apply it now.
Verification
Verify that your Operator is running by using the following command:
oc get pods -n openshift-operators
$ oc get pods -n openshift-operators1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The namespace from the Subscription CR is used.
Note: Allow a minute or two for the Operator to start.
Example output
NAME READY STATUS RESTARTS AGE cert-manager-7df8994ddb-4vrkr 1/1 Running 0 19s cert-manager-cainjector-5746db8fd7-69442 1/1 Running 0 18s cert-manager-webhook-f858bf58b-748nt 1/1 Running 0 18s
NAME READY STATUS RESTARTS AGE cert-manager-7df8994ddb-4vrkr 1/1 Running 0 19s cert-manager-cainjector-5746db8fd7-69442 1/1 Running 0 18s cert-manager-webhook-f858bf58b-748nt 1/1 Running 0 18sCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.2.5.3. Adding OLM-based Operators to a networked node in a specific namespace Copy linkLink copied to clipboard!
Use this procedure if you want to specify a namespace for an Operator, for example, olm-microshift. In this example, the catalog is scoped and available in the global openshift-marketplace namespace. The Operator uses content from the global namespace, but runs only in the olm-microshift namespace. For a MicroShift node that has network connectivity, Operator Lifecycle Manager (OLM) can access sources hosted on remote registries.
All of the Operators installed in a specific namespace must have the same watch scope. In this case, the watch scope is OwnNamespace.
Prerequisites
- The OpenShift CLI (oc) is installed.
- Operator Lifecycle Manager (OLM) is installed.
- You have created a custom catalog that is running in the global namespace.
Procedure
Confirm that OLM is running by using the following command:
oc -n openshift-operator-lifecycle-manager get pod -l app=olm-operator
$ oc -n openshift-operator-lifecycle-manager get pod -l app=olm-operatorCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE olm-operator-85b5c6786-n6kbc 1/1 Running 0 16m
NAME READY STATUS RESTARTS AGE olm-operator-85b5c6786-n6kbc 1/1 Running 0 16mCopy to Clipboard Copied! Toggle word wrap Toggle overflow Confirm that the OLM catalog Operator is running by using the following command:
oc -n openshift-operator-lifecycle-manager get pod -l app=catalog-operator
$ oc -n openshift-operator-lifecycle-manager get pod -l app=catalog-operatorCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE catalog-operator-5fc7f857b6-tj8cf 1/1 Running 0 16m
NAME READY STATUS RESTARTS AGE catalog-operator-5fc7f857b6-tj8cf 1/1 Running 0 16mCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create a namespace by using the following example YAML:
Example namespace YAML
apiVersion: v1 kind: Namespace metadata: name: olm-microshift
apiVersion: v1 kind: Namespace metadata: name: olm-microshiftCopy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the namespace configuration using the following command:
oc apply -f <ns.yaml>
$ oc apply -f <ns.yaml>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace
<ns.yaml>with the name of your namespace configuration file. In this example,olm-microshiftis used.
Example output
namespace/olm-microshift created
namespace/olm-microshift createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create the Operator group YAML by using the following example YAML:
Example Operator group YAML
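A sketch of the Operator group for the olm-microshift namespace created earlier; the object name matches the output shown after the apply step:
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
  name: og
  namespace: olm-microshift
spec:
  targetNamespaces:     # callout 1: omit this field and its values for Operators that use the global namespace
  - olm-microshift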
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- For Operators using the global namespace, omit the
spec.targetNamespacesfield and values.
Apply the Operator group configuration by running the following command:
oc apply -f <og.yaml>
$ oc apply -f <og.yaml>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace
<og.yaml>with the name of your operator group configuration file.
Example output
operatorgroup.operators.coreos.com/og created
operatorgroup.operators.coreos.com/og createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create the
CatalogSourceobject by using the following example YAML:Example catalog source YAML
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The global namespace. Setting the
metadata.namespacetoopenshift-marketplaceenables the catalog to run in all namespaces. Subscriptions CRs in any namespace can reference catalogs created in theopenshift-marketplacenamespace. - 2
- Community Operators are not installed by default with OLM for MicroShift. Listed here for example only.
- 3
- The value of
securityContextConfigmust be set torestrictedfor MicroShift.
Apply the
CatalogSourceconfiguration by running the following command:oc apply -f <catalog_source.yaml>
$ oc apply -f <catalog_source.yaml>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace
<catalog_source.yaml>with your catalog source configuration file name.
To verify that the catalog source is applied, check for the
READYstate by using the following command:oc describe catalogsources.operators.coreos.com -n openshift-marketplace operatorhubio-catalog
$ oc describe catalogsources.operators.coreos.com -n openshift-marketplace operatorhubio-catalogCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The status is reported as
READY.
Confirm that the catalog source is running by using the following command:
oc get pods -n openshift-marketplace -l olm.catalogSource=operatorhubio-catalog
$ oc get pods -n openshift-marketplace -l olm.catalogSource=operatorhubio-catalogCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE operatorhubio-catalog-j7sc8 1/1 Running 0 43s
NAME READY STATUS RESTARTS AGE operatorhubio-catalog-j7sc8 1/1 Running 0 43sCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create a Subscription CR configuration file by using the following example YAML:
Example Subscription custom resource YAML
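A sketch of a Subscription CR scoped to the olm-microshift namespace, consistent with the outputs in this procedure; the package and channel names are assumptions:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-gitlab-operator-kubernetes
  namespace: olm-microshift                # the Operator runs in this namespace
spec:
  channel: stable                          # assumed channel name
  name: gitlab-operator-kubernetes         # assumed package name
  source: operatorhubio-catalog
  sourceNamespace: openshift-marketplace   # the catalog runs in the global namespace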
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the Subscription CR configuration by running the following command:
oc apply -f <subscription_cr.yaml>
$ oc apply -f <subscription_cr.yaml>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace
<subscription_cr.yaml>with the name of the Subscription CR configuration file.
Example output
subscription.operators.coreos.com/my-gitlab-operator-kubernetes
subscription.operators.coreos.com/my-gitlab-operator-kubernetesCopy to Clipboard Copied! Toggle word wrap Toggle overflow - You can create a configuration file for the specific Operand you want to use and apply it now.
Verification
Verify that your Operator is running by using the following command:
oc get pods -n olm-microshift
$ oc get pods -n olm-microshift1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The namespace from the Subscription CR is used.
Note: Allow a minute or two for the Operator to start.
Example output
NAME READY STATUS RESTARTS AGE gitlab-controller-manager-69bb6df7d6-g7ntx 2/2 Running 0 3m24s
NAME READY STATUS RESTARTS AGE gitlab-controller-manager-69bb6df7d6-g7ntx 2/2 Running 0 3m24sCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.3. Creating custom Operator catalogs using the oc-mirror plugin Copy linkLink copied to clipboard!
You can create custom catalogs with widely available Operators and mirror them by using the oc-mirror OpenShift CLI (oc) plugin.
11.3.1. Using Red Hat-provided Operator catalogs and mirror registries Copy linkLink copied to clipboard!
You can filter catalogs and delete images to get specific Operators and mirror them by using the oc-mirror OpenShift CLI (oc) plugin. You can also use Operators in disconnected settings or embedded in a Red Hat Enterprise Linux (RHEL) image.
- To understand more about how to configure your systems for mirroring, follow the links in the following "Additional resources" section.
- If you are ready to deploy Operators from Red Hat-provided Operator catalogs, mirror them, or to embed them in a RHEL image, start with the following section, "Inspecting catalog contents by using the oc-mirror plugin."
11.3.2. About the oc-mirror plugin for creating a mirror registry Copy linkLink copied to clipboard!
You can use the oc-mirror OpenShift CLI (oc) plugin with MicroShift to filter and delete images from Operator catalogs. You can then mirror the filtered catalog contents to a mirror registry or use the container images in disconnected or offline deployments.
The procedure to mirror content from Red Hat-hosted registries connected to the internet to a disconnected image registry is the same, independent of the registry you select. After you mirror the contents of your catalog, configure each node to retrieve this content from your mirror registry.
11.3.2.1. Connectivity considerations when populating a mirror registry Copy linkLink copied to clipboard!
When you populate your registry, you can use one of following connectivity scenarios:
- Connected mirroring
- If you have a host that can access both the internet and your mirror registry, but not your node, you can directly mirror the content from that machine.
- Disconnected mirroring
If you do not have a host that can access both the internet and your mirror registry, you must mirror the images to a file system and then bring that host or removable media into your disconnected environment.
Important: A container registry must be reachable by every node that you provision. Installing, updating, and other operations, such as relocating workloads, fail if the registry is unreachable.
To avoid problems caused by an unreachable registry, use the following standard practices:
- Run mirror registries in a highly available way.
- Ensure that the mirror registry at least matches the production availability of your node.
11.3.3. Inspecting catalog contents by using the oc-mirror plugin Copy linkLink copied to clipboard!
Use the following example procedure to select a catalog and list OpenShift Container Platform Operators to add to your oc-mirror plugin image set configuration file. You must use oc mirror v1 to select a catalog and list Operators.
If you use your own catalogs and Operators, you can push the images directly to your internal registry.
Prerequisites
- You installed the OpenShift CLI (oc).
- You installed the Operator Lifecycle Manager (OLM).
- You installed the oc-mirror plugin.
Procedure
Get a list of available Red Hat-provided Operator catalogs to filter by running the following command:
oc mirror list operators --version 4.20 --catalogs
$ oc mirror list operators --version 4.20 --catalogsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Get a list of Operators in the Red Hat Operators catalog by running the following command:
oc mirror list operators <--catalog=<catalog_source>>
$ oc mirror list operators <--catalog=<catalog_source>>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specifies your catalog source, such as
registry.redhat.io/redhat/redhat-operator-index:v4.20orquay.io/operatorhubio/catalog:latest.
- Select an Operator. This example uses the amq-broker-rhel9 Operator.
Optional: To inspect the channels and versions of the Operator you want to filter, enter the following commands:
Get a list of channels by running the following command:
oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.20 --package=amq-broker-rhel9
$ oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.20 --package=amq-broker-rhel9Copy to Clipboard Copied! Toggle word wrap Toggle overflow Get a list of versions within a channel by running the following command:
oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.20 --package=amq-broker-rhel9 --channel=7.13.x
$ oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.20 --package=amq-broker-rhel9 --channel=7.13.xCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Next steps
- Create and edit an image set configuration file using the information gathered in this procedure.
- Mirror the images from the transformed image set configuration file to a mirror registry or disk.
11.3.4. Creating an image set configuration file Copy linkLink copied to clipboard!
You must create an ImageSetConfiguration YAML file. This image set configuration file specifies both the Operators to mirror and the configuration settings for the oc-mirror plugin. Edit the contents of the image set configuration file so that the entries are compatible with both MicroShift and the Operator you plan to use.
oc mirror v2 uses a cache system instead of metadata. The cache system prevents the need to start the entire mirroring process over when a single step fails. Instead, you can troubleshoot the failed step and the process does not re-mirror images that existed before the failure.
Prerequisites
You created a container image registry credentials file.
Procedure
Create and edit the
ImageSetConfigurationYAML for MicroShift by using the following example as a guide:Example edited MicroShift image set configuration file
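A sketch of an image set configuration that follows the callouts below, using the AMQ Broker Operator selected earlier; the apiVersion value is an assumption for oc-mirror plugin v2 and should be checked against your plugin release:
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1                              # assumed v2 API version
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.20    # callout 1: catalog to retrieve images from
    packages:
    - name: amq-broker-rhel9                                          # callout 2: Operator packages to include
      channels:
      - name: 7.13.x                                                  # callout 3: include the default channel
  additionalImages:                                                   # callout 4: delete if not needed
  - name: registry.redhat.io/ubi8/ubi:latest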
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Set the Operator catalog to retrieve images from.
- 2
- Specify the Operator packages to include in the image set. Remove this field to retrieve all packages in the catalog.
- 3
- Specify only certain channels of the Operator packages to include in the image set. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command:
oc mirror list operators --catalog=<catalog_name> --package=<package_name>. - 4
- Specify any additional images to include in the image set. If you do not need to specify additional images, delete this field.
Important: The platform field, related fields, and Helm are not supported by MicroShift.
- Save the updated file as ImageSetConfiguration.yaml.
Next steps
- Use the oc-mirror plugin to mirror an image set directly to a target mirror registry.
- Configure CRI-O.
- Apply the catalog sources to your node.
11.3.4.1. ImageSet configuration parameters for oc-mirror plugin v2 Copy linkLink copied to clipboard!
The oc-mirror plugin v2 requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the ImageSetConfiguration resource.
Using the minVersion and maxVersion properties to filter for a specific Operator version range can result in a multiple channel heads error. The error message states that there are multiple channel heads. This is because when the filter is applied, the update graph of the Operator is truncated.
OLM requires that every Operator channel contains versions that form an update graph with exactly one end point, that is, the latest version of the Operator. When the filter range is applied, that graph can turn into two or more separate graphs or a graph that has more than one end point.
To avoid this error, do not filter out the latest version of an Operator. If you still run into the error, depending on the Operator, either the maxVersion property must be increased or the minVersion property must be decreased. Because every Operator graph can be different, you might need to adjust these values until the error resolves.
| Parameter | Description | Values |
|---|---|---|
|
|
The API version of the |
String Example: |
|
| The configuration of the image set. | Object |
|
| The additional images configuration of the image set. | Array of objects Example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest
|
|
| The tag or digest of the image to mirror. |
String Example: |
|
| List of images with a tag or digest (SHA) to block from mirroring. |
Array of strings Example: |
|
| The Operators configuration of the image set. | Array of objects Example: operators:
- catalog: registry.redhat.io/redhat/redhat-operator-index:4.20
packages:
- name: elasticsearch-operator
minVersion: '2.4.0'
|
|
| The Operator catalog to include in the image set. |
String Example: |
|
|
When |
Boolean The default value is |
|
| The Operator packages configuration. | Array of objects Example: operators:
- catalog: registry.redhat.io/redhat/redhat-operator-index:4.20
packages:
- name: elasticsearch-operator
minVersion: '5.2.3-31'
|
|
| The Operator package name to include in the image set. |
String Example: |
|
| Operator package channel configuration | Object |
|
| The Operator channel name, unique within a package, to include in the image set. |
String Example: |
|
| The highest version of the Operator to mirror across all channels in which it exists. |
String Example: |
|
| The lowest version of the Operator to mirror across all channels in which it exists |
String Example: |
|
| The highest version of the Operator to mirror across all channels in which it exists. |
String Example: |
|
| The lowest version of the Operator to mirror across all channels in which it exists. |
String Example: |
|
| An alternative name and optional namespace hierarchy to mirror the referenced catalog as |
String Example: |
|
| Path on disk for a template to use to complete catalogSource custom resource generated by oc-mirror plugin v2. |
String Example: |
|
|
An alternative tag to append to the |
String Example: |
11.3.4.1.1. DeleteImageSetConfiguration parameters Copy linkLink copied to clipboard!
To remove images with the oc-mirror plugin v2, you must use a DeleteImageSetConfiguration.yaml configuration file that defines which images to delete from the mirror registry. The following table lists the available parameters for the DeleteImageSetConfiguration resource.
| Parameter | Description | Values |
|---|---|---|
|
|
The API version for the |
String Example: |
|
| The configuration of the image set to delete. | Object |
|
| The additional images configuration of the delete image set. | Array of objects Example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest
|
|
| The tag or digest of the image to delete. |
String Example: |
|
| The Operators configuration of the delete image set. | Array of objects Example: operators:
- catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version}
packages:
- name: elasticsearch-operator
minVersion: '2.4.0'
|
|
| The Operator catalog to include in the delete image set. |
String Example: |
|
| When true, deletes the full catalog, Operator package, or Operator channel. |
Boolean The default value is |
|
| Operator packages configuration | Array of objects Example: operators:
- catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version}
packages:
- name: elasticsearch-operator
minVersion: '5.2.3-31'
|
|
| The Operator package name to include in the delete image set. |
String Example: |
|
| Operator package channel configuration | Object |
|
| The Operator channel name, unique within a package, to include in the delete image set. |
String Example: |
|
| The highest version of the Operator to delete within the selected channel. |
String Example: |
|
| The lowest version of the Operator to delete within the selection in which it exists. |
String Example: |
|
| The highest version of the Operator to delete across all channels in which it exists. |
String Example: |
|
| The lowest version of the Operator to delete across all channels in which it exists. |
String Example: |
11.3.5. Mirroring from mirror to mirror Copy linkLink copied to clipboard!
You can use the oc-mirror plugin to mirror an image set directly to a target mirror registry that is accessible during image set creation.
Prerequisites
- You have access to the internet to get the required container images.
-
You installed the OpenShift CLI (
oc). -
You installed the
oc-mirrorCLI plugin. - You created the image set configuration file.
Procedure
Mirror the images from the specified image set configuration to a specified registry by running the following command:
oc-mirror --config imageset-config.yaml --workspace file://<v2_workspace> \ docker://<remote_registry> --v2
$ oc-mirror --config imageset-config.yaml --workspace file://<v2_workspace> \1 docker://<remote_registry> --v22 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- You must use the
--workspaceflag for the mirror-to-mirror process. Replace <v2_workspace> with the directory you want to use to store custom resources for the mirroring process. - 2
- Replace <remote_registry> with the name of the registry to mirror the image set file to. The registry must start with
docker://. If you specify a top-level namespace for the mirror registry, you must also use this same namespace on later executions.
Example output
Rendering catalog image "registry.example.com/redhat/redhat-operator-index:v4.20" with file-based catalogRendering catalog image "registry.example.com/redhat/redhat-operator-index:v4.20" with file-based catalogCopy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantYou must use the
ImageDigestMirrorSetYAML file as reference content for manual configuration of CRI-O in MicroShift. You cannot apply the resource directly into a MicroShift node.
Verification
List the contents of the
cluster-resourcessubdirectory by running the following command:ls <v2_workspace>/working-dir/cluster-resources/
$ ls <v2_workspace>/working-dir/cluster-resources/1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace <v2_workspace> with the directory you used to store custom resources for the mirroring process.
Next steps
-
Convert the
ImageDigestMirrorSetYAML content for use in manually configuring CRI-O. - If required, mirror the images from mirror to disk for disconnected or offline use.
Troubleshooting
11.3.6. Configuring CRI-O for using a registry mirror for Operators Copy linkLink copied to clipboard!
You must transform the ImageDigestMirrorSet YAML file created with the oc-mirror plugin into a format that is compatible with the CRI-O container runtime configuration used by MicroShift.
Prerequisites
-
The OpenShift CLI (
oc) is installed. - You installed Operator Lifecycle Manager (OLM).
- You installed the oc-mirror plugin.
-
You installed the
yqbinary. -
The
ImageDigestMirrorSetandCatalogSourceYAML files are available in thecluster-resourcessubdirectory.
Procedure
Confirm the contents of the
ImageDigestMirrorSetYAML file by running the following command:cat <v2_workspace>/working-dir/cluster-resources/imagedigestmirrorset.yaml
$ cat <v2_workspace>/working-dir/cluster-resources/imagedigestmirrorset.yaml1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace <v2_workspace> with the directory name that you used when you generated mirroring resources.
Example output
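A sketch of what the imagedigestmirrorset.yaml content typically looks like; the metadata name is an assumption, and the source and mirror repositories match the transformed output shown later in this procedure:
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: idms-operators                 # assumed name
spec:
  imageDigestMirrors:
  - mirrors:
    - registry.example.com/amq7        # mirror registry repository
    source: registry.redhat.io/amq7    # original source repository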
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Transform the
imagedigestmirrorset.yamlinto a format ready for CRI-O configuration by running the following command:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
[[registry]] prefix = "registry.redhat.io/amq7" location = "registry.example.com/amq7" mirror-by-digest-only = true insecure = true[[registry]] prefix = "registry.redhat.io/amq7" location = "registry.example.com/amq7" mirror-by-digest-only = true insecure = trueCopy to Clipboard Copied! Toggle word wrap Toggle overflow Add the output to the CRI-O configuration file in the
/etc/containers/registries.conf.d/directory:Example
crio-config.yamlmirror configuration fileCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the hostname and port of your mirror registry server, for example
microshift-quay:8443.
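A hedged sketch of the resulting drop-in file, combining the transformed output above with the mirror registry host from the callout; the exact entries depend on your catalog and registry:
[[registry]]
    prefix = "registry.redhat.io/amq7"
    location = "microshift-quay:8443/amq7"    # hostname and port of your mirror registry server
    mirror-by-digest-only = true
    insecure = true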
Apply the CRI-O configuration changes by restarting CRI-O with the following command:
sudo systemctl restart crio
$ sudo systemctl restart crioCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.3.7. Installing a custom catalog created with the oc-mirror plugin Copy linkLink copied to clipboard!
After you mirror your image set to the mirror registry, you must apply the generated CatalogSource custom resource (CR) into the node. Operator Lifecycle Manager (OLM) uses the CatalogSource CR to retrieve information about the available Operators in the mirror registry. You must then create and apply a subscription CR to subscribe to your custom catalog.
Prerequisites
- You mirrored the image set to your registry mirror.
- You added image reference information to the CRI-O container runtime configuration.
Procedure
Apply the catalog source configuration file from the results directory to create the catalog source object by running the following command:
oc apply -f ./<v2_workspace>/working-dir/cluster-resources/catalogSource-cs-redhat-catalog.yaml
$ oc apply -f ./<v2_workspace>/working-dir/cluster-resources/catalogSource-cs-redhat-catalog.yaml1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Replace <v2_workspace> with the directory you used to store custom resources for the mirroring process.
Example output
catalogsource.operators.coreos.com/cs-redhat-catalog created
catalogsource.operators.coreos.com/cs-redhat-catalog createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow For reference, see the following example file:
Example catalog source configuration file
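A sketch of such a catalog source file; the image value, which points at the mirrored catalog index, and the display name are assumptions:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-redhat-catalog
  namespace: openshift-marketplace    # callout 1: the global namespace
spec:
  sourceType: grpc
  image: registry.example.com/redhat/redhat-operator-index:v4.20    # assumed mirror registry location of the catalog index
  displayName: Red Hat Catalog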
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specifies the global namespace. Setting the
metadata.namespacetoopenshift-marketplaceenables the catalog to reference catalogs in all namespaces. Subscriptions in any namespace can reference catalogs created in theopenshift-marketplacenamespace.
Verify that the
CatalogSourceresources were successfully installed by running the following command:oc get catalogsource --all-namespaces
$ oc get catalogsource --all-namespacesCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAMESPACE NAME DISPLAY TYPE PUBLISHER AGE openshift-marketplace certified-operators Certified Operators grpc Red Hat 37m openshift-marketplace community-operators Community Operators grpc Red Hat 37m openshift-marketplace redhat-marketplace Red Hat Marketplace grpc Red Hat 37m openshift-marketplace redhat-catalog Red Hat Catalog grpc Red Hat 37m
NAMESPACE NAME DISPLAY TYPE PUBLISHER AGE openshift-marketplace certified-operators Certified Operators grpc Red Hat 37m openshift-marketplace community-operators Community Operators grpc Red Hat 37m openshift-marketplace redhat-marketplace Red Hat Marketplace grpc Red Hat 37m openshift-marketplace redhat-catalog Red Hat Catalog grpc Red Hat 37mCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the catalog source is running by using the following command:
oc get pods -n openshift-marketplace
$ oc get pods -n openshift-marketplaceCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE cs-redhat-catalog-4227b 2/2 Running 0 2m5s
NAME READY STATUS RESTARTS AGE cs-redhat-catalog-4227b 2/2 Running 0 2m5sCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
SubscriptionCR, similar to the following example:Example
SubscriptionCRCopy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the Subscription CR configuration by running the following command:
oc apply -f ./<subscription_cr.yaml>
$ oc apply -f ./<subscription_cr.yaml>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your subscription in <subscription_cr.yaml>, for example
amq-broker-subscription-cr.yaml.
Example output
subscription.operators.coreos.com/amq-broker created
subscription.operators.coreos.com/amq-broker createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow
11.4. Adding OLM-based Operators to a disconnected node Copy linkLink copied to clipboard!
You can use OLM-based Operators in disconnected situations by embedding them in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image.
11.4.1. About adding OLM-based Operators to a disconnected node Copy linkLink copied to clipboard!
For Operators that are installed on disconnected nodes, Operator Lifecycle Manager (OLM) by default cannot access sources hosted on remote registries because those remote sources require full internet connectivity. Therefore, you must mirror the remote registries to a highly available container registry.
The following steps are required to use OLM-based Operators in disconnected situations:
- Include OLM in the container image list for your mirror registry.
-
Configure the system to use your mirror registry by updating your CRI-O configuration directly.
ImageContentSourcePolicyis not supported in MicroShift. -
Add a
CatalogSourceobject to the node so that the OLM catalog Operator can use the local catalog on the mirror registry. - Ensure that MicroShift is installed to run in a disconnected capacity.
- Ensure that the network settings are configured to run in disconnected mode.
After enabling OLM in a disconnected node, you can continue to use your internet-connected workstation to keep your local catalog sources updated as newer versions of Operators are released.
11.4.1.1. Performing a dry run Copy linkLink copied to clipboard!
You can use oc-mirror to perform a dry run, without actually mirroring any images. A dry run means you can review the list of images to be mirrored. You can catch any errors with your image set configuration early by using a dry run, or use the generated list of images with other tools to conduct mirroring.
Prerequisites
- You have access to the internet to obtain the necessary container images.
-
You installed the OpenShift CLI (
oc). - You installed the oc-mirror CLI plugin.
- You created the image set configuration file.
Procedure
Run the
oc mirrorcommand with the--dry-runflag to perform a dry run:oc-mirror --config <ImageSetConfig.yaml> docker://localhost:5000 --workspace file://<outm2m> --dry-run --v2
$ oc-mirror --config <ImageSetConfig.yaml> docker://localhost:5000 --workspace file://<outm2m> --dry-run --v2Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
ImageSetConfig.yaml- Specifies the name of the image set configuration file that you created.
docker://localhost:5000-
Specifies the mirror registry. Nothing is mirrored to this registry when you use the
--dry-runflag. --workspace file://<outm2m>- Insert the address of the workspace path.
--dry-run- The dry run flag generates the dry run artifacts and not an actual image set file.
--v2Specifies oc mirror v2.
Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Review the
mapping.txtfile that was generated by running the following command:cat wspace/working-dir/dry-run/mapping.txt
$ cat wspace/working-dir/dry-run/mapping.txtCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
docker://registry.redhat.io/amq8/amq-broker-rhel9@sha256:47fd4ce2533496828aba37bd1f9715e2164d5c90bd0fc6b25e7e0786d723bf01=docker://mirror.com/amq8/amq-broker-rhel9:sha256-47fd4ce2533496828aba37bd1f9715e2164d5c90bd0fc6b25e7e0786d723bf01 docker://registry.redhat.io/amq8/amq-broker-init-rhel9@sha256:9cc48eecf1442ae04b8543fa5d4381a13bc2831390850828834d387006d1342b=docker://mirror.com/amq7/amq-broker-init-rhel9:sha256-9cc48eecf1442ae04b8543fa5d4381a13bc2831390850828834d387006d1342b docker://registry.redhat.io/amq8/amq-broker-rhel9@sha256:bb6fbd68475a7852b4d99eea6c4ab313f9267da7963162f0d75375d7063409e7=docker://mirror.com/amq8/amq-broker-rhel9:sha256-bb6fbd68475a7852b4d99eea6c4ab313f9267da7963162f0d75375d7063409e7 docker://registry.redhat.io/amq8/amq-broker-rhel9@sha256:d42d713da0ce6806fdc6492b6342586783e6865a82a8647d3c4288439b1751ee=docker://mirror.com/amq8/amq-broker-rhel9:sha256-d42d713da0ce6806fdc6492b6342586783e6865a82a8647d3c4288439b1751ee docker://registry.redhat.io/amq8/amq-broker-init-rhel9@sha256:ffffa9875f0379e9373f89f05eb06e5a193273bb04bc3aa5f85b044357b79098=docker://mirror.com/amq8/amq-broker-init-rhel9:sha256-ffffa9875f0379e9373f89f05eb06e5a193273bb04bc3aa5f85b044357b79098
docker://registry.redhat.io/amq8/amq-broker-rhel9@sha256:47fd4ce2533496828aba37bd1f9715e2164d5c90bd0fc6b25e7e0786d723bf01=docker://mirror.com/amq8/amq-broker-rhel9:sha256-47fd4ce2533496828aba37bd1f9715e2164d5c90bd0fc6b25e7e0786d723bf01 docker://registry.redhat.io/amq8/amq-broker-init-rhel9@sha256:9cc48eecf1442ae04b8543fa5d4381a13bc2831390850828834d387006d1342b=docker://mirror.com/amq7/amq-broker-init-rhel9:sha256-9cc48eecf1442ae04b8543fa5d4381a13bc2831390850828834d387006d1342b docker://registry.redhat.io/amq8/amq-broker-rhel9@sha256:bb6fbd68475a7852b4d99eea6c4ab313f9267da7963162f0d75375d7063409e7=docker://mirror.com/amq8/amq-broker-rhel9:sha256-bb6fbd68475a7852b4d99eea6c4ab313f9267da7963162f0d75375d7063409e7 docker://registry.redhat.io/amq8/amq-broker-rhel9@sha256:d42d713da0ce6806fdc6492b6342586783e6865a82a8647d3c4288439b1751ee=docker://mirror.com/amq8/amq-broker-rhel9:sha256-d42d713da0ce6806fdc6492b6342586783e6865a82a8647d3c4288439b1751ee docker://registry.redhat.io/amq8/amq-broker-init-rhel9@sha256:ffffa9875f0379e9373f89f05eb06e5a193273bb04bc3aa5f85b044357b79098=docker://mirror.com/amq8/amq-broker-init-rhel9:sha256-ffffa9875f0379e9373f89f05eb06e5a193273bb04bc3aa5f85b044357b79098Copy to Clipboard Copied! Toggle word wrap Toggle overflow
11.4.1.2. Getting catalogs and Operator container image references Copy linkLink copied to clipboard!
After performing a dry run with the oc-mirror plugin to review the list of images that you want to mirror, you must get all of the container image references, then format the output for adding to an image builder blueprint.
For catalogs made for proprietary Operators, you can format image references for the image builder blueprint without using the following procedure.
Prerequisites
- You have a catalog index for the Operators you want to use.
- You have installed the jq CLI tool.
- You are familiar with image builder blueprint files.
- You have an image builder blueprint TOML file.
Procedure
Parse the catalog
index.jsonfile to get the image references that you need to include in the image builder blueprint. You can use either the unfiltered catalog or you can filter out images that cannot be mirrored:Parse the unfiltered catalog
index.jsonfile to get the image references by running the following command:jq -r --slurp '.[] | select(.relatedImages != null) | "[[containers]]\nsource = \"" + .relatedImages[].image + "\"\n"' ./oc-mirror-workspace/src/catalogs/registry.redhat.io/redhat/redhat-operator-index/v4.20/index/index.json
jq -r --slurp '.[] | select(.relatedImages != null) | "[[containers]]\nsource = \"" + .relatedImages[].image + "\"\n"' ./oc-mirror-workspace/src/catalogs/registry.redhat.io/redhat/redhat-operator-index/v4.20/index/index.jsonCopy to Clipboard Copied! Toggle word wrap Toggle overflow If you want to filter out images that cannot be mirrored, filter and parse the catalog
index.jsonfile by running the following command:jq -r --slurp '.[] | select(.relatedImages != null) | .relatedImages[] | select(.name | contains("ppc") or contains("s390x") | not) | "[[containers]]\\nsource = \\"" + .image + "\\"\\n"' ./oc-mirror-workspace/src/catalogs/registry.redhat.io/redhat/redhat-operator-index/v4.20/index/index.json$ jq -r --slurp '.[] | select(.relatedImages != null) | .relatedImages[] | select(.name | contains("ppc") or contains("s390x") | not) | "[[containers]]\\nsource = \\"" + .image + "\\"\\n"' ./oc-mirror-workspace/src/catalogs/registry.redhat.io/redhat/redhat-operator-index/v4.20/index/index.jsonCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThis step uses the AMQ Broker Operator as an example. You can add other criteria to the
jqcommand for further filtering as required by your use case.Example image-reference output
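The jq commands emit one [[containers]] entry per related image, in the following general form; the repositories shown match the AMQ Broker example and the digests are placeholders:
[[containers]]
source = "registry.redhat.io/amq8/amq-broker-rhel9@sha256:<digest>"

[[containers]]
source = "registry.redhat.io/amq8/amq-broker-init-rhel9@sha256:<digest>"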
Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantFor mirrored and disconnected use cases, ensure that all of the sources filtered from your catalog
index.jsonfile are digests. If any of the sources use tags instead of digests, the Operator installation fails. Tags require an internet connection.
View the
imageset-config.yamlto get the catalog image reference for theCatalogSourcecustom resource (CR) by running the following command:cat imageset-config.yaml
$ cat imageset-config.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Use the value in the
mirror.catalogcatalog image reference for the followingjqcommand to get the image digest. In this example, <registry.redhat.io/redhat/redhat-operator-index:v4.20>.
Get the SHA of the catalog index image by running the following command:
skopeo inspect docker://<registry.redhat.io/redhat/redhat-operator-index:v4.20> | jq .Digest$ skopeo inspect docker://<registry.redhat.io/redhat/redhat-operator-index:v4.20> | jq .Digest1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Use the value in the
mirror.catalogcatalog image reference for thejqcommand to get the image digest. In this example, <registry.redhat.io/redhat/redhat-operator-index:v4.20>.
Example output
"sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6"
"sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6"Copy to Clipboard Copied! Toggle word wrap Toggle overflow To get ready to add the image references to your image builder blueprint file, format the catalog image reference by using the following example:
[[containers]] source = "registry.redhat.io/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6"
[[containers]] source = "registry.redhat.io/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add the image references from all of the previous steps to the image builder blueprint.
Generated image builder blueprint example snippet
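A partial sketch of how the callouts below typically map onto an image builder blueprint in TOML; the blueprint name, package version, and image placeholder are assumptions, and the catalog index digest is the one retrieved in the previous step:
name = "microshift-disconnected-operators"    # assumed blueprint name
version = "0.0.1"

[[packages]]                                  # callout 1: non-optional MicroShift RPM packages
name = "microshift"
version = "*"

[customizations.services]                     # callout 2: enable MicroShift on system startup
enabled = ["microshift"]

[[containers]]                                # callout 3: repeat for each non-optional MicroShift container image
source = "<microshift_container_image_reference>"

[[containers]]                                # callout 4: the catalog index
source = "registry.redhat.io/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6"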
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- References for all non-optional MicroShift RPM packages using the same version compatible with the
microshift-release-infoRPM. - 2
- References for automatically enabling MicroShift on system startup and applying default networking settings.
- 3
- References for all non-optional MicroShift container images necessary for a disconnected deployment.
- 4
- References for the catalog index.
11.4.1.3. Applying catalogs and Operators in a disconnected-deployment RHEL for Edge image Copy linkLink copied to clipboard!
After you have created a RHEL for Edge image for a disconnected environment and configured MicroShift networking settings for disconnected use, you can configure the namespace and create catalog and Operator custom resources (CR) for running your Operators.
Prerequisites
- You have a RHEL for Edge image.
- Networking is configured for disconnected use.
- You completed the oc-mirror plugin dry run procedure.
Procedure
Create a
CatalogSourcecustom resource (CR), similar to the following example:Example
my-catalog-source-cr.yamlfileCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The global namespace. Setting the
metadata.namespacetoopenshift-marketplaceenables the catalog to run in all namespaces. Subscriptions in any namespace can reference catalogs created in theopenshift-marketplacenamespace.
Note: The default pod security admission definition for openshift-marketplace is baseline, therefore a catalog source custom resource (CR) created in that namespace does not require a spec.grpcPodConfig.securityContextConfig value to be set. You can set a legacy or restricted value if required for the namespace and Operators you want to use.
Add the SHA of the catalog index commit to the CatalogSource CR, similar to the following example:
Example namespace
spec.imageconfigurationCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The SHA of the image commit. Use the same SHA you added to the image builder blueprint.
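A sketch of the spec.image setting described in the callout; the registry reference is an assumption (it can also point at your mirror registry), and the SHA is the catalog index digest used in the blueprint:
spec:
  image: registry.redhat.io/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6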
ImportantYou must use the SHA instead of a tag in your catalog CR or the pod fails to start.
Apply the YAML file from the oc-mirror plugin dry run results directory to the node by running the following command:
oc apply -f ./oc-mirror-workspace/results-1708508014/catalogSource-cs-redhat-operator-index.yaml
$ oc apply -f ./oc-mirror-workspace/results-1708508014/catalogSource-cs-redhat-operator-index.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
catalogsource.operators.coreos.com/cs-redhat-operator-index created
catalogsource.operators.coreos.com/cs-redhat-operator-index createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the
CatalogSourceresources were successfully installed by running the following command:oc get catalogsource --all-namespaces
$ oc get catalogsource --all-namespacesCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the catalog source is running by using the following command:
oc get pods -n openshift-marketplace
$ oc get pods -n openshift-marketplaceCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE cs-redhat-operator-index-4227b 2/2 Running 0 2m5s
NAME READY STATUS RESTARTS AGE cs-redhat-operator-index-4227b 2/2 Running 0 2m5sCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
SubscriptionCR, similar to the following example:Example
my-subscription-cr.yamlfileCopy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the
SubscriptionCR by running the following command:oc apply -f ./<my-subscription-cr.yaml>
$ oc apply -f ./<my-subscription-cr.yaml>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the name of your
SubscriptionCR, such asmy-subscription-cr.yaml.
Example output
subscription.operators.coreos.com/amq-broker created
subscription.operators.coreos.com/amq-broker createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow