Running applications


Red Hat build of MicroShift 4.14

Running applications in MicroShift

Red Hat OpenShift Documentation Team

Abstract

This document provides details about how to run applications in MicroShift.

Chapter 1. Using Kustomize manifests to deploy applications

You can use the kustomize configuration management tool with application manifests to deploy applications. Read through the following procedures for an example of how Kustomize works in MicroShift.

1.1. How Kustomize works with manifests to deploy applications

The kustomize configuration management tool is integrated with MicroShift. You can use Kustomize and the OpenShift CLI (oc) together to apply customizations to your application manifests and deploy those applications to a MicroShift cluster.

  • A kustomization.yaml file is a specification of resources plus customizations.
  • Kustomize uses a kustomization.yaml file to load a resource, such as an application, then applies any changes you want to that application manifest and produces a copy of the manifest with the changes overlaid.
  • Using a manifest copy with an overlay keeps the original configuration file for your application intact, while enabling you to deploy iterations and customizations of your applications efficiently.
  • You can then deploy the application in your MicroShift cluster with an oc command.
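For example, the following minimal sketch shows a kustomization.yaml overlay that swaps in a specific image, and the oc command that applies it. It assumes a deployment.yaml manifest already exists in the same directory; the directory path and image names are illustrative:

$ mkdir -p ~/app
$ tee ~/app/kustomization.yaml &>/dev/null <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: myapp
    newName: quay.io/example/myapp
    newTag: "1.2"
EOF
$ oc apply -k ~/app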

1.1.1. How MicroShift uses manifests

At every start, MicroShift searches the following manifest directories for Kustomize manifest files:

  • /etc/microshift/manifests
  • /etc/microshift/manifests.d/*
  • /usr/lib/microshift/manifests
  • /usr/lib/microshift/manifests.d/*

MicroShift automatically runs the equivalent of the kubectl apply -k command to apply the manifests to the cluster if any of the following file types exist in the searched directories:

  • kustomization.yaml
  • kustomization.yml
  • Kustomization

This automatic loading from multiple directories means you can manage MicroShift workloads with the flexibility of having different workloads run independently of each other.
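For reference, applying one of these directories manually uses the same mechanism; the following command, shown with the default read-write location, renders the kustomization and applies the result to the cluster:

$ kubectl apply -k /etc/microshift/manifests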

Table 1.1. MicroShift manifest directories

  Location                            Intent
  /etc/microshift/manifests           Read-write location for configuration management systems or development.
  /etc/microshift/manifests.d/*       Read-write location for configuration management systems or development.
  /usr/lib/microshift/manifests       Read-only location for embedding configuration manifests on OSTree-based systems.
  /usr/lib/microshift/manifests.d/*   Read-only location for embedding configuration manifests on OSTree-based systems.

1.2. Overriding the list of manifest paths

You can override the list of default manifest paths by using a new single path, or by using a new glob pattern for multiple files. Use the following procedure to customize your manifest paths.

Procedure

  1. Override the list of default paths by setting your own values in the configuration file:

    1. Set manifests.kustomizePaths to "/opt/alternate/path" in the configuration file for a single path.
    2. Set manifests.kustomizePaths to "/opt/alternative/path.d/*" in the configuration file for a glob pattern.

      manifests:
          kustomizePaths:
              - <location> 1
      1
      Set each location entry to an exact path by using "/opt/alternate/path" or a glob pattern by using "/opt/alternative/path.d/*".
  2. To disable loading manifests, set the configuration option to an empty list.

    manifests:
        kustomizePaths: []
    Note

    The configuration file overrides the defaults entirely. If the kustomizePaths value is set, only the values in the configuration file are used. Setting the value to an empty list disables manifest loading.
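For example, a sketch of the manifests section of the /etc/microshift/config.yaml configuration file that combines an exact path with a glob pattern; both paths are illustrative:

manifests:
    kustomizePaths:
        - "/opt/alternate/path"
        - "/opt/alternative/path.d/*"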

1.3. Using manifests example

This example demonstrates automatic deployment of a BusyBox container using kustomize manifests in the /etc/microshift/manifests directory.

Procedure

  1. Create the BusyBox manifest files by running the following commands:

    1. Define the directory location:

      $ MANIFEST_DIR=/etc/microshift/manifests
    2. Make the directory:

      $ sudo mkdir -p ${MANIFEST_DIR}
    3. Place the YAML file in the directory:

      $ sudo tee ${MANIFEST_DIR}/busybox.yaml &>/dev/null <<EOF
      apiVersion: v1
      kind: Namespace
      metadata:
        name: busybox
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: busybox
        namespace: busybox-deployment
      spec:
        selector:
          matchLabels:
            app: busybox
        template:
          metadata:
            labels:
              app: busybox
          spec:
            containers:
            - name: busybox
              image: BUSYBOX_IMAGE
              command: [ "/bin/sh", "-c", "while true ; do date; sleep 3600; done;" ]
      EOF
  2. Next, create the kustomize manifest file by running the following command:

    1. Place the YAML file in the directory:

      $ sudo tee ${MANIFEST_DIR}/kustomization.yaml &>/dev/null <<EOF
      apiVersion: kustomize.config.k8s.io/v1beta1
      kind: Kustomization
      namespace: busybox
      resources:
        - busybox.yaml
      images:
        - name: BUSYBOX_IMAGE
          newName: busybox:1.35
      EOF
  3. Restart MicroShift to apply the manifests by running the following command:

    $ sudo systemctl restart microshift
  4. Verify that the busybox pod is running by running the following command:

    $ oc get pods -n busybox
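    Output similar to the following, shown with an illustrative pod name, indicates that the deployment succeeded:

    Example output

    NAME                       READY   STATUS    RESTARTS   AGE
    busybox-6d8c8f7d9b-x2v5m   1/1     Running   0          2m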

Chapter 2. Options for embedding MicroShift applications in a RHEL for Edge image

You can embed microservices-based workloads and applications in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image to run in a MicroShift cluster. Embedded applications can be installed directly on edge devices to run in air-gapped, disconnected, or offline environments.

2.1. Adding application RPMs to an rpm-ostree image

If you have an application that includes APIs, container images, and configuration files for deployment such as manifests, you can build application RPMs. You can then add the RPMs to your RHEL for Edge system image.

The following is an outline of the procedures to embed applications or workloads in a fully self-contained operating system image:

  • Build your own RPM that includes your application manifest.
  • Add the RPM to the blueprint you used to install Red Hat build of MicroShift.
  • Add the workload container images to the same blueprint.
  • Create a bootable ISO.

For a step-by-step tutorial about preparing and embedding applications in a RHEL for Edge image, see Chapter 4, Embedding Red Hat build of MicroShift applications tutorial.

2.2. Adding application manifests to an image for offline use

If you have a simple application that includes a few files for deployment such as manifests, you can add those manifests directly to a RHEL for Edge system image.

See the "Create a custom file blueprint customization" section of the following RHEL for Edge documentation for an example:

2.3. Embedding applications for offline use

If you have an application that includes more than a few files, you can embed the application for offline use. See Chapter 3, Embedding applications for offline use.

2.4. Additional resources

Chapter 3. Embedding applications for offline use

You can embed microservices-based workloads and applications in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image. Embedding means you can run a Red Hat build of MicroShift cluster in air-gapped, disconnected, or offline environments.

3.1. Embedding workload container images for offline use

To embed workload container images for edge devices that do not have any network connection, you must extract the image references from your application manifests and add them to your blueprint as container sources.

Prerequisites

  • You have root access to the host.
  • Application RPMs have been added to a blueprint.

Procedure

  1. Render the manifests, extract all of the container image references, and translate the application images into blueprint container sources by running the following command:

    $ oc kustomize ~/manifests | grep "image:" | grep -oE '[^ ]+$' | while read line; do echo -e "[[containers]]\nsource = \"${line}\"\n"; done >> <my_blueprint>.toml
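    The command appends one container source entry for each image reference found in the manifests; for an illustrative image such as quay.io/example/myapp:1.2, the appended TOML looks like the following:

    [[containers]]
    source = "quay.io/example/myapp:1.2"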
  2. Push the updated blueprint to Image Builder by running the following command:

    $ sudo composer-cli blueprints push <my_blueprint>.toml
  3. If your workload containers are located in a private repository, you must provide Image Builder with the necessary pull secrets:

    1. Set the auth_file_path in the [containers] section of the osbuilder worker configuration in the /etc/osbuild-worker/osbuild-worker.toml file to point to the pull secret.
    2. If needed, create a directory and file for the pull secret, for example:

      Example directory and file

      [containers]
      auth_file_path = "/<path>/pull-secret.json" 1

      1
      Use the custom location previously set for copying and retrieving images.
  4. Build the container image by running the following command:

    $ sudo composer-cli compose start-ostree <my_blueprint> edge-commit
  5. Proceed with your preferred rpm-ostree image flow, such as waiting for the build to complete, exporting the image and integrating it into your rpm-ostree repository, or creating a bootable ISO.
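For example, a sketch of monitoring the build and downloading the resulting commit; composer-cli prints the compose UUID when the build starts, and <uuid> is a placeholder for that value:

$ sudo composer-cli compose status
$ sudo composer-cli compose image <uuid>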

3.2. Additional resources

Chapter 4. Embedding Red Hat build of MicroShift applications tutorial

The following tutorial gives a detailed example of how to embed applications in a RHEL for Edge image for use in a MicroShift cluster in various environments.

4.1. Embed application RPMs tutorial

The following tutorial reviews the MicroShift installation steps and adds a description of the workflow for embedding applications. If you are already familiar with rpm-ostree systems such as Red Hat Enterprise Linux for Edge (RHEL for Edge) and MicroShift, you can go straight to the procedures.

4.1.1. Installation workflow review

Embedding applications requires a similar workflow to embedding MicroShift into a RHEL for Edge image.

  • The following image shows how system artifacts such as RPMs, containers, and files are added to a blueprint and used by the image composer to create an ostree commit.
  • The ostree commit then can follow either the ISO path or the repository path to edge devices.
  • The ISO path can be used for disconnected environments, while the repository path is often used in places where the network is usually connected.

Embedding MicroShift workflow


Reviewing the following steps can help you understand what is needed to embed an application:

  1. To embed MicroShift on RHEL for Edge, you added the MicroShift repositories to Image Builder.
  2. You created a blueprint that declared all of the RPMs, container images, files, and customizations you needed, including the addition of MicroShift.
  3. You added the blueprint to Image Builder and ran a build with the Image Builder CLI tool (composer-cli). This step created rpm-ostree commits, which were used to create the container image. This image contained RHEL for Edge.
  4. You added the installer blueprint to Image Builder to create an rpm-ostree image (ISO) to boot from. This build contained both RHEL for Edge and MicroShift.
  5. You downloaded the ISO with MicroShift embedded, prepared it for use, provisioned it, then installed it onto your edge devices.

4.1.2. Embed application RPMs workflow

After you have set up a build host that meets the Image Builder requirements, you can add your application to the image in the form of a directory of manifests. The simplest way to embed your application or workload into a new ISO is to create your own RPMs that include those manifests. Your application RPMs contain all of the configuration files that describe your deployment.

The following "Embedding applications workflow" image shows how Kubernetes application manifests and RPM spec files are combined in a single application RPM build. This build becomes the RPM artifact included in the workflow for embedding MicroShift in an ostree commit.

Embedding applications workflow


The following procedures use the rpmbuild tool to create a specification file and local repository. The specification file defines how the package is built, moving your application manifests to the correct location inside the RPM package for MicroShift to pick them up. That RPM package is then embedded in the ISO.

4.1.3. Preparing to make application RPMs

To build your own RPMs, use a tool of your choice, such as the rpmbuild tool, and initialize the RPM build tree in your home directory. The following is an example procedure. As long as your RPMs are accessible to Image Builder, you can use the method you prefer to build the application RPMs.

Prerequisites

  • You have set up a Red Hat Enterprise Linux for Edge (RHEL for Edge) 9.2 build host that meets the Image Builder system requirements.
  • You have root access to the host.

Procedure

  1. Install the rpmbuild tool and the tools required to create a local yum repository by running the following command:

    $ sudo dnf install rpmdevtools rpmlint yum-utils createrepo
  2. Create the file tree you need to build RPM packages by running the following command:

    $ rpmdev-setuptree

Verification

  • List the directories to confirm creation by running the following command:

    $ ls ~/rpmbuild/

    Example output

    BUILD RPMS SOURCES SPECS SRPMS

4.1.4. Building the RPM package for the application manifests

To build your own RPMs, you must create a spec file that adds the application manifests to the RPM package. The following is an example procedure. As long as the application RPMs and other elements needed for image building are accessible to Image Builder, you can use the method that you prefer.

Prerequisites

  • You have set up a Red Hat Enterprise Linux for Edge (RHEL for Edge) 9.2 build host that meets the Image Builder system requirements.
  • You have root access to the host.
  • The file tree required to build RPM packages was created.

Procedure

  1. In the ~/rpmbuild/SPECS directory, create a file such as <application_workload_manifests.spec> using the following template:

    Example spec file

    Name: <application_workload_manifests>
    Version: 0.0.1
    Release: 1%{?dist}
    Summary: Adds workload manifests to microshift
    BuildArch: noarch
    License: GPL
    Source0: %{name}-%{version}.tar.gz
    #Requires: microshift
    %description
    Adds workload manifests to microshift
    %prep
    %autosetup
    %install 1
    rm -rf $RPM_BUILD_ROOT
    mkdir -p $RPM_BUILD_ROOT/%{_prefix}/lib/microshift/manifests
    cp -pr ~/manifests $RPM_BUILD_ROOT/%{_prefix}/lib/microshift/
    %clean
    rm -rf $RPM_BUILD_ROOT
    
    %files
    %{_prefix}/lib/microshift/manifests/**
    %changelog
    * <DDD MM DD YYYY username@domain - V major.minor.patch>
    - <your_change_log_comment>

    1
    The %install section creates the target directory inside the RPM package, /usr/lib/microshift/manifests/, and copies the manifests from the source home directory, ~/manifests.
    Important

    All of the required YAML files must be in the source home directory ~/manifests, including a kustomization.yaml file if you are using kustomize.
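    Because the spec file declares Source0 and calls %autosetup, rpmbuild expects a matching source tarball in the ~/rpmbuild/SOURCES directory. A minimal sketch that packages the manifests, assuming the example name and version from the template; you might need to adjust the %autosetup options, such as using %autosetup -c, to match your tarball layout:

    $ tar czf ~/rpmbuild/SOURCES/<application_workload_manifests>-0.0.1.tar.gz -C ~ manifests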

  2. Build your RPM package in the ~/rpmbuild/RPMS directory by running the following command:

    $ rpmbuild -bb ~/rpmbuild/SPECS/<application_workload_manifests.spec>
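Because the spec file sets BuildArch: noarch, the built package is written to the noarch subdirectory. You can confirm the build by listing that directory:

$ ls ~/rpmbuild/RPMS/noarch/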

4.1.5. Adding application RPMs to a blueprint

To add application RPMs to a blueprint, you must create a local repository that Image Builder can use to create the ISO. With this procedure, the container images that your workload requires are pulled over the network by the edge device at startup.

Prerequisites

  • You have root access to the host.
  • Workload or application RPMs exist in the ~/rpmbuild/RPMS directory.

Procedure

  1. Create a local RPM repository by running the following command:

    $ createrepo ~/rpmbuild/RPMS/
  2. Give Image Builder access to the RPM repository by running the following command:

    $ sudo chmod a+rx ~
    Note

    You must ensure that Image Builder has all of the necessary permissions to access all of the files needed for image building, or the build cannot proceed.

  3. Create the source file, repo-local-rpmbuild.toml, by using the following template:

    id = "local-rpm-build"
    name = "RPMs build locally"
    type = "yum-baseurl"
    url = "file://<path>/rpmbuild/RPMS" 1
    check_gpg = false
    check_ssl = false
    system = false
    1
    Specify the path to your home directory so that the URL points to the ~/rpmbuild/RPMS repository that you created in the previous steps.
  4. Add the repository as a source for Image Builder by running the following command:

    $ sudo composer-cli sources add repo-local-rpmbuild.toml
  5. Add the RPM to your blueprint by adding the following lines:

    …
    [[packages]]
    name = "<application_workload_manifests>" 1
    version = "*"
    …
    1
    Add the name of your workload here.
  6. Push the updated blueprint to Image Builder by running the following command:

    $ sudo composer-cli blueprints push <my_blueprint>.toml
  7. At this point, you can either run Image Builder to create the ISO, or embed the container images for offline use.

    1. To create the ISO, start Image Builder by running the following command:

      $ sudo composer-cli compose start-ostree <my_blueprint> edge-commit

In this scenario, the container images are pulled over the network by the edge device during startup.

4.2. Additional resources

Chapter 5. Greenboot workload health check scripts

Greenboot health check scripts are helpful on edge devices where direct serviceability is either limited or non-existent. You can create health check scripts to assess the health of your workloads and applications. These additional health check scripts are useful components of software problem checks and automatic system rollbacks.

A MicroShift health check script is included in the microshift-greenboot RPM. You can also create your own health check scripts based on the workloads you are running. For example, you can write one that verifies that a service has started.

5.1. How workload health check scripts work

The workload or application health check script described in this tutorial uses the MicroShift health check functions that are available in the /usr/share/microshift/functions/greenboot.sh file. This enables you to reuse procedures already implemented for the MicroShift core services.

The script starts by checking that the basic functions of the workload are operating as expected. To run the script successfully:

  • Execute the script from a root user account.
  • Enable the MicroShift service.

The health check performs the following actions:

  • Gets the wait timeout for the current boot cycle to use with the wait_for function.
  • Calls the namespace_images_downloaded function to wait until pod images are available.
  • Calls the namespace_pods_ready function to wait until pods are ready.
  • Calls the namespace_pods_not_restarting function to verify pods are not restarting.
Note

Restarting pods can indicate a crash loop.

5.2. Included greenboot health checks

Health check scripts are available in /usr/lib/greenboot/check, a read-only directory in RPM-OSTree systems. The following health checks are included with the greenboot-default-health-checks framework.

  • Check if repository URLs are still resolvable in DNS:

    This script is under /usr/lib/greenboot/check/required.d/01_repository_dns_check.sh and ensures that DNS queries to the repository URLs still succeed.

  • Check if update platforms are still reachable:

    This script is under /usr/lib/greenboot/check/wanted.d/01_update_platform_check.sh and tries to connect and get a 2XX or 3XX HTTP code from the update platforms defined in /etc/ostree/remotes.d.

  • Check if the current boot has been triggered by the hardware watchdog:

    This script is under /usr/lib/greenboot/check/required.d/02_watchdog.sh and checks whether the current boot has been watchdog-triggered or not.

    • If the watchdog-triggered reboot occurs within the grace period, the current boot is marked as red. Greenboot does not trigger a rollback to the previous deployment.
    • If the watchdog-triggered reboot occurs after the grace period, the current boot is not marked as red. Greenboot does not trigger a rollback to the previous deployment.
    • A 24-hour grace period is enabled by default. This grace period can be either disabled by modifying GREENBOOT_WATCHDOG_CHECK_ENABLED in /etc/greenboot/greenboot.conf to false, or configured by changing the GREENBOOT_WATCHDOG_GRACE_PERIOD=number_of_hours variable value in /etc/greenboot/greenboot.conf.
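For example, a sketch of the relevant variables in /etc/greenboot/greenboot.conf that keeps the watchdog check enabled and extends the grace period to 48 hours; the values shown are illustrative:

GREENBOOT_WATCHDOG_CHECK_ENABLED=true
GREENBOOT_WATCHDOG_GRACE_PERIOD=48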

5.3. How to create a health check script for your application

You can create workload or application health check scripts in the text editor of your choice using the example in this documentation. Save the scripts in the /etc/greenboot/check/required.d directory. When a script in the /etc/greenboot/check/required.d directory exits with an error, greenboot triggers a reboot in an attempt to heal the system.


If your health check logic requires any post-check steps, you can also create additional scripts and save them in the relevant greenboot directories. For example:

  • You can place shell scripts that you want to run after a boot has been declared successful in /etc/greenboot/green.d.
  • You can place shell scripts you want to run after a boot has been declared failed in /etc/greenboot/red.d. For example, if you have steps to heal the system before restarting, you can create scripts for your use case and place them in the /etc/greenboot/red.d directory.
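For example, a minimal sketch of a post-failure script for the /etc/greenboot/red.d directory; the file name and log destination are illustrative:

#!/bin/bash
# Runs after greenboot declares the boot failed; record context for later debugging.
echo "$(date -u) greenboot declared boot failure on $(hostname)" >> /var/log/boot-failures.log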

5.3.1. About the workload health check script example

The following example uses the MicroShift health check script as a template. You can use this example with the provided libraries as a guide for creating basic health check scripts for your applications.

5.3.1.1. Basic prerequisites for creating a health check script
  • The workload must be installed.
  • You must have root access.
5.3.1.2. Example and functional requirements

You can start with the following example health check script. Modify it for your use case. In your workload health check script, you must complete the following minimum steps:

  • Set the environment variables.
  • Define the user workload namespaces.
  • List the expected pod count.
Important

Choose a name prefix for your application that ensures it runs after the 40_microshift_running_check.sh script, which implements the Red Hat build of MicroShift health check procedure for its core services.

Example workload health check script

#!/bin/bash
set -e

SCRIPT_NAME=$(basename $0)
PODS_NS_LIST=(<user_workload_namespace1> <user_workload_namespace2>)
PODS_CT_LIST=(<user_workload_namespace1_pod_count> <user_workload_namespace2_pod_count>)
# Update these two lines with at least one namespace and the pod counts that are specific to your workloads. Use the Kubernetes <namespace> where your workload is deployed.

# Set greenboot to read and execute the workload health check functions library.
source /usr/share/microshift/functions/greenboot.sh

# Set the exit handler to log the exit status.
trap 'script_exit' EXIT

# Set the script exit handler to log a `FAILURE` or `FINISHED` message depending on the exit status of the last command.
# args: None
# return: None
function script_exit() {
    [ "$?" -ne 0 ] && status=FAILURE || status=FINISHED
    echo $status
}

# Set the system to automatically stop the script if the user running it is not 'root'.
if [ $(id -u) -ne 0 ] ; then
    echo "The '${SCRIPT_NAME}' script must be run with the 'root' user privileges"
    exit 1
fi

echo "STARTED"

# Set the script to stop without reporting an error if the MicroShift service is not running.
if [ "$(systemctl is-enabled microshift.service 2>/dev/null)" != "enabled" ] ; then
    echo "MicroShift service is not enabled. Exiting..."
    exit 0
fi

# Set the wait timeout for the current check based on the boot counter.
WAIT_TIMEOUT_SECS=$(get_wait_timeout)

# Set the script to wait for the pod images to be downloaded.
for i in ${!PODS_NS_LIST[@]}; do
    CHECK_PODS_NS=${PODS_NS_LIST[$i]}

    echo "Waiting ${WAIT_TIMEOUT_SECS}s for pod image(s) from the ${CHECK_PODS_NS} namespace to be downloaded"
    wait_for ${WAIT_TIMEOUT_SECS} namespace_images_downloaded
done

# Set the script to wait for pods to enter ready state.
for i in ${!PODS_NS_LIST[@]}; do
    CHECK_PODS_NS=${PODS_NS_LIST[$i]}
    CHECK_PODS_CT=${PODS_CT_LIST[$i]}

    echo "Waiting ${WAIT_TIMEOUT_SECS}s for ${CHECK_PODS_CT} pod(s) from the ${CHECK_PODS_NS} namespace to be in 'Ready' state"
    wait_for ${WAIT_TIMEOUT_SECS} namespace_pods_ready
done

# Verify that the pods are not restarting, which could indicate a crash loop.
for i in ${!PODS_NS_LIST[@]}; do
    CHECK_PODS_NS=${PODS_NS_LIST[$i]}

    echo "Checking pod restart count in the ${CHECK_PODS_NS} namespace"
    namespace_pods_not_restarting ${CHECK_PODS_NS}
done
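After saving the script, install it where greenboot runs the required health checks. The following sketch uses an illustrative file name; the 50_ prefix keeps the script ordered after 40_microshift_running_check.sh:

$ sudo install -m 755 workload_health_check.sh /etc/greenboot/check/required.d/50_workload_health_check.sh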

5.4. Testing a workload health check script

Prerequisites

  • You have root access.
  • You have installed a workload.
  • You have created a health check script for the workload.
  • The Red Hat build of MicroShift service is enabled.

Procedure

  1. To test that greenboot is running a health check script file, reboot the host by running the following command:

    $ sudo reboot
  2. Examine the output of greenboot health checks by running the following command:

    $ sudo journalctl -o cat -u greenboot-healthcheck.service
    Note

    MicroShift core service health checks run before the workload health checks.

    Example output

    GRUB boot variables:
    boot_success=0
    boot_indeterminate=0
    Greenboot variables:
    GREENBOOT_WATCHDOG_CHECK_ENABLED=true
    ...
    ...
    FINISHED
    Script '40_microshift_running_check.sh' SUCCESS
    Running Wanted Health Check Scripts...
    Finished greenboot Health Checks Runner.

5.5. Additional resources

Chapter 6. Pod security authentication and authorization

6.1. Understanding and managing pod security admission

Pod security admission is an implementation of the Kubernetes pod security standards. Use pod security admission to restrict the behavior of pods.

6.2. Security context constraint synchronization with pod security standards

MicroShift includes Kubernetes pod security admission.

In addition to the global pod security admission control configuration, a controller exists that applies pod security admission control warn and audit labels to namespaces according to the security context constraint (SCC) permissions of the service accounts that are in a given namespace.

Important

Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. You can enable pod security admission synchronization on other namespaces as necessary. If an Operator is installed in a user-created openshift-* namespace, synchronization is turned on by default after a cluster service version (CSV) is created in the namespace.

The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile found in the namespace to prevent warnings and audit logging as pods are created.

Namespace labeling is based on consideration of namespace-local service account privileges.

Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling.

6.2.1. Viewing security context constraints in a namespace

You can view the security context constraints (SCC) permissions in a given namespace.

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  • To view the security context constraints in your namespace, run the following command:

    $ oc get --show-labels namespace <namespace>

6.3. Controlling pod security admission synchronization

You can enable automatic pod security admission synchronization for most namespaces.

System defaults are not enforced when the security.openshift.io/scc.podSecurityLabelSync field is empty or set to false. You must set the label to true for synchronization to occur.

Important

Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. These namespaces include:

  • default
  • kube-node-lease
  • kube-system
  • kube-public
  • openshift
  • All system-created namespaces that are prefixed with openshift-, except for openshift-operators

By default, all namespaces that have an openshift- prefix are not synchronized. You can enable synchronization for any user-created openshift-* namespaces. You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators.

If an Operator is installed in a user-created openshift-* namespace, synchronization is turned on by default after a cluster service version (CSV) is created in the namespace. The synchronized label inherits the permissions of the service accounts in the namespace.

Procedure

  • To enable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to true.

    Run the following command:

    $ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true
Note

You can use the --overwrite flag to reverse the effects of the pod security label synchronization in a namespace.
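For example, a sketch of reversing the synchronization in a namespace by setting the label to false with the --overwrite flag:

$ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false --overwrite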

Chapter 7. How Operators work with MicroShift

You can use Operators with MicroShift to create applications that monitor the running services in your cluster. Operators can manage applications and their resources, such as deploying a database or message bus. As customized software running inside your cluster, Operators can be used to implement and automate common operations.

Operators offer a more localized configuration experience and integrate with Kubernetes APIs and CLI tools such as kubectl and oc. Operators are designed specifically for your applications. Operators enable you to configure components instead of modifying a global configuration file.

MicroShift applications are generally expected to be deployed in static environments. However, Operators are available if helpful in your use case. To determine an Operator’s compatibility with MicroShift, check the Operator’s documentation.

7.1. How to install Operators in MicroShift

To minimize the footprint of MicroShift, Operators are installed directly with manifests instead of using the Operator Lifecycle Manager (OLM). You can use the kustomize configuration management tool with MicroShift to deploy an application. Use the same steps to install Operators with manifests. Read Using Kustomize manifests to deploy applications for more information about manifests.
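For example, a minimal sketch of staging an Operator's published manifests as a Kustomize manifest that MicroShift applies at start. The directory name and manifest file names are illustrative and depend on the Operator you are installing:

$ sudo mkdir -p /etc/microshift/manifests.d/my-operator
$ sudo cp crds.yaml operator.yaml /etc/microshift/manifests.d/my-operator/
$ sudo tee /etc/microshift/manifests.d/my-operator/kustomization.yaml &>/dev/null <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - crds.yaml
  - operator.yaml
EOF
$ sudo systemctl restart microshift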

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
© 2024 Red Hat, Inc.