Chapter 7. OLM 1.0 (Technology Preview)
7.1. About Operator Lifecycle Manager 1.0 (Technology Preview)
Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.14 introduced components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as OLM 1.0. This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
During this Technology Preview phase of OLM 1.0 in OpenShift Container Platform 4.15, administrators can explore the following features:
- Fully declarative model that supports GitOps workflows
OLM 1.0 simplifies Operator management through two key APIs:
- A new Operator API, provided as operator.operators.operatorframework.io by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles.
- The Catalog API, provided by the new catalogd component, serves as the foundation for OLM 1.0, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges.
For more information, see Operator Controller and Catalogd.
- Improved control over Operator updates
- With improved insight into catalog content, administrators can specify target versions for installation and updates. This grants administrators more control over the target version of Operator updates. For more information, see Updating an Operator.
- Flexible Operator packaging format
Administrators can use file-based catalogs to install and manage the following types of content:
- OLM-based Operators, similar to the existing OLM experience
- Plain bundles, which are static collections of arbitrary Kubernetes manifests
In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see Installing an Operator from a catalog and Managing plain bundles.
7.1.1. Purpose
The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster and platform-as-a-service (PaaS) administrators throughout the lifecycle of the underlying cluster.
The initial version of OLM, which launched with OpenShift Container Platform 4 and is included by default, focused on providing unique support for these specific needs for a particular type of cluster extension, known as Operators. Operators are classified as one or more Kubernetes controllers, shipping with one or more API extensions, as CustomResourceDefinition (CRD) objects, to provide additional functionality to the cluster.
After OLM has run in production clusters for many releases, the next generation of OLM aims to encompass lifecycles for cluster extensions that are not limited to Operators.
7.2. Components and architecture
7.2.1. OLM 1.0 components overview (Technology Preview)
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator Lifecycle Manager (OLM) 1.0 comprises the following component projects:
- Operator Controller
- Operator Controller is the central component of OLM 1.0 that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from each of the following components.
- RukPak
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
- Catalogd
- Catalogd is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the OLM 1.0 microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content.
7.2.2. Operator Controller (Technology Preview)
Operator Controller is the central component of Operator Lifecycle Manager (OLM) 1.0 and consumes the other OLM 1.0 components, RukPak and catalogd. It extends Kubernetes with an API through which users can install Operators and extensions.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.2.1. Operator API
Operator Controller provides a new Operator API object, which is a single resource that represents an instance of an installed Operator. This operator.operators.operatorframework.io API streamlines management of installed Operators by consolidating user-facing APIs into a single object.
In OLM 1.0, Operator objects are cluster-scoped. This differs from earlier OLM versions where Operators could be either namespace-scoped or cluster-scoped, depending on the configuration of their related Subscription and OperatorGroup objects.
For more information about the earlier behavior, see Multitenancy and Operator colocation.
Example Operator object

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: <operator_name>
spec:
  packageName: <package_name>
  channel: <channel_name>
  version: <version_number>
When using the OpenShift CLI (oc), the Operator resource provided with OLM 1.0 during this Technology Preview phase requires specifying the full <resource>.<group> format: operator.operators.operatorframework.io. For example:
$ oc get operator.operators.operatorframework.io
If you specify only the Operator resource without the API group, the CLI returns results for an earlier API (operator.operators.coreos.com) that is unrelated to OLM 1.0.
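If you are unsure which API group a short resource name maps to on your cluster, you can list the resources that belong to the OLM 1.0 group. This is a general-purpose oc check, shown here as an illustrative sketch rather than a documented OLM procedure:

$ oc api-resources --api-group=operators.operatorframework.io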
Additional resources
7.2.2.1.1. Example custom resources (CRs) that specify a target version
In Operator Lifecycle Manager (OLM) 1.0, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR).
You can define a target version by specifying any of the following fields:
- Channel
- Version number
- Version range
If you specify a channel in the CR, OLM 1.0 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release that can be resolved from the channel.
Example CR with a specified channel
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
name: pipelines-operator
spec:
packageName: openshift-pipelines-operator-rh
channel: latest 1
- 1
- Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed.
If you specify the Operator or extension’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
name: pipelines-operator
spec:
packageName: openshift-pipelines-operator-rh
version: 1.11.1 1
- 1
- Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version.
If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM 1.0 installs the latest version of the Operator or extension that can be resolved within the specified range.
Example CR with a version range specified
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
name: pipelines-operator
spec:
packageName: openshift-pipelines-operator-rh
version: >1.11.1 1
- 1
- Specifies that the desired version range is greater than version 1.11.1. For more information, see "Support for version ranges".
After you create or update a CR, apply the configuration file by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
7.2.3. RukPak (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses the RukPak component and its resources to manage cloud-native content.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.3.1. About RukPak
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs.
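For example, on a cluster where the OLM 1.0 components are installed, a quick way to see the RukPak APIs is to list its CRDs. This is a general-purpose check, not an official verification procedure:

$ oc get crds | grep rukpak.io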
Common terminology
- Bundle
- A collection of Kubernetes manifests that define content to be deployed to a cluster
- Bundle image
- A container image that contains a bundle within its filesystem
- Bundle Git repository
- A Git repository that contains a bundle within a directory
- Provisioner
- A controller that installs and manages content on a Kubernetes cluster
- Bundle deployment
- Generates deployed instances of a bundle
7.2.3.2. About provisioners
RukPak consists of a series of controllers, known as provisioners, that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment. These components work together to bring content onto the cluster and install it, generating resources within the cluster.
Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles.
Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster.
A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources.
7.2.3.3. Bundle
A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for a pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content.
Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type.
Example Bundle object configured to work with the plain provisioner

apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: my-bundle
spec:
  source:
    type: image
    image:
      ref: my-bundle@sha256:xyz123
  provisionerClassName: core-rukpak-io-plain
Bundles are considered immutable after they are created.
7.2.3.3.1. Bundle immutability
After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object.
Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status, are updated during the bundle's lifecycle; it is only the spec field that is considered immutable.
Applying a Bundle object and then attempting to update its spec should fail. For example, the following example creates a bundle:

$ oc apply -f -<<EOF
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.2
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain
EOF
Example output
bundle.core.rukpak.io/combo-tag-ref created
Then, patching the bundle to point to a newer tag returns an error:
$ oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}'
Example output
Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable
The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating it in-place.
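For example, to move the earlier combo-tag-ref example to the v0.0.3 tag, you would create a second bundle rather than patching the first. This is a minimal sketch; the bundle name combo-tag-ref-2 is hypothetical, and any unique name works:

$ oc apply -f -<<EOF
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref-2
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.3
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain
EOF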
Further immutability considerations
While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario:
1. A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object.
2. The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit.
3. A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod.
If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content.
This is similar to pod behavior, where one of the pod’s container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it.
To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle.
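For example, the following sketch pins the earlier combo example to an exact commit instead of a movable tag. It assumes the commit field of the Git ref, as defined by the upstream RukPak API; the commit SHA is a placeholder:

apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-commit-ref
spec:
  source:
    type: git
    git:
      ref:
        commit: <commit_sha>
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain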
7.2.3.3.2. Plain bundle spec
A plain bundle in RukPak is a collection of static, arbitrary Kubernetes YAML manifests in a given directory.
The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0, combines the type of bundle (plain) with the current schema version (v0).
The plain+v0 bundle format is at schema version v0, which means it is an experimental format that is subject to change.
For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application.
Example plain+v0 bundle file tree

$ tree manifests
manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml
The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories.
Do not include any content in the manifests/ directory of a plain bundle that is not a static manifest. Otherwise, a failure occurs when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command results in an error. Multi-object YAML or JSON files are valid, as well.
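For example, a single hypothetical manifests/rbac.yaml file can carry several objects separated by --- document markers; the names below are illustrative only:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
  namespace: example-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-binding
subjects:
- kind: ServiceAccount
  name: example-sa
  namespace: example-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-role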
7.2.3.3.3. Registry bundle spec
A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format.
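For orientation, a registry+v1 bundle image typically lays out its content in manifests/ and metadata/ directories, following the legacy bundle format; the file names below are illustrative:

Example registry+v1 bundle file tree

manifests
├── example-operator.clusterserviceversion.yaml
└── example-crd.yaml
metadata
└── annotations.yaml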
Additional resources
7.2.3.4. BundleDeployment
A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and to limit access to the BundleDeployment API, by using RBAC, to only those users who require those permissions.
The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle.
Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept.
The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle are defined by the provisioner that is configured to watch that bundle deployment.
Example BundleDeployment object configured to work with the plain provisioner

apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-bundle-deployment
spec:
  provisionerClassName: core-rukpak-io-plain
  template:
    metadata:
      labels:
        app: my-bundle
    spec:
      source:
        type: image
        image:
          ref: my-bundle@sha256:xyz123
      provisionerClassName: core-rukpak-io-plain
7.2.4. Dependency resolution in OLM 1.0 (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses a dependency manager for resolving constraints over catalogs of RukPak bundles.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.4.1. Concepts
Users expect that the package manager never does the following:
- Install a package whose dependencies cannot be fulfilled or that conflict with the dependencies of another package
- Install a package whose constraints cannot be met by the current set of installable packages
- Update a package in a way that breaks another that depends on it
7.2.4.1.1. Example: Successful resolution
A user wants to install packages A and B that have the following dependencies:
- Package A depends on Package C
- Package B depends on Package D
Additionally, the user wants to pin the version of A to v0.1.0.
Packages and constraints passed to OLM 1.0
Packages
- A
- B

Constraints
- A v0.1.0 depends on C v0.1.0
- A pinned to v0.1.0
- B depends on D

Output

Resolution set:
- A v0.1.0
- B latest
- C v0.1.0
- D latest
7.2.4.1.2. Example: Unsuccessful resolution
A user wants to install packages A and B that have the following dependencies:
- Package A depends on Package C
- Package B depends on Package C
Additionally, the user wants to pin the version of A to v0.1.0.
Packages and constraints passed to OLM 1.0
Packages
- A
- B

Constraints
- A v0.1.0 depends on C v0.1.0
- A pinned to v0.1.0
- B latest depends on C v0.2.0

Output

Resolution set:
- Unable to resolve because A v0.1.0 requires C v0.1.0, which conflicts with B latest requiring C v0.2.0
7.2.5. Catalogd (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses the catalogd component and its resources to manage Operator and extension catalogs.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.5.1. About catalogs in OLM 1.0
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons:
- If multiple catalogs are installed on a cluster, OLM 1.0 does not include a mechanism to specify a catalog when you install an Operator or extension.
- Dependency resolution in Operator Lifecycle Manager (OLM) 1.0 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages.
Additional resources
7.2.5.1.1. Red Hat-provided Operator catalogs in OLM 1.0
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show how to create catalog resources for OLM 1.0.
If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see "Creating a pull secret for catalogs hosted on a secure registry".
Example Red Hat Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
name: redhat-operators
spec:
source:
type: image
image:
ref: registry.redhat.io/redhat/redhat-operator-index:v4.15
pullSecret: <pull_secret_name>
pollInterval: <poll_interval_duration> 1
- 1
- Specify the interval for polling the remote registry for newer image digests. The default value is 24h. Valid units include seconds (s), minutes (m), and hours (h). To disable polling, set a zero value, such as 0s.
Example Certified Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.15
      pullSecret: <pull_secret_name>
      pollInterval: 24h
Example Community Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.15
      pullSecret: <pull_secret_name>
      pollInterval: 24h
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml 1
- 1
- Specifies the catalog CR, such as redhat-operators.yaml.
Additional resources
7.3. Installing an Operator from a catalog in OLM 1.0 (Technology Preview)
Cluster administrators can add catalogs, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.
In the current Technology Preview release of Operator Lifecycle Manager (OLM) 1.0, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.3.1. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions

  Note: For OpenShift Container Platform 4.15, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.

- The TechPreviewNoUpgrade feature set enabled on the cluster

  Warning: Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.

- The OpenShift CLI (oc) installed on your workstation
Additional resources
7.3.2. About catalogs in OLM 1.0
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons:
- If multiple catalogs are installed on a cluster, OLM 1.0 does not include a mechanism to specify a catalog when you install an Operator or extension.
- Dependency resolution in Operator Lifecycle Manager (OLM) 1.0 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages.
Additional resources
7.3.3. Red Hat-provided Operator catalogs in OLM 1.0
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show how to create catalog resources for OLM 1.0.
If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see "Creating a pull secret for catalogs hosted on a secure registry".
Example Red Hat Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
name: redhat-operators
spec:
source:
type: image
image:
ref: registry.redhat.io/redhat/redhat-operator-index:v4.15
pullSecret: <pull_secret_name>
pollInterval: <poll_interval_duration> 1
- 1
- Specify the interval for polling the remote registry for newer image digests. The default value is 24h. Valid units include seconds (s), minutes (m), and hours (h). To disable polling, set a zero value, such as 0s.
Example Certified Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.15
      pullSecret: <pull_secret_name>
      pollInterval: 24h
Example Community Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.15
      pullSecret: <pull_secret_name>
      pollInterval: 24h
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml 1
- 1
- Specifies the catalog CR, such as redhat-operators.yaml.
7.3.4. Creating a pull secret for catalogs hosted on a secure registry
If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace.
Currently, catalogd cannot read global pull secrets from OpenShift Container Platform clusters. Catalogd can read references to secrets only in the namespace where it is deployed.
Prerequisites
- Login credentials for the secure registry
- Docker or Podman installed on your workstation
Procedure
If you already have a .dockercfg file with login credentials for the secure registry, create a pull secret by running the following command:

$ oc create secret generic <pull_secret_name> \
    --from-file=.dockercfg=<file_path>/.dockercfg \
    --type=kubernetes.io/dockercfg \
    --namespace=openshift-catalogd
Example 7.1. Example command
$ oc create secret generic redhat-cred \
    --from-file=.dockercfg=/home/<username>/.dockercfg \
    --type=kubernetes.io/dockercfg \
    --namespace=openshift-catalogd
If you already have a $HOME/.docker/config.json file with login credentials for the secure registry, create a pull secret by running the following command:

$ oc create secret generic <pull_secret_name> \
    --from-file=.dockerconfigjson=<file_path>/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson \
    --namespace=openshift-catalogd
Example 7.2. Example command
$ oc create secret generic redhat-cred \
    --from-file=.dockerconfigjson=/home/<username>/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson \
    --namespace=openshift-catalogd
If you do not have a Docker configuration file with login credentials for the secure registry, create a pull secret by running the following command:
$ oc create secret docker-registry <pull_secret_name> \
    --docker-server=<registry_server> \
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=<email> \
    --namespace=openshift-catalogd
Example 7.3. Example command
$ oc create secret docker-registry redhat-cred \
    --docker-server=registry.redhat.io \
    --docker-username=username \
    --docker-password=password \
    --docker-email=user@example.com \
    --namespace=openshift-catalogd
7.3.5. Adding a catalog to a cluster
To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster.
Prerequisites
- If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see "Creating a pull secret for catalogs hosted on a secure registry".
Procedure
Create a catalog custom resource (CR), similar to the following example:
Example redhat-operators.yaml

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 1
      pullSecret: <pull_secret_name> 2
      pollInterval: <poll_interval_duration> 3
- 1
- Specify the catalog's image in the spec.source.image field.
- 2
- If your catalog is hosted on a secure registry, such as registry.redhat.io, you must create a pull secret scoped to the openshift-catalogd namespace.
- 3
- Specify the interval for polling the remote registry for newer image digests. The default value is 24h. Valid units include seconds (s), minutes (m), and hours (h). To disable polling, set a zero value, such as 0s.
Add the catalog to your cluster by running the following command:
$ oc apply -f redhat-operators.yaml
Example output
catalog.catalogd.operatorframework.io/redhat-operators created
Verification
Run the following commands to verify the status of your catalog:
Check if your catalog is available by running the following command:
$ oc get catalog
Example output
NAME AGE redhat-operators 20s
Check the status of your catalog by running the following command:
$ oc describe catalog
Example output
Name:         redhat-operators
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  catalogd.operatorframework.io/v1alpha1
Kind:         Catalog
Metadata:
  Creation Timestamp:  2024-01-10T16:18:38Z
  Finalizers:
    catalogd.operatorframework.io/delete-server-cache
  Generation:        1
  Resource Version:  57057
  UID:               128db204-49b3-45ee-bfea-a2e6fc8e34ea
Spec:
  Source:
    Image:
      Pull Secret:  redhat-cred
      Ref:          registry.redhat.io/redhat/redhat-operator-index:v4.15
    Type:           image
Status: 1
  Conditions:
    Last Transition Time:  2024-01-10T16:18:55Z
    Message:
    Reason:                UnpackSuccessful 2
    Status:                True
    Type:                  Unpacked
  Content URL:             http://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json
  Observed Generation:     1
  Phase:                   Unpacked 3
  Resolved Source:
    Image:
      Last Poll Attempt:  2024-01-10T16:18:51Z
      Ref:                registry.redhat.io/redhat/redhat-operator-index:v4.15
      Resolved Ref:       registry.redhat.io/redhat/redhat-operator-index@sha256:7b536ae19b8e9f74bb521c4a61e5818e036ac1865a932f2157c6c9a766b2eea5 4
    Type:                 image
Events:                   <none>
Additional resources
7.3.6. Finding Operators to install from a catalog
After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install. Before you can query catalogs, you must port forward the catalog server service.
Prerequisites
- You have added a catalog to your cluster.
- You have installed the jq CLI tool.
Procedure
Port forward the catalog server service in the openshift-catalogd namespace by running the following command:

$ oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:80
Download the catalog’s JSON file locally by running the following command:
$ curl -L http://localhost:8080/catalogs/<catalog_name>/all.json \
    -C - -o /<path>/<catalog_name>.json
Example 7.4. Example command
$ curl -L http://localhost:8080/catalogs/redhat-operators/all.json \
    -C - -o /home/username/catalogs/rhoc.json
Run one of the following commands to return a list of Operators and extensions in a catalog.
Important: Currently, Operator Lifecycle Manager (OLM) 1.0 supports extensions that do not use webhooks and are configured to use the AllNamespaces install mode. Extensions that use webhooks or that target a single or specified set of namespaces cannot be installed.

Get a list of all the Operators and extensions from the local catalog file by running the following command:
$ jq -s '.[] | select(.schema == "olm.package") | .name' \
    /<path>/<filename>.json
Example 7.5. Example command
$ jq -s '.[] | select(.schema == "olm.package") | .name' \
    /home/username/catalogs/rhoc.json
Example 7.6. Example output
"3scale-operator"
"advanced-cluster-management"
"amq-broker-rhel8"
"amq-online"
"amq-streams"
"amq7-interconnect-operator"
"ansible-automation-platform-operator"
"ansible-cloud-addons-operator"
"apicast-operator"
"aws-efs-csi-driver-operator"
"aws-load-balancer-operator"
"bamoe-businessautomation-operator"
"bamoe-kogito-operator"
"bare-metal-event-relay"
"businessautomation-operator"
...
Get a list of packages that support the AllNamespaces install mode and do not use webhooks from the local catalog file by running the following command:

$ jq -c 'select(.schema == "olm.bundle") | \
    {"package":.package, "version":.properties[] | \
    select(.type == "olm.bundle.object").value.data | @base64d | fromjson | \
    select(.kind == "ClusterServiceVersion" and (.spec.installModes[] | \
    select(.type == "AllNamespaces" and .supported == true) != null) \
    and .spec.webhookdefinitions == null).spec.version}' \
    /<path>/<catalog_name>.json
Example 7.7. Example output
{"package":"3scale-operator","version":"0.10.0-mas"} {"package":"3scale-operator","version":"0.10.5"} {"package":"3scale-operator","version":"0.11.0-mas"} {"package":"3scale-operator","version":"0.11.1-mas"} {"package":"3scale-operator","version":"0.11.2-mas"} {"package":"3scale-operator","version":"0.11.3-mas"} {"package":"3scale-operator","version":"0.11.5-mas"} {"package":"3scale-operator","version":"0.11.6-mas"} {"package":"3scale-operator","version":"0.11.7-mas"} {"package":"3scale-operator","version":"0.11.8-mas"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-2"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-3"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-4"} {"package":"amq-broker-rhel8","version":"7.10.1-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.1-opr-2"} {"package":"amq-broker-rhel8","version":"7.10.2-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.2-opr-2"} ...
Inspect the contents of an Operator or extension’s metadata by running the following command:
$ jq -s '.[] | select( .schema == "olm.package") | \
    select( .name == "<package_name>")' /<path>/<catalog_name>.json
Example 7.8. Example command
$ jq -s '.[] | select( .schema == "olm.package") | \
    select( .name == "openshift-pipelines-operator-rh")' \
    /home/username/rhoc.json
Example 7.9. Example output
{
  "defaultChannel": "stable",
  "icon": {
    "base64data": "PHN2ZyB4bWxu...",
    "mediatype": "image/png"
  },
  "name": "openshift-pipelines-operator-rh",
  "schema": "olm.package"
}
7.3.6.1. Common catalog queries
You can query catalogs by using the jq CLI tool.
Available packages in a catalog

$ jq -s '.[] | select( .schema == "olm.package") | \
    .name' <catalog_name>.json

Packages that support AllNamespaces install mode and do not use webhooks

$ jq -c 'select(.schema == "olm.bundle") | \
    {"package":.package, "version":.properties[] | \
    select(.type == "olm.bundle.object").value.data | \
    @base64d | fromjson | \
    select(.kind == "ClusterServiceVersion" and (.spec.installModes[] | \
    select(.type == "AllNamespaces" and .supported == true) != null) \
    and .spec.webhookdefinitions == null).spec.version}' \
    <catalog_name>.json

Package metadata

$ jq -s '.[] | select( .schema == "olm.package") | \
    select( .name == "<package_name>")' <catalog_name>.json

Catalog blobs in a package

$ jq -s '.[] | select( .package == "<package_name>")' \
    <catalog_name>.json

Channels in a package

$ jq -s '.[] | select( .schema == "olm.channel" ) | \
    select( .package == "<package_name>") | .name' \
    <catalog_name>.json

Versions in a channel

$ jq -s '.[] | select( .package == "<package_name>" ) | \
    select( .schema == "olm.channel" ) | \
    select( .name == "<channel_name>" ) | \
    .entries | .[] | .name' <catalog_name>.json

Contents of a specific channel in a package

$ jq -s '.[] | select( .schema == "olm.channel" ) | \
    select ( .name == "<channel>") | \
    select( .package == "<package_name>")' \
    <catalog_name>.json

Bundles in a package

$ jq -s '.[] | select( .schema == "olm.bundle" ) | \
    select( .package == "<package_name>") | .name' \
    <catalog_name>.json

Contents of a specific bundle in a package

$ jq -s '.[] | select( .schema == "olm.bundle" ) | \
    select ( .name == "<bundle_name>") | \
    select( .package == "<package_name>")' \
    <catalog_name>.json
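Beyond the queries above, channel entries in a file-based catalog typically carry name and replaces fields that describe update edges. The following sketch, built from the same query patterns, prints each entry with its predecessor:

$ jq -s '.[] | select( .schema == "olm.channel" ) | \
    select( .package == "<package_name>" ) | \
    .entries[] | {name, replaces}' <catalog_name>.json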
7.3.7. Installing an Operator from a catalog
Operator Lifecycle Manager (OLM) 1.0 supports installing Operators and extensions scoped to the cluster. You can install an Operator or extension from a catalog by creating a custom resource (CR) and applying it to the cluster.
Currently, OLM 1.0 supports the installation of Operators and extensions that meet the following criteria:
- The Operator or extension must use the AllNamespaces install mode.
- The Operator or extension must not use webhooks.
Operators or extensions that use webhooks or that target a single or specified set of namespaces cannot be installed.
Prerequisites
- You have added a catalog to your cluster.
- You have downloaded a local copy of the catalog file.
- You have installed the jq CLI tool.
Procedure
Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps:
Get a list of channels from a selected package by running the following command:
$ jq -s '.[] | select( .schema == "olm.channel" ) | \
    select( .package == "<package_name>") | \
    .name' /<path>/<catalog_name>.json
Example 7.10. Example command
$ jq -s '.[] | select( .schema == "olm.channel" ) | \
    select( .package == "openshift-pipelines-operator-rh") | \
    .name' /home/username/rhoc.json
Example 7.11. Example output
"latest" "pipelines-1.11" "pipelines-1.12" "pipelines-1.13"
Get a list of the versions published in a channel by running the following command:
$ jq -s '.[] | select( .package == "<package_name>" ) | \
    select( .schema == "olm.channel" ) | \
    select( .name == "<channel_name>" ) | .entries | \
    .[] | .name' /<path>/<catalog_name>.json
Example 7.12. Example command
$ jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) | \
    select( .schema == "olm.channel" ) | select( .name == "latest" ) | \
    .entries | .[] | .name' /home/username/rhoc.json
Example 7.13. Example output
"openshift-pipelines-operator-rh.v1.11.1" "openshift-pipelines-operator-rh.v1.12.0" "openshift-pipelines-operator-rh.v1.12.1" "openshift-pipelines-operator-rh.v1.12.2" "openshift-pipelines-operator-rh.v1.13.0" "openshift-pipelines-operator-rh.v1.13.1"
Create a CR, similar to the following example:
Example pipelines-operator.yaml CR

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  channel: <channel>
  version: "<version>"
where:
- <channel>
- Optional: Specifies the channel, such as pipelines-1.11 or latest, for the package you want to install or update.
- <version>
- Optional: Specifies the version or version range, such as 1.11.1, 1.12.x, or >=1.12.1, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges".

Important: If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons:
- If multiple catalogs are installed on a cluster, OLM 1.0 does not include a mechanism to specify a catalog when you install an Operator or extension.
- Dependency resolution in Operator Lifecycle Manager (OLM) 1.0 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages.
Apply the CR to the cluster by running the following command:
$ oc apply -f pipelines-operator.yaml
Example output
operator.operators.operatorframework.io/pipelines-operator created
Verification
View the Operator or extension’s CR in the YAML format by running the following command:
$ oc get operator.operators.operatorframework.io pipelines-operator -o yaml
Note: If you specify a channel or define a version range in your Operator or extension's CR, OLM 1.0 does not display the resolved version installed on the cluster. Only the version and channel information specified in the CR are displayed. If you want to find the specific version that is installed, you must compare the SHA of the image of the spec.source.image.ref field to the image reference in the catalog.

Example 7.14. Example output
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","packageName":"openshift-pipelines-operator-rh","version":"1.11.x"}}
  creationTimestamp: "2024-01-30T20:06:09Z"
  generation: 1
  name: pipelines-operator
  resourceVersion: "44362"
  uid: 4272d228-22e1-419e-b9a7-986f982ee588
spec:
  channel: latest
  packageName: openshift-pipelines-operator-rh
  upgradeConstraintPolicy: Enforce
  version: 1.11.x
status:
  conditions:
  - lastTransitionTime: "2024-01-30T20:06:15Z"
    message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280"
    observedGeneration: 1
    reason: Success
    status: "True"
    type: Resolved
  - lastTransitionTime: "2024-01-30T20:06:31Z"
    message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280"
    observedGeneration: 1
    reason: Success
    status: "True"
    type: Installed
  installedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280
  resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280
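If you only need the installed bundle image reference from the status shown above, a JSONPath query is a convenient shortcut. This is a general oc pattern, not an OLM-specific command:

$ oc get operator.operators.operatorframework.io pipelines-operator \
    -o jsonpath='{.status.installedBundleResource}'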
Get information about your bundle deployment by running the following command:
$ oc get bundleDeployment pipelines-operator -o yaml
Example 7.15. Example output
apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  creationTimestamp: "2024-01-30T20:06:15Z"
  generation: 2
  name: pipelines-operator
  ownerReferences:
  - apiVersion: operators.operatorframework.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Operator
    name: pipelines-operator
    uid: 4272d228-22e1-419e-b9a7-986f982ee588
  resourceVersion: "44464"
  uid: 0a0c3525-27e2-4c93-bf57-55920a7707c0
spec:
  provisionerClassName: core-rukpak-io-plain
  template:
    metadata: {}
    spec:
      provisionerClassName: core-rukpak-io-registry
      source:
        image:
          ref: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280
        type: image
status:
  activeBundle: pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw
  conditions:
  - lastTransitionTime: "2024-01-30T20:06:15Z"
    message: Successfully unpacked the pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw Bundle
    reason: UnpackSuccessful
    status: "True"
    type: HasValidBundle
  - lastTransitionTime: "2024-01-30T20:06:28Z"
    message: Instantiated bundle pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw successfully
    reason: InstallationSucceeded
    status: "True"
    type: Installed
  - lastTransitionTime: "2024-01-30T20:06:40Z"
    message: BundleDeployment is healthy
    reason: Healthy
    status: "True"
    type: Healthy
  observedGeneration: 2
Additional resources
7.3.8. Updating an Operator
You can update your Operator or extension by manually editing the custom resource (CR) and applying the changes.
Prerequisites
- You have a catalog installed.
- You have downloaded a local copy of the catalog file.
- You have an Operator or extension installed.
- You have installed the jq CLI tool.
Procedure
Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps:
Get a list of channels from a selected package by running the following command:
$ jq -s '.[] | select( .schema == "olm.channel" ) | \
    select( .package == "<package_name>") | \
    .name' /<path>/<catalog_name>.json
Example 7.16. Example command
$ jq -s '.[] | select( .schema == "olm.channel" ) | \
    select( .package == "openshift-pipelines-operator-rh") | \
    .name' /home/username/rhoc.json
Example 7.17. Example output
"latest" "pipelines-1.11" "pipelines-1.12" "pipelines-1.13"
Get a list of the versions published in a channel by running the following command:
$ jq -s '.[] | select( .package == "<package_name>" ) | \
    select( .schema == "olm.channel" ) | \
    select( .name == "<channel_name>" ) | .entries | \
    .[] | .name' /<path>/<catalog_name>.json
Example 7.18. Example command
$ jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) | \
    select( .schema == "olm.channel" ) | select( .name == "latest" ) | \
    .entries | .[] | .name' /home/username/rhoc.json
Example 7.19. Example output
"openshift-pipelines-operator-rh.v1.11.1" "openshift-pipelines-operator-rh.v1.12.0" "openshift-pipelines-operator-rh.v1.12.1" "openshift-pipelines-operator-rh.v1.12.2" "openshift-pipelines-operator-rh.v1.13.0" "openshift-pipelines-operator-rh.v1.13.1"
Find out what version or channel is specified in your Operator or extension’s CR by running the following command:
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml
Example command
$ oc get operator.operators.operatorframework.io pipelines-operator -o yaml
Example 7.20. Example output
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","packageName":"openshift-pipelines-operator-rh","version":"1.11.1"}}
  creationTimestamp: "2024-02-06T17:47:15Z"
  generation: 2
  name: pipelines-operator
  resourceVersion: "84528"
  uid: dffe2c89-b9c4-427e-b694-ada0b37fc0a9
spec:
  channel: latest 1
  packageName: openshift-pipelines-operator-rh
  upgradeConstraintPolicy: Enforce
  version: 1.11.1 2
status:
  conditions:
  - lastTransitionTime: "2024-02-06T17:47:21Z"
    message: bundledeployment status is unknown
    observedGeneration: 2
    reason: InstallationStatusUnknown
    status: Unknown
    type: Installed
  - lastTransitionTime: "2024-02-06T17:50:58Z"
    message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280"
    observedGeneration: 2
    reason: Success
    status: "True"
    type: Resolved
  resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280
Note: If you specify a channel or define a version range in your Operator or extension's CR, OLM 1.0 does not display the resolved version installed on the cluster. Only the version and channel information specified in the CR are displayed. If you want to find the specific version that is installed, you must compare the SHA of the image of the spec.source.image.ref field to the image reference in the catalog.

Edit your CR by using one of the following methods:
If you want to pin your Operator or extension to a specific version, such as 1.12.1, edit your CR similar to the following example:

Example pipelines-operator.yaml CR

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: 1.12.1 1
- 1
- Update the version from 1.11.1 to 1.12.1.
If you want to define a range of acceptable update versions, edit your CR similar to the following example:
Example CR with a version range specified
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: ">1.11.1, <1.13" 1
- 1
- Specifies that the desired version range is greater than version 1.11.1 and less than 1.13. For more information, see "Support for version ranges" and "Version comparison strings".
If you want to update to the latest version that can be resolved from a channel, edit your CR similar to the following example:
Example CR with a specified channel
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  channel: pipelines-1.13 1
- 1
- Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed.
If you want to specify a channel and version or version range, edit your CR similar to the following example:
Example CR with a specified channel and version range
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  channel: latest
  version: "<1.13"
For more information, see "Example custom resources (CRs) that specify a target version".
Apply the update to the cluster by running the following command:
$ oc apply -f pipelines-operator.yaml
Example output
operator.operators.operatorframework.io/pipelines-operator configured
Tip: You can patch and apply the changes to your CR from the CLI by running the following command:

$ oc patch operator.operators.operatorframework.io/pipelines-operator -p \
    '{"spec":{"version":"1.12.1"}}' \
    --type=merge
Example output
operator.operators.operatorframework.io/pipelines-operator patched
Verification
Verify that the channel and version updates have been applied by running the following command:
$ oc get operator.operators.operatorframework.io pipelines-operator -o yaml
Example 7.21. Example output
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","packageName":"openshift-pipelines-operator-rh","version":"1.12.1"}}
  creationTimestamp: "2024-02-06T19:16:12Z"
  generation: 4
  name: pipelines-operator
  resourceVersion: "58122"
  uid: 886bbf73-604f-4484-9f87-af6ce0f86914
spec:
  channel: latest
  packageName: openshift-pipelines-operator-rh
  upgradeConstraintPolicy: Enforce
  version: 1.12.1 1
status:
  conditions:
  - lastTransitionTime: "2024-02-06T19:30:57Z"
    message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a"
    observedGeneration: 3
    reason: Success
    status: "True"
    type: Installed
  - lastTransitionTime: "2024-02-06T19:30:57Z"
    message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a"
    observedGeneration: 3
    reason: Success
    status: "True"
    type: Resolved
  installedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a
  resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a
- 1
- Verify that the version is updated to `1.12.1`.
Troubleshooting
If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator or extension:
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml
Example 7.22. Example output
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","packageName":"openshift-pipelines-operator-rh","version":"2.0.0"}}
  creationTimestamp: "2024-02-06T17:47:15Z"
  generation: 1
  name: pipelines-operator
  resourceVersion: "82667"
  uid: dffe2c89-b9c4-427e-b694-ada0b37fc0a9
spec:
  channel: latest
  packageName: openshift-pipelines-operator-rh
  upgradeConstraintPolicy: Enforce
  version: 2.0.0
status:
  conditions:
  - lastTransitionTime: "2024-02-06T17:47:21Z"
    message: installation has not been attempted due to failure to gather data for resolution
    observedGeneration: 1
    reason: InstallationStatusUnknown
    status: Unknown
    type: Installed
  - lastTransitionTime: "2024-02-06T17:47:21Z"
    message: no package "openshift-pipelines-operator-rh" matching version "2.0.0" found in channel "latest"
    observedGeneration: 1
    reason: ResolutionFailed
    status: "False"
    type: Resolved
7.3.8.1. Support for semantic versioning
Support for semantic versioning (semver) is enabled in OLM 1.0 by default. Operator and extension authors can use the semver standard to define compatible updates.
Operator Lifecycle Manager (OLM) 1.0 can use an Operator or extension’s version number to determine if an update can be resolved successfully.
Cluster administrators can define a range of acceptable versions to install and automatically update. For Operators and extensions that follow the semver standard, you can use comparison strings to specify a desired version range.
OLM 1.0 does not support automatic updates to the next major version. If you want to perform a major version update, you must verify and apply the update manually. For more information, see "Forcing an update or rollback".
7.3.8.1.1. Major version zero releases
The semver standard specifies that major version zero releases (`0.y.z`) are reserved for initial development. During the initial development stage, the API is not stable and breaking changes might be introduced in any published version. As a result, major version zero releases apply a special set of update conditions.
Update conditions for major version zero releases
- You cannot apply automatic updates when the major and minor versions are both zero, such as `0.0.*`. For example, automatic updates with the version range of `>=0.0.1 <0.1.0` are not allowed.
- You cannot apply automatic updates from one minor version to another within a major version zero release. For example, OLM 1.0 does not automatically apply an update from `0.1.0` to `0.2.0`.
- You can apply automatic updates between patch versions within the same minor version, such as within the ranges `>=0.1.0 <0.2.0` or `>=0.2.0 <0.3.0`.
When an automatic update is blocked by OLM 1.0, you must manually verify and force the update by editing the Operator or extension’s custom resource (CR).
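For example, assuming a hypothetical package that is pinned at `0.1.0`, a CR similar to the following sketch forces the otherwise blocked update to `0.2.0` by setting the upgrade constraint policy to `Ignore`; the name, package, and versions are illustrative only. For details, see "Forcing an update or rollback".

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: example-operator
spec:
  packageName: example-operator
  version: 0.2.0
  upgradeConstraintPolicy: Ignore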
7.3.8.2. Support for version ranges
In Operator Lifecycle Manager (OLM) 1.0, you can specify a version range by using a comparison string in an Operator or extension’s custom resource (CR). If you specify a version range in the CR, OLM 1.0 installs or updates to the latest version of the Operator that can be resolved within the version range.
Resolved version workflow
- The resolved version is the latest version of the Operator that satisfies the dependencies and constraints of the Operator and the environment.
- An Operator update within the specified range is automatically installed if it is resolved successfully.
- An update is not installed if it is outside of the specified range or if it cannot be resolved successfully.
For more information about dependency and constraint resolution in OLM 1.0, see "Dependency resolution in OLM 1.0".
7.3.8.3. Version comparison strings
You can define a version range by adding a comparison string to the `spec.version` field in an Operator or extension's custom resource (CR). A comparison string is a list of space- or comma-separated values and one or more comparison operators enclosed in double quotation marks (`"`). You can add another comparison string by including an `OR`, or double vertical bar (`||`), comparison operator between the strings.
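For example, the following CR sketch uses the `||` operator with illustrative version ranges to accept a release from either the 1.10 or the 1.12 minor version while skipping 1.11:

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: ">=1.10.0, <1.11.0 || >=1.12.0, <1.13.0"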
| Comparison operator | Definition |
|---|---|
| `=` | Equal to |
| `!=` | Not equal to |
| `>` | Greater than |
| `<` | Less than |
| `>=` | Greater than or equal to |
| `<=` | Less than or equal to |
You can specify a version range in an Operator or extension’s CR by using a range comparison similar to the following example:
Example version range comparison
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: ">=1.11, <1.13"
You can use wildcard characters in all types of comparison strings. OLM 1.0 accepts `x`, `X`, and asterisks (`*`) as wildcard characters. When you use a wildcard character with the equal sign (`=`) comparison operator, you define a comparison at the patch or minor version level.
| Wildcard comparison | Matching string |
|---|---|
| `1.2.x` | `>=1.2.0, <1.3.0` |
| `>=1.2.x` | `>=1.2.0` |
| `<=2.x` | `<3` |
| `*` | `>=0.0.0` |
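For example, the following CR sketch uses a wildcard comparison to allow automatic updates to any patch release of version 1.11, equivalent to `>=1.11.0, <1.12.0`; the package values mirror the earlier examples:

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: "1.11.x"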
You can make patch release comparisons by using the tilde (`~`) comparison operator. Patch release comparisons constrain updates to the patch level when a minor version is specified; if the minor version is omitted, the comparison allows updates up to the next major version.
| Patch release comparison | Matching string |
|---|---|
| `~1.2.3` | `>=1.2.3, <1.3.0` |
| `~1` | `>=1, <2` |
| `~2.3` | `>=2.3, <2.4` |
| `~1.2.x` | `>=1.2.0, <1.3.0` |
| `~1.x` | `>=1, <2` |
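Similarly, a tilde comparison such as the following sketch restricts updates to patch releases of the 1.11 minor version, equivalent to `>=1.11.0, <1.12.0`; the values are illustrative:

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: "~1.11.0"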
You can use the caret (`^`) comparison operator to make a comparison for a major release. If you use a major release comparison before the first stable release is published, the minor versions define the API's level of stability. In the semantic versioning (semver) specification, the first stable release is published as the `1.0.0` version.
| Major release comparison | Matching string |
|---|---|
| `^1.2.3` | `>=1.2.3, <2.0.0` |
| `^1.2.x` | `>=1.2.0, <2.0.0` |
| `^2.3` | `>=2.3, <3` |
| `^2.x` | `>=2.0.0, <3` |
| `^0.2.3` | `>=0.2.3, <0.3.0` |
| `^0.2` | `>=0.2.0, <0.3.0` |
| `^0.0.3` | `>=0.0.3, <0.0.4` |
| `^0.0` | `>=0.0.0, <0.1.0` |
| `^0` | `>=0.0.0, <1.0.0` |
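As a further sketch, a caret comparison accepts any resolvable release from the specified version up to, but not including, the next major version; the values here are illustrative:

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: "^1.11.0"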
7.3.8.4. Example custom resources (CRs) that specify a target version
In Operator Lifecycle Manager (OLM) 1.0, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR).
You can define a target version by specifying any of the following fields:
- Channel
- Version number
- Version range
If you specify a channel in the CR, OLM 1.0 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release that can be resolved from the channel.
Example CR with a specified channel
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  channel: latest 1
- 1
- Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed.
If you specify the Operator or extension’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: 1.11.1 1
- 1
- Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version.
If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM 1.0 installs the latest version of an Operator or extension that can be resolved by the Operator Controller.
Example CR with a version range specified
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: pipelines-operator
spec:
  packageName: openshift-pipelines-operator-rh
  version: ">1.11.1" 1
- 1
- Specifies that the desired version range is greater than version `1.11.1`. For more information, see "Support for version ranges".
After you create or update a CR, apply the configuration file by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
7.3.8.5. Forcing an update or rollback
OLM 1.0 does not support automatic updates to the next major version or rollbacks to an earlier version. If you want to perform a major version update or rollback, you must verify and force the update manually.
You must verify the consequences of forcing a manual update or rollback. Failure to verify a forced update or rollback might have catastrophic consequences such as data loss.
Prerequisites
- You have a catalog installed.
- You have an Operator or extension installed.
Procedure
Edit the custom resource (CR) of your Operator or extension as shown in the following example:
Example CR
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: <operator_name> 1
spec:
  packageName: <package_name> 2
  version: <version> 3
  upgradeConstraintPolicy: Ignore 4
- 1
- Specifies the name of the Operator or extension, such as `pipelines-operator`.
- 2
- Specifies the package name, such as `openshift-pipelines-operator-rh`.
- 3
- Specifies the blocked update or rollback version.
- 4
- Optional: Specifies the upgrade constraint policy. To force an update or rollback, set the field to `Ignore`. If unspecified, the default setting is `Enforce`.
Apply the changes to your Operator or extension's CR by running the following command:
$ oc apply -f <extension_name>.yaml
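You can then confirm whether the forced update or rollback resolved and installed as intended by inspecting the CR status, as in the earlier verification steps:

$ oc get operator.operators.operatorframework.io <operator_name> -o yaml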
7.3.9. Deleting an Operator
You can delete an Operator and its custom resource definitions (CRDs) by deleting the Operator’s custom resource (CR).
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Delete an Operator and its CRDs by running the following command:
$ oc delete operator.operators.operatorframework.io <operator_name>
Example output
operator.operators.operatorframework.io "<operator_name>" deleted
Verification
Run the following commands to verify that your Operator and its resources were deleted:
Verify that the Operator is deleted by running the following command:
$ oc get operator.operators.operatorframework.io
Example output
No resources found
Verify that the Operator’s system namespace is deleted by running the following command:
$ oc get ns <operator_name>-system
Example output
Error from server (NotFound): namespaces "<operator_name>-system" not found
7.3.10. Deleting a catalog
You can delete a catalog by deleting its custom resource (CR).
Prerequisites
- You have a catalog installed.
Procedure
Delete a catalog by running the following command:
$ oc delete catalog <catalog_name>
Example output
catalog.catalogd.operatorframework.io "my-catalog" deleted
Verification
Verify that the catalog is deleted by running the following command:
$ oc get catalog
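Assuming no other catalogs remain on the cluster, the command returns output similar to the following:

Example output

No resources found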
7.4. Managing plain bundles in OLM 1.0 (Technology Preview)
In Operator Lifecycle Manager (OLM) 1.0, a plain bundle is a static collection of arbitrary Kubernetes manifests in YAML format. The experimental `olm.bundle.mediatype` property of the `olm.bundle` schema object differentiates a plain bundle (`plain+v0`) from a regular (`registry+v1`) bundle.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
As a cluster administrator, you can build and publish a file-based catalog that includes a plain bundle image by completing the following procedures:
- Build a plain bundle image.
- Create a file-based catalog.
- Add the plain bundle image to your file-based catalog.
- Build your catalog as an image.
- Publish your catalog image.
7.4.1. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with `cluster-admin` permissions

  Note: For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.

- The `TechPreviewNoUpgrade` feature set enabled on the cluster

  Warning: Enabling the `TechPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.

- The OpenShift CLI (`oc`) installed on your workstation
- The `opm` CLI installed on your workstation
- Docker or Podman installed on your workstation
- Push access to a container registry, such as Quay
- Kubernetes manifests for your bundle in a flat directory at the root of your project, similar to the following structure:
Example directory structure
manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml
7.4.2. Building a plain bundle image from an image source
The Operator Controller currently supports installing plain bundles created only from a plain bundle image.
Procedure
At the root of your project, create a Dockerfile that can build a bundle image:
Example `plainbundle.Dockerfile`

FROM scratch 1
ADD manifests /manifests
- 1
- Use the `FROM scratch` directive to make the size of the image smaller. No other files or directories are required in the bundle image.
Build an Open Container Initiative (OCI)-compliant image by using your preferred build tool, similar to the following example:
$ podman build -f plainbundle.Dockerfile -t \
    quay.io/<organization_name>/<repository_name>:<image_tag> . 1
- 1
- Use an image tag that references a repository where you have push access privileges.
Push the image to your remote registry by running the following command:
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
7.4.3. Creating a file-based catalog
If you do not have a file-based catalog, you must perform the following steps to initialize the catalog.
Procedure
Create a directory for the catalog by running the following command:
$ mkdir <catalog_dir>
Generate a Dockerfile that can build a catalog image by running the `opm generate dockerfile` command at the same directory level as the catalog directory that you created in the previous step:

$ opm generate dockerfile <catalog_dir> \
    -i registry.redhat.io/openshift4/ose-operator-registry:v4.14 1
- 1
- Specify the official Red Hat base image by using the `-i` flag; otherwise, the Dockerfile uses the default upstream image.
Note: The generated Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
Example directory structure
.
├── <catalog_dir>
└── <catalog_dir>.Dockerfile
Populate the catalog with the package definition for your extension by running the `opm init` command:

$ opm init <extension_name> \
    --output json \
    > <catalog_dir>/index.json
This command generates an `olm.package` declarative config blob in the specified catalog configuration file.
7.4.4. Adding a plain bundle to a file-based catalog
The `opm render` command does not support adding plain bundles to catalogs. You must manually add plain bundles to your file-based catalog, as shown in the following procedure.
Procedure
Verify that the `index.json` or `index.yaml` file for your catalog is similar to the following example:

Example `<catalog_dir>/index.json` file

{
    "schema": "olm.package",
    "name": "<extension_name>",
    "defaultChannel": ""
}
To create an `olm.bundle` blob, edit your `index.json` or `index.yaml` file, similar to the following example:

Example `<catalog_dir>/index.json` file with `olm.bundle` blob

{
    "schema": "olm.bundle",
    "name": "<extension_name>.v<version>",
    "package": "<extension_name>",
    "image": "quay.io/<organization_name>/<repository_name>:<image_tag>",
    "properties": [
        {
            "type": "olm.package",
            "value": {
                "packageName": "<extension_name>",
                "version": "<bundle_version>"
            }
        },
        {
            "type": "olm.bundle.mediatype",
            "value": "plain+v0"
        }
    ]
}
To create an `olm.channel` blob, edit your `index.json` or `index.yaml` file, similar to the following example:

Example `<catalog_dir>/index.json` file with `olm.channel` blob

{
    "schema": "olm.channel",
    "name": "<desired_channel_name>",
    "package": "<extension_name>",
    "entries": [
        {
            "name": "<extension_name>.v<version>"
        }
    ]
}
Verification
Open your `index.json` or `index.yaml` file and ensure it is similar to the following example:

Example `<catalog_dir>/index.json` file

{
    "schema": "olm.package",
    "name": "example-extension",
    "defaultChannel": "preview"
}
{
    "schema": "olm.bundle",
    "name": "example-extension.v0.0.1",
    "package": "example-extension",
    "image": "quay.io/example-org/example-extension-bundle:v0.0.1",
    "properties": [
        {
            "type": "olm.package",
            "value": {
                "packageName": "example-extension",
                "version": "0.0.1"
            }
        },
        {
            "type": "olm.bundle.mediatype",
            "value": "plain+v0"
        }
    ]
}
{
    "schema": "olm.channel",
    "name": "preview",
    "package": "example-extension",
    "entries": [
        {
            "name": "example-extension.v0.0.1"
        }
    ]
}
Validate your catalog by running the following command:
$ opm validate <catalog_dir>
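If the catalog is valid, the command returns no output. You can confirm this by checking the exit code in your shell:

$ echo $?

Example output

0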
7.4.5. Building and publishing a file-based catalog
Procedure
Build your file-based catalog as an image by running the following command:
$ podman build -f <catalog_dir>.Dockerfile -t \
    quay.io/<organization_name>/<repository_name>:<image_tag> .
Push your catalog image by running the following command:
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
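After the image is published, you can make the catalog available on a cluster by creating a Catalog object that references it, similar to the following sketch. The field layout follows the catalogd Catalog API described earlier in this chapter; the object name is illustrative:

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: my-plain-catalog
spec:
  source:
    type: image
    image:
      ref: quay.io/<organization_name>/<repository_name>:<image_tag>

Applying this object with `oc apply -f` makes the catalog's content discoverable by on-cluster clients.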