Chapter 7. OLM 1.0 (Technology Preview)
7.1. About Operator Lifecycle Manager 1.0 (Technology Preview)
Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.14 introduces components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as OLM 1.0. This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
During this Technology Preview phase of OLM 1.0 in OpenShift Container Platform 4.14, administrators can explore the following features:
- Fully declarative model that supports GitOps workflows
OLM 1.0 simplifies Operator management through two key APIs:
- A new Operator API, provided as operator.operators.operatorframework.io by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles.
- The Catalog API, provided by the new catalogd component, serves as the foundation for OLM 1.0, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges.
For more information, see Operator Controller and Catalogd.
- Improved control over Operator updates
- With improved insight into catalog content, administrators can specify target versions for installation and updates. This grants administrators more control over the target version of Operator updates. For more information, see Updating an Operator.
- Flexible Operator packaging format
Administrators can use file-based catalogs to install and manage the following types of content:
- OLM-based Operators, similar to the existing OLM experience
- Plain bundles, which are static collections of arbitrary Kubernetes manifests
In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see Installing an Operator from a catalog and Managing plain bundles.
7.1.1. Purpose
The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster and platform-as-a-service (PaaS) administrators throughout the lifecycle of the underlying cluster.
The initial version of OLM, which launched with OpenShift Container Platform 4 and is included by default, focused on providing unique support for these specific needs for a particular type of cluster extension, known as Operators. Operators are classified as one or more Kubernetes controllers, shipping with one or more API extensions, as CustomResourceDefinition (CRD) objects, to provide additional functionality to the cluster.
After running in production clusters for many releases, the next generation of OLM aims to encompass lifecycles for cluster extensions that are not just Operators.
7.2. Components and architecture
7.2.1. OLM 1.0 components overview (Technology Preview)
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator Lifecycle Manager (OLM) 1.0 comprises the following component projects:
- Operator Controller
- Operator Controller is the central component of OLM 1.0 that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from each of the following components.
- RukPak
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
- Catalogd
- Catalogd is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the OLM 1.0 microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content.
7.2.2. Operator Controller (Technology Preview)
Operator Controller is the central component of Operator Lifecycle Manager (OLM) 1.0 and consumes the other OLM 1.0 components, RukPak and catalogd. It extends Kubernetes with an API through which users can install Operators and extensions.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.2.1. Operator API
Operator Controller provides a new Operator API object, which is a single resource that represents an instance of an installed Operator. This operator.operators.operatorframework.io API streamlines management of installed Operators by consolidating user-facing APIs into a single object.
In OLM 1.0, Operator objects are cluster-scoped. This differs from earlier OLM versions where Operators could be either namespace-scoped or cluster-scoped, depending on the configuration of their related Subscription and OperatorGroup objects.
For more information about the earlier behavior, see Multitenancy and Operator colocation.
Example Operator object
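A minimal sketch of the resource, assuming the quay-operator package and the v1alpha1 API version as placeholders:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12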
When using the OpenShift CLI (oc), the Operator resource provided with OLM 1.0 during this Technology Preview phase requires specifying the full <resource>.<group> format: operator.operators.operatorframework.io. For example:
$ oc get operator.operators.operatorframework.io
If you specify only the Operator resource without the API group, the CLI returns results for an earlier API (operator.operators.coreos.com) that is unrelated to OLM 1.0.
7.2.2.1.1. About target versions in OLM 1.0
In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).
If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.
Example CR with a specified channel
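A minimal sketch, assuming the quay-operator package and a stable-3.8 channel as placeholder values:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  channel: stable-3.8 1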
- 1
- Installs the latest release published to the specified channel. Updates to the channel are automatically installed.
If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
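A minimal sketch, assuming the same placeholder package:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12 1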
- 1
- Specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.
If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.
In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.
You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:
Command syntax
$ oc get package <catalog_name>-<package_name> -o yaml
After you create or update a CR, create or configure the Operator by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
Troubleshooting
If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml
Example output
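A sketch of the relevant portion, assuming a deliberately invalid target version; the exact condition fields and messages vary by release:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 999.99.9
status:
  conditions:
  - message: no package "quay-operator" matching version "999.99.9" found # illustrative message
    reason: ResolutionFailed
    status: "False"
    type: Resolved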
7.2.3. Rukpak (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses the RukPak component and its resources to manage cloud-native content.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.3.1. About RukPak
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs.
Common terminology
- Bundle
- A collection of Kubernetes manifests that define content to be deployed to a cluster
- Bundle image
- A container image that contains a bundle within its filesystem
- Bundle Git repository
- A Git repository that contains a bundle within a directory
- Provisioner
- Controllers that install and manage content on a Kubernetes cluster
- Bundle deployment
- Generates deployed instances of a bundle
7.2.3.2. About provisioners
RukPak consists of a series of controllers, known as provisioners, that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment. These components work together to bring content onto the cluster and install it, generating resources within the cluster.
Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles.
Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster.
A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources.
7.2.3.3. Bundle
A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for a pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content.
Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type.
Example Bundle object configured to work with the plain provisioner
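A minimal sketch, assuming an illustrative image reference and the plain provisioner class name core-rukpak-io-plain:
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: my-bundle
spec:
  source:
    type: image
    image:
      ref: my-bundle@sha256:xyz123 # illustrative reference
  provisionerClassName: core-rukpak-io-plain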
Bundles are considered immutable after they are created.
7.2.3.3.1. Bundle immutability
After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object.
Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status, are updated during the bundle’s lifecycle; it is only the spec field that is considered immutable.
Applying a Bundle object and then attempting to update its spec should fail. For example, the following command creates a bundle:
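A sketch of such a command, assuming the upstream operator-framework combo repository that the output and patch below refer to:
$ oc apply -f -<<EOF
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.2
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain
EOF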
Example output
bundle.core.rukpak.io/combo-tag-ref created
Then, patching the bundle to point to a newer tag returns an error:
$ oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}'
Example output
Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable
The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating it in-place.
7.2.3.3.1.1. Further immutability considerations
While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario:
- A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object.
- The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit.
- A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod.
If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content.
This is similar to pod behavior, where one of the pod’s container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it.
To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle.
7.2.3.3.2. Plain bundle spec
A plain bundle in RukPak is a collection of static, arbitrary Kubernetes YAML manifests in a given directory.
The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0, combines the type of bundle (plain) with the current schema version (v0).
The plain+v0 bundle format is at schema version v0, which means it is an experimental format that is subject to change.
For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application.
Example plain+v0 bundle file tree
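A representative layout, with illustrative manifest file names:
manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml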
The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories.
Do not include any content in the manifests/ directory of a plain bundle that is not a static manifest. Otherwise, a failure will occur when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command will result in an error. Multi-object YAML or JSON files are also valid.
7.2.3.3.3. Registry bundle spec
A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format.
7.2.3.4. BundleDeployment
A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and limit access, by using RBAC, to the BundleDeployment API to only those who require those permissions.
The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle.
Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept.
The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle are defined by the provisioner that is configured to watch that bundle deployment.
Example BundleDeployment object configured to work with the plain provisioner
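A minimal sketch, assuming the same illustrative image reference as the Bundle example:
apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-bundle-deployment
spec:
  provisionerClassName: core-rukpak-io-plain
  template:
    metadata:
      labels:
        app: my-bundle
    spec:
      source:
        type: image
        image:
          ref: my-bundle@sha256:xyz123 # illustrative reference
      provisionerClassName: core-rukpak-io-plain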
7.2.4. Dependency resolution in OLM 1.0 (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses a dependency manager for resolving constraints over catalogs of RukPak bundles.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.4.1. Concepts
Users expect that the package manager never does the following:
- Install a package whose dependencies cannot be fulfilled or that conflict with the dependencies of another package
- Install a package whose constraints cannot be met by the current set of installable packages
- Update a package in a way that breaks another that depends on it
7.2.4.1.1. Example: Successful resolution
A user wants to install packages A and B that have the following dependencies:
Package A      | Package B
↓ (depends on) | ↓ (depends on)
Package C      | Package D
Additionally, the user wants to pin the version of A to v0.1.0.
Packages and constraints passed to OLM 1.0
Packages
- A
- B
Constraints
- A v0.1.0 depends on C v0.1.0
- A pinned to v0.1.0
- B depends on D
Output
Resolution set:
- A v0.1.0
- B latest
- C v0.1.0
- D latest
7.2.4.1.2. Example: Unsuccessful resolution
A user wants to install packages A and B that have the following dependencies:
Package A      | Package B
↓ (depends on) | ↓ (depends on)
Package C      | Package C
Additionally, the user wants to pin the version of A to v0.1.0.
Packages and constraints passed to OLM 1.0
Packages
- A
- B
Constraints
- A v0.1.0 depends on C v0.1.0
- A pinned to v0.1.0
- B latest depends on C v0.2.0
Output
Resolution set:
- Unable to resolve because A v0.1.0 requires C v0.1.0, which conflicts with B latest requiring C v0.2.0
7.2.5. Catalogd (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses the catalogd component and its resources to manage Operator and extension catalogs.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.5.1. About catalogs in OLM 1.0
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
7.2.5.1.1. Red Hat-provided Operator catalogs in OLM 1.0
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show how to create catalog resources for OLM 1.0.
Example Red Hat Operators catalog
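A minimal sketch, assuming the v1alpha1 Catalog API and the v4.14 index image:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14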
Example Certified Operators catalog
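A similar sketch, assuming the v4.14 certified index image:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.14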
Example Community Operators catalog
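A similar sketch, assuming the v4.14 community index image:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.14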
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml 1
- 1
- Specifies the catalog CR, such as redhat-operators.yaml.
7.3. Installing an Operator from a catalog in OLM 1.0 (Technology Preview)
Cluster administrators can add catalogs, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.
In the current Technology Preview release of Operator Lifecycle Manager (OLM) 1.0, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.3.1. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions
  Note: For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.
- The TechPreviewNoUpgrade feature set enabled on the cluster
  Warning: Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
- The OpenShift CLI (oc) installed on your workstation
7.3.2. About catalogs in OLM 1.0
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
7.3.2.1. Red Hat-provided Operator catalogs in OLM 1.0
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show how to create catalog resources for OLM 1.0.
Example Red Hat Operators catalog
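A minimal sketch, assuming the v1alpha1 Catalog API and the v4.14 index image:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14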
Example Certified Operators catalog
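A similar sketch, assuming the v4.14 certified index image:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.14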
Example Community Operators catalog
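A similar sketch, assuming the v4.14 community index image:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.14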
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml 1
- 1
- Specifies the catalog CR, such as redhat-operators.yaml.
The following procedures use the Red Hat Operators catalog and the Quay Operator as examples.
7.3.3. About target versions in OLM 1.0
In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).
If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.
Example CR with a specified channel
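A minimal sketch, assuming the quay-operator package and a stable-3.8 channel as placeholder values:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  channel: stable-3.8 1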
- 1
- Installs the latest release published to the specified channel. Updates to the channel are automatically installed.
If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
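A minimal sketch, assuming the same placeholder package:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12 1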
- 1
- Specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.
If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.
In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.
You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:
Command syntax
$ oc get package <catalog_name>-<package_name> -o yaml
After you create or update a CR, create or configure the Operator by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
Troubleshooting
If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml
Example output
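A sketch of the relevant portion, assuming a deliberately invalid target version; the exact condition fields and messages vary by release:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 999.99.9
status:
  conditions:
  - message: no package "quay-operator" matching version "999.99.9" found # illustrative message
    reason: ResolutionFailed
    status: "False"
    type: Resolved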
7.3.4. Adding a catalog to a cluster
To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster.
Procedure
Create a catalog custom resource (CR), similar to the following example:
Example redhat-operators.yaml
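A minimal sketch, assuming the v1alpha1 Catalog API and the v4.14 Red Hat Operators index image:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14 1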
- 1
- Specify the catalog’s image in the spec.source.image field.
Add the catalog to your cluster by running the following command:
$ oc apply -f redhat-operators.yaml
Example output
catalog.catalogd.operatorframework.io/redhat-operators created
Verification
Run the following commands to verify the status of your catalog:
Check if your catalog is available by running the following command:
$ oc get catalog
Example output
NAME               AGE
redhat-operators   20s
Check the status of your catalog by running the following command:
$ oc get catalogs.catalogd.operatorframework.io -o yaml
7.3.5. Finding Operators to install from a catalog
After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install.
Prerequisite
- You have added a catalog to your cluster.
Procedure
Get a list of the Operators and extensions in the catalog by running the following command:
$ oc get packages
Inspect the contents of an Operator or extension’s custom resource (CR) by running the following command:
$ oc get package <catalog_name>-<package_name> -o yaml
Example command
$ oc get package redhat-operators-quay-operator -o yaml
7.3.6. Installing an Operator
You can install an Operator from a catalog by creating an Operator custom resource (CR) and applying it to the cluster.
Prerequisites
- You have added a catalog to your cluster.
- You have inspected the details of an Operator to find what version you want to install.
Procedure
Create an Operator CR, similar to the following example:
Example test-operator.yaml CR
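A minimal sketch, assuming the quay-operator package at version 3.8.12, matching the pod name shown in the verification step:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12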
Apply the Operator CR to the cluster by running the following command:
$ oc apply -f test-operator.yaml
Example output
operator.operators.operatorframework.io/quay-example created
Verification
View the Operator’s CR in the YAML format by running the following command:
$ oc get operator.operators.operatorframework.io/quay-example -o yaml
Get information about your Operator’s controller manager pod by running the following command:
$ oc get pod -n quay-operator-system
Example output
NAME                                     READY   STATUS    RESTARTS   AGE
quay-operator.v3.8.12-6677b5c98f-2kdtb   1/1     Running   0          2m28s
7.3.7. Updating an Operator
You can update your Operator by manually editing your Operator’s custom resource (CR) and applying the changes.
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Inspect your Operator’s package contents to find which channels and versions are available for updating by running the following command:
$ oc get package <catalog_name>-<package_name> -o yaml
Example command
$ oc get package redhat-operators-quay-operator -o yaml
Edit your Operator’s CR to update the version to 3.9.1, as shown in the following example:
Example test-operator.yaml CR
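A sketch, assuming the same quay-example CR that was created during installation:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.9.1 1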
- 1
- Update the version to 3.9.1.
Apply the update to the cluster by running the following command:
$ oc apply -f test-operator.yaml
Example output
operator.operators.operatorframework.io/quay-example configured
Tip: You can patch and apply the changes to your Operator’s version from the CLI by running the following command:
$ oc patch operator.operators.operatorframework.io/quay-example -p \
    '{"spec":{"version":"3.9.1"}}' \
    --type=merge
Example output
operator.operators.operatorframework.io/quay-example patched
Verification
Verify that the channel and version updates have been applied by running the following command:
$ oc get operator.operators.operatorframework.io/quay-example -o yaml
Example output
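A sketch of the relevant spec portion, assuming the same CR; the full output also includes metadata and status fields:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.9.1 1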
- 1
- Verify that the version is updated to 3.9.1.
7.3.8. Deleting an Operator
You can delete an Operator and its custom resource definitions (CRDs) by deleting the Operator’s custom resource (CR).
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Delete an Operator and its CRDs by running the following command:
$ oc delete operator.operators.operatorframework.io quay-example
Example output
operator.operators.operatorframework.io "quay-example" deleted
Verification
Run the following commands to verify that your Operator and its resources were deleted:
Verify the Operator is deleted by running the following command:
$ oc get operator.operators.operatorframework.io
Example output
No resources found
Verify that the Operator’s system namespace is deleted by running the following command:
$ oc get ns quay-operator-system
Example output
Error from server (NotFound): namespaces "quay-operator-system" not found
7.3.9. Deleting a catalog
You can delete a catalog by deleting its custom resource (CR).
Prerequisites
- You have a catalog installed.
Procedure
Delete a catalog by running the following command:
$ oc delete catalog <catalog_name>
Example output
catalog.catalogd.operatorframework.io "my-catalog" deleted
Verification
Verify the catalog is deleted by running the following command:
$ oc get catalog
7.4. Managing plain bundles in OLM 1.0 (Technology Preview)
In Operator Lifecycle Manager (OLM) 1.0, a plain bundle is a static collection of arbitrary Kubernetes manifests in YAML format. The experimental olm.bundle.mediatype property of the olm.bundle schema object differentiates a plain bundle (plain+v0) from a regular (registry+v1) bundle.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
As a cluster administrator, you can build and publish a file-based catalog that includes a plain bundle image by completing the following procedures:
- Build a plain bundle image.
- Create a file-based catalog.
- Add the plain bundle image to your file-based catalog.
- Build your catalog as an image.
- Publish your catalog image.
7.4.1. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions
  Note: For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.
- The TechPreviewNoUpgrade feature set enabled on the cluster
  Warning: Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
- The OpenShift CLI (oc) installed on your workstation
- The opm CLI installed on your workstation
- Docker or Podman installed on your workstation
- Push access to a container registry, such as Quay
- Kubernetes manifests for your bundle in a flat directory at the root of your project, similar to the following structure:
Example directory structure
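A representative layout, with illustrative manifest file names:
manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml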
7.4.2. Building a plain bundle image from an image source
The Operator Controller currently supports installing plain bundles created only from a plain bundle image.
Procedure
At the root of your project, create a Dockerfile that can build a bundle image:
Example plainbundle.Dockerfile
FROM scratch 1
ADD manifests /manifests
- 1
- Use the FROM scratch directive to make the size of the image smaller. No other files or directories are required in the bundle image.
Build an Open Container Initiative (OCI)-compliant image by using your preferred build tool, similar to the following example:
$ podman build -f plainbundle.Dockerfile -t \
    quay.io/<organization_name>/<repository_name>:<image_tag> . 1
- 1
- Use an image tag that references a repository where you have push access privileges.
Push the image to your remote registry by running the following command:
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
7.4.3. Creating a file-based catalog
If you do not have a file-based catalog, you must perform the following steps to initialize the catalog.
Procedure
Create a directory for the catalog by running the following command:
$ mkdir <catalog_dir>
Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command in the same directory level as the previous step:
$ opm generate dockerfile <catalog_dir> \
    -i registry.redhat.io/openshift4/ose-operator-registry:v4.14 1
- 1
- Specify the official Red Hat base image by using the -i flag; otherwise, the Dockerfile uses the default upstream image.
Note: The generated Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
Example directory structure
.
├── <catalog_dir>
└── <catalog_dir>.Dockerfile
Populate the catalog with the package definition for your extension by running the opm init command:
$ opm init <extension_name> \
    --output json \
    > <catalog_dir>/index.json
This command generates an olm.package declarative config blob in the specified catalog configuration file.
7.4.4. Adding a plain bundle to a file-based catalog
The opm render command does not support adding plain bundles to catalogs. You must manually add plain bundles to your file-based catalog, as shown in the following procedure.
Procedure
Verify that the index.json or index.yaml file for your catalog is similar to the following example:
Example <catalog_dir>/index.json file
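A sketch with placeholder values; opm init generates the defaultChannel field as an empty string unless one is specified:
{
    "schema": "olm.package",
    "name": "<extension_name>",
    "defaultChannel": ""
}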
To create an olm.bundle blob, edit your index.json or index.yaml file, similar to the following example:
Example <catalog_dir>/index.json file with olm.bundle blob
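A sketch with placeholder values; the experimental olm.bundle.mediatype property marks the bundle as plain+v0:
{
    "schema": "olm.package",
    "name": "<extension_name>",
    "defaultChannel": ""
}
{
    "schema": "olm.bundle",
    "name": "<extension_name>.v<version>",
    "package": "<extension_name>",
    "image": "quay.io/<organization_name>/<repository_name>:<image_tag>",
    "properties": [
        {
            "type": "olm.package",
            "value": {
                "packageName": "<extension_name>",
                "version": "<bundle_version>"
            }
        },
        {
            "type": "olm.bundle.mediatype",
            "value": "plain+v0"
        }
    ]
}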
To create an olm.channel blob, edit your index.json or index.yaml file, similar to the following example:
Example <catalog_dir>/index.json file with olm.channel blob
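A sketch with placeholder values; each channel entry name must match the name of an olm.bundle blob:
{
    "schema": "olm.channel",
    "name": "<desired_channel_name>",
    "package": "<extension_name>",
    "entries": [
        {
            "name": "<extension_name>.v<version>"
        }
    ]
}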
Verification
Open your index.json or index.yaml file and ensure it is similar to the following example:
Example <catalog_dir>/index.json file
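A sketch of the assembled file, combining the three blobs above with placeholder values:
{
    "schema": "olm.package",
    "name": "<extension_name>",
    "defaultChannel": ""
}
{
    "schema": "olm.bundle",
    "name": "<extension_name>.v<version>",
    "package": "<extension_name>",
    "image": "quay.io/<organization_name>/<repository_name>:<image_tag>",
    "properties": [
        {
            "type": "olm.package",
            "value": {
                "packageName": "<extension_name>",
                "version": "<bundle_version>"
            }
        },
        {
            "type": "olm.bundle.mediatype",
            "value": "plain+v0"
        }
    ]
}
{
    "schema": "olm.channel",
    "name": "<desired_channel_name>",
    "package": "<extension_name>",
    "entries": [
        {
            "name": "<extension_name>.v<version>"
        }
    ]
}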
Validate your catalog by running the following command:
$ opm validate <catalog_dir>
7.4.5. Building and publishing a file-based catalog
Procedure
Build your file-based catalog as an image by running the following command:
$ podman build -f <catalog_dir>.Dockerfile -t \
    quay.io/<organization_name>/<repository_name>:<image_tag> .
Push your catalog image by running the following command:
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>