Chapter 7. OLM 1.0 (Technology Preview)
7.1. About Operator Lifecycle Manager 1.0 (Technology Preview)
Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.14 introduces components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as OLM 1.0. This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
During this Technology Preview phase of OLM 1.0 in OpenShift Container Platform 4.14, administrators can explore the following features:
- Fully declarative model that supports GitOps workflows
OLM 1.0 simplifies Operator management through two key APIs:
- A new Operator API, provided as operator.operators.operatorframework.io by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles.
- A new Catalog API, provided by the new catalogd component, serves as the foundation for OLM 1.0, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges.

For more information, see Operator Controller and Catalogd.
- Improved control over Operator updates
- With improved insight into catalog content, administrators can specify target versions for installation and updates. This grants administrators more control over the target version of Operator updates. For more information, see Updating an Operator.
- Flexible Operator packaging format
Administrators can use file-based catalogs to install and manage the following types of content:
- OLM-based Operators, similar to the existing OLM experience
- Plain bundles, which are static collections of arbitrary Kubernetes manifests
In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see Installing an Operator from a catalog and Managing plain bundles.
7.1.1. Purpose
The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster and platform-as-a-service (PaaS) administrators throughout the lifecycle of the underlying cluster.
The initial version of OLM, which launched with OpenShift Container Platform 4 and is included by default, focused on providing unique support for these specific needs for a particular type of cluster extension, known as Operators. Operators are classified as one or more Kubernetes controllers, shipping with one or more API extensions, such as CustomResourceDefinition (CRD) objects, to provide additional functionality to the cluster.
After running in production clusters for many releases, the next generation of OLM aims to encompass lifecycles for cluster extensions that are not just Operators.
7.2. Components and architecture
7.2.1. OLM 1.0 components overview (Technology Preview)
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator Lifecycle Manager (OLM) 1.0 comprises the following component projects:
- Operator Controller
- Operator Controller is the central component of OLM 1.0 that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from each of the following components.
- RukPak
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
- Catalogd
- Catalogd is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the OLM 1.0 microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content.
7.2.2. Operator Controller (Technology Preview)
Operator Controller is the central component of Operator Lifecycle Manager (OLM) 1.0 and consumes the other OLM 1.0 components, RukPak and catalogd. It extends Kubernetes with an API through which users can install Operators and extensions.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.2.1. Operator API
Operator Controller provides a new Operator API object, provided as operator.operators.operatorframework.io, which streamlines management of installed Operators by consolidating user-facing APIs into a single object.

In OLM 1.0, Operator objects are cluster-scoped. This differs from earlier versions of OLM, where an installed Operator could be either namespace-scoped or cluster-scoped, depending on the configuration of its related Subscription and OperatorGroup objects.
For more information about the earlier behavior, see Multitenancy and Operator colocation.
Example Operator object
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: <operator_name>
spec:
  packageName: <package_name>
  channel: <channel_name>
  version: <version_number>
When using the OpenShift CLI (oc), refer to the Operator resource by its full <resource>.<group> name, operator.operators.operatorframework.io, as shown in the following example:

$ oc get operator.operators.operatorframework.io

If you specify only the Operator resource, the CLI returns objects of the earlier operator.operators.coreos.com API, which is unrelated to the OLM 1.0 API.
7.2.2.1.1. About target versions in OLM 1.0
In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).
If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.
Example CR with a specified channel
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  channel: stable-3.8 # (1)

(1) Installs the latest release published to the specified channel. Updates to the channel are automatically installed.
If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12 # (1)

(1) Specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.
If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.
In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.
You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:
Command syntax
$ oc get package <catalog_name>-<package_name> -o yaml
After you create or update a CR, create or configure the Operator by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
Troubleshooting
If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml

Example output

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"999.99.9"}}
  creationTimestamp: "2023-10-19T18:39:37Z"
  generation: 3
  name: quay-example
  resourceVersion: "51505"
  uid: 2558623b-8689-421c-8ed5-7b14234af166
spec:
  packageName: quay-operator
  version: 999.99.9
status:
  conditions:
  - lastTransitionTime: "2023-10-19T18:50:34Z"
    message: package 'quay-operator' at version '999.99.9' not found
    observedGeneration: 3
    reason: ResolutionFailed
    status: "False"
    type: Resolved
  - lastTransitionTime: "2023-10-19T18:50:34Z"
    message: installation has not been attempted as resolution failed
    observedGeneration: 3
    reason: InstallationStatusUnknown
    status: Unknown
    type: Installed
7.2.3. RukPak (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses the RukPak component and its resources to manage cloud-native content.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.3.1. About RukPak
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs.
Common terminology
- Bundle
- A collection of Kubernetes manifests that define content to be deployed to a cluster
- Bundle image
- A container image that contains a bundle within its filesystem
- Bundle Git repository
- A Git repository that contains a bundle within a directory
- Provisioner
- Controllers that install and manage content on a Kubernetes cluster
- Bundle deployment
- Generates deployed instances of a bundle
7.2.3.2. About provisioners
RukPak consists of a series of controllers, known as provisioners, that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment.

Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles.

Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster.

A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources.
7.2.3.3. Bundle
A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for a pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content.

Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type.
Example Bundle object configured to work with the plain provisioner
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: my-bundle
spec:
  source:
    type: image
    image:
      ref: my-bundle@sha256:xyz123
  provisionerClassName: core-rukpak-io-plain
Bundles are considered immutable after they are created.
7.2.3.3.1. Bundle immutability
After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object.

Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status, are updated during the bundle's lifecycle; it is only the spec field that is considered immutable.
Applying a Bundle object and then attempting to update its spec should fail. For example, the following command creates a bundle:

$ oc apply -f - <<EOF
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.2
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain
EOF
Example output
bundle.core.rukpak.io/combo-tag-ref created
Then, patching the bundle to point to a newer tag returns an error:
$ oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}'
Example output
Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable
The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating your existing Bundle object in place.
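For example, instead of patching the existing bundle, you can create a second bundle that points to the newer tag. The following is a minimal sketch based on the earlier combo example; the object name is hypothetical:

apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref-2 # hypothetical name for the replacement bundle
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.3
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain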
7.2.3.3.1.1. Further immutability considerations
While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario:

1. A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object.
2. The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit.
3. A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod.
If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content.
This is similar to pod behavior, where one of the pod’s container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it.
To be confident that the underlying Bundle spec content is not changing, use a digest-based image or a Git commit reference when creating the bundle.
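For example, building on the combo bundle shown earlier, a Git-sourced bundle can reference an exact commit instead of a movable tag. This is a minimal sketch; it assumes the Git source ref accepts a commit field, and the object name and commit SHA are placeholders:

apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-commit-ref # hypothetical name
spec:
  source:
    type: git
    git:
      ref:
        commit: <40_character_commit_sha> # placeholder; pins the bundle to exact content
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain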
7.2.3.3.2. Plain bundle spec
A plain bundle in RukPak is a collection of static, arbitrary, Kubernetes YAML manifests in a given directory.
The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0, combines the type of bundle (plain) with the current schema version (v0).

The plain+v0 bundle format is at schema version v0, which means it is an experimental format that is subject to change.
For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes manifests required to deploy a project.
Example plain+v0 bundle file tree
$ tree manifests
manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml
The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top level with no subdirectories.

Do not include any content in the manifests/ directory of a plain bundle that is not static manifests. Otherwise, a failure will occur when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command will result in an error. Multi-object YAML or JSON files are valid, as well.
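For example, a single file in the manifests/ directory can contain multiple objects separated by the standard YAML document separator. The following is a minimal sketch; the file and resource names are hypothetical:

# manifests/rbac.yaml (hypothetical file name)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
  namespace: example-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]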
7.2.3.3.3. Registry bundle spec
A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format.
7.2.3.4. BundleDeployment
A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and to limit access to the BundleDeployment API, by using RBAC, to only those users who require those permissions.

The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle.
Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept.
The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle are defined by the provisioner that is configured to watch that bundle deployment.
Example BundleDeployment object configured to work with the plain provisioner
apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-bundle-deployment
spec:
  provisionerClassName: core-rukpak-io-plain
  template:
    metadata:
      labels:
        app: my-bundle
    spec:
      source:
        type: image
        image:
          ref: my-bundle@sha256:xyz123
      provisionerClassName: core-rukpak-io-plain
7.2.4. Dependency resolution in OLM 1.0 (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses a dependency manager for resolving constraints over catalogs of RukPak bundles.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.4.1. Concepts
Users expect that the package manager should never do any of the following:
- Install a package whose dependencies can not be fulfilled or that conflict with the dependencies of another package
- Install a package whose constraints can not be met by the current set of installable packages
- Update a package in a way that breaks another that depends on it
7.2.4.1.1. Example: Successful resolution
A user wants to install packages A and B that have the following dependencies:
| Package A      | Package B      |
| ↓ (depends on) | ↓ (depends on) |
| Package C      | Package D      |

Additionally, the user wants to pin the version of A to v0.1.0.

Packages and constraints passed to OLM 1.0

Packages

- A
- B

Constraints

- A v0.1.0 depends on C v0.1.0
- A pinned to v0.1.0
- B depends on D

Output

Resolution set:

- A v0.1.0
- B latest
- C v0.1.0
- D latest
7.2.4.1.2. Example: Unsuccessful resolution
A user wants to install packages A and B that have the following dependencies:
| Package A      | Package B      |
| ↓ (depends on) | ↓ (depends on) |
| Package C      | Package C      |

Additionally, the user wants to pin the version of A to v0.1.0.

Packages and constraints passed to OLM 1.0

Packages

- A
- B

Constraints

- A v0.1.0 depends on C v0.1.0
- A pinned to v0.1.0
- B latest depends on C v0.2.0

Output

Resolution set:

- Unable to resolve because A v0.1.0 requires C v0.1.0, which conflicts with B latest requiring C v0.2.0
7.2.5. Catalogd (Technology Preview)
Operator Lifecycle Manager (OLM) 1.0 uses the catalogd component and its resources to manage Operator and extension catalogs.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.5.1. About catalogs in OLM 1.0
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
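For example, after a catalog has been added to a cluster, you can list the packages that catalogd has unpacked and inspect a specific package by running commands similar to the following, which are shown in more detail later in this chapter:

$ oc get packages
$ oc get package <catalog_name>-<package_name> -o yaml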
7.2.5.1.1. Red Hat-provided Operator catalogs in OLM 1.0
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show how to create catalog resources for OLM 1.0.
Example Red Hat Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14
Example Certified Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.14
Example Community Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.14
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml

where <catalog_name>.yaml is the catalog CR file, such as redhat-operators.yaml.
7.3. Installing an Operator from a catalog in OLM 1.0 (Technology Preview)
Cluster administrators can add catalogs, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.
In the current Technology Preview release of Operator Lifecycle Manager (OLM) 1.0, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.3.1. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions

  Note: For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.

- The TechPreviewNoUpgrade feature set enabled on the cluster; one way to enable it is shown in the sketch after this list

  Warning: Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.

- The OpenShift CLI (oc) installed on your workstation
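One way to satisfy the TechPreviewNoUpgrade prerequisite is to set the feature set on the cluster FeatureGate object. The following is a minimal sketch, assuming the standard config.openshift.io/v1 FeatureGate API; review the feature gate documentation before applying it, because the change cannot be undone:

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade # irreversible; prevents minor version updates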
7.3.2. About catalogs in OLM 1.0
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
7.3.2.1. Red Hat-provided Operator catalogs in OLM 1.0
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show how to create catalog resources for OLM 1.0.
Example Red Hat Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14
Example Certified Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.14
Example Community Operators catalog
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.14
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml

where <catalog_name>.yaml is the catalog CR file, such as redhat-operators.yaml.
The following procedures use the Red Hat Operators catalog and the Quay Operator as examples.
7.3.3. About target versions in OLM 1.0
In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).
If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.
Example CR with a specified channel
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  channel: stable-3.8 # (1)

(1) Installs the latest release published to the specified channel. Updates to the channel are automatically installed.
If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12 # (1)

(1) Specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.
If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.
In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.
You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:
Command syntax
$ oc get package <catalog_name>-<package_name> -o yaml
After you create or update a CR, create or configure the Operator by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
Troubleshooting
If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml

Example output

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"999.99.9"}}
  creationTimestamp: "2023-10-19T18:39:37Z"
  generation: 3
  name: quay-example
  resourceVersion: "51505"
  uid: 2558623b-8689-421c-8ed5-7b14234af166
spec:
  packageName: quay-operator
  version: 999.99.9
status:
  conditions:
  - lastTransitionTime: "2023-10-19T18:50:34Z"
    message: package 'quay-operator' at version '999.99.9' not found
    observedGeneration: 3
    reason: ResolutionFailed
    status: "False"
    type: Resolved
  - lastTransitionTime: "2023-10-19T18:50:34Z"
    message: installation has not been attempted as resolution failed
    observedGeneration: 3
    reason: InstallationStatusUnknown
    status: Unknown
    type: Installed
7.3.4. Adding a catalog to a cluster
To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster.
Procedure
Create a catalog custom resource (CR), similar to the following example:
Example redhat-operators.yaml file

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14 # (1)

(1) Specify the catalog’s image in the spec.source.image field.
Add the catalog to your cluster by running the following command:
$ oc apply -f redhat-operators.yaml

Example output
catalog.catalogd.operatorframework.io/redhat-operators created
Verification
Run the following commands to verify the status of your catalog:
Check if your catalog is available by running the following command:

$ oc get catalog

Example output

NAME               AGE
redhat-operators   20s

Check the status of your catalog by running the following command:

$ oc get catalogs.catalogd.operatorframework.io -o yaml

Example output

apiVersion: v1
items:
- apiVersion: catalogd.operatorframework.io/v1alpha1
  kind: Catalog
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"catalogd.operatorframework.io/v1alpha1","kind":"Catalog","metadata":{"annotations":{},"name":"redhat-operators"},"spec":{"source":{"image":{"ref":"registry.redhat.io/redhat/redhat-operator-index:v4.14"},"type":"image"}}}
    creationTimestamp: "2023-10-16T13:30:59Z"
    generation: 1
    name: redhat-operators
    resourceVersion: "37304"
    uid: cf00c68c-4312-4e06-aa8a-299f0bbf496b
  spec:
    source:
      image:
        ref: registry.redhat.io/redhat/redhat-operator-index:v4.14
      type: image
  status:
    conditions:
    - lastTransitionTime: "2023-10-16T13:32:25Z"
      message: successfully unpacked the catalog image "registry.redhat.io/redhat/redhat-operator-index@sha256:bd2f1060253117a627d2f85caa1532ebae1ba63da2a46bdd99e2b2a08035033f"
      reason: UnpackSuccessful
      status: "True"
      type: Unpacked
    phase: Unpacked
    resolvedSource:
      image:
        ref: registry.redhat.io/redhat/redhat-operator-index@sha256:bd2f1060253117a627d2f85caa1532ebae1ba63da2a46bdd99e2b2a08035033f
      type: image
kind: List
metadata:
  resourceVersion: ""
7.3.5. Finding Operators to install from a catalog
After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install.
Prerequisite
- You have added a catalog to your cluster.
Procedure
Get a list of the Operators and extensions in the catalog by running the following command:
$ oc get packages

Example 7.1. Example output

NAME                                                     AGE
redhat-operators-3scale-operator                         5m27s
redhat-operators-advanced-cluster-management             5m27s
redhat-operators-amq-broker-rhel8                        5m27s
redhat-operators-amq-online                              5m27s
redhat-operators-amq-streams                             5m27s
redhat-operators-amq7-interconnect-operator              5m27s
redhat-operators-ansible-automation-platform-operator    5m27s
redhat-operators-ansible-cloud-addons-operator           5m27s
redhat-operators-apicast-operator                        5m27s
redhat-operators-aws-efs-csi-driver-operator             5m27s
redhat-operators-aws-load-balancer-operator              5m27s
...

Inspect the contents of an Operator or extension’s custom resource (CR) by running the following command:
$ oc get package <catalog_name>-<package_name> -o yaml

Example command

$ oc get package redhat-operators-quay-operator -o yaml

Example 7.2. Example output
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Package
metadata:
  creationTimestamp: "2023-10-06T01:14:04Z"
  generation: 1
  labels:
    catalog: redhat-operators
  name: redhat-operators-quay-operator
  ownerReferences:
  - apiVersion: catalogd.operatorframework.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Catalog
    name: redhat-operators
    uid: 403004b6-54a3-4471-8c90-63419f6a2c3e
  resourceVersion: "45196"
  uid: 252cfe74-936d-44fc-be5d-09a7be7e36f5
spec:
  catalog:
    name: redhat-operators
  channels:
  - entries:
    - name: quay-operator.v3.4.7
      skips:
      - red-hat-quay.v3.3.4
      - quay-operator.v3.4.6
      - quay-operator.v3.4.5
      - quay-operator.v3.4.4
      - quay-operator.v3.4.3
      - quay-operator.v3.4.2
      - quay-operator.v3.4.1
      - quay-operator.v3.4.0
    name: quay-v3.4
  - entries:
    - name: quay-operator.v3.5.7
      replaces: quay-operator.v3.5.6
      skipRange: '>=3.4.x <3.5.7'
    name: quay-v3.5
  - entries:
    - name: quay-operator.v3.6.0
      skipRange: '>=3.3.x <3.6.0'
    - name: quay-operator.v3.6.1
      replaces: quay-operator.v3.6.0
      skipRange: '>=3.3.x <3.6.1'
    - name: quay-operator.v3.6.10
      replaces: quay-operator.v3.6.9
      skipRange: '>=3.3.x <3.6.10'
    - name: quay-operator.v3.6.2
      replaces: quay-operator.v3.6.1
      skipRange: '>=3.3.x <3.6.2'
    - name: quay-operator.v3.6.4
      replaces: quay-operator.v3.6.2
      skipRange: '>=3.3.x <3.6.4'
    - name: quay-operator.v3.6.5
      replaces: quay-operator.v3.6.4
      skipRange: '>=3.3.x <3.6.5'
    - name: quay-operator.v3.6.6
      replaces: quay-operator.v3.6.5
      skipRange: '>=3.3.x <3.6.6'
    - name: quay-operator.v3.6.7
      replaces: quay-operator.v3.6.6
      skipRange: '>=3.3.x <3.6.7'
    - name: quay-operator.v3.6.8
      replaces: quay-operator.v3.6.7
      skipRange: '>=3.3.x <3.6.8'
    - name: quay-operator.v3.6.9
      replaces: quay-operator.v3.6.8
      skipRange: '>=3.3.x <3.6.9'
    name: stable-3.6
  - entries:
    - name: quay-operator.v3.7.10
      replaces: quay-operator.v3.7.9
      skipRange: '>=3.4.x <3.7.10'
    - name: quay-operator.v3.7.11
      replaces: quay-operator.v3.7.10
      skipRange: '>=3.4.x <3.7.11'
    - name: quay-operator.v3.7.12
      replaces: quay-operator.v3.7.11
      skipRange: '>=3.4.x <3.7.12'
    - name: quay-operator.v3.7.13
      replaces: quay-operator.v3.7.12
      skipRange: '>=3.4.x <3.7.13'
    - name: quay-operator.v3.7.14
      replaces: quay-operator.v3.7.13
      skipRange: '>=3.4.x <3.7.14'
    name: stable-3.7
  - entries:
    - name: quay-operator.v3.8.0
      skipRange: '>=3.5.x <3.8.0'
    - name: quay-operator.v3.8.1
      replaces: quay-operator.v3.8.0
      skipRange: '>=3.5.x <3.8.1'
    - name: quay-operator.v3.8.10
      replaces: quay-operator.v3.8.9
      skipRange: '>=3.5.x <3.8.10'
    - name: quay-operator.v3.8.11
      replaces: quay-operator.v3.8.10
      skipRange: '>=3.5.x <3.8.11'
    - name: quay-operator.v3.8.12
      replaces: quay-operator.v3.8.11
      skipRange: '>=3.5.x <3.8.12'
    - name: quay-operator.v3.8.2
      replaces: quay-operator.v3.8.1
      skipRange: '>=3.5.x <3.8.2'
    - name: quay-operator.v3.8.3
      replaces: quay-operator.v3.8.2
      skipRange: '>=3.5.x <3.8.3'
    - name: quay-operator.v3.8.4
      replaces: quay-operator.v3.8.3
      skipRange: '>=3.5.x <3.8.4'
    - name: quay-operator.v3.8.5
      replaces: quay-operator.v3.8.4
      skipRange: '>=3.5.x <3.8.5'
    - name: quay-operator.v3.8.6
      replaces: quay-operator.v3.8.5
      skipRange: '>=3.5.x <3.8.6'
    - name: quay-operator.v3.8.7
      replaces: quay-operator.v3.8.6
      skipRange: '>=3.5.x <3.8.7'
    - name: quay-operator.v3.8.8
      replaces: quay-operator.v3.8.7
      skipRange: '>=3.5.x <3.8.8'
    - name: quay-operator.v3.8.9
      replaces: quay-operator.v3.8.8
      skipRange: '>=3.5.x <3.8.9'
    name: stable-3.8
  - entries:
    - name: quay-operator.v3.9.0
      skipRange: '>=3.6.x <3.9.0'
    - name: quay-operator.v3.9.1
      replaces: quay-operator.v3.9.0
      skipRange: '>=3.6.x <3.9.1'
    - name: quay-operator.v3.9.2
      replaces: quay-operator.v3.9.1
      skipRange: '>=3.6.x <3.9.2'
    name: stable-3.9
  defaultChannel: stable-3.9
  description: ""
  icon:
    data: PD94bWwgdmVyc2lvbj ...
    mediatype: image/svg+xml
  packageName: quay-operator
status: {}
7.3.6. Installing an Operator
You can install an Operator from a catalog by creating an Operator custom resource (CR) and applying it to the cluster.
Prerequisite
- You have added a catalog to your cluster.
- You have inspected the details of an Operator to find what version you want to install.
Procedure
Create an Operator CR, similar to the following example:
Example test-operator.yaml CR

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12

Apply the Operator CR to the cluster by running the following command:
$ oc apply -f test-operator.yaml

Example output
operator.operators.operatorframework.io/quay-example created
Verification
View the Operator’s CR in the YAML format by running the following command:
$ oc get operator.operators.operatorframework.io/quay-example -o yaml

Example output

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"3.8.12"}}
  creationTimestamp: "2023-10-19T18:39:37Z"
  generation: 1
  name: quay-example
  resourceVersion: "45663"
  uid: 2558623b-8689-421c-8ed5-7b14234af166
spec:
  packageName: quay-operator
  version: 3.8.12
status:
  conditions:
  - lastTransitionTime: "2023-10-19T18:39:37Z"
    message: resolved to "registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7"
    observedGeneration: 1
    reason: Success
    status: "True"
    type: Resolved
  - lastTransitionTime: "2023-10-19T18:39:46Z"
    message: installed from "registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7"
    observedGeneration: 1
    reason: Success
    status: "True"
    type: Installed
  installedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7
  resolvedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7

Get information about your Operator’s controller manager pod by running the following command:

$ oc get pod -n quay-operator-system

Example output

NAME                                     READY   STATUS    RESTARTS   AGE
quay-operator.v3.8.12-6677b5c98f-2kdtb   1/1     Running   0          2m28s
7.3.7. Updating an Operator
You can update your Operator by manually editing your Operator’s custom resource (CR) and applying the changes.
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Inspect your Operator’s package contents to find which channels and versions are available for updating by running the following command:
$ oc get package <catalog_name>-<package_name> -o yaml

Example command

$ oc get package redhat-operators-quay-operator -o yaml

Edit your Operator’s CR to update the version to 3.9.1, as shown in the following example:

Example test-operator.yaml CR

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.9.1 # (1)

(1) Update the version to 3.9.1.
Apply the update to the cluster by running the following command:
$ oc apply -f test-operator.yaml

Example output

operator.operators.operatorframework.io/quay-example configured

Tip: You can patch and apply the changes to your Operator’s version from the CLI by running the following command:

$ oc patch operator.operators.operatorframework.io/quay-example -p \
  '{"spec":{"version":"3.9.1"}}' \
  --type=merge

Example output
operator.operators.operatorframework.io/quay-example patched
Verification
Verify that the channel and version updates have been applied by running the following command:
$ oc get operator.operators.operatorframework.io/quay-example -o yaml

Example output

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"3.9.1"}}
  creationTimestamp: "2023-10-19T18:39:37Z"
  generation: 2
  name: quay-example
  resourceVersion: "47423"
  uid: 2558623b-8689-421c-8ed5-7b14234af166
spec:
  packageName: quay-operator
  version: 3.9.1 # (1)
status:
  conditions:
  - lastTransitionTime: "2023-10-19T18:39:37Z"
    message: resolved to "registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09"
    observedGeneration: 2
    reason: Success
    status: "True"
    type: Resolved
  - lastTransitionTime: "2023-10-19T18:39:46Z"
    message: installed from "registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09"
    observedGeneration: 2
    reason: Success
    status: "True"
    type: Installed
  installedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09
  resolvedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09

(1) Verify that the version is updated to 3.9.1.
7.3.8. Deleting an Operator
You can delete an Operator and its custom resource definitions (CRDs) by deleting the Operator’s custom resource (CR).
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Delete an Operator and its CRDs by running the following command:
$ oc delete operator.operators.operatorframework.io quay-example

Example output
operator.operators.operatorframework.io "quay-example" deleted
Verification
Run the following commands to verify that your Operator and its resources were deleted:
Verify the Operator is deleted by running the following command:
$ oc get operator.operators.operatorframework.io

Example output

No resources found

Verify that the Operator’s system namespace is deleted by running the following command:

$ oc get ns quay-operator-system

Example output
Error from server (NotFound): namespaces "quay-operator-system" not found
7.3.9. Deleting a catalog
You can delete a catalog by deleting its custom resource (CR).
Prerequisites
- You have a catalog installed.
Procedure
Delete a catalog by running the following command:
$ oc delete catalog <catalog_name>

Example output
catalog.catalogd.operatorframework.io "my-catalog" deleted
Verification
Verify the catalog is deleted by running the following command:
$ oc get catalog
7.4. Managing plain bundles in OLM 1.0 (Technology Preview)
In Operator Lifecycle Manager (OLM) 1.0, a plain bundle is a static collection of arbitrary Kubernetes manifests in YAML format. The experimental olm.bundle.mediatype property of the olm.bundle schema object differentiates a plain bundle (plain+v0) from a regular (registry+v1) bundle.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
As a cluster administrator, you can build and publish a file-based catalog that includes a plain bundle image by completing the following procedures:
- Build a plain bundle image.
- Create a file-based catalog.
- Add the plain bundle image to your file-based catalog.
- Build your catalog as an image.
- Publish your catalog image.
7.4.1. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions

  Note: For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.

- The TechPreviewNoUpgrade feature set enabled on the cluster

  Warning: Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.

- The OpenShift CLI (oc) installed on your workstation
- The opm CLI installed on your workstation
- Docker or Podman installed on your workstation
- Push access to a container registry, such as Quay
- Kubernetes manifests for your bundle in a flat directory at the root of your project, similar to the following structure:

  Example directory structure

  manifests
  ├── namespace.yaml
  ├── service_account.yaml
  ├── cluster_role.yaml
  ├── cluster_role_binding.yaml
  └── deployment.yaml
7.4.2. Building a plain bundle image from an image source
The Operator Controller currently supports installing plain bundles created only from a plain bundle image.
Procedure
At the root of your project, create a Dockerfile that can build a bundle image:
Example plainbundle.Dockerfile

FROM scratch
ADD manifests /manifests

Use the FROM scratch directive to make the size of the image smaller. No other files or directories are required in the bundle image.
Build an Open Container Initiative (OCI)-compliant image by using your preferred build tool, similar to the following example:
$ podman build -f plainbundle.Dockerfile -t \
    quay.io/<organization_name>/<repository_name>:<image_tag> . # (1)

(1) Use an image tag that references a repository where you have push access privileges.
Push the image to your remote registry by running the following command:
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
7.4.3. Creating a file-based catalog
If you do not have a file-based catalog, you must perform the following steps to initialize the catalog.
Procedure
Create a directory for the catalog by running the following command:
$ mkdir <catalog_dir>

Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command in the same directory level as the previous step:

$ opm generate dockerfile <catalog_dir> \
    -i registry.redhat.io/openshift4/ose-operator-registry:v4.14 # (1)

(1) Specify the official Red Hat base image by using the -i flag; otherwise, the Dockerfile uses the default upstream image.
Note: The generated Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:

Example directory structure

.
├── <catalog_dir>
└── <catalog_dir>.Dockerfile

Populate the catalog with the package definition for your extension by running the opm init command:

$ opm init <extension_name> \
    --output json \
    > <catalog_dir>/index.json

This command generates an olm.package declarative config blob in the specified catalog configuration file.
7.4.4. Adding a plain bundle to a file-based catalog
The opm render command does not support adding plain bundles to catalogs. You must manually add plain bundles to your file-based catalog, as shown in the following procedure.
Procedure
Verify that the index.json or index.yaml file for your catalog is similar to the following example:

Example <catalog_dir>/index.json file

{
  "schema": "olm.package",
  "name": "<extension_name>",
  "defaultChannel": ""
}

To create an olm.bundle blob, edit your index.json or index.yaml file, similar to the following example:

Example <catalog_dir>/index.json file with olm.bundle blob

{
  "schema": "olm.bundle",
  "name": "<extension_name>.v<version>",
  "package": "<extension_name>",
  "image": "quay.io/<organization_name>/<repository_name>:<image_tag>",
  "properties": [
    {
      "type": "olm.package",
      "value": {
        "packageName": "<extension_name>",
        "version": "<bundle_version>"
      }
    },
    {
      "type": "olm.bundle.mediatype",
      "value": "plain+v0"
    }
  ]
}

To create an olm.channel blob, edit your index.json or index.yaml file, similar to the following example:

Example <catalog_dir>/index.json file with olm.channel blob

{
  "schema": "olm.channel",
  "name": "<desired_channel_name>",
  "package": "<extension_name>",
  "entries": [
    {
      "name": "<extension_name>.v<version>"
    }
  ]
}
Verification
Open your index.json or index.yaml file and ensure it is similar to the following example:

Example <catalog_dir>/index.json file

{
  "schema": "olm.package",
  "name": "example-extension",
  "defaultChannel": "preview"
}
{
  "schema": "olm.bundle",
  "name": "example-extension.v0.0.1",
  "package": "example-extension",
  "image": "quay.io/example-org/example-extension-bundle:v0.0.1",
  "properties": [
    {
      "type": "olm.package",
      "value": {
        "packageName": "example-extension",
        "version": "0.0.1"
      }
    },
    {
      "type": "olm.bundle.mediatype",
      "value": "plain+v0"
    }
  ]
}
{
  "schema": "olm.channel",
  "name": "preview",
  "package": "example-extension",
  "entries": [
    {
      "name": "example-extension.v0.0.1"
    }
  ]
}

Validate your catalog by running the following command:
$ opm validate <catalog_dir>
7.4.5. Building and publishing a file-based catalog
Procedure
Build your file-based catalog as an image by running the following command:
$ podman build -f <catalog_dir>.Dockerfile -t \
    quay.io/<organization_name>/<repository_name>:<image_tag> .

Push your catalog image by running the following command:
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
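After the catalog image is pushed, you can make its content available on a cluster by referencing the image from a Catalog custom resource, as described earlier in this chapter. The following is a minimal sketch that reuses the image reference from the previous steps; the catalog name is a placeholder:

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: my-plain-bundle-catalog # placeholder name
spec:
  source:
    type: image
    image:
      ref: quay.io/<organization_name>/<repository_name>:<image_tag>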