Chapter 7. OLM 1.0 (Technology Preview)

7.1. About Operator Lifecycle Manager 1.0 (Technology Preview)

Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.14 introduces components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as OLM 1.0. This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities.

Important

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

During this Technology Preview phase of OLM 1.0 in OpenShift Container Platform 4.14, administrators can explore the following features:

Fully declarative model that supports GitOps workflows

OLM 1.0 simplifies Operator management through two key APIs:

  • A new Operator API, provided as operator.operators.operatorframework.io by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles.
  • The Catalog API, provided by the new catalogd component, serves as the foundation for OLM 1.0, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges.

For more information, see Operator Controller and Catalogd.

Improved control over Operator updates
With improved insight into catalog content, administrators can specify target versions for installation and updates, which gives them greater control over Operator updates. For more information, see Updating an Operator.
Flexible Operator packaging format

Administrators can use file-based catalogs to install and manage the following types of content:

  • OLM-based Operators, similar to the existing OLM experience
  • Plain bundles, which are static collections of arbitrary Kubernetes manifests

In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see Installing an Operator from a catalog and Managing plain bundles.

7.1.1. Purpose

The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster and platform-as-a-service (PaaS) administrators throughout the lifecycle of the underlying cluster.

The initial version of OLM, which launched with OpenShift Container Platform 4 and is included by default, focused on providing this support for a particular type of cluster extension, known as Operators. Operators consist of one or more Kubernetes controllers that ship with one or more API extensions, defined as CustomResourceDefinition (CRD) objects, to provide additional functionality to the cluster.

After OLM has run in production clusters for many releases, the next generation aims to encompass the lifecycle of cluster extensions beyond just Operators.

7.2. Components and architecture

7.2.1. OLM 1.0 components overview (Technology Preview)

Important

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Operator Lifecycle Manager (OLM) 1.0 comprises the following component projects:

Operator Controller
Operator Controller is the central component of OLM 1.0 that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from each of the following components.
RukPak

RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.

RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.

Catalogd
Catalogd is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the OLM 1.0 microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content.

7.2.2. Operator Controller (Technology Preview)

Operator Controller is the central component of Operator Lifecycle Manager (OLM) 1.0 and consumes the other OLM 1.0 components, RukPak and catalogd. It extends Kubernetes with an API through which users can install Operators and extensions.

Important

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

7.2.2.1. Operator API

Operator Controller provides a new Operator API object, which is a single resource that represents an instance of an installed Operator. This operator.operators.operatorframework.io API streamlines management of installed Operators by consolidating user-facing APIs into a single object.

Important

In OLM 1.0, Operator objects are cluster-scoped. This differs from earlier OLM versions where Operators could be either namespace-scoped or cluster-scoped, depending on the configuration of their related Subscription and OperatorGroup objects.

For more information about the earlier behavior, see Multitenancy and Operator colocation.
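
You can confirm that the resource is cluster-scoped by listing the API resources in the group. The following check and its output are illustrative; the exact columns depend on your oc version:

$ oc api-resources --api-group=operators.operatorframework.io

Example output

NAME        SHORTNAMES   APIVERSION                                 NAMESPACED   KIND
operators                operators.operatorframework.io/v1alpha1   false        Operator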

Example Operator object

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: <operator_name>
spec:
  packageName: <package_name>
  channel: <channel_name>
  version: <version_number>

Note

When using the OpenShift CLI (oc), the Operator resource provided with OLM 1.0 during this Technology Preview phase requires specifying the full <resource>.<group> format: operator.operators.operatorframework.io. For example:

$ oc get operator.operators.operatorframework.io

If you specify only the Operator resource without the API group, the CLI returns results for an earlier API (operator.operators.coreos.com) that is unrelated to OLM 1.0.

7.2.2.1.1. About target versions in OLM 1.0

In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).

If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.

Example CR with a specified channel

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  channel: stable-3.8 1

1
Installs the latest release published to the specified channel. Updates to the channel are automatically installed.

If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.

If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.

Example CR with the target version specified

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12 1

1
Specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.

If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.

Warning

In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.

You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:

Command syntax

$ oc get package <catalog_name>-<package_name> -o yaml

After you create or update a CR, create or configure the Operator by running the following command:

Command syntax

$ oc apply -f <extension_name>.yaml

Troubleshooting

  • If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:

    $ oc get operator.operators.operatorframework.io <operator_name> -o yaml

    Example output

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"999.99.9"}}
      creationTimestamp: "2023-10-19T18:39:37Z"
      generation: 3
      name: quay-example
      resourceVersion: "51505"
      uid: 2558623b-8689-421c-8ed5-7b14234af166
    spec:
      packageName: quay-operator
      version: 999.99.9
    status:
      conditions:
      - lastTransitionTime: "2023-10-19T18:50:34Z"
        message: package 'quay-operator' at version '999.99.9' not found
        observedGeneration: 3
        reason: ResolutionFailed
        status: "False"
        type: Resolved
      - lastTransitionTime: "2023-10-19T18:50:34Z"
        message: installation has not been attempted as resolution failed
        observedGeneration: 3
        reason: InstallationStatusUnknown
        status: Unknown
        type: Installed

7.2.3. RukPak (Technology Preview)

Operator Lifecycle Manager (OLM) 1.0 uses the RukPak component and its resources to manage cloud-native content.

Important

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

7.2.3.1. About RukPak

RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.

RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.

At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs.
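
Because the APIs ship as CRDs, you can list them directly on a cluster where RukPak is installed. The following check and its output are illustrative:

$ oc get crds | grep core.rukpak.io

Example output

bundledeployments.core.rukpak.io        2023-10-19T18:30:00Z
bundles.core.rukpak.io                  2023-10-19T18:30:00Z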

Common terminology

Bundle
A collection of Kubernetes manifests that define content to be deployed to a cluster
Bundle image
A container image that contains a bundle within its filesystem
Bundle Git repository
A Git repository that contains a bundle within a directory
Provisioner
Controllers that install and manage content on a Kubernetes cluster
Bundle deployment
Generates deployed instances of a bundle

7.2.3.2. About provisioners

RukPak consists of a series of controllers, known as provisioners, that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment. These components work together to bring content onto the cluster and install it, generating resources within the cluster.

Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles.

Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster.

A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources.

7.2.3.3. Bundle

A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for a pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content.

Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type.

Example Bundle object configured to work with the plain provisioner

apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: my-bundle
spec:
  source:
    type: image
    image:
      ref: my-bundle@sha256:xyz123
  provisionerClassName: core-rukpak-io-plain

Note

Bundles are considered immutable after they are created.

7.2.3.3.1. Bundle immutability

After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object.

Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status, are updated during the bundle’s lifecycle; it is only the spec field that is considered immutable.

Applying a Bundle object and then attempting to update its spec fails. For example, the following command creates a bundle:

$ oc apply -f -<<EOF
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.2
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain
EOF

Example output

bundle.core.rukpak.io/combo-tag-ref created

Then, patching the bundle to point to a newer tag returns an error:

$ oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}'

Example output

Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable

The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is to create a new Bundle object instead of updating the existing bundle in place.
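
For example, continuing from the combo-tag-ref bundle above, you can create a new Bundle object that points at the newer tag. This is a sketch that assumes a v0.0.3 tag exists in the repository:

$ oc apply -f -<<EOF
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref-v0-0-3
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.3
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain
EOF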

Further immutability considerations

While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario:

  1. A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object.
  2. The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit.
  3. A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod.

If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content.

This is similar to pod behavior, where one of the pod’s container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it.

To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle.
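
For example, in the spec.source.image field of a Bundle object, the difference between a mutable tag reference and a pinned digest reference looks like the following sketch, where the digest is a placeholder:

Example image references

ref: my-bundle:v0.0.2         # Tag reference: the unpacked content can change if the tag is moved
ref: my-bundle@sha256:xyz123  # Digest reference: the unpacked content is pinned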

7.2.3.3.2. Plain bundle spec

A plain bundle in RukPak is a collection of static, arbitrary, Kubernetes YAML manifests in a given directory.

The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0, combines the type of bundle (plain) with the current schema version (v0).

Note

The plain+v0 bundle format is at schema version v0, which means it is an experimental format that is subject to change.

For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application.

Example plain+v0 bundle file tree

$ tree manifests

manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml

The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories.

Important

Do not include any content in the manifests/ directory of a plain bundle that is not a static manifest. Otherwise, a failure occurs when creating content on-cluster from that bundle. Any file that does not successfully apply with the oc apply command results in an error. Multi-object YAML or JSON files are valid, as shown in the example after this admonition.
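
For example, a single file in the manifests/ directory can contain multiple objects separated by the --- document separator. The following is a minimal sketch; the object names are placeholders:

Example multi-object manifest file

apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
  namespace: example-namespace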

7.2.3.3.3. Registry bundle spec

A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format.
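
Although the full format is defined by the legacy OLM bundle specification, a registry+v1 bundle typically contains a manifests/ directory that holds the cluster service version (CSV) and CRD manifests and a metadata/ directory that holds the bundle annotations. The following file tree is illustrative; the file names are placeholders:

Example registry+v1 bundle file tree

$ tree
.
├── manifests
│   ├── example-operator.clusterserviceversion.yaml
│   └── example-crd.yaml
└── metadata
    └── annotations.yaml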

7.2.3.4. BundleDeployment

Warning

A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and to use RBAC to limit access to the BundleDeployment API to only those users who require those permissions.

The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle.

Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept.

The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle is defined by the provisioner that is configured to watch that bundle deployment.

Example BundleDeployment object configured to work with the plain provisioner

apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-bundle-deployment
spec:
  provisionerClassName: core-rukpak-io-plain
  template:
    metadata:
      labels:
        app: my-bundle
    spec:
      source:
        type: image
        image:
          ref: my-bundle@sha256:xyz123
      provisionerClassName: core-rukpak-io-plain

7.2.4. Dependency resolution in OLM 1.0 (Technology Preview)

Operator Lifecycle Manager (OLM) 1.0 uses a dependency manager for resolving constraints over catalogs of RukPak bundles.

Important

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

7.2.4.1. Concepts

The package manager is expected never to do the following:

  • Install a package whose dependencies cannot be fulfilled or that conflict with the dependencies of another package
  • Install a package whose constraints cannot be met by the current set of installable packages
  • Update a package in a way that breaks another package that depends on it

7.2.4.1.1. Example: Successful resolution

A user wants to install packages A and B that have the following dependencies:

  • Package A v0.1.0 depends on Package C v0.1.0
  • Package B latest depends on Package D latest

Additionally, the user wants to pin the version of A to v0.1.0.

Packages and constraints passed to OLM 1.0

Packages

  • A
  • B

Constraints

  • A v0.1.0 depends on C v0.1.0
  • A pinned to v0.1.0
  • B depends on D

Output

  • Resolution set:

    • A v0.1.0
    • B latest
    • C v0.1.0
    • D latest
7.2.4.1.2. Example: Unsuccessful resolution

A user wants to install packages A and B that have the following dependencies:

  • Package A v0.1.0 depends on Package C v0.1.0
  • Package B latest depends on Package C v0.2.0

Additionally, the user wants to pin the version of A to v0.1.0.

Packages and constraints passed to OLM 1.0

Packages

  • A
  • B

Constraints

  • A v0.1.0 depends on C v0.1.0
  • A pinned to v0.1.0
  • B latest depends on C v0.2.0

Output

  • Resolution set:

    • Unable to resolve because A v0.1.0 requires C v0.1.0, which conflicts with B latest requiring C v0.2.0

7.2.5. Catalogd (Technology Preview)

Operator Lifecycle Manager (OLM) 1.0 uses the catalogd component and its resources to manage Operator and extension catalogs.

Important

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

7.2.5.1. About catalogs in OLM 1.0

You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.

7.2.5.1.1. Red Hat-provided Operator catalogs in OLM 1.0

Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following example CRs show how to create catalog resources for OLM 1.0.

Example Red Hat Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14

Example Certified Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.14

Example Community Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.14

The following command adds a catalog to your cluster:

Command syntax

$ oc apply -f <catalog_name>.yaml 1

1
Specifies the catalog CR, such as redhat-operators.yaml.

7.3. Installing an Operator from a catalog in OLM 1.0 (Technology Preview)

Cluster administrators can add catalogs, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.

In the current Technology Preview release of Operator Lifecycle Manager (OLM) 1.0, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).

Important

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

7.3.1. Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions

    Note

    For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.

  • The TechPreviewNoUpgrade feature set enabled on the cluster

    Warning

    Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.

  • The OpenShift CLI (oc) installed on your workstation

7.3.2. About catalogs in OLM 1.0

You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.

7.3.2.1. Red Hat-provided Operator catalogs in OLM 1.0

Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following example CRs show how to create catalog resources for OLM 1.0.

Example Red Hat Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14

Example Certified Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.14

Example Community Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.14

The following command adds a catalog to your cluster:

Command syntax

$ oc apply -f <catalog_name>.yaml 1

1
Specifies the catalog CR, such as redhat-operators.yaml.
Note

The following procedures use the Red Hat Operators catalog and the Quay Operator as examples.

7.3.3. About target versions in OLM 1.0

In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).

If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.

Example CR with a specified channel

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  channel: stable-3.8 1

1
Installs the latest release published to the specified channel. Updates to the channel are automatically installed.

If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.

If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.

Example CR with the target version specified

apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12 1

1
Specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.

If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.

Warning

In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.

You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:

Command syntax

$ oc get package <catalog_name>-<package_name> -o yaml

After you create or update a CR, create or configure the Operator by running the following command:

Command syntax

$ oc apply -f <extension_name>.yaml

Troubleshooting

  • If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:

    $ oc get operator.operators.operatorframework.io <operator_name> -o yaml

    Example output

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"999.99.9"}}
      creationTimestamp: "2023-10-19T18:39:37Z"
      generation: 3
      name: quay-example
      resourceVersion: "51505"
      uid: 2558623b-8689-421c-8ed5-7b14234af166
    spec:
      packageName: quay-operator
      version: 999.99.9
    status:
      conditions:
      - lastTransitionTime: "2023-10-19T18:50:34Z"
        message: package 'quay-operator' at version '999.99.9' not found
        observedGeneration: 3
        reason: ResolutionFailed
        status: "False"
        type: Resolved
      - lastTransitionTime: "2023-10-19T18:50:34Z"
        message: installation has not been attempted as resolution failed
        observedGeneration: 3
        reason: InstallationStatusUnknown
        status: Unknown
        type: Installed

7.3.4. Adding a catalog to a cluster

To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster.

Procedure

  1. Create a catalog custom resource (CR), similar to the following example:

    Example redhat-operators.yaml

    apiVersion: catalogd.operatorframework.io/v1alpha1
    kind: Catalog
    metadata:
      name: redhat-operators
    spec:
      source:
        type: image
        image:
          ref: registry.redhat.io/redhat/redhat-operator-index:v4.14 1

    1
    Specify the catalog’s image in the spec.source.image field.
  2. Add the catalog to your cluster by running the following command:

    $ oc apply -f redhat-operators.yaml

    Example output

    catalog.catalogd.operatorframework.io/redhat-operators created

Verification

  • Run the following commands to verify the status of your catalog:

    1. Check if your catalog is available by running the following command:

      $ oc get catalog

      Example output

      NAME                  AGE
      redhat-operators      20s

    2. Check the status of your catalog by running the following command:

      $ oc get catalogs.catalogd.operatorframework.io -o yaml

      Example output

      apiVersion: v1
      items:
      - apiVersion: catalogd.operatorframework.io/v1alpha1
        kind: Catalog
        metadata:
          annotations:
            kubectl.kubernetes.io/last-applied-configuration: |
              {"apiVersion":"catalogd.operatorframework.io/v1alpha1","kind":"Catalog","metadata":{"annotations":{},"name":"redhat-operators"},"spec":{"source":{"image":{"ref":"registry.redhat.io/redhat/redhat-operator-index:v4.14"},"type":"image"}}}
          creationTimestamp: "2023-10-16T13:30:59Z"
          generation: 1
          name: redhat-operators
          resourceVersion: "37304"
          uid: cf00c68c-4312-4e06-aa8a-299f0bbf496b
        spec:
          source:
            image:
              ref: registry.redhat.io/redhat/redhat-operator-index:v4.14
            type: image
        status: 1
          conditions:
          - lastTransitionTime: "2023-10-16T13:32:25Z"
            message: successfully unpacked the catalog image "registry.redhat.io/redhat/redhat-operator-index@sha256:bd2f1060253117a627d2f85caa1532ebae1ba63da2a46bdd99e2b2a08035033f" 2
            reason: UnpackSuccessful 3
            status: "True"
            type: Unpacked
          phase: Unpacked 4
          resolvedSource:
            image:
              ref: registry.redhat.io/redhat/redhat-operator-index@sha256:bd2f1060253117a627d2f85caa1532ebae1ba63da2a46bdd99e2b2a08035033f 5
            type: image
      kind: List
      metadata:
        resourceVersion: ""

      1
      Stanza describing the status of the catalog.
      2
      Output message of the status of the catalog.
      3
      Displays the reason the catalog is in the current state.
      4
      Displays the phase of the installation process.
      5
      Displays the image reference of the catalog.

7.3.5. Finding Operators to install from a catalog

After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install.

Prerequisite

  • You have added a catalog to your cluster.

Procedure

  1. Get a list of the Operators and extensions in the catalog by running the following command:

    $ oc get packages

    Example 7.1. Example output

    NAME                                                        AGE
    redhat-operators-3scale-operator                            5m27s
    redhat-operators-advanced-cluster-management                5m27s
    redhat-operators-amq-broker-rhel8                           5m27s
    redhat-operators-amq-online                                 5m27s
    redhat-operators-amq-streams                                5m27s
    redhat-operators-amq7-interconnect-operator                 5m27s
    redhat-operators-ansible-automation-platform-operator       5m27s
    redhat-operators-ansible-cloud-addons-operator              5m27s
    redhat-operators-apicast-operator                           5m27s
    redhat-operators-aws-efs-csi-driver-operator                5m27s
    redhat-operators-aws-load-balancer-operator                 5m27s
    ...
  2. Inspect the contents of an Operator or extension’s custom resource (CR) by running the following command:

    $ oc get package <catalog_name>-<package_name> -o yaml

    Example command

    $ oc get package redhat-operators-quay-operator -o yaml

    Example 7.2. Example output

    apiVersion: catalogd.operatorframework.io/v1alpha1
    kind: Package
    metadata:
      creationTimestamp: "2023-10-06T01:14:04Z"
      generation: 1
      labels:
        catalog: redhat-operators
      name: redhat-operators-quay-operator
      ownerReferences:
      - apiVersion: catalogd.operatorframework.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: Catalog
        name: redhat-operators
        uid: 403004b6-54a3-4471-8c90-63419f6a2c3e
      resourceVersion: "45196"
      uid: 252cfe74-936d-44fc-be5d-09a7be7e36f5
    spec:
      catalog:
        name: redhat-operators
      channels:
      - entries:
        - name: quay-operator.v3.4.7
          skips:
          - red-hat-quay.v3.3.4
          - quay-operator.v3.4.6
          - quay-operator.v3.4.5
          - quay-operator.v3.4.4
          - quay-operator.v3.4.3
          - quay-operator.v3.4.2
          - quay-operator.v3.4.1
          - quay-operator.v3.4.0
        name: quay-v3.4
      - entries:
        - name: quay-operator.v3.5.7
          replaces: quay-operator.v3.5.6
          skipRange: '>=3.4.x <3.5.7'
        name: quay-v3.5
      - entries:
        - name: quay-operator.v3.6.0
          skipRange: '>=3.3.x <3.6.0'
        - name: quay-operator.v3.6.1
          replaces: quay-operator.v3.6.0
          skipRange: '>=3.3.x <3.6.1'
        - name: quay-operator.v3.6.10
          replaces: quay-operator.v3.6.9
          skipRange: '>=3.3.x <3.6.10'
        - name: quay-operator.v3.6.2
          replaces: quay-operator.v3.6.1
          skipRange: '>=3.3.x <3.6.2'
        - name: quay-operator.v3.6.4
          replaces: quay-operator.v3.6.2
          skipRange: '>=3.3.x <3.6.4'
        - name: quay-operator.v3.6.5
          replaces: quay-operator.v3.6.4
          skipRange: '>=3.3.x <3.6.5'
        - name: quay-operator.v3.6.6
          replaces: quay-operator.v3.6.5
          skipRange: '>=3.3.x <3.6.6'
        - name: quay-operator.v3.6.7
          replaces: quay-operator.v3.6.6
          skipRange: '>=3.3.x <3.6.7'
        - name: quay-operator.v3.6.8
          replaces: quay-operator.v3.6.7
          skipRange: '>=3.3.x <3.6.8'
        - name: quay-operator.v3.6.9
          replaces: quay-operator.v3.6.8
          skipRange: '>=3.3.x <3.6.9'
        name: stable-3.6
      - entries:
        - name: quay-operator.v3.7.10
          replaces: quay-operator.v3.7.9
          skipRange: '>=3.4.x <3.7.10'
        - name: quay-operator.v3.7.11
          replaces: quay-operator.v3.7.10
          skipRange: '>=3.4.x <3.7.11'
        - name: quay-operator.v3.7.12
          replaces: quay-operator.v3.7.11
          skipRange: '>=3.4.x <3.7.12'
        - name: quay-operator.v3.7.13
          replaces: quay-operator.v3.7.12
          skipRange: '>=3.4.x <3.7.13'
        - name: quay-operator.v3.7.14
          replaces: quay-operator.v3.7.13
          skipRange: '>=3.4.x <3.7.14'
        name: stable-3.7
      - entries:
        - name: quay-operator.v3.8.0
          skipRange: '>=3.5.x <3.8.0'
        - name: quay-operator.v3.8.1
          replaces: quay-operator.v3.8.0
          skipRange: '>=3.5.x <3.8.1'
        - name: quay-operator.v3.8.10
          replaces: quay-operator.v3.8.9
          skipRange: '>=3.5.x <3.8.10'
        - name: quay-operator.v3.8.11
          replaces: quay-operator.v3.8.10
          skipRange: '>=3.5.x <3.8.11'
        - name: quay-operator.v3.8.12
          replaces: quay-operator.v3.8.11
          skipRange: '>=3.5.x <3.8.12'
        - name: quay-operator.v3.8.2
          replaces: quay-operator.v3.8.1
          skipRange: '>=3.5.x <3.8.2'
        - name: quay-operator.v3.8.3
          replaces: quay-operator.v3.8.2
          skipRange: '>=3.5.x <3.8.3'
        - name: quay-operator.v3.8.4
          replaces: quay-operator.v3.8.3
          skipRange: '>=3.5.x <3.8.4'
        - name: quay-operator.v3.8.5
          replaces: quay-operator.v3.8.4
          skipRange: '>=3.5.x <3.8.5'
        - name: quay-operator.v3.8.6
          replaces: quay-operator.v3.8.5
          skipRange: '>=3.5.x <3.8.6'
        - name: quay-operator.v3.8.7
          replaces: quay-operator.v3.8.6
          skipRange: '>=3.5.x <3.8.7'
        - name: quay-operator.v3.8.8
          replaces: quay-operator.v3.8.7
          skipRange: '>=3.5.x <3.8.8'
        - name: quay-operator.v3.8.9
          replaces: quay-operator.v3.8.8
          skipRange: '>=3.5.x <3.8.9'
        name: stable-3.8
      - entries:
        - name: quay-operator.v3.9.0
          skipRange: '>=3.6.x <3.9.0'
        - name: quay-operator.v3.9.1
          replaces: quay-operator.v3.9.0
          skipRange: '>=3.6.x <3.9.1'
        - name: quay-operator.v3.9.2
          replaces: quay-operator.v3.9.1
          skipRange: '>=3.6.x <3.9.2'
        name: stable-3.9
      defaultChannel: stable-3.9
      description: ""
      icon:
        data: PD94bWwgdmVyc2lvbj ...
        mediatype: image/svg+xml
      packageName: quay-operator
    status: {}

7.3.6. Installing an Operator

You can install an Operator from a catalog by creating an Operator custom resource (CR) and applying it to the cluster.

Prerequisite

  • You have added a catalog to your cluster.
  • You have inspected the details of an Operator to find what version you want to install.

Procedure

  1. Create an Operator CR, similar to the following example:

    Example test-operator.yaml CR

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      name: quay-example
    spec:
      packageName: quay-operator
      version: 3.8.12

  2. Apply the Operator CR to the cluster by running the following command:

    $ oc apply -f test-operator.yaml

    Example output

    operator.operators.operatorframework.io/quay-example created

Verification

  1. View the Operator’s CR in the YAML format by running the following command:

    $ oc get operator.operators.operatorframework.io/quay-example -o yaml

    Example output

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"3.8.12"}}
      creationTimestamp: "2023-10-19T18:39:37Z"
      generation: 1
      name: quay-example
      resourceVersion: "45663"
      uid: 2558623b-8689-421c-8ed5-7b14234af166
    spec:
      packageName: quay-operator
      version: 3.8.12
    status:
      conditions:
      - lastTransitionTime: "2023-10-19T18:39:37Z"
        message: resolved to "registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7"
        observedGeneration: 1
        reason: Success
        status: "True"
        type: Resolved
      - lastTransitionTime: "2023-10-19T18:39:46Z"
        message: installed from "registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7"
        observedGeneration: 1
        reason: Success
        status: "True"
        type: Installed
      installedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7
      resolvedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7

  2. Get information about your Operator’s controller manager pod by running the following command:

    $ oc get pod -n quay-operator-system

    Example output

    NAME                                     READY   STATUS    RESTARTS   AGE
    quay-operator.v3.8.12-6677b5c98f-2kdtb   1/1     Running   0          2m28s

7.3.7. Updating an Operator

You can update your Operator by manually editing your Operator’s custom resource (CR) and applying the changes.

Prerequisites

  • You have a catalog installed.
  • You have an Operator installed.

Procedure

  1. Inspect your Operator’s package contents to find which channels and versions are available for updating by running the following command:

    $ oc get package <catalog_name>-<package_name> -o yaml

    Example command

    $ oc get package redhat-operators-quay-operator -o yaml

  2. Edit your Operator’s CR to update the version to 3.9.1, as shown in the following example:

    Example test-operator.yaml CR

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      name: quay-example
    spec:
      packageName: quay-operator
      version: 3.9.1 1

    1
    Update the version to 3.9.1.
  3. Apply the update to the cluster by running the following command:

    $ oc apply -f test-operator.yaml

    Example output

    operator.operators.operatorframework.io/quay-example configured

    Tip

    You can patch and apply the changes to your Operator’s version from the CLI by running the following command:

    $ oc patch operator.operators.operatorframework.io/quay-example -p \
      '{"spec":{"version":"3.9.1"}}' \
      --type=merge

    Example output

    operator.operators.operatorframework.io/quay-example patched

Verification

  • Verify that the channel and version updates have been applied by running the following command:

    $ oc get operator.operators.operatorframework.io/quay-example -o yaml

    Example output

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"3.9.1"}}
      creationTimestamp: "2023-10-19T18:39:37Z"
      generation: 2
      name: quay-example
      resourceVersion: "47423"
      uid: 2558623b-8689-421c-8ed5-7b14234af166
    spec:
      packageName: quay-operator
      version: 3.9.1 1
    status:
      conditions:
      - lastTransitionTime: "2023-10-19T18:39:37Z"
        message: resolved to "registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09"
        observedGeneration: 2
        reason: Success
        status: "True"
        type: Resolved
      - lastTransitionTime: "2023-10-19T18:39:46Z"
        message: installed from "registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09"
        observedGeneration: 2
        reason: Success
        status: "True"
        type: Installed
      installedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09
      resolvedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09

    1
    Verify that the version is updated to 3.9.1.

7.3.8. Deleting an Operator

You can delete an Operator and its custom resource definitions (CRDs) by deleting the Operator’s custom resource (CR).

Prerequisites

  • You have a catalog installed.
  • You have an Operator installed.

Procedure

  • Delete an Operator and its CRDs by running the following command:

    $ oc delete operator.operators.operatorframework.io quay-example

    Example output

    operator.operators.operatorframework.io "quay-example" deleted

Verification

  • Run the following commands to verify that your Operator and its resources were deleted:

    • Verify the Operator is deleted by running the following command:

      $ oc get operator.operators.operatorframework.io

      Example output

      No resources found

    • Verify that the Operator’s system namespace is deleted by running the following command:

      $ oc get ns quay-operator-system

      Example output

      Error from server (NotFound): namespaces "quay-operator-system" not found

7.3.9. Deleting a catalog

You can delete a catalog by deleting its custom resource (CR).

Prerequisites

  • You have a catalog installed.

Procedure

  • Delete a catalog by running the following command:

    $ oc delete catalog <catalog_name>

    Example output

    catalog.catalogd.operatorframework.io "my-catalog" deleted

Verification

  • Verify the catalog is deleted by running the following command:

    $ oc get catalog
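
    If no other catalogs are installed on the cluster, the command output is similar to the following:

    Example output

    No resources found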

7.4. Managing plain bundles in OLM 1.0 (Technology Preview)

In Operator Lifecycle Manager (OLM) 1.0, a plain bundle is a static collection of arbitrary Kubernetes manifests in YAML format. The experimental olm.bundle.mediatype property of the olm.bundle schema object differentiates a plain bundle (plain+v0) from a regular (registry+v1) bundle.

Important

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

As a cluster administrator, you can build and publish a file-based catalog that includes a plain bundle image by completing the following procedures:

  1. Build a plain bundle image.
  2. Create a file-based catalog.
  3. Add the plain bundle image to your file-based catalog.
  4. Build your catalog as an image.
  5. Publish your catalog image.

7.4.1. Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions

    Note

    For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.

  • The TechPreviewNoUpgrade feature set enabled on the cluster

    Warning

    Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.

  • The OpenShift CLI (oc) installed on your workstation
  • The opm CLI installed on your workstation
  • Docker or Podman installed on your workstation
  • Push access to a container registry, such as Quay
  • Kubernetes manifests for your bundle in a flat directory at the root of your project similar to the following structure:

    Example directory structure

    manifests
    ├── namespace.yaml
    ├── service_account.yaml
    ├── cluster_role.yaml
    ├── cluster_role_binding.yaml
    └── deployment.yaml

7.4.2. Building a plain bundle image from an image source

The Operator Controller currently supports installing plain bundles only when they are created from a plain bundle image.

Procedure

  1. At the root of your project, create a Dockerfile that can build a bundle image:

    Example plainbundle.Dockerfile

    FROM scratch 1
    ADD manifests /manifests

    1
    Use the FROM scratch directive to make the size of the image smaller. No other files or directories are required in the bundle image.
  2. Build an Open Container Initiative (OCI)-compliant image by using your preferred build tool, similar to the following example:

    $ podman build -f plainbundle.Dockerfile -t \
        quay.io/<organization_name>/<repository_name>:<image_tag> . 1
    1
    Use an image tag that references a repository where you have push access privileges.
  3. Push the image to your remote registry by running the following command:

    $ podman push quay.io/<organization_name>/<repository_name>:<image_tag>

7.4.3. Creating a file-based catalog

If you do not have a file-based catalog, you must perform the following steps to initialize the catalog.

Procedure

  1. Create a directory for the catalog by running the following command:

    $ mkdir <catalog_dir>
  2. Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command from the directory that contains the catalog directory that you created in the previous step:

    $ opm generate dockerfile <catalog_dir> \
        -i registry.redhat.io/openshift4/ose-operator-registry:v4.14 1
    1
    Specify the official Red Hat base image by using the -i flag; otherwise, the Dockerfile uses the default upstream image.
    Note

    The generated Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:

    Example directory structure

    .
    ├── <catalog_dir>
    └── <catalog_dir>.Dockerfile
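
    The generated Dockerfile is similar to the following example. The exact contents vary by opm version, so treat this as an illustrative sketch:

    Example <catalog_dir>.Dockerfile

    # The base image is expected to contain /bin/opm (with a serve subcommand)
    # and /bin/grpc_health_probe
    FROM registry.redhat.io/openshift4/ose-operator-registry:v4.14

    # Configure the entrypoint and command
    ENTRYPOINT ["/bin/opm"]
    CMD ["serve", "/configs", "--cache-dir=/tmp/cache"]

    # Copy the declarative config root into the image at /configs and pre-populate the serve cache
    ADD <catalog_dir> /configs
    RUN ["/bin/opm", "serve", "/configs", "--cache-dir=/tmp/cache", "--cache-only"]

    # Label the location of the declarative config root directory in the image
    LABEL operators.operatorframework.io.index.configs.v1=/configs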

  3. Populate the catalog with the package definition for your extension by running the opm init command:

    $ opm init <extension_name> \
        --output json \
        > <catalog_dir>/index.json

    This command generates an olm.package declarative config blob in the specified catalog configuration file.

7.4.4. Adding a plain bundle to a file-based catalog

The opm render command does not support adding plain bundles to catalogs. You must manually add plain bundles to your file-based catalog, as shown in the following procedure.

Procedure

  1. Verify that the index.json or index.yaml file for your catalog is similar to the following example:

    Example <catalog_dir>/index.json file

    {
        "schema": "olm.package",
        "name": "<extension_name>",
        "defaultChannel": ""
    }

  2. To create an olm.bundle blob, edit your index.json or index.yaml file, similar to the following example:

    Example <catalog_dir>/index.json file with olm.bundle blob

    {
        "schema": "olm.bundle",
        "name": "<extension_name>.v<version>",
        "package": "<extension_name>",
        "image": "quay.io/<organization_name>/<repository_name>:<image_tag>",
        "properties": [
            {
                "type": "olm.package",
                "value": {
                    "packageName": "<extension_name>",
                    "version": "<bundle_version>"
                }
            },
            {
                "type": "olm.bundle.mediatype",
                "value": "plain+v0"
            }
        ]
    }

  3. To create an olm.channel blob, edit your index.json or index.yaml file, similar to the following example:

    Example <catalog_dir>/index.json file with olm.channel blob

    {
        "schema": "olm.channel",
        "name": "<desired_channel_name>",
        "package": "<extension_name>",
        "entries": [
            {
                "name": "<extension_name>.v<version>"
            }
        ]
    }

Verification

  1. Open your index.json or index.yaml file and ensure it is similar to the following example:

    Example <catalog_dir>/index.json file

    {
        "schema": "olm.package",
        "name": "example-extension",
        "defaultChannel": "preview"
    }
    {
        "schema": "olm.bundle",
        "name": "example-extension.v0.0.1",
        "package": "example-extension",
        "image": "quay.io/example-org/example-extension-bundle:v0.0.1",
        "properties": [
            {
                "type": "olm.package",
                "value": {
                    "packageName": "example-extension",
                    "version": "0.0.1"
                }
            },
            {
                "type": "olm.bundle.mediatype",
                "value": "plain+v0"
            }
        ]
    }
    {
        "schema": "olm.channel",
        "name": "preview",
        "package": "example-extension",
        "entries": [
            {
                "name": "example-extension.v0.0.1"
            }
        ]
    }

  2. Validate your catalog by running the following command:

    $ opm validate <catalog_dir>
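
    If the catalog is valid, the command produces no output and exits with a zero status. If validation fails, the command prints the errors that it finds.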

7.4.5. Building and publishing a file-based catalog

Procedure

  1. Build your file-based catalog as an image by running the following command:

    $ podman build -f <catalog_dir>.Dockerfile -t \
        quay.io/<organization_name>/<repository_name>:<image_tag> .
  2. Push your catalog image by running the following command:

    $ podman push quay.io/<organization_name>/<repository_name>:<image_tag>