Chapter 4. Catalogs


4.1. File-based catalogs

Important

Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Operator Lifecycle Manager (OLM) v1 in OpenShift Container Platform supports file-based catalogs for discovering and sourcing cluster extensions, including Operators, on a cluster.

Important

Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. (OCPBUGS-36364)

4.1.1. Highlights

File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). The format is a plain-text (JSON or YAML), declarative configuration evolution of the earlier SQLite database format, and it is fully backward compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility.

Editing

With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI.
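
For example, a maintainer could change the default channel of a package by updating its olm.package blob with a jq filter similar to the following sketch. The example-operator package name, the stable channel, and the file path are illustrative:

$ jq 'if .schema == "olm.package" and .name == "example-operator" then .defaultChannel = "stable" else . end' \
    catalog/example-operator/index.json > index.json.updated

After reviewing the output, you can replace the original file with the updated file and run the opm validate command to confirm that the catalog is still valid.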

This editability enables the following features and user-defined extensions:

  • Promoting an existing bundle to a new channel
  • Changing the default channel of a package
  • Custom algorithms for adding, updating, and removing upgrade edges

Composability

File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB. A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it.
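
A minimal sketch of that composition, assuming catalogA and catalogB are valid file-based catalogs in the current working directory:

$ mkdir catalogC
$ cp -r catalogA catalogB catalogC/
$ opm validate catalogC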

This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these.

Note

Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found.

Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users.

Extensibility

The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations.

For example, a tool could translate a high-level API, such as (mode=semver), down to the low-level, file-based catalog format for upgrade edges. Alternatively, a catalog maintainer might need to customize all of their bundle metadata by adding a new property to bundles that meet certain criteria.
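
For instance, a hypothetical high-level input that only lists bundles and requests semver-ordered upgrades could be expanded by custom tooling into explicit low-level channel entries. Both snippets are illustrative:

Hypothetical high-level input

package: example-operator
channel: stable
mode: semver
bundles:
  - example-operator.v1.0.0
  - example-operator.v1.0.1
  - example-operator.v1.1.0

Possible low-level expansion

---
schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v1.0.0
  - name: example-operator.v1.0.1
    replaces: example-operator.v1.0.0
  - name: example-operator.v1.1.0
    replaces: example-operator.v1.0.1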

While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well.

4.1.2. Directory structure

File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur.

Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files.

Example .indexignore file

# Ignore everything except non-object .json and .yaml files
**/*
!*.json
!*.yaml
**/objects/*.json
**/objects/*.yaml

Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package’s file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format.

Basic recommended structure

catalog
├── packageA
│   └── index.yaml
├── packageB
│   ├── .indexignore
│   ├── index.yaml
│   └── objects
│       └── packageB.v0.1.0.clusterserviceversion.yaml
└── packageC
    ├── index.json
    └── deprecations.yaml

This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog can also be included in a parent catalog by copying it into the parent catalog’s root directory.

4.1.3. Schemas

File-based catalogs use a format, based on the CUE language specification, that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to:

_Meta schema

_Meta: {
  // schema is required and must be a non-empty string
  schema: string & !=""

  // package is optional, but if it's defined, it must be a non-empty string
  package?: string & !=""

  // properties is optional, but if it's defined, it must be a list of 0 or more properties
  properties?: [... #Property]
}

#Property: {
  // type is required
  type: string & !=""

  // value is required, and it must not be null
  value: !=null
}

Note

No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE.

An Operator Lifecycle Manager (OLM) catalog currently uses three schemas (olm.package, olm.channel, and olm.bundle), which correspond to OLM’s existing package and bundle concepts.

Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs.
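
The following sketch shows a minimal set of these blobs for a hypothetical example-operator package in YAML; the package, channel, bundle, and image names are illustrative:

---
schema: olm.package
name: example-operator
defaultChannel: stable
---
schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v0.1.0
---
schema: olm.bundle
package: example-operator
name: example-operator.v0.1.0
image: example.com/example-inc/example-operator-bundle:v0.1.0
properties:
  - type: olm.package
    value:
      packageName: example-operator
      version: 0.1.0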

Note

All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own.

4.1.3.1. olm.package schema

The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon.

Example 4.1. olm.package schema

#Package: {
  schema: "olm.package"

  // Package name
  name: string & !=""

  // A description of the package
  description?: string

  // The package's default channel
  defaultChannel: string & !=""

  // An optional icon
  icon?: {
    base64data: string
    mediatype:  string
  }
}

4.1.3.2. olm.channel schema

The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade edges for those bundles.

A bundle can be included as an entry in multiple olm.channel blobs, but it can have only one entry per channel.

It is valid for an entry’s replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads.
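
For example, the same bundle can be listed in both a candidate channel and a stable channel of a package, and its replaces value can name a bundle that is not present in either entry list. The names below are illustrative:

---
schema: olm.channel
package: example-operator
name: candidate
entries:
  - name: example-operator.v1.3.0
    replaces: example-operator.v1.2.0
---
schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v1.3.0
    replaces: example-operator.v1.2.0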

Example 4.2. olm.channel schema

#Channel: {
  schema: "olm.channel"
  package: string & !=""
  name: string & !=""
  entries: [...#ChannelEntry]
}

#ChannelEntry: {
  // name is required. It is the name of an `olm.bundle` that
  // is present in the channel.
  name: string & !=""

  // replaces is optional. It is the name of bundle that is replaced
  // by this entry. It does not have to be present in the entry list.
  replaces?: string & !=""

  // skips is optional. It is a list of bundle names that are skipped by
  // this entry. The skipped bundles do not have to be present in the
  // entry list.
  skips?: [...string & !=""]

  // skipRange is optional. It is the semver range of bundle versions
  // that are skipped by this entry.
  skipRange?: string & !=""
}
Warning

When using the skipRange field, the skipped Operator versions are pruned from the update graph and are no longer installable by users with the spec.startingCSV property of Subscription objects.

You can update an Operator incrementally while keeping previously installed versions available to users for future installation by using both the skipRange and replaces fields. Ensure that the replaces field points to the immediately preceding version of the Operator in question.
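
A channel sketch, with illustrative names and versions, that follows this guidance: the v1.3.1 entry replaces the immediately preceding v1.3.0 release, keeping it in the update graph, while skipRange lets installations of older 1.2.z versions update directly to v1.3.1:

---
schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v1.3.0
  - name: example-operator.v1.3.1
    replaces: example-operator.v1.3.0
    skipRange: '>=1.2.0 <1.3.1'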

4.1.3.3. olm.bundle schema

Example 4.3. olm.bundle schema

#Bundle: {
  schema: "olm.bundle"
  package: string & !=""
  name: string & !=""
  image: string & !=""
  properties: [...#Property]
  relatedImages?: [...#RelatedImage]
}

#Property: {
  // type is required
  type: string & !=""

  // value is required, and it must not be null
  value: !=null
}

#RelatedImage: {
  // image is the image reference
  image: string & !=""

  // name is an optional descriptive name for an image that
  // helps identify its purpose in the context of the bundle
  name?: string & !=""
}

4.1.3.4. olm.deprecations schema

The optional olm.deprecations schema defines deprecation information for packages, bundles, and channels in a catalog. Operator authors can use this schema to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog.

When this schema is defined, the OpenShift Container Platform web console displays warning badges for the affected elements of the Operator, including any custom deprecation messages, on both the pre- and post-installation pages of the OperatorHub.

An olm.deprecations schema entry contains one or more of the following reference types, which indicates the deprecation scope. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object.

Table 4.1. Deprecation reference types
Type          Scope                           Status condition
olm.package   Represents the entire package   PackageDeprecated
olm.channel   Represents one channel          ChannelDeprecated
olm.bundle    Represents one bundle version   BundleDeprecated

Each reference type has its own requirements, as detailed in the following example.

Example 4.4. Example olm.deprecations schema with each reference type

schema: olm.deprecations
package: my-operator 1
entries:
  - reference:
      schema: olm.package 2
    message: | 3
      The 'my-operator' package is end of life. Please use the
      'my-operator-new' package for support.
  - reference:
      schema: olm.channel
      name: alpha 4
    message: |
      The 'alpha' channel is no longer supported. Please switch to the
      'stable' channel.
  - reference:
      schema: olm.bundle
      name: my-operator.v1.68.0 5
    message: |
      my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and
      install my-operator.v1.72.0 for support.
1
Each deprecation schema must have a package value, and that package reference must be unique across the catalog. There must not be an associated name field.
2
The olm.package schema must not include a name field, because it is determined by the package field defined earlier in the schema.
3
All message fields, for any reference type, must be a non-zero length and represented as an opaque text blob.
4
The name field for the olm.channel schema is required.
5
The name field for the olm.bundle schema is required.
Note

The deprecation feature does not consider overlapping deprecation, for example package versus channel versus bundle.

Operator authors can save olm.deprecations schema entries as a deprecations.yaml file in the same directory as the package’s index.yaml file:

Example directory structure for a catalog with deprecations

my-catalog
└── my-operator
    ├── index.yaml
    └── deprecations.yaml

4.1.4. Properties

Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML.

OLM defines a handful of property types, again using the reserved olm.* prefix.
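
For example, a catalog maintainer could attach a custom property with a domain-scoped type to a bundle; the type name and value below are purely illustrative:

properties:
  - type: example.com/support-tier
    value:
      tier: premium
      contact: support@example.com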

4.1.4.1. olm.package property

The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle’s first-class package field, and the version field must be a valid semantic version.

Example 4.5. olm.package property

#PropertyPackage: {
  type: "olm.package"
  value: {
    packageName: string & !=""
    version: string & !=""
  }
}

4.1.4.2. olm.gvk property

The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations.

Example 4.6. olm.gvk property

#PropertyGVK: {
  type: "olm.gvk"
  value: {
    group: string & !=""
    version: string & !=""
    kind: string & !=""
  }
}

4.1.4.3. olm.package.required property

The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range.

Example 4.7. olm.package.required property

#PropertyPackageRequired: {
  type: "olm.package.required"
  value: {
    packageName: string & !=""
    versionRange: string & !=""
  }
}

4.1.4.4. olm.gvk.required property

The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations.

Example 4.8. olm.gvk.required property

#PropertyGVKRequired: {
  type: "olm.gvk.required"
  value: {
    group: string & !=""
    version: string & !=""
    kind: string & !=""
  }
}
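
A bundle that depends on another package and on an API provided elsewhere might declare both requirements in its properties list, as in the following sketch with illustrative values:

properties:
  - type: olm.package.required
    value:
      packageName: example-dependency-operator
      versionRange: '>=1.0.0 <2.0.0'
  - type: olm.gvk.required
    value:
      group: example-group.example.io
      version: v1alpha1
      kind: MyRequiredObject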

4.1.5. Example catalog

With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog’s root directory.

There are many possible ways to build a file-based catalog; the following steps outline a simple approach:

  1. Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog:

    Example catalog configuration file

    name: community-operators
    repo: quay.io/community-operators/catalog
    tag: latest
    references:
    - name: etcd-operator
      image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
    - name: prometheus-operator
      image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317

  2. Run a script that parses the configuration file and creates a new catalog from its references:

    Example script

    #!/usr/bin/env bash

    # Read the catalog name from the configuration file and create the root catalog directory
    name=$(yq eval '.name' catalog.yaml)
    mkdir "$name"

    # Create one subdirectory per referenced Operator package
    yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir

    # Render each referenced index image into its package's index.yaml file
    for l in $(yq e '.name as $catalog | .references[] | .image + "|" + $catalog + "/" + .name + "/index.yaml"' catalog.yaml); do
      image=$(echo "$l" | cut -d'|' -f1)
      file=$(echo "$l" | cut -d'|' -f2)
      opm render "$image" > "$file"
    done

    # Generate a Dockerfile for the catalog, then build and push the catalog image
    opm generate dockerfile "$name"
    indexImage=$(yq eval '.repo + ":" + .tag' catalog.yaml)
    docker build -t "$indexImage" -f "$name.Dockerfile" .
    docker push "$indexImage"

4.1.6. Guidelines

Consider the following guidelines when maintaining file-based catalogs.

4.1.6.1. Immutable bundles

The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable.

If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade edge from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog.

However, there are some cases where a change in the catalog metadata is preferred:

  • Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob.
  • New upgrade edges: If you release a new 1.2.z bundle version, for example 1.2.4, but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4.
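
A sketch of that change with illustrative names: the existing 1.3.0 channel entry gains a skips entry for the late 1.2.4 release:

---
schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v1.2.3
  - name: example-operator.v1.2.4
    replaces: example-operator.v1.2.3
  - name: example-operator.v1.3.0
    replaces: example-operator.v1.2.3
    skips:
      - example-operator.v1.2.4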

4.1.6.2. Source control

Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps:

  1. Update the source-controlled catalog directory with a new commit.
  2. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version>, so that users can receive updates to a catalog as they become available.

4.1.7. CLI usage

For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs.

For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools.

4.1.8. Automation

Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks:

  • Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package’s image reference.
  • Check that the catalog updates pass the opm validate command.
  • Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed.
  • Automatically merge PRs that pass the previous checks.
  • Automatically rebuild and republish the catalog image.

4.2. Red Hat-provided catalogs

Important

Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default.

Important

Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. (OCPBUGS-36364)

4.2.1. About Red Hat-provided Operator catalogs

The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces.

The following Operator catalogs are distributed by Red Hat:

redhat-operators
  Index image: registry.redhat.io/redhat/redhat-operator-index:v4.17
  Description: Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.

certified-operators
  Index image: registry.redhat.io/redhat/certified-operator-index:v4.17
  Description: Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.

redhat-marketplace
  Index image: registry.redhat.io/redhat/redhat-marketplace-index:v4.17
  Description: Certified software that can be purchased from Red Hat Marketplace.

community-operators
  Index image: registry.redhat.io/redhat/community-operator-index:v4.17
  Description: Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support.

During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources is updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from:

registry.redhat.io/redhat/redhat-operator-index:v4.8

to:

registry.redhat.io/redhat/redhat-operator-index:v4.9

4.3. Managing catalogs

Important

Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Cluster administrators can add catalogs, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.

You can manage catalogs and extensions declaratively from the CLI by using custom resources (CRs).

File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). The format is a plain-text (JSON or YAML), declarative configuration evolution of the earlier SQLite database format, and it is fully backward compatible.

Important

Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API.

If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.

4.3.1. About catalogs in OLM v1

You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) v1 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.

Important

If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons:

  • If multiple catalogs are installed on a cluster, Operator Lifecycle Manager (OLM) v1 does not include a mechanism to specify a catalog when you install an Operator or extension.
  • OLM v1 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages.


4.3.2. Red Hat-provided Operator catalogs in OLM v1

Operator Lifecycle Manager (OLM) v1 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show how to create catalog resources for OLM v1.

Important
  • Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. (OCPBUGS-36364)

  • If you want to use a catalog that is hosted on a private registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace.

    For more information, see "Creating a pull secret for catalogs hosted on a secure registry".

Example Red Hat Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: ClusterCatalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.17
      pullSecret: <pull_secret_name>
      pollInterval: <poll_interval_duration> 1

1
Specify the interval for polling the remote registry for newer image digests. The default value is 24h. Valid units include seconds (s), minutes (m), and hours (h). To disable polling, set a zero value, such as 0s.

Example Certified Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: ClusterCatalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.17
      pullSecret: <pull_secret_name>
      pollInterval: 24h

Example Community Operators catalog

apiVersion: catalogd.operatorframework.io/v1alpha1
kind: ClusterCatalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.17
      pullSecret: <pull_secret_name>
      pollInterval: 24h

The following command adds a catalog to your cluster:

Command syntax

$ oc apply -f <catalog_name>.yaml 1

1
Specifies the catalog CR, such as redhat-operators.yaml.

4.3.3. Creating a pull secret for catalogs hosted on a secure registry

If you want to use a catalog that is hosted on a private registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace.

Catalogd cannot read global pull secrets from OpenShift Container Platform clusters. Catalogd can read references to secrets only in the namespace where it is deployed.

Important

Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. (OCPBUGS-36364)

Prerequisites

  • Login credentials for the secure registry
  • Docker or Podman installed on your workstation

Procedure

  • If you already have a .dockercfg file with login credentials for the secure registry, create a pull secret by running the following command:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockercfg=<file_path>/.dockercfg \
        --type=kubernetes.io/dockercfg \
        --namespace=openshift-catalogd

    Example 4.9. Example command

    $ oc create secret generic redhat-cred \
        --from-file=.dockercfg=/home/<username>/.dockercfg \
        --type=kubernetes.io/dockercfg \
        --namespace=openshift-catalogd
  • If you already have a $HOME/.docker/config.json file with login credentials for the secure registry, create a pull secret by running the following command:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockerconfigjson=<file_path>/.docker/config.json \
        --type=kubernetes.io/dockerconfigjson \
        --namespace=openshift-catalogd

    Example 4.10. Example command

    $ oc create secret generic redhat-cred \
        --from-file=.dockerconfigjson=/home/<username>/.docker/config.json \
        --type=kubernetes.io/dockerconfigjson \
        --namespace=openshift-catalogd
  • If you do not have a Docker configuration file with login credentials for the secure registry, create a pull secret by running the following command:

    $ oc create secret docker-registry <pull_secret_name> \
        --docker-server=<registry_server> \
        --docker-username=<username> \
        --docker-password=<password> \
        --docker-email=<email> \
        --namespace=openshift-catalogd

    Example 4.11. Example command

    $ oc create secret docker-registry redhat-cred \
        --docker-server=registry.redhat.io \
        --docker-username=username \
        --docker-password=password \
        --docker-email=user@example.com \
        --namespace=openshift-catalogd

4.3.4. Adding a catalog to a cluster

To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster.

Important

Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. (OCPBUGS-36364)

Prerequisites

  • If you want to use a catalog that is hosted on a private registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace.

    Catalogd cannot read global pull secrets from OpenShift Container Platform clusters. Catalogd can read references to secrets only in the namespace where it is deployed.

Procedure

  1. Create a catalog custom resource (CR), similar to the following example:

    Example redhat-operators.yaml

    apiVersion: catalogd.operatorframework.io/v1alpha1
    kind: ClusterCatalog
    metadata:
      name: redhat-operators
    spec:
      source:
        type: image
        image:
          ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 1
          pullSecret: <pull_secret_name> 2
          pollInterval: <poll_interval_duration> 3

    1
    Specify the catalog’s image in the spec.source.image field.
    2
    If your catalog is hosted on a secure registry, such as registry.redhat.io, you must create a pull secret scoped to the openshift-catalogd namespace.
    3
    Specify the interval for polling the remote registry for newer image digests. The default value is 24h. Valid units include seconds (s), minutes (m), and hours (h). To disable polling, set a zero value, such as 0s.
  2. Add the catalog to your cluster by running the following command:

    $ oc apply -f redhat-operators.yaml

    Example output

    clustercatalog.catalogd.operatorframework.io/redhat-operators created

Verification

  • Run the following commands to verify the status of your catalog:

    1. Check if your catalog is available by running the following command:

      $ oc get clustercatalog

      Example output

      NAME                  AGE
      redhat-operators      20s

    2. Check the status of your catalog by running the following command:

      $ oc describe clustercatalog

      Example output

      Name:         redhat-operators
      Namespace:
      Labels:       <none>
      Annotations:  <none>
      API Version:  catalogd.operatorframework.io/v1alpha1
      Kind:         ClusterCatalog
      Metadata:
        Creation Timestamp:  2024-06-10T17:34:53Z
        Finalizers:
          catalogd.operatorframework.io/delete-server-cache
        Generation:        1
        Resource Version:  46075
        UID:               83c0db3c-a553-41da-b279-9b3cddaa117d
      Spec:
        Source:
          Image:
            Pull Secret:  redhat-cred
            Ref:          registry.redhat.io/redhat/redhat-operator-index:v4.17
          Type:           image
      Status: 1
        Conditions:
          Last Transition Time:  2024-06-10T17:35:15Z
          Message:
          Reason:                UnpackSuccessful 2
          Status:                True
          Type:                  Unpacked
        Content URL:             https://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json
        Observed Generation:     1
        Phase:                   Unpacked 3
        Resolved Source:
          Image:
            Last Poll Attempt:  2024-06-10T17:35:10Z
            Ref:                registry.redhat.io/redhat/redhat-operator-index:v4.17
            Resolved Ref:       registry.redhat.io/redhat/redhat-operator-index@sha256:f2ccc079b5e490a50db532d1dc38fd659322594dcf3e653d650ead0e862029d9 4
          Type:                 image
      Events:                   <none>

      1
      Describes the status of the catalog.
      2
      Displays the reason the catalog is in the current state.
      3
      Displays the phase of the installation process.
      4
      Displays the image reference of the catalog.

4.3.5. Deleting a catalog

You can delete a catalog by deleting its custom resource (CR).

Prerequisites

  • You have a catalog installed.

Procedure

  • Delete a catalog by running the following command:

    $ oc delete clustercatalog <catalog_name>

    Example output

    clustercatalog.catalogd.operatorframework.io "my-catalog" deleted

Verification

  • Verify the catalog is deleted by running the following command:

    $ oc get clustercatalog
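
    If no other catalogs remain on the cluster, the command returns output similar to the following:

    Example output

    No resources found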

4.4. Creating catalogs

Important

Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Catalog maintainers can create new catalogs in the file-based catalog format for use with Operator Lifecycle Manager (OLM) v1 on OpenShift Container Platform.

Important

Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. (OCPBUGS-36364)

4.4.1. Creating a file-based catalog image

You can use the opm CLI to create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format.

Prerequisites

  • You have installed the opm CLI.
  • You have podman version 1.9.3+.
  • A bundle image is built and pushed to a registry that supports Docker v2-2.

Procedure

  1. Initialize the catalog:

    1. Create a directory for the catalog by running the following command:

      $ mkdir <catalog_dir>
    2. Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command:

      $ opm generate dockerfile <catalog_dir> \
          -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17 1
      1
      Specify the official Red Hat base image by using the -i flag; otherwise, the Dockerfile uses the default upstream image.

      The Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:

      Example directory structure

      . 1
      ├── <catalog_dir> 2
      └── <catalog_dir>.Dockerfile 3

      1
      Parent directory
      2
      Catalog directory
      3
      Dockerfile generated by the opm generate dockerfile command
    3. Populate the catalog with the package definition for your Operator by running the opm init command:

      $ opm init <operator_name> \ 1
          --default-channel=preview \ 2
          --description=./README.md \ 3
          --icon=./operator-icon.svg \ 4
          --output yaml \ 5
          > <catalog_dir>/index.yaml 6
      1
      Operator, or package, name
      2
      Channel that subscriptions default to if unspecified
      3
      Path to the Operator’s README.md or other documentation
      4
      Path to the Operator’s icon
      5
      Output format: JSON or YAML
      6
      Path for creating the catalog configuration file

      This command generates an olm.package declarative config blob in the specified catalog configuration file.
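
      For example, with the hypothetical values shown above, the generated blob might look similar to the following sketch:

      defaultChannel: preview
      description: |
        <contents_of_readme>
      icon:
        base64data: <base64_encoded_icon>
        mediatype: image/svg+xml
      name: <operator_name>
      schema: olm.package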

  2. Add a bundle to the catalog by running the opm render command:

    $ opm render <registry>/<namespace>/<bundle_image_name>:<tag> \ 1
        --output=yaml \
        >> <catalog_dir>/index.yaml 2
    1
    Pull spec for the bundle image
    2
    Path to the catalog configuration file
    Note

    Channels must contain at least one bundle.

  3. Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <catalog_dir>/index.yaml file:

    Example channel entry

    ---
    schema: olm.channel
    package: <operator_name>
    name: preview
    entries:
      - name: <operator_name>.v0.1.0 1

    1
    Ensure that you include the period (.) after <operator_name> but before the v in the version. Otherwise, the entry fails to pass the opm validate command.
  4. Validate the file-based catalog:

    1. Run the opm validate command against the catalog directory:

      $ opm validate <catalog_dir>
    2. Check that the error code is 0:

      $ echo $?

      Example output

      0

  5. Build the catalog image by running the podman build command:

    $ podman build . \
        -f <catalog_dir>.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>
  6. Push the catalog image to a registry:

    1. If required, authenticate with your target registry by running the podman login command:

      $ podman login <registry>
    2. Push the catalog image by running the podman push command:

      $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>


4.4.2. Updating or filtering a file-based catalog image

You can use the opm CLI to update or filter a catalog image that uses the file-based catalog format. By extracting the contents of an existing catalog image, you can modify the catalog as needed, for example:

  • Adding packages
  • Removing packages
  • Updating existing package entries
  • Detailing deprecation messages per package, channel, and bundle

You can then rebuild the image as an updated version of the catalog.

Note

Alternatively, if you already have a catalog image on a mirror registry, you can use the oc-mirror CLI plugin to automatically prune any removed images from an updated source version of that catalog image while mirroring it to the target registry.

For more information about the oc-mirror plugin and this use case, see the "Keeping your mirror registry content updated" section, and specifically the "Pruning images" subsection, of "Mirroring images for a disconnected installation using the oc-mirror plugin".

Prerequisites

  • You have the following on your workstation:

    • The opm CLI.
    • podman version 1.9.3+.
    • A file-based catalog image.
    • A catalog directory structure recently initialized on your workstation related to this catalog.

      If you do not have an initialized catalog directory, create the directory and generate the Dockerfile. For more information, see the "Initialize the catalog" step from the "Creating a file-based catalog image" procedure.

Procedure

  1. Extract the contents of the catalog image in YAML format to an index.yaml file in your catalog directory:

    $ opm render <registry>/<namespace>/<catalog_image_name>:<tag> \
        -o yaml > <catalog_dir>/index.yaml
    Note

    Alternatively, you can use the -o json flag to output in JSON format.

  2. Modify the contents of the resulting index.yaml file to your specifications:

    Important

    After a bundle has been published in a catalog, assume that one of your users has installed it. Ensure that all previously published bundles in a catalog have an update path to the current or newer channel head to avoid stranding users that have that version installed.

    • To add an Operator, follow the steps for creating package, bundle, and channel entries in the "Creating a file-based catalog image" procedure.
    • To remove an Operator, delete the set of olm.package, olm.channel, and olm.bundle blobs that relate to the package. The following example shows a set that must be deleted to remove the example-operator package from the catalog:

      Example 4.12. Example removed entries

      ---
      defaultChannel: release-2.7
      icon:
        base64data: <base64_string>
        mediatype: image/svg+xml
      name: example-operator
      schema: olm.package
      ---
      entries:
      - name: example-operator.v2.7.0
        skipRange: '>=2.6.0 <2.7.0'
      - name: example-operator.v2.7.1
        replaces: example-operator.v2.7.0
        skipRange: '>=2.6.0 <2.7.1'
      - name: example-operator.v2.7.2
        replaces: example-operator.v2.7.1
        skipRange: '>=2.6.0 <2.7.2'
      - name: example-operator.v2.7.3
        replaces: example-operator.v2.7.2
        skipRange: '>=2.6.0 <2.7.3'
      - name: example-operator.v2.7.4
        replaces: example-operator.v2.7.3
        skipRange: '>=2.6.0 <2.7.4'
      name: release-2.7
      package: example-operator
      schema: olm.channel
      ---
      image: example.com/example-inc/example-operator-bundle@sha256:<digest>
      name: example-operator.v2.7.0
      package: example-operator
      properties:
      - type: olm.gvk
        value:
          group: example-group.example.io
          kind: MyObject
          version: v1alpha1
      - type: olm.gvk
        value:
          group: example-group.example.io
          kind: MyOtherObject
          version: v1beta1
      - type: olm.package
        value:
          packageName: example-operator
          version: 2.7.0
      - type: olm.bundle.object
        value:
          data: <base64_string>
      - type: olm.bundle.object
        value:
          data: <base64_string>
      relatedImages:
      - image: example.com/example-inc/example-related-image@sha256:<digest>
        name: example-related-image
      schema: olm.bundle
      ---
    • To add or update deprecation messages for an Operator, ensure there is a deprecations.yaml file in the same directory as the package’s index.yaml file. For information on the deprecations.yaml file format, see "olm.deprecations schema".
  3. Save your changes.
  4. Validate the catalog:

    $ opm validate <catalog_dir>
  5. Rebuild the catalog:

    $ podman build . \
        -f <catalog_dir>.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>
  6. Push the updated catalog image to a registry:

    $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>

Verification

  1. In the web console, navigate to the OperatorHub configuration resource in the Administration → Cluster Settings → Configuration page.
  2. Add the catalog source or update the existing catalog source to use the pull spec for your updated catalog image.

    For more information, see "Adding a catalog source to a cluster" in the "Additional resources" of this section.

  3. After the catalog source is in a READY state, navigate to the Operators → OperatorHub page and check that the changes you made are reflected in the list of Operators.