Chapter 4. Catalogs
4.1. File-based catalogs
Operator Lifecycle Manager (OLM) v1 in OpenShift Container Platform supports file-based catalogs for discovering and sourcing cluster extensions, including Operators, on a cluster.
4.1.1. Highlights
File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). This format is a plain text-based (JSON or YAML), declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility.
- Editing
With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI; a short example follows the list below. This editability enables the following features and user-defined extensions:
- Promoting an existing bundle to a new channel
- Changing the default channel of a package
- Custom algorithms for adding, updating, and removing upgrade paths
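For example, a minimal sketch of the second item in this list, changing a package's default channel with jq. It assumes a JSON-formatted catalog file named index.json and a channel named stable; both names are illustrative:

$ jq 'if .schema == "olm.package" then .defaultChannel = "stable" else . end' index.json > index.json.new

Because file-based catalog JSON files are streams of objects, the filter is applied to each blob in turn and only olm.package blobs are modified.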
- Composability
File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB. A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it.

This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these.
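A minimal sketch of that composition, assuming catalogA and catalogB are sibling directories in the current working directory:

$ mkdir catalogC
$ cp -r catalogA catalogB catalogC/

Each subdirectory remains a self-contained catalog, so the combined directory is itself a valid catalog.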
Note: Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found.

Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users.
- Extensibility
The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations.
For example, a tool could translate a high-level API, such as (mode=semver), down to the low-level, file-based catalog format for upgrade paths. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet certain criteria.

While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well.
4.1.2. Directory structure
File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur.

Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files.

Example .indexignore file
# Ignore everything except non-object .json and .yaml files
**/*
!*.json
!*.yaml
**/objects/*.json
**/objects/*.yaml
Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package’s file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format.
Basic recommended structure
catalog
├── packageA
│   └── index.yaml
├── packageB
│   ├── .indexignore
│   ├── index.yaml
│   └── objects
│       └── packageB.v0.1.0.clusterserviceversion.yaml
└── packageC
    ├── index.json
    └── deprecations.yaml
This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog can also be included in a parent catalog by copying it into the parent catalog’s root directory.
4.1.3. Schemas
File-based catalogs use a format, based on the CUE language specification, that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to:

_Meta schema
_Meta: {
  // schema is required and must be a non-empty string
  schema: string & !=""

  // package is optional, but if it's defined, it must be a non-empty string
  package?: string & !=""

  // properties is optional, but if it's defined, it must be a list of 0 or more properties
  properties?: [... #Property]
}

#Property: {
  // type is required
  type: string & !=""

  // value is required, and it must not be null
  value: !=null
}
No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE.
An Operator Lifecycle Manager (OLM) catalog currently uses three schemas (olm.package, olm.channel, and olm.bundle), which correspond to OLM's existing package and bundle concepts.

Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs.
All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own.
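For illustration, a hypothetical custom blob that satisfies the _Meta schema while using a domain-based prefix; the schema name, package, and extra field are all invented for this sketch:

schema: example.com.release-notes
package: example-operator
url: https://example.com/release-notes/v1.0.0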
4.1.3.1. olm.package schema
The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon.

Example 4.1. olm.package schema
#Package: {
  schema: "olm.package"

  // Package name
  name: string & !=""

  // A description of the package
  description?: string

  // The package's default channel
  defaultChannel: string & !=""

  // An optional icon
  icon?: {
    base64data: string
    mediatype:  string
  }
}
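As a concrete illustration, a minimal olm.package blob in the YAML form used throughout this chapter; the package name and channel are hypothetical:

schema: olm.package
name: example-operator
defaultChannel: stable
description: Manages example workloads on the cluster.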
4.1.3.2. olm.channel schema
The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade paths for those bundles.

If a bundle entry represents an edge in multiple olm.channel blobs, it can only appear once per channel.

It is valid for an entry's replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads.

Example 4.2. olm.channel schema
#Channel: {
  schema:  "olm.channel"
  package: string & !=""
  name:    string & !=""
  entries: [...#ChannelEntry]
}

#ChannelEntry: {
  // name is required. It is the name of an `olm.bundle` that
  // is present in the channel.
  name: string & !=""

  // replaces is optional. It is the name of the bundle that is replaced
  // by this entry. It does not have to be present in the entry list.
  replaces?: string & !=""

  // skips is optional. It is a list of bundle names that are skipped by
  // this entry. The skipped bundles do not have to be present in the
  // entry list.
  skips?: [...string & !=""]

  // skipRange is optional. It is the semver range of bundle versions
  // that are skipped by this entry.
  skipRange?: string & !=""
}
When using the skipRange field, the skipped Operator versions are pruned from the update graph and are no longer installable by users with the spec.startingCSV property of Subscription objects.

You can update an Operator incrementally while keeping previously installed versions available to users for future installation by using both the skipRange and replaces fields. Ensure that the replaces field points to the immediate previous version of the Operator version in question.
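A minimal sketch of a channel entry that combines both fields; the package, channel, and version names are hypothetical:

schema: olm.channel
package: example-operator
name: stable
entries:
- name: example-operator.v1.2.0
  # 1.1.5 stays on the update graph as the direct predecessor.
  replaces: example-operator.v1.1.5
  # Earlier 1.1.z versions are pruned from the update graph.
  skipRange: '>=1.1.0 <1.1.5'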
4.1.3.3. olm.bundle schema
Example 4.3. olm.bundle schema
#Bundle: {
  schema:  "olm.bundle"
  package: string & !=""
  name:    string & !=""
  image:   string & !=""
  properties: [...#Property]
  relatedImages?: [...#RelatedImage]
}

#Property: {
  // type is required
  type: string & !=""

  // value is required, and it must not be null
  value: !=null
}

#RelatedImage: {
  // image is the image reference
  image: string & !=""

  // name is an optional descriptive name for an image that
  // helps identify its purpose in the context of the bundle
  name?: string & !=""
}
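For illustration, a minimal olm.bundle blob in YAML; the package, bundle name, and image reference are hypothetical. The required olm.package property is described in "Properties" later in this chapter:

schema: olm.bundle
package: example-operator
name: example-operator.v1.0.0
image: quay.io/example/example-operator-bundle:v1.0.0
properties:
- type: olm.package
  value:
    packageName: example-operator
    version: 1.0.0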
4.1.3.4. olm.deprecations schema
The optional olm.deprecations schema defines deprecation information for packages, bundles, and channels in a catalog. Operator authors can use this schema to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog.
When this schema is defined, the OpenShift Container Platform web console displays warning badges for the affected elements of the Operator, including any custom deprecation messages, on both the pre- and post-installation pages of the OperatorHub.
An olm.deprecations schema entry contains one or more of the following reference types, which indicate the deprecation scope. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object.
Type | Scope | Status condition
---|---|---
olm.package | Represents the entire package | PackageDeprecated
olm.channel | Represents one channel | ChannelDeprecated
olm.bundle | Represents one bundle version | BundleDeprecated
Each reference type has its own requirements, as detailed in the following example.
Example 4.4. Example olm.deprecations schema with each reference type

schema: olm.deprecations
package: my-operator 1
entries:
- reference:
    schema: olm.package 2
  message: | 3
    The 'my-operator' package is end of life. Please use the
    'my-operator-new' package for support.
- reference:
    schema: olm.channel
    name: alpha 4
  message: |
    The 'alpha' channel is no longer supported. Please switch to the
    'stable' channel.
- reference:
    schema: olm.bundle
    name: my-operator.v1.68.0 5
  message: |
    my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and
    install my-operator.v1.72.0 for support.
1 Each deprecation schema must have a package value, and that package reference must be unique across the catalog. There must not be an associated name field.
2 The olm.package schema must not include a name field, because it is determined by the package field defined earlier in the schema.
3 All message fields, for any reference type, must be a non-zero length and represented as an opaque text blob.
4 The name field for the olm.channel schema is required.
5 The name field for the olm.bundle schema is required.
The deprecation feature does not consider overlapping deprecations, for example, package versus channel versus bundle.
Operator authors can save olm.deprecations schema entries as a deprecations.yaml file in the same directory as the package's index.yaml file:
Example directory structure for a catalog with deprecations
my-catalog
└── my-operator
    ├── index.yaml
    └── deprecations.yaml
4.1.4. Properties
Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML.

OLM defines a handful of property types, again using the reserved olm.* prefix.
4.1.4.1. olm.package property
The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle's first-class package field, and the version field must be a valid semantic version.

Example 4.5. olm.package property
#PropertyPackage: {
  type: "olm.package"
  value: {
    packageName: string & !=""
    version:     string & !=""
  }
}
4.1.4.2. olm.gvk property
The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations.

Example 4.6. olm.gvk property
#PropertyGVK: {
  type: "olm.gvk"
  value: {
    group:   string & !=""
    version: string & !=""
    kind:    string & !=""
  }
}
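For illustration, what this property might look like in a bundle's properties list; the GVK is hypothetical:

- type: olm.gvk
  value:
    group: example.com
    version: v1alpha1
    kind: ExampleApp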
4.1.4.3. olm.package.required
The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range.

Example 4.7. olm.package.required property
#PropertyPackageRequired: {
  type: "olm.package.required"
  value: {
    packageName:  string & !=""
    versionRange: string & !=""
  }
}
4.1.4.4. olm.gvk.required
The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations.

Example 4.8. olm.gvk.required property
#PropertyGVKRequired: {
  type: "olm.gvk.required"
  value: {
    group:   string & !=""
    version: string & !=""
    kind:    string & !=""
  }
}
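Combining the two required-property types, a sketch of the properties list for a bundle that depends on another package and on a provided API; all names and version ranges are hypothetical:

properties:
- type: olm.package.required
  value:
    packageName: other-operator
    versionRange: '>=1.2.0 <2.0.0'
- type: olm.gvk.required
  value:
    group: cache.example.com
    version: v1
    kind: Memcached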
4.1.5. Example catalog
With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog’s root directory.
There are many possible ways to build a file-based catalog; the following steps outline a simple approach:
Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog:
Example catalog configuration file
name: community-operators
repo: quay.io/community-operators/catalog
tag: latest
references:
- name: etcd-operator
  image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
- name: prometheus-operator
  image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317
Run a script that parses the configuration file and creates a new catalog from its references:
Example script
name=$(yq eval '.name' catalog.yaml)
mkdir "$name"
yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir
for l in $(yq e '.name as $catalog | .references[] | .image + "|" + $catalog + "/" + .name + "/index.yaml"' catalog.yaml); do
  image=$(echo $l | cut -d'|' -f1)
  file=$(echo $l | cut -d'|' -f2)
  opm render "$image" > "$file"
done
opm generate dockerfile "$name"
indexImage=$(yq eval '.repo + ":" + .tag' catalog.yaml)
docker build -t "$indexImage" -f "$name.Dockerfile" .
docker push "$indexImage"
4.1.6. Guidelines
Consider the following guidelines when maintaining file-based catalogs.
4.1.6.1. Immutable bundles
The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable.
If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade path from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog.
However, there are some cases where a change in the catalog metadata is preferred:
- Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob.
- New upgrade paths: If you release a new 1.2.z bundle version, for example 1.2.4, but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4, as shown in the sketch after this list.
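A sketch of the resulting 1.3.0 channel entry; the bundle names are hypothetical:

entries:
- name: example-operator.v1.3.0
  replaces: example-operator.v1.2.3
  # 1.2.4 is broken; route users who installed it directly to 1.3.0.
  skips:
  - example-operator.v1.2.4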
4.1.6.2. Source control
Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps:
- Update the source-controlled catalog directory with a new commit.
- Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version>, so that users can receive updates to a catalog as they become available.
4.1.7. CLI usage
For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs.

For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools.
4.1.8. Automation
Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks:
- Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package’s image reference.
- Check that the catalog updates pass the opm validate command (see the example after this list).
- Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed.
- Automatically merge PRs that pass the previous checks.
- Automatically rebuild and republish the catalog image.
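For example, a CI step might run the validator against the rendered catalog directory; the path is illustrative:

$ opm validate ./catalog

The command exits with an error if the catalog contains duplicate packages, duplicate bundles, or other schema violations, which makes it suitable as a merge gate.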
4.2. Red Hat-provided catalogs
Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default.
4.2.1. About Red Hat-provided Operator catalogs
The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces.
The following Operator catalogs are distributed by Red Hat:
Catalog | Index image | Description
---|---|---
redhat-operators | registry.redhat.io/redhat/redhat-operator-index:v4.18 | Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.
certified-operators | registry.redhat.io/redhat/certified-operator-index:v4.18 | Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.
redhat-marketplace | registry.redhat.io/redhat/redhat-marketplace-index:v4.18 | Certified software that can be purchased from Red Hat Marketplace.
community-operators | registry.redhat.io/redhat/community-operator-index:v4.18 | Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support.
During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources is updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from:
registry.redhat.io/redhat/redhat-operator-index:v4.8
to:
registry.redhat.io/redhat/redhat-operator-index:v4.9
4.3. Managing catalogs
Cluster administrators can add catalogs, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.
You can manage catalogs and extensions declaratively from the CLI by using custom resources (CRs).
File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). This format is a plain text-based (JSON or YAML), declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible.
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API.
If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.
4.3.1. About catalogs in OLM v1
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) v1 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
4.3.2. Red Hat-provided Operator catalogs in OLM v1
Operator Lifecycle Manager (OLM) v1 includes the following Red Hat-provided Operator catalogs on the cluster by default. If you want to add an additional catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show the default catalogs installed on the cluster.
Red Hat Operators catalog
apiVersion: olm.operatorframework.io/v1
kind: ClusterCatalog
metadata:
name: openshift-redhat-operators
spec:
priority: -100
source:
image:
pollIntervalMinutes: <poll_interval_duration> 1
ref: registry.redhat.io/redhat/redhat-operator-index:v4.18
type: Image
1 Specify the interval in minutes for polling the remote registry for newer image digests. To disable polling, do not set the field.
Certified Operators catalog
apiVersion: olm.operatorframework.io/v1
kind: ClusterCatalog
metadata:
  name: openshift-certified-operators
spec:
  priority: -200
  source:
    image:
      pollIntervalMinutes: 10
      ref: registry.redhat.io/redhat/certified-operator-index:v4.18
    type: Image
Red Hat Marketplace catalog
apiVersion: olm.operatorframework.io/v1
kind: ClusterCatalog
metadata:
  name: openshift-redhat-marketplace
spec:
  priority: -300
  source:
    image:
      pollIntervalMinutes: 10
      ref: registry.redhat.io/redhat/redhat-marketplace-index:v4.18
    type: Image
Community Operators catalog
apiVersion: olm.operatorframework.io/v1
kind: ClusterCatalog
metadata:
  name: openshift-community-operators
spec:
  priority: -400
  source:
    image:
      pollIntervalMinutes: 10
      ref: registry.redhat.io/redhat/community-operator-index:v4.18
    type: Image
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml 1
1 Specifies the catalog CR, such as my-catalog.yaml.
4.3.3. Adding a catalog to a cluster
To add a catalog to a cluster for Operator Lifecycle Manager (OLM) v1 usage, create a ClusterCatalog custom resource (CR) and apply it to the cluster.
Procedure
Create a catalog custom resource (CR), similar to the following example:
Example my-redhat-operators.yaml file

apiVersion: olm.operatorframework.io/v1
kind: ClusterCatalog
metadata:
  name: my-redhat-operators 1
spec:
  priority: 1000 2
  source:
    image:
      pollIntervalMinutes: 10 3
      ref: registry.redhat.io/redhat/community-operator-index:v4.18 4
    type: Image
1 The catalog is automatically labeled with the value of the metadata.name field when it is applied to the cluster. For more information about labels and catalog selection, see "Catalog content resolution".
2 Optional: Specify the priority of the catalog in relation to the other catalogs on the cluster. For more information, see "Catalog selection by priority".
3 Specify the interval in minutes for polling the remote registry for newer image digests. To disable polling, do not set the field.
4 Specify the catalog image in the spec.source.image.ref field.
Add the catalog to your cluster by running the following command:
$ oc apply -f my-redhat-operators.yaml
Example output
clustercatalog.olm.operatorframework.io/my-redhat-operators created
Verification
Run the following commands to verify the status of your catalog:
Check if your catalog is available by running the following command:
$ oc get clustercatalog
Example output
NAME                            LASTUNPACKED   SERVING   AGE
my-redhat-operators             55s            True      64s
openshift-certified-operators   83m            True      84m
openshift-community-operators   43m            True      84m
openshift-redhat-marketplace    83m            True      84m
openshift-redhat-operators      54m            True      84m
Check the status of your catalog by running the following command:
$ oc describe clustercatalog my-redhat-operators
Example output
Name:         my-redhat-operators
Namespace:
Labels:       olm.operatorframework.io/metadata.name=my-redhat-operators
Annotations:  <none>
API Version:  olm.operatorframework.io/v1
Kind:         ClusterCatalog
Metadata:
  Creation Timestamp:  2025-02-18T20:28:50Z
  Finalizers:
    olm.operatorframework.io/delete-server-cache
  Generation:        1
  Resource Version:  50248
  UID:               86adf94f-d2a8-4e70-895b-31139f2eeab7
Spec:
  Availability Mode:  Available
  Priority:           1000
  Source:
    Image:
      Poll Interval Minutes:  10
      Ref:                    registry.redhat.io/redhat/community-operator-index:v4.18
    Type:                     Image
Status: 1
  Conditions:
    Last Transition Time:  2025-02-18T20:29:00Z
    Message:               Successfully unpacked and stored content from resolved source
    Observed Generation:   1
    Reason:                Succeeded 2
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2025-02-18T20:29:00Z
    Message:               Serving desired content from resolved source
    Observed Generation:   1
    Reason:                Available
    Status:                True
    Type:                  Serving
  Last Unpacked:  2025-02-18T20:28:59Z
  Resolved Source:
    Image:
      Ref:  registry.redhat.io/redhat/community-operator-index@sha256:11627ea6fdd06b8092df815076e03cae9b7cede8b353c0b461328842d02896c5 3
    Type:   Image
  Urls:
    Base:  https://catalogd-service.openshift-catalogd.svc/catalogs/my-redhat-operators
Events:  <none>

1 Describes the status of the catalog.
2 Displays the reason the catalog is in its current state.
3 Displays the image reference of the catalog, resolved to its digest.
4.3.4. Deleting a catalog
You can delete a catalog by deleting its custom resource (CR).
Prerequisites
- You have a catalog installed.
Procedure
Delete a catalog by running the following command:
$ oc delete clustercatalog <catalog_name>
Example output
clustercatalog.olm.operatorframework.io "my-redhat-operators" deleted
Verification
Verify the catalog is deleted by running the following command:
$ oc get clustercatalog
4.3.5. Disabling a default catalog
You can disable the Red Hat-provided catalogs that are included with OpenShift Container Platform by default.
Procedure
Disable a default catalog by running the following command:
$ oc patch clustercatalog openshift-certified-operators -p \
    '{"spec": {"availabilityMode": "Unavailable"}}' --type=merge
Example output
clustercatalog.olm.operatorframework.io/openshift-certified-operators patched
Verification
Verify the catalog is disabled by running the following command:
$ oc get clustercatalog openshift-certified-operators
Example output
NAME                            LASTUNPACKED   SERVING   AGE
openshift-certified-operators                  False     6h54m
4.4. Catalog content resolution
When you specify the cluster extension you want to install in a custom resource (CR), Operator Lifecycle Manager (OLM) v1 uses catalog selection to resolve what content is installed.
You can perform the following actions to control the selection of catalog content:
- Specify labels to select the catalog.
- Use match expressions to perform complex filtering across catalogs.
- Set catalog priority.
If you do not specify any catalog selection criteria, OLM v1 selects an extension from any available catalog on the cluster that provides the requested package.
During resolution, bundles that are not deprecated are preferred over deprecated bundles by default.
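For example, a sketch of a ClusterExtension CR that pins resolution to a single catalog by matching its metadata.name label; the extension name, namespace, service account, and package are hypothetical:

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-extension
spec:
  namespace: example-system
  serviceAccount:
    name: example-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator
      # Resolve content only from the catalog with this metadata.name label.
      selector:
        matchLabels:
          olm.operatorframework.io/metadata.name: openshift-redhat-operators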