Chapter 23. Running the certification suite with Red Hat hosted pipeline


If you want to certify your operator with the Red Hat hosted pipeline, you must create a pull request against the Red Hat certification repository.

Choose this path if you are not interested in receiving comprehensive logs, or are not ready to include the tooling in your own CI/CD workflows.

Here’s an overview of the process:

Figure 23.1. Overview of Red Hat hosted pipeline

A flowchart that is a visual representation of running the certification test on Red Hat hosted pipeline

The process begins when you submit your Operator bundle through a GitHub pull request. Red Hat then runs the certification tests on an in-house OpenShift cluster. This path is similar to the previous Operator bundle certification process. You can see the certification test results both as comments on the pull request and within your Red Hat Partner Connect Operator bundle. If all the certification tests pass, your pull request is automatically merged, and your Operator is published to the Red Hat Container Catalog and the embedded OperatorHub in OpenShift.

Follow the instructions to certify your Operator with the Red Hat hosted pipeline:

Prerequisites

  • Complete the Product listing available on the Red Hat Partner Connect website.
  • On the Red Hat Partner Connect website, go to Components tab.
  • In the Authorized GitHub user accounts field, enter your GitHub username to the list of authorized GitHub users.

Procedure

Note

Follow this procedure only if you want to run the Red Hat OpenShift Operator certification on the Red Hat hosted pipeline.

23.1. Forking the repository

  1. Log in to GitHub and fork the Red Hat OpenShift operators upstream repository.
  2. Fork the appropriate repositories from the following table, depending on the Catalogs that you are targeting for distribution:
Catalog              Upstream Repository
Certified Catalog    https://github.com/redhat-openshift-ecosystem/certified-operators

  1. Clone the forked certified-operators repository.
  2. Add the contents of your operator bundle to the operators directory available in your forked repository.

If you want to publish your operator bundle in multiple catalogs, you can fork each catalog and complete the certification once for each fork.

Additional resources

For more information about creating a fork in GitHub, see Fork a repo.

23.2. Adding your operator bundle

Add your operator bundle by using either of the following two methods, depending on your workflow:

Note

Use the File-Based Catalog (FBC) workflow for all new operator certifications, and use it to convert existing certified operators to a more scalable, template-driven format.

23.2.1. If you have certified this operator before

Find your operator folder in the operators directory. Place the contents of your operator bundle in this directory.

Note

Make sure your package name is consistent with the existing folder name for your operator.

23.2.2. If you are newly certifying this operator

If your operator does not already have a subdirectory under the operators parent directory, you have to create one.

Create a new directory under operators. The name of this directory should match your operator’s package name. For example, my-operator.

  • In this operators directory, create a new subdirectory with the name of your operator, for example, <my-operator>. Inside it, create a version directory, for example, <v1.0>, and place your bundle there. For operators that were previously certified, these directories already exist.

    ├── operators
        └── my-operator
            └── v1.0
  • Under the version directory, add a manifests folder containing all your OpenShift manifests including your clusterserviceversion.yaml file.
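The steps above can be sketched as a few shell commands. The names my-operator and v1.0 are placeholders; substitute your own package name and version:

```shell
# Create the operator, version, manifests, and metadata directories
# (my-operator and v1.0 are placeholder names for this sketch).
mkdir -p operators/my-operator/v1.0/manifests
mkdir -p operators/my-operator/v1.0/metadata

# Create empty files only to illustrate the expected layout;
# in a real submission, copy your generated bundle contents instead.
touch operators/my-operator/v1.0/manifests/my-operator.clusterserviceversion.yaml
touch operators/my-operator/v1.0/metadata/annotations.yaml
```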

Recommended directory structure

The following example illustrates the recommended directory structure.

├── config.yaml
├── operators
    └── my-operator
        ├── v1.4.8
        │   ├── manifests
        │   │   ├── cache.example.com_my-operators.yaml
        │   │   ├── my-operator-controller-manager-metrics-service_v1_service.yaml
        │   │   ├── my-operator-manager-config_v1_configmap.yaml
        │   │   ├── my-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
        │   │   └── my-operator.clusterserviceversion.yaml
        │   └── metadata
        │       └── annotations.yaml
        └── ci.yaml
Configuration file   Description

config.yaml

This file specifies the organization of your operator, which can be certified-operators. For example: organization: certified-operators

ci.yaml

This file stores all the metadata necessary for a successful certification process. Include your Red Hat Technology Partner Component PID for this operator, for example: cert_project_id: <your component pid>

annotations.yaml

This file includes an annotation that specifies the range of supported OpenShift versions. For example, v4.8-v4.10 means versions 4.8 through 4.10. Add this annotation to any existing content, for example:

# OpenShift annotations
com.redhat.openshift.versions: v4.8-v4.10

The com.redhat.openshift.versions field, which is part of the metadata in the operator bundle, determines whether an operator is included in the certified catalog for a given OpenShift version. You must use it to indicate one or more versions of OpenShift supported by your operator.

Note that the letter 'v' must be used before the version, and spaces are not allowed. The syntax is as follows:

  • A single version indicates that the operator is supported on that version of OpenShift or later. The operator is automatically added to the certified catalog for all subsequent OpenShift releases.
  • A single version preceded by '=' indicates that the operator is supported only on that specific version of OpenShift. For example, using =v4.8 will add the operator to the certified catalog for OpenShift 4.8, but not for later OpenShift releases.
  • A range can be used to indicate support only for OpenShift versions within that range. For example, using v4.8-v4.10 will add the operator to the certified catalog for OpenShift 4.8 through 4.10, but not for OpenShift 4.11 or 4.12.
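Putting this together, a metadata/annotations.yaml file with the version annotation might look like the following sketch. The package name and channel values are illustrative; keep whatever your bundle already defines and only add the versions annotation:

```yaml
annotations:
  # Standard operator bundle annotations (values are illustrative)
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  operators.operatorframework.io.bundle.package.v1: my-operator
  operators.operatorframework.io.bundle.channels.v1: stable
  # OpenShift annotations: supported on OpenShift 4.8 through 4.10
  com.redhat.openshift.versions: "v4.8-v4.10"
```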


23.3. File-Based Catalog

The File-Based Catalog (FBC) provides a plain text, declarative configuration format for packaging and managing operator upgrade paths on OpenShift clusters.

A key limitation of the older format is that operators could not modify their upgrade graphs after releasing a bundle. FBC addresses this by allowing changes to upgrade paths without requiring a new bundle release, offering more flexibility in certification and post-release updates.

23.3.1. Onboarding

You should convert your operator to the FBC format before submitting it through the certification workflow. The onboarding process uses automation to simplify the conversion from a bundle-based format to FBC.

Prerequisites

Before starting the onboarding procedure, ensure the following:

  • You are in the correct operator directory:
cd <operator-repo>/operators/<operator-name>
  • The following dependencies are installed:

    • podman
    • make
  • You are authenticated to the required registries used by the Operator Lifecycle Manager (OLM). Use podman login and verify that registry credentials are stored in either:

    • ${XDG_RUNTIME_DIR}/containers/auth.json or
    • ~/.docker/config.json
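As a quick local check (a sketch only; it performs no login), you can print the credential file paths that podman consults:

```shell
# Print the locations where registry credentials are looked up.
# XDG_RUNTIME_DIR defaults to /run/user/<uid> on most Linux systems.
auth_file="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/containers/auth.json"
echo "primary:  ${auth_file}"
echo "fallback: ${HOME}/.docker/config.json"
```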

Procedure

  1. Download the Makefile to automate the onboarding process:

    wget https://raw.githubusercontent.com/redhat-openshift-ecosystem/operator-pipelines/main/fbc/Makefile
  2. Run the FBC onboarding command:

    make fbc-onboarding
  3. The script performs the following actions:

    1. Downloads required tools (opm, fbc-onboarding CLI)
    2. Fetches supported OpenShift catalogs
    3. Generates catalog templates
    4. Generates the FBC structure
    5. Updates the ci.yaml configuration file

Verification Steps

After successful onboarding, you should see the following structure:

Operator directory:

$ tree .

operators/aqua
├── 0.0.1
...
├── catalog-templates
│   ├── v4.12.yaml
│   ├── v4.13.yaml
│   ├── v4.14.yaml
│   ├── v4.15.yaml
│   └── v4.16.yaml
└── ci.yaml

FBC catalog directory:

To view the catalog directory structure created by the FBC onboarding process, first navigate to the root of your forked repository:

$ cd ../..

Then run:

$ tree catalogs

This will display the structure under the catalogs/ directory. For example:

catalogs
├── v4.12
│   └── aqua
│       └── catalog.yaml
├── v4.13
│   └── aqua
│       └── catalog.yaml
├── v4.14
│   └── aqua
│       └── catalog.yaml
├── v4.15
│   └── aqua
│       └── catalog.yaml
└── v4.16
    └── aqua
        └── catalog.yaml

23.3.2. Submitting FBC changes

After completing the onboarding and validation, add the generated resources to your Git repository and submit them through a pull request.

$ git add operators/aqua/{catalog-templates,ci.yaml,Makefile}

$ git add catalogs/{v4.12,v4.13,v4.14,v4.15,v4.16}/aqua

$ git commit --signoff -m "Add FBC resources for aqua operator"

When you merge the pull request, the operator pipeline will validate and publish the FBC content to the appropriate OpenShift catalogs.

23.4. File-Based Catalog Workflow

To use the FBC workflow, first convert your existing non-FBC operator by using the onboarding process or start directly if you are certifying a new operator. The FBC workflow provides a modular, template-driven structure for managing OpenShift catalogs and supports automatic updates through configuration in the ci.yaml file.

23.4.1. Enabling FBC in the workflow

To use the FBC workflow, update the ci.yaml file in your operator repository by adding the following field:

fbc:
  enabled: true

This tells the certification pipeline to use the FBC workflow to generate and update catalogs.

23.4.2. FBC templates

File-Based Catalog templates provide a simplified, user-editable view of an OpenShift catalog. The Operator Package Manager (OPM) currently supports two types of templates. You can select the template that best fits your operator’s release strategy:

  • olm.template.basic: A straightforward template for simple upgrade flows.
  • olm.semver: A more advanced template that supports automatic channel generation by using semantic versioning.

Red Hat recommends using olm.semver for improved automation.

For details about the template schema, see Operator Framework documentation.
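For orientation, a minimal olm.semver template might look like the following sketch. The bundle image pullspec is hypothetical; see the OPM documentation for the authoritative schema:

```yaml
schema: olm.semver
generateMajorChannels: false
generateMinorChannels: true
Stable:
  bundles:
    # Hypothetical bundle image pullspec, pinned by digest
    - image: registry.example.com/my-org/my-operator-bundle@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```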

23.4.3. Mapping templates to OpenShift catalogs

To generate catalogs from templates, you need to provide a mapping between the template and the catalog in the ci.yaml file. Depending on your needs, you can map a template with either a 1:N mapping or a 1:1 mapping.

1:N mapping example

---
fbc:
  enabled: true
  catalog_mapping:
    - template_name: my-custom-semver-template.yaml # The name of the file inside ./catalog-templates directory
      catalogs_names: # a list of catalogs within the /catalogs directory
        - "v4.15"
        - "v4.16"
        - "v4.17"
      type: olm.semver
    - template_name: my-custom-basic-template.yaml # The name of the file inside catalog-templates directory
      catalogs_names:
        - "v4.12"
        - "v4.13"
      type: olm.template.basic

1:1 mapping example

---
fbc:
  enabled: true
  catalog_mapping:
  - template_name: v4.14.yaml
    catalog_names: ["v4.14"]
    type: olm.template.basic
  - template_name: v4.15.yaml
    catalog_names: ["v4.15"]
    type: olm.template.basic
  - template_name: v4.16.yaml
    catalog_names: ["v4.16"]
    type: olm.template.basic
  - template_name: v4.17.yaml
    catalog_names: ["v4.17"]
    type: olm.template.basic

23.4.4. Generating catalogs from templates

Use the provided Makefile to simplify catalog generation from templates. It automates the process by running the appropriate opm commands for each template type.

If you followed the onboarding process for a new or converted operator, your operator repository should already include the Makefile.

Make sure it is available in the root directory of your operator project:

.
├── 0.0.1/
│   ├── release-config.yaml
│   ├── manifests/
│   └── metadata/
├── catalog-templates/
├── ci.yaml
└── Makefile

Run the following command to generate FBC catalog files for each supported OpenShift version:

make catalogs

This command processes your catalog templates and creates structured catalog files in the catalogs/ directory. After generation, submit the updated catalog files through a pull request. When you merge the changes, the operator pipeline publishes the updates to the OpenShift Container Platform index.

Example output structure:

catalogs/
├── v4.12/
│   └── aqua/
│       └── catalog.yaml
├── v4.13/
│   └── aqua/
│       └── catalog.yaml
├── v4.14/
│   └── aqua/
│       └── catalog.yaml
├── v4.15/
│   └── aqua/
│       └── catalog.yaml
└── v4.16/
    └── aqua/
        └── catalog.yaml

23.4.5. Adding a new bundle to the catalog

Add a new operator bundle to the catalog by using one of two methods:

  • Automatic process
  • Manual process

23.4.5.1. Using Automated Release

Automated release streamlines the bundle release and catalog update process.

Use this approach to reduce manual intervention and maintain consistency across operator versions. For details on enabling and using this feature, see File-based Catalog auto-release.

23.4.5.2. Manually adding a bundle

Follow the steps to add operator bundles manually or by using a basic template:

  1. Submit the new operator version using the traditional pull request (PR) workflow.
  2. The operator pipeline builds, tests, and releases the bundle image to the registry.
  3. After releasing the bundle, update your catalog templates with the new bundle image pullspec. You can find this pullspec in the comment left by the pipeline on the PR.
  4. Create a new PR that includes the catalog updates referencing the newly released bundle.
Note

Manual updates require a two-step PR process: one pull request to release the bundle and another to update the catalog templates.

Additional resources

  • For guidance on authoring and editing catalog templates (such as SemVer or basic schemas), see OPM documentation.

23.4.6. Updating existing catalogs

One of the key advantages of the File-Based Catalog format is that it allows you to modify an operator’s update graph after releasing the bundle. This flexibility enables you to:

  • Adjust the order of version updates
  • Remove outdated or invalid bundles
  • Apply other post-release changes without re-releasing a bundle

After updating the catalog templates, run make catalogs to generate the updated catalog. Then, submit the generated files through the standard pull request workflow.

Note

Submit all the catalog changes by using the standard pull request workflow for all the updates to take effect.

23.5. File-Based Catalog auto-release

Auto-release simplifies the FBC workflow by automating catalog updates after releasing a bundle. This reduces manual effort and helps ensure consistency across your catalog templates.

23.5.1. Overview

The standard FBC release workflow includes two steps:

  1. Build, test, and release the operator bundle
  2. Add the released bundle to the OpenShift Container Platform (OCP) catalog

You can automate the second step by enabling the auto-release feature, which updates the catalog for you.

After you submit a pull request that includes both the new bundle and the release-config.yaml file, the release pipeline takes care of the rest: it creates and merges a follow-up pull request that contains the necessary catalog changes.

23.5.2. Creating the release-config.yaml file

To enable the automatic release of your operator bundle to OpenShift Container Platform (OCP) catalogs in FBC mode, add a release-config.yaml file in the corresponding bundle version directory. For example, operators/aqua/0.0.2/release-config.yaml.

tree operators/aqua
.
├── 0.0.2
│   ├── release-config.yaml # This is the file
│   ├── manifests
│   └── metadata
├── catalog-templates
├── ci.yaml
└── Makefile

The release-config.yaml file defines where the new bundle will be released—specifically, which catalog templates to update and how the bundle fits into the update graph.

Example

---
catalog_templates:
  - template_name: basic.yaml
    channels: [my-channel]
    replaces: aqua.0.0.1
  - template_name: semver.yaml
    channels: [Fast, Stable]

In this example:

  • The operator bundle is released to the my-channel channel in the basic.yaml template.
  • The same bundle is also released to the Fast and Stable channels in the semver.yaml template.
  • The replaces field (optional) specifies the earlier bundle version that the new bundle supersedes in the update graph.

23.5.3. File structure details

The structure of the release-config.yaml file is validated automatically during the pipeline run. If the file does not conform to the schema, the pull request will fail with a detailed error message.

The file must use the following structure:

  • The top-level key is catalog_templates, which contains a list of catalog update definitions.
  • Each item in the list represents one catalog template and must include:

    • template_name: Name of the catalog template file located in the catalog-templates/ directory.
    • channels: List of channels to which the bundle should be released.

      For SemVer templates, valid values are: Fast, Stable, and Candidate.

    • replaces (optional): Specifies which bundle the new one replaces in the update graph. This is valid only for the basic templates.
    • skips (optional): A list of bundles to skip in the update graph. This is valid only for the basic templates.
    • skipRange (optional): A range indicating which bundle versions should be skipped. This is valid only for the basic templates.

For more information, see release-config.yaml schema documentation.
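Extending the earlier example, a release-config.yaml for a basic template that uses the optional fields might look like this sketch. All names and versions are illustrative:

```yaml
---
catalog_templates:
  - template_name: basic.yaml
    channels: [my-channel]
    # Optional fields, valid only for basic templates:
    replaces: aqua.0.0.1          # the bundle this release supersedes
    skips: [aqua.0.0.1-beta]      # bundles to skip in the update graph
    skipRange: ">=0.0.1 <0.0.2"   # version range to skip
```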

23.6. Creating a Pull Request

The final step is to create a pull request against the targeted upstream repository.

Catalog              Upstream Repository
Certified Catalog    https://github.com/redhat-openshift-ecosystem/certified-operators

If you want to publish your Operator bundle in multiple catalogs, you can create a pull request for each target catalog.

If you are not familiar with creating a pull request in GitHub, see the GitHub documentation on creating a pull request.

Note

The title of your pull request must conform to the following format: operator my-operator (v1.4.8). It must begin with the word operator, followed by your Operator package name, followed by the version number in parentheses.
Creating a pull request triggers the Red Hat hosted pipeline, which provides an update through a pull request comment whenever the pipeline fails or completes.
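Before opening the pull request, you can sanity-check the title format with a small shell snippet. The regular expression below is a sketch of the required pattern, not an official validator:

```shell
# Check that a PR title matches "operator <package-name> (vX.Y.Z)".
title='operator my-operator (v1.4.8)'
if printf '%s\n' "$title" | grep -Eq '^operator [a-z0-9-]+ \(v[0-9]+(\.[0-9]+)*\)$'; then
  echo "title OK"
else
  echo "title does not match the required format" >&2
fi
```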

23.6.1. Guidelines to follow

  • You can re-trigger the Red Hat hosted pipeline by closing and reopening your pull request.
  • You can only have one open pull request at a time for a given Operator version.
  • Once a pull request has been successfully merged, it cannot be changed. You have to bump the version of your Operator and open a new pull request.
  • You must use the package name of your Operator as the directory name that you created under operators. This package name should match the package annotation in the annotations.yaml file. This package name should also match the prefix of the clusterserviceversion.yaml filename.
  • Your pull requests should only modify files in a single Operator version directory. Do not attempt to combine updates to multiple versions or updates across multiple Operators.
  • The version indicator used to name your version directory should match the version indicator used in the title of the pull request.
  • Image tags are not accepted for running the certification tests; only SHA digests are used. Replace all references to image tags with the corresponding SHA digests.
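For example, an image reference pinned by digest looks like the following sketch. The repository path and digest value are illustrative:

```yaml
# Not accepted: a floating tag
# image: registry.example.com/my-org/my-operator:v1.4.8
# Accepted: pinned by SHA digest
image: registry.example.com/my-org/my-operator@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```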