Chapter 23. Running the certification suite with Red Hat hosted pipeline
To certify your operator with the Red Hat hosted pipeline, you have to create a pull request against the Red Hat certification repository.
Choose this path if you are not interested in receiving comprehensive logs, or are not ready to include the tooling in your own CI/CD workflows.
Here’s an overview of the process:
Figure 23.1. Overview of Red Hat hosted pipeline

The process begins by submitting your Operator bundle through a GitHub pull request. Red Hat then runs the certification tests using an in-house OpenShift cluster. This path is similar to the previous Operator bundle certification process. You can see the certification test results both as comments on the pull request and within your Red Hat Partner Connect Operator bundle. If all the certification tests are successful, your pull request is automatically merged and your Operator is published to the Red Hat Container Catalog and the embedded OperatorHub in OpenShift.
Follow the instructions to certify your Operator with Red Hat hosted pipeline:
Prerequisites
- Complete the Product listing available on the Red Hat Partner Connect website.
- On the Red Hat Partner Connect website, go to the Components tab.
- In the Authorized GitHub user accounts field, add your GitHub username to the list of authorized GitHub users.
Procedure
Follow this procedure only if you want to run the Red Hat OpenShift Operator certification on the Red Hat hosted pipeline.
23.1. Forking the repository
- Log in to GitHub and fork the Red Hat OpenShift operators upstream repository.
- Fork the appropriate repository from the following table, depending on the catalogs that you are targeting for distribution:

Catalog | Upstream Repository
---|---
Certified Catalog | https://github.com/redhat-openshift-ecosystem/certified-operators
- Clone the forked certified-operators repository.
- Add the contents of your operator bundle to the operators directory available in your forked repository.
If you want to publish your operator bundle in multiple catalogs, you can fork each catalog and complete the certification once for each fork.
Additional resources
For more information about creating a fork in GitHub, see Fork a repo.
23.2. Adding your operator bundle
Add your operator bundle by using one of the following two methods, depending on your workflow:
File-Based Catalog (FBC) workflow
If you are using the FBC process, continue with File-based Catalog (FBC).
Use the File-Based Catalog workflow for all new operator certifications and to convert existing certified operators to a more scalable, template-driven format.
Classic (non-FBC) workflow
If you are using the classic, directory-based process, proceed with the following sections.
23.2.1. If you have certified this operator before
Find your operator folder in the operators directory. Place the contents of your operator bundle in this directory.
Make sure your package name is consistent with the existing folder name for your operator.
23.2.2. If you are newly certifying this operator
If your operator does not already have a subdirectory under the operators parent directory, you have to create one.
- Create a new directory under operators. The name of this directory must match your operator's package name, for example, my-operator.
- Under that directory, create a version directory, for example, v1.0, and place your bundle in it. For operators that were previously certified, these directories already exist.

├── operators
│   └── my-operator
│       └── v1.0
- Under the version directory, add a manifests folder containing all your OpenShift manifests, including your clusterserviceversion.yaml file.
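The directory steps above can be sketched with a few shell commands. This is a minimal illustration: my-operator and v1.0 are placeholders for your package name and version.

```shell
# Sketch of the layout steps above; "my-operator" and "v1.0" are placeholders.
# Run from the root of your forked certified-operators repository.
mkdir -p operators/my-operator/v1.0/manifests
mkdir -p operators/my-operator/v1.0/metadata

# Your CSV and other OpenShift manifests go into manifests/,
# and annotations.yaml goes into metadata/.
ls operators/my-operator/v1.0
```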
Recommended directory structure
The following example illustrates the recommended directory structure.
├── config.yaml
├── operators
└── my-operator
├── v1.4.8
│ ├── manifests
│ │ ├── cache.example.com_my-operators.yaml
│ │ ├── my-operator-controller-manager-metrics-service_v1_service.yaml
│ │ ├── my-operator-manager-config_v1_configmap.yaml
│ │ ├── my-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
│ │ └── my-operator.clusterserviceversion.yaml
│ └── metadata
│ └── annotations.yaml
└── ci.yaml
Configuration file | Description
---|---
config.yaml | In this file, include the organization of your operator, for example, certified-operators.
ci.yaml | In this file, include your Red Hat Technology Partner Component PID for this operator.
annotations.yaml | In this file, include an annotation that specifies the range of OpenShift versions your operator supports. The letter 'v' must be used before the version, and spaces are not allowed.
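To illustrate the version-range annotation, a metadata/annotations.yaml entry might look like the following sketch. The range v4.12-v4.16 is a placeholder; com.redhat.openshift.versions is the annotation key used for OpenShift version ranges (see Managing OpenShift Versions).

```yaml
# Illustrative annotations.yaml fragment; "v4.12-v4.16" is a placeholder range.
# Note the leading 'v' before each version and the absence of spaces.
annotations:
  com.redhat.openshift.versions: "v4.12-v4.16"
```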
Additional resources
- For more details, see Managing OpenShift Versions.
- For more information about building an operator bundle, see Working with bundle images.
- For an example of an operator bundle, see here.
23.3. File-Based Catalog
The File-Based Catalog (FBC) provides a plain text, declarative configuration format for packaging and managing operator upgrade paths on OpenShift clusters.
A key limitation of the older format is that operators could not modify their upgrade graphs after releasing a bundle. FBC addresses this by allowing changes to upgrade paths without requiring a new bundle release, offering more flexibility in certification and post-release updates.
23.3.1. Onboarding
You should convert your operator to the FBC format before submitting it through the certification workflow. The onboarding process uses automation to simplify the conversion from a bundle-based format to FBC.
Prerequisites
Before starting the onboarding procedure, ensure the following:
- You are in the correct operator directory:
cd <operator-repo>/operators/<operator-name>
- The following dependencies are installed:
  - podman
  - make
- You are authenticated to the required registries used by the Operator Lifecycle Manager (OLM). Use podman login and verify that registry credentials are stored in either $(XDG_RUNTIME_DIR)/containers/auth.json or ~/.docker/config.json.
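As a quick sanity check for the authentication prerequisite, a small shell sketch like the following can report which of the two credential files is present. The paths mirror the prerequisites above; the function name is illustrative.

```shell
# Illustrative helper: report which registry auth file exists, checking the
# two locations listed in the prerequisites. Prints the path it finds.
auth_file() {
  if [ -n "${XDG_RUNTIME_DIR:-}" ] && [ -f "${XDG_RUNTIME_DIR}/containers/auth.json" ]; then
    echo "${XDG_RUNTIME_DIR}/containers/auth.json"
  elif [ -f "${HOME}/.docker/config.json" ]; then
    echo "${HOME}/.docker/config.json"
  else
    echo "no auth file found; run 'podman login' first" >&2
    return 1
  fi
}
# Usage: auth_file   # prints the credential file path, or an error
```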
Procedure
Download the Makefile to automate the onboarding process:

wget https://raw.githubusercontent.com/redhat-openshift-ecosystem/operator-pipelines/main/fbc/Makefile
Run the FBC onboarding command:

make fbc-onboarding
The script performs the following actions:
- Downloads the required tools (opm, fbc-onboarding CLI)
- Fetches supported OpenShift catalogs
- Generates catalog templates
- Generates the FBC structure
- Updates the ci.yaml configuration file
Verification Steps
After successful onboarding, you should see the following structure:
Operator directory:
$ tree .
operators/aqua
├── 0.0.1
...
├── catalog-templates
│ ├── v4.12.yaml
│ ├── v4.13.yaml
│ ├── v4.14.yaml
│ ├── v4.15.yaml
│ └── v4.16.yaml
├── ci.yaml
FBC catalog directory:
To view the catalog directory structure created by the FBC onboarding process, first navigate to the root of your forked repository:
$ cd ../..
Then run:
$ tree catalogs
This displays the structure under the catalogs/ directory. For example:
catalogs
├── v4.12
│ └── aqua
│ └── catalog.yaml
├── v4.13
│ └── aqua
│ └── catalog.yaml
├── v4.14
│ └── aqua
│ └── catalog.yaml
├── v4.15
│ └── aqua
│ └── catalog.yaml
└── v4.16
└── aqua
└── catalog.yaml
23.3.2. Submitting FBC changes
After completing the onboarding and validation, add the generated resources to your Git repository and submit them through a pull request.
$ git add operators/aqua/{catalog-templates,ci.yaml,Makefile}
$ git add catalogs/{v4.12,v4.13,v4.14,v4.15,v4.16}/aqua
$ git commit --signoff -m "Add FBC resources for aqua operator"
After the pull request is merged, the operator pipeline validates and publishes the FBC content to the appropriate OpenShift catalogs.
23.4. File-Based Catalog Workflow
To use the FBC workflow, first convert your existing non-FBC operator by using the onboarding process, or start directly if you are certifying a new operator. The FBC workflow provides a modular, template-driven structure for managing OpenShift catalogs and supports automatic updates through configuration in the ci.yaml file.
23.4.1. Enabling FBC in the workflow
To use the FBC workflow, update the ci.yaml file in your operator repository by adding the following field:
fbc:
enabled: true
This tells the certification pipeline to use the FBC workflow to generate and update catalogs.
23.4.2. FBC templates
File-based catalog templates provide a simplified, user-editable view of an OpenShift catalog. The Operator Package Manager (OPM) currently supports two types of templates. Select the template that best fits your operator's release strategy:

- olm.template.basic: A straightforward template for simple upgrade flows.
- olm.semver: A more advanced template that supports automatic channel generation by using semantic versioning.

Red Hat recommends using olm.semver for improved automation.
For details about the template schema, see Operator Framework documentation.
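For orientation, a minimal olm.template.basic file might look like the following sketch. The package name, channel entries, and bundle pullspec are placeholders; the authoritative schema is the Operator Framework documentation referenced above.

```yaml
# Minimal sketch of an olm.template.basic catalog template.
# All names and the bundle image pullspec are placeholders.
schema: olm.template.basic
entries:
  - schema: olm.package
    name: my-operator
    defaultChannel: stable
  - schema: olm.channel
    package: my-operator
    name: stable
    entries:
      - name: my-operator.v1.4.8
        replaces: my-operator.v1.4.7
  - schema: olm.bundle
    image: registry.example.com/my-operator-bundle@sha256:<digest>
```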
23.4.3. Mapping templates to OpenShift catalogs
To generate catalogs from templates, you need to provide a mapping between the template and the catalog in the ci.yaml file. Depending on your needs, you can map a template with either 1:N mapping or 1:1 mapping.
1:N mapping example
---
fbc:
enabled: true
catalog_mapping:
- template_name: my-custom-semver-template.yaml # The name of the file inside ./catalog-templates directory
catalog_names: # a list of catalogs within the /catalogs directory
- "v4.15"
- "v4.16"
- "v4.17"
type: olm.semver
- template_name: my-custom-basic-template.yaml # The name of the file inside catalog-templates directory
catalog_names:
- "v4.12"
- "v4.13"
type: olm.template.basic
1:1 mapping example
---
fbc:
enabled: true
catalog_mapping:
- template_name: v4.14.yaml
catalog_names: ["v4.14"]
type: olm.template.basic
- template_name: v4.15.yaml
catalog_names: ["v4.15"]
type: olm.template.basic
- template_name: v4.16.yaml
catalog_names: ["v4.16"]
type: olm.template.basic
- template_name: v4.17.yaml
catalog_names: ["v4.17"]
type: olm.template.basic
23.4.4. Generating catalogs from templates
Use the provided Makefile to simplify catalog generation from templates. It automates the process by running the appropriate opm commands for each template type.
If you followed the onboarding process for a new or converted operator, your operator repository should already include the Makefile. Make sure it is available in the root directory of your operator project:
.
├── 0.0.1/
│ ├── release-config.yaml
│ ├── manifests/
│ └── metadata/
├── catalog-templates/
├── ci.yaml
└── Makefile
Run the following command to generate FBC catalog files for each supported OpenShift version:
make catalogs
This command processes your catalog templates and creates structured catalog files in the catalogs/ directory. After generation, submit the updated catalog files through a pull request. When you merge the changes, the operator pipeline publishes the updates to the OpenShift Container Platform index.
Example output structure:
catalogs/
├── v4.12/
│ └── aqua/
│ └── catalog.yaml
├── v4.13/
│ └── aqua/
│ └── catalog.yaml
├── v4.14/
│ └── aqua/
│ └── catalog.yaml
├── v4.15/
│ └── aqua/
│ └── catalog.yaml
└── v4.16/
└── aqua/
└── catalog.yaml
23.4.5. Adding a new bundle to the catalog
Add a new operator bundle to the catalog by using one of the following two methods:
- Automatic process
- Manual process
23.4.5.1. Using Automated Release
Automated release streamlines the bundle release and catalog update process.
Use this approach to reduce manual intervention and maintain consistency across operator versions. For details on enabling and using this feature, see File-based Catalog auto-release.
23.4.5.2. Manually adding a bundle
Follow the steps to add operator bundles manually or by using a basic template:
- Submit the new operator version using the traditional pull request (PR) workflow.
- The operator pipeline builds, tests, and releases the bundle image to the registry.
- After releasing the bundle, update your catalog templates with the new bundle image pullspec. You can find this pullspec in the comment left by the pipeline on the PR.
- Create a new PR that includes the catalog updates referencing the newly released bundle.
Manual updates require a two-step PR process: one pull request to release the bundle and another to update the catalog templates.
Additional resources
- For guidance on authoring and editing catalog templates (such as SemVer or basic schemas), see OPM documentation.
23.4.6. Updating existing catalogs
One of the key advantages of the File-Based Catalog format is that it allows you to modify an operator’s update graph after releasing the bundle. This flexibility enables you to:
- Adjust the order of version updates
- Remove outdated or invalid bundles
- Apply other post-release changes without re-releasing a bundle
After updating the catalog templates, run make catalogs to generate the updated catalog. Then, submit the generated files through the standard pull request workflow.
Submit all the catalog changes by using the standard pull request workflow for all the updates to take effect.
23.5. File-Based Catalog auto-release
Auto-release simplifies the FBC workflow by automating catalog updates after releasing a bundle. This reduces manual effort and helps ensure consistency across your catalog templates.
23.5.1. Overview
The standard FBC release workflow includes two steps:
- Build, test, and release the operator bundle
- Add the released bundle to the OpenShift Container Platform (OCP) catalog
You can automate the second step by enabling the auto-release feature, which updates the catalog for you.
After you submit a pull request that includes both the new bundle and the release-config.yaml file, the release pipeline takes care of the rest. It creates and merges a follow-up pull request that contains the necessary catalog changes. Here is an example of an auto-generated catalog PR.
23.5.2. Creating the release-config.yaml file
To enable the automatic release of your operator bundle to OpenShift Container Platform (OCP) catalogs in FBC mode, add a release-config.yaml file in the corresponding bundle version directory, for example, operators/aqua/0.0.2/release-config.yaml.
tree operators/aqua
.
├── 0.0.2
│ ├── release-config.yaml # This is the file
│ ├── manifests
│ └── metadata
├── catalog-templates
├── ci.yaml
└── Makefile
The release-config.yaml file defines where the new bundle will be released: specifically, which catalog templates to update and how the bundle fits into the update graph.
Example
---
catalog_templates:
- template_name: basic.yaml
channels: [my-channel]
replaces: aqua.0.0.1
- template_name: semver.yaml
channels: [Fast, Stable]
In this example:
- The operator bundle is released to the my-channel channel in the basic.yaml template.
- The same bundle is also released to the Fast and Stable channels in the semver.yaml template.
- The replaces field (optional) specifies the earlier bundle version that the new bundle supersedes in the update graph.
23.5.3. File structure details
The structure of the release-config.yaml file is validated automatically during the pipeline run. If the file does not conform to the schema, the pull request fails with a detailed error message.
The file must use the following structure:
- The top-level key is catalog_templates, which contains a list of catalog update definitions.
- Each item in the list represents one catalog template and must include:
  - template_name: Name of the catalog template file located in the catalog-templates/ directory.
  - channels: List of channels to which the bundle should be released. For SemVer templates, valid values are: Fast, Stable, and Candidate.
  - replaces (optional): Specifies which bundle the new one replaces in the update graph. This is valid only for the basic templates.
  - skips (optional): A list of bundles to skip in the update graph. This is valid only for the basic templates.
  - skipRange (optional): A range indicating which bundle versions should be skipped. This is valid only for the basic templates.
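To illustrate the optional fields, a release-config.yaml for a basic template might look like the following sketch. The bundle names and the version range are placeholders.

```yaml
# Hedged sketch of release-config.yaml using the optional basic-template fields.
# Bundle names and the version range are placeholders.
---
catalog_templates:
  - template_name: basic.yaml
    channels: [my-channel]
    replaces: aqua.0.0.1        # the new bundle supersedes this one
    skips:
      - aqua.0.0.0              # omit this bundle from the update graph
    skipRange: ">=0.0.0 <0.0.2" # skip all bundle versions in this range
```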
- For more information, see the release-config.yaml schema documentation.
23.6. Creating a Pull Request
The final step is to create a pull request for the targeted upstream repository.
Catalog | Upstream Repository
---|---
Certified Catalog | https://github.com/redhat-openshift-ecosystem/certified-operators
If you want to publish your Operator bundle in multiple catalogs, you can create a pull request for each target catalog.
If you are not familiar with creating a pull request in GitHub, you can find instructions here.
The title of your pull request must conform to the following format: operator my-operator (v1.4.8). It begins with the word operator, followed by your Operator package name, followed by the version number in parentheses.
Creating a pull request triggers the Red Hat hosted pipeline, which provides an update through a pull request comment when it fails or completes.
23.6.1. Guidelines to follow
- You can re-trigger the Red Hat hosted pipeline by closing and reopening your pull request.
- You can only have one open pull request at a time for a given Operator version.
- After a pull request has been successfully merged, it cannot be changed. You have to bump the version of your Operator and open a new pull request.
- You must use the package name of your Operator as the directory name that you created under operators. This package name should match the package annotation in the annotations.yaml file. This package name should also match the prefix of the clusterserviceversion.yaml filename.
- Your pull requests should only modify files in a single Operator version directory. Do not attempt to combine updates to multiple versions or updates across multiple Operators.
- The version indicator used to name your version directory should match the version indicator used in the title of the pull request.
- Image tags are not accepted for running the certification tests; only SHA digests are used. Replace all references to image tags with the corresponding SHA digest.
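The digest rule above amounts to rewriting each image reference. The following sketch shows the rewrite itself; the digest value must come from your registry (for example, via skopeo inspect or podman image inspect), and the digest used in the usage comment is a placeholder.

```shell
# Illustrative helper: rewrite an image reference to use a SHA digest instead
# of a tag. The digest must be obtained from your registry beforehand.
pin_to_digest() {
  ref="$1"     # image reference with a tag, e.g. registry/repo:v1.4.8
  digest="$2"  # the image's digest, e.g. sha256:abc... (placeholder below)
  # Strip everything after the last colon (the tag) and append the digest.
  echo "${ref%:*}@${digest}"
}
# Usage:
#   pin_to_digest "registry.example.com/my-operator-bundle:v1.4.8" "sha256:<digest>"
```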