Chapter 22. Running the certification test suite locally
By selecting this option, you can run the certification tooling on your own OpenShift cluster.
Red Hat recommends this method for certifying your operators.
This option is an advanced method for partners who:
- are interested in integrating the tooling into their own developer workflows for continuous verification,
- want access to comprehensive logs for a faster feedback loop,
- or have dependencies that are not available in a default OpenShift installation.
Here’s an overview of the process:
Figure 22.1. Overview of running the certification test suite locally
You use OpenShift Pipelines, based on Tekton, to run the certification tests, which lets you view comprehensive logs and debugging information in real time. Once you are ready to certify and publish your operator bundle, the pipeline submits a pull request (PR) to GitHub on your behalf. If all the checks pass, your operator is automatically merged and published in the Red Hat Container Catalog and the embedded OperatorHub in OpenShift.
Follow the instructions to run the certification test suite locally:
Prerequisites
To certify your software product in the Red Hat OpenShift test environment, ensure that you have:
- An OpenShift cluster running version 4.8 or later.
The OpenShift Operator Pipeline creates a persistent volume claim for a 5GB volume. If you are running an OpenShift cluster on bare metal, ensure that you have configured dynamic volume provisioning. If you do not have dynamic volume provisioning configured, consider setting up a local volume. To prevent Permission Denied errors, modify the local volume storage path to have the container_file_t SELinux label by using the following command:

chcon -Rv -t container_file_t "storage_path(/.*)?"
- You have the kubeconfig file for an admin user that has cluster admin privileges.
- You have a valid operator bundle.
- The OpenShift CLI tool (oc) version 4.7.13 or later is installed.
- The Git CLI tool (git) version 2.32.0 or later is installed.
- The Tekton CLI tool (tkn) version 0.19.1 or later is installed.
22.1. Adding your operator bundle
In the operators directory of your fork, there is a series of subdirectories.
If you want to ship your operator by using the File-Based Catalog (FBC) workflow, see File-based Catalog (FBC).
22.1.1. If you have certified this operator before
Find your operator folder in the operators directory. Place the contents of your operator bundle in this directory.
Make sure your package name is consistent with the existing folder name for your operator.
22.1.2. If you are newly certifying this operator
If your operator does not already have a subdirectory under the operators parent directory, you have to create one.
Create a new directory under operators. The name of this directory must match your operator's package name, for example, my-operator.
In this directory, create a version subdirectory, for example, v1.0, and place your bundle in it. For operators that were previously certified, the certification process preinstalls these directories:

operators
└── my-operator
    └── v1.0

Under the version directory, add a manifests folder containing all of your OpenShift manifests, including your clusterserviceversion.yaml file.
Recommended directory structure
The following example illustrates the recommended directory structure.
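A hedged illustration of this layout, assembled from the files described in this section; the exact placement of config.yaml and ci.yaml is an assumption, so verify it against the catalog repository that you forked:

.
├── config.yaml
└── operators
    └── my-operator
        ├── ci.yaml
        └── v1.0
            ├── manifests
            │   └── my-operator.clusterserviceversion.yaml
            └── metadata
                └── annotations.yaml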
Configuration file | Description
---|---
config.yaml | In this file, include the organization of your operator.
ci.yaml | In this file, include your Red Hat Technology Partner Component PID for this operator.
annotations.yaml | In this file, include an annotation for the range of OpenShift versions that your operator supports. The letter 'v' must be used before the version, and spaces are not allowed. See the sketch after this table for the syntax.
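Hedged sketches of these files follow. The keys shown (organization, cert_project_id, com.redhat.openshift.versions) reflect common usage in the certified-operators workflow, and the values are placeholders; confirm the expected keys against the catalog repository's documentation.

config.yaml:

organization: certified-operators

ci.yaml:

cert_project_id: <your Red Hat Technology Partner Component PID>

annotations.yaml:

annotations:
  com.redhat.openshift.versions: "v4.8-v4.12"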
22.2. Forking the repository
- Log in to GitHub and fork the Red Hat OpenShift operators upstream repository.
- Fork the appropriate repositories from the following table, depending on the Catalogs that you are targeting for distribution:
Catalog | Upstream Repository
---|---
Certified Catalog | https://github.com/redhat-openshift-ecosystem/certified-operators
- Clone the forked certified-operators repository.
- Add the contents of your operator bundle to the operators directory available in your forked repository.
If you want to publish your operator bundle in multiple catalogs, you can fork each catalog and complete the certification once for each fork.
22.3. Installing the OpenShift Operator Pipeline
Prerequisites
Administrator privileges on your OpenShift cluster.
Procedure
You can install the OpenShift Operator Pipeline by one of two methods:
- Automated process (Red Hat recommended process)
- Manual process
22.3.1. Automated process
Red Hat recommends using the automated process to install the OpenShift Operator Pipeline. The automated process ensures that the cluster is properly configured before executing the CI pipeline. This process installs an operator on the cluster that automatically updates all the CI pipeline tasks without requiring any manual intervention. It also supports multitenant scenarios, in which you can test many operators iteratively within the same cluster.
Follow these steps to install the OpenShift Operator Pipeline through an Operator:
Keep the source files of your Operator bundle ready before installing the Operator Pipeline.
22.3.1.1. Prerequisites
Before installing the OpenShift Operator Pipeline, in a terminal window run the following commands to configure all the prerequisites.
The Operator watches all namespaces. Hence, if the required secrets and configuration already exist in another namespace, you can use that existing namespace for installing the Operator Pipeline.
Create a new namespace:

oc new-project oco
Set the KUBECONFIG environment variable:

export KUBECONFIG=/path/to/your/cluster/kubeconfig

Note: This kubeconfig variable is used to deploy the Operator under test and to run the certification checks.
Create a secret containing the kubeconfig:

oc create secret generic kubeconfig --from-file=kubeconfig=$KUBECONFIG
Execute the following commands for submitting the certification results:
Add the GitHub API token to the repository where the pull request will be created:

oc create secret generic github-api-token --from-literal GITHUB_TOKEN=<github token>

Add the Red Hat Container API access key:

oc create secret generic pyxis-api-secret --from-literal pyxis_api_key=<API KEY>

This API access key is specifically related to your unique partner account on the Red Hat Partner Connect portal.
Prerequisites for running an OpenShift cluster on bare metal:
If you are running an OpenShift cluster on bare metal, the Operator pipeline requires a 5Gi persistent volume to run. The following YAML template helps you create a 5Gi persistent volume by using local storage.
For example:
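A minimal sketch of such a template; the volume name, storage path, and node name are placeholders to replace with your own values:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pipeline-local-pv          # placeholder name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/local-storage       # assumed path; apply the container_file_t label described earlier
  nodeAffinity:                    # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-name>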
The CI pipeline automatically builds your operator bundle image and bundle image index for testing and verification. By default, the pipeline creates images in the OpenShift container registry on the cluster.
To use this registry on bare metal, set up the internal image registry before running the pipeline. For detailed instructions on setting up the internal image registry, see Image registry storage configuration.
If you want to use an external private registry then provide your access credentials to the cluster by adding a secret. For detailed instructions, see Using a private container registry.
22.3.1.2. Installing the pipeline through an Operator
Follow these steps to add the Operator to your cluster:
Install the Operator Certification Operator.
- Log in to your OpenShift cluster console.
- From the main menu, navigate to Operators > OperatorHub.
- Type Operator Certification Operator in the All Items - Filter by keyword search box.
- Select the Operator Certification Operator tile when it displays. The Operator Certification Operator page displays.
- Click Install. The Install Operator web page displays.
- Scroll down and click Install.
- Click View Operator to verify the installation.
Apply Custom Resource for the newly installed Operator Pipeline.
- Log in to your OpenShift Cluster Console.
- From the Projects drop-down menu, select the project in which you want to apply the Custom Resource.
Expand Operator Pipeline and then click Create instance.
The Create Operator Pipeline screen is auto-populated with the default values.
Note: You do not need to change any of the default values if you have created all the resource names according to the prerequisites.
- Click Create.
The Custom Resource is created and the Operator starts reconciling.
Verification Steps
Check the Conditions of the Custom Resource.
- Log in to your OpenShift cluster console.
- Navigate to the project for which you have newly created the Operator Pipeline Custom Resource and click the Custom Resource.
- Scroll down to the Conditions section and check if all the Status values are set to True.
If a resource fails reconciliation, check the Message section to identify the next steps to fix the error.
Check the Operator logs.
In a terminal window run the following command:
oc get pods -n openshift-operators

Record the full pod name of the certification-operator-controller-manager pod and run the command:

oc logs -f -n openshift-operators <pod name> manager

- Check if the reconciliation of the Operator has occurred.
22.3.1.3. Executing the pipeline Copier lienLien copié sur presse-papiers!
To execute the pipeline, ensure that you have a workspace-template.yml file in a templates folder in the directory from which you want to run the tkn commands.
To create a workspace-template.yml file, in a terminal window run the following command:
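A minimal sketch of such a command, assuming the small volume claim template commonly used with this pipeline; adjust the storage request to your needs:

mkdir -p templates
cat <<EOF > templates/workspace-template.yml
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
EOF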
You can run the pipeline through different methods.
22.3.2. Manual process
Follow these steps to manually install the OpenShift Operator Pipeline:
22.3.2.1. Installing the OpenShift Pipeline Operator
- Log in to your OpenShift cluster console.
- From the main menu, navigate to Operators > OperatorHub.
- Type OpenShift Pipelines in the All Items - Filter by keyword search box.
- Select the Red Hat OpenShift Pipelines tile when it displays. The Red Hat OpenShift Pipelines page displays.
- Click Install. The Install Operator web page displays.
- Scroll down and click Install.
22.3.2.2. Configuring the OpenShift (oc) CLI tool
A file that is used to configure access to a cluster is called a kubeconfig file; this is a generic way of referring to configuration files. Use kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms.
The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with the API server of a cluster.
- In a terminal window, set the KUBECONFIG environment variable:

export KUBECONFIG=/path/to/your/cluster/kubeconfig

The kubeconfig file is used to deploy the Operator under test and to run the certification checks.
22.3.2.3. Creating an OpenShift Project
Create a new namespace to start your work on the pipeline.
To create a namespace, in a terminal window run the following command:
oc adm new-project <my-project-name> # create the project
oc project <my-project-name> # switch into the project
Do not run the pipeline in the default project or namespace. Red Hat recommends creating a new project for the pipeline.
22.3.2.4. Adding the kubeconfig secret
Create a Kubernetes secret containing your kubeconfig for authentication to the cluster running the certification pipeline. The certification pipeline requires the kubeconfig to execute a test deployment of your Operator on the OpenShift cluster.
To add the kubeconfig secret, in a terminal window run the following command:
oc create secret generic kubeconfig --from-file=kubeconfig=$KUBECONFIG
22.3.2.5. Importing Operator from Red Hat Catalog
Import Operators from the Red Hat catalog.
In a terminal window, run the following commands.
Before running the commands, identify the major and minor versions of your OpenShift cluster (for example, OPENSHIFT_VERSION=v4.19):
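A sketch of the import command, assuming you are targeting the certified catalog; the index image path is an assumption, so substitute the index that matches your target catalog:

OPENSHIFT_VERSION=v4.19   # example value; match your cluster's major and minor versions
oc import-image certified-operator-index \
  --from=registry.redhat.io/redhat/certified-operator-index:$OPENSHIFT_VERSION \
  --reference-policy local \
  --scheduled \
  --confirm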
If you are using OpenShift on an IBM Power cluster (ppc64le architecture), run the following command to avoid permission issues:
oc adm policy add-scc-to-user anyuid -z pipeline
This command grants the anyuid security context constraints (SCC) to the default pipeline service account.
22.3.2.6. Installing the certification pipeline dependencies
In a terminal window, install the certification pipeline dependencies on your cluster using the following commands:
$ git clone https://github.com/redhat-openshift-ecosystem/operator-pipelines
$ cd operator-pipelines
$ oc apply -R -f ansible/roles/operator-pipeline/templates/openshift/pipelines
$ oc apply -R -f ansible/roles/operator-pipeline/templates/openshift/tasks
22.3.2.7. Configuring the repository for submitting the certification results
In a terminal window, run the following commands to configure your repository for submitting the certification results:
22.3.2.7.1. Adding GitHub API Token
After performing all the configurations, the pipeline can automatically open a pull request to submit your Operator to Red Hat.
To enable this functionality, add a GitHub API token and use --param submit=true when running the pipeline:

oc create secret generic github-api-token --from-literal GITHUB_TOKEN=<github token>
22.3.2.7.2. Adding Red Hat Container API access key
Add the specific container API access key that you receive from Red Hat:
oc create secret generic pyxis-api-secret --from-literal pyxis_api_key=<API KEY>
22.3.2.7.3. Enabling digest pinning
This step is mandatory to submit the certification results to Red Hat.
The OpenShift Operator pipeline can automatically replace all the image tags in your bundle with image digest SHAs. This ensures that the pipeline uses a pinned version of all the images. The pipeline commits the pinned version of your bundle to your GitHub repository as a new branch.
To enable this functionality, add a private key that has access to GitHub to your cluster as a secret.
Use Base64 to encode a private key that has access to the GitHub repository containing the bundle:

base64 /path/to/private/key

Create a secret that contains the base64-encoded private key, for example:
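A minimal sketch of ssh-secret.yml; the secret name github-ssh-credentials and the id_rsa key are assumptions based on common usage with this pipeline, so verify them against the pipeline's expected workspace configuration:

kind: Secret
apiVersion: v1
metadata:
  name: github-ssh-credentials   # assumed name; the pipeline workspace references it
data:
  id_rsa: |
    <base64-encoded private key>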
Add the secret to the cluster:

oc create -f ssh-secret.yml
22.3.2.7.4. Using a private container registry
The pipeline automatically builds your Operator bundle image and bundle image index for testing and verification. By default, the pipeline creates images in the OpenShift Container Registry on the cluster. If you want to use an external private registry, provide credentials by adding a secret to the cluster:
oc create secret docker-registry registry-dockerconfig-secret \
  --docker-server=quay.io \
  --docker-username=<registry username> \
  --docker-password=<registry password> \
  --docker-email=<registry email>
22.4. Execute the OpenShift Operator pipeline
You can run the OpenShift Operator pipeline through the following methods.
From the following examples, remove or add parameters and workspaces according to your requirements.
If you are using Red Hat OpenShift Local, formerly known as Red Hat CodeReady Containers (CRC), or Red Hat OpenShift on IBM Power (ppc64le architecture), pass the following Tekton CLI argument to every CI pipeline command to avoid permission issues:
--pod-template templates/crc-pod-template.yml
Troubleshooting
If the OpenShift Pipelines Operator version 1.9 or later does not work, follow this procedure to fix it:
Prerequisites
Ensure that you have administrator privileges for your cluster before creating a custom security context constraint (SCC).
Procedure
For the OpenShift Pipelines Operator version 1.9 or later to work and to execute the subset of tasks in the CI pipeline that requires privilege escalation, create a custom security context constraint (SCC) and link it to the pipeline service account by using the following commands.
To create a new SCC:

oc apply -f ansible/roles/operator-pipeline/templates/openshift/openshift-pipelines-custom-scc.yml

To add the new SCC to the CI pipeline service account:

oc adm policy add-scc-to-user pipelines-custom-scc -z pipeline
22.4.1. Running the Minimal pipeline
Procedure
In a terminal window, run the following commands:
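A sketch of the minimal run, assuming the parameter and workspace names used by the operator-pipelines project; verify them with tkn pipeline describe operator-ci-pipeline before running:

GIT_REPO_URL=<Git URL to your fork of certified-operators>
BUNDLE_PATH=<path to your bundle, for example operators/my-operator/v1.0>

tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=$GIT_REPO_URL \
  --param git_branch=main \
  --param bundle_path=$BUNDLE_PATH \
  --param env=prod \
  --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml \
  --showlog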
After running the command, the pipeline prompts you to provide additional parameters. Accept all the default values to finish executing the pipeline.
The following is set as default and doesn’t need to be explicitly included, but can be overridden if your kubeconfig secret is created under a different name.
--param kubeconfig_secret_name=kubeconfig \
--param kubeconfig_secret_key=kubeconfig
If you are running the CI pipeline on the ppc64le or s390x architecture, change the value of the pipeline_image parameter from the default quay.io/redhat-isv/operator-pipelines-images:released to quay.io/redhat-isv/operator-pipelines-images:multi-arch.
Troubleshooting
If you get a Permission Denied error when you are using the SSH URL, try the GitHub HTTPS URL.
22.4.2. Running the pipeline with image digest pinning
Prerequisites
Execute the instructions in Enabling digest pinning.
Procedure
In a terminal window, run the following commands:
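A sketch of the command, extending the minimal run above with the digest-pinning parameters; the parameter names (pin_digests, git_username, git_email) and the ssh-dir workspace are assumptions drawn from the operator-pipelines project, so verify them against the pipeline definition. Use the SSH GitHub URL for git_repo_url here, as the troubleshooting note below explains:

tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=$GIT_REPO_URL \
  --param git_branch=main \
  --param bundle_path=$BUNDLE_PATH \
  --param env=prod \
  --param pin_digests=true \
  --param git_username=<github username> \
  --param git_email=<github email> \
  --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml \
  --workspace name=ssh-dir,secret=github-ssh-credentials \
  --showlog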
Troubleshooting
If you get the error could not read Username for https://github.com, provide the SSH GitHub URL for --param git_repo_url.
22.4.3. Running the pipeline with a private container registry
Prerequisites
Execute the instructions in Using a private container registry.
Procedure
In a terminal window, run the following commands:
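A sketch of the command, extending the minimal run with the registry parameters; registry, image_namespace, and the registry-credentials workspace are assumptions drawn from the operator-pipelines project:

tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=$GIT_REPO_URL \
  --param git_branch=main \
  --param bundle_path=$BUNDLE_PATH \
  --param env=prod \
  --param registry=quay.io \
  --param image_namespace=<registry namespace> \
  --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml \
  --workspace name=registry-credentials,secret=registry-dockerconfig-secret \
  --showlog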
22.5. Submit certification results
The following procedure helps you submit the certification test results to Red Hat.
Prerequisites
- Execute the instructions in Configuring the repository for submitting the certification results.
Add the following parameters, pointing to the GitHub upstream repository to which you want to submit the pull request for Red Hat certification. It is the Red Hat certification repository by default, but you can use your own repository for testing.

--param upstream_repo_name=$UPSTREAM_REPO_NAME # Repo where the pull request (PR) will be opened
--param submit=true

The following is set as default and does not need to be explicitly included, but can be overridden if your Pyxis secret is created under a different name.

--param pyxis_api_key_secret_name=pyxis-api-secret \
--param pyxis_api_key_secret_key=pyxis_api_key
Procedure
You can submit the Red Hat certification test results for four different scenarios:
22.5.1. Submitting test results from the minimal pipeline
Procedure
In a terminal window, execute the following commands:
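A sketch of the submission run, combining the minimal command with the submission parameters described above; the parameter names are assumptions to verify against the pipeline definition:

tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=$GIT_REPO_URL \
  --param git_branch=main \
  --param bundle_path=$BUNDLE_PATH \
  --param env=prod \
  --param upstream_repo_name=$UPSTREAM_REPO_NAME \
  --param submit=true \
  --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml \
  --showlog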
22.5.2. Submitting test results with image digest pinning
Prerequisites
Execute the instructions included for:
- Configuring the repository for submitting the certification results
- Enabling digest pinning
Procedure
In a terminal window, execute the following commands:
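A sketch of the command, combining the digest-pinning parameters with the submission parameters (names are assumptions drawn from the operator-pipelines project). Use the SSH GitHub URL for git_repo_url, as the troubleshooting note below explains:

tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=$GIT_REPO_URL \
  --param git_branch=main \
  --param bundle_path=$BUNDLE_PATH \
  --param env=prod \
  --param pin_digests=true \
  --param git_username=<github username> \
  --param git_email=<github email> \
  --param upstream_repo_name=$UPSTREAM_REPO_NAME \
  --param submit=true \
  --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml \
  --workspace name=ssh-dir,secret=github-ssh-credentials \
  --showlog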
Troubleshooting
If you get the error could not read Username for https://github.com, provide the SSH GitHub URL for --param git_repo_url.
22.5.3. Submitting test results from a private container registry
Prerequisites
Execute the instructions included for:
- Configuring the repository for submitting the certification results
- Using a private container registry
Procedure
In a terminal window, execute the following commands:
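A sketch of the command, combining the private registry parameters with the submission parameters (names are assumptions to verify against the pipeline definition):

tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=$GIT_REPO_URL \
  --param git_branch=main \
  --param bundle_path=$BUNDLE_PATH \
  --param env=prod \
  --param registry=quay.io \
  --param image_namespace=<registry namespace> \
  --param upstream_repo_name=$UPSTREAM_REPO_NAME \
  --param submit=true \
  --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml \
  --workspace name=registry-credentials,secret=registry-dockerconfig-secret \
  --showlog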
22.5.4. Submitting test results with image digest pinning and from a private container registry
Prerequisites
Execute the instructions included for:
- Configuring the repository for submitting the certification results
- Enabling digest pinning
- Using a private container registry
Procedure
In a terminal window, execute the following commands:
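A sketch of the command, combining the digest-pinning, private registry, and submission parameters (all names are assumptions to verify against the pipeline definition):

tkn pipeline start operator-ci-pipeline \
  --param git_repo_url=$GIT_REPO_URL \
  --param git_branch=main \
  --param bundle_path=$BUNDLE_PATH \
  --param env=prod \
  --param pin_digests=true \
  --param git_username=<github username> \
  --param git_email=<github email> \
  --param registry=quay.io \
  --param image_namespace=<registry namespace> \
  --param upstream_repo_name=$UPSTREAM_REPO_NAME \
  --param submit=true \
  --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml \
  --workspace name=ssh-dir,secret=github-ssh-credentials \
  --workspace name=registry-credentials,secret=registry-dockerconfig-secret \
  --showlog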
After a successful certification, your certified product and operator image are listed in the Red Hat Ecosystem Catalog.
Certified operators are also listed in, and consumed by customers through, the embedded OpenShift OperatorHub, giving customers the ability to easily deploy and run your solution.