Chapter 5. Cluster extensions
5.1. Supported extensions
To install an Operator as a cluster extension, it must meet bundle format, install mode, and dependency requirements. Operator Lifecycle Manager (OLM) v1 supports extensions that use webhooks for validation, mutation, or conversion.
OLM v1 supports extensions that use the AllNamespaces install mode. With this mode, the Operator watches and manages resources across all namespaces in the cluster.
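A bundle declares which install modes it supports in its cluster service version (CSV). As an illustration only, a bundle that is eligible for OLM v1 typically carries an installModes stanza similar to the following excerpt:

```yaml
# Illustrative excerpt from a CSV (spec.installModes); only AllNamespaces is supported
installModes:
- type: OwnNamespace
  supported: false
- type: SingleNamespace
  supported: false
- type: MultiNamespace
  supported: false
- type: AllNamespaces
  supported: true
```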
As a Technology Preview feature, you can configure an extension to watch a specific namespace. This limits watching to one namespace instead of the entire cluster.
5.1.1. Supported bundle formats and dependencies
To install an Operator as a cluster extension, the Operator must be packaged using the registry+v1 bundle format. OLM v1 does not support Operators that declare dependencies by using file-based catalog properties.
To install an Operator as a cluster extension, it must meet the following criteria:
- The Operator is packaged using the registry+v1 bundle format.
- The Operator does not declare dependencies by using the following file-based catalog properties:
  - olm.gvk.required
  - olm.package.required
  - olm.constraint
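For reference, a bundle that declares a dependency through one of these properties carries an entry like the following in its catalog metadata (shown here in YAML form; the package name and version range are illustrative):

```yaml
# A dependency declaration that disqualifies a bundle from OLM v1
# (values below are hypothetical)
properties:
- type: olm.package.required
  value:
    packageName: some-required-operator
    versionRange: ">=1.0.0 <2.0.0"
```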
OLM v1 verifies that an Operator meets these requirements during installation. If an Operator does not meet these criteria, OLM v1 reports the issue in the cluster extension status conditions.
Operator Lifecycle Manager (OLM) v1 does not support the OperatorConditions API introduced in OLM (Classic).
If an extension relies only on the OperatorConditions API to manage updates, the extension might not install correctly. Most extensions that rely on this API fail at start time, but some might fail during reconciliation.
As a workaround, you can pin your extension to a specific version. When you want to update your extension, consult the extension’s documentation to find out when it is safe to pin the extension to a new version.
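Pinning is done by setting an exact version in the extension's custom resource. The following sketch assumes the OLM v1 ClusterExtension API; the extension name, namespace, and service account are hypothetical:

```yaml
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipes                      # hypothetical extension name
spec:
  namespace: pipelines             # hypothetical install namespace
  serviceAccount:
    name: pipelines-installer      # hypothetical installer service account
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      version: 1.14.5              # pin to an exact version; change when an update is safe
```

With an exact version set, OLM v1 does not move the extension off that version until you edit this field.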
5.1.2. Webhook support
Operator Lifecycle Manager (OLM) v1 supports Operators that use webhooks for validation, mutation, or conversion. Operators use webhooks to enforce security policies or inject configurations into resources.
The OpenShift Service CA Operator automatically manages webhook certificates. When you install an Operator that includes webhooks, the OpenShift Service CA Operator completes the following actions:
- Applies Service CA annotations to webhook configurations and services.
- Generates TLS certificates in the namespace where you install the cluster extension.
- Mounts certificate secrets to the Operator deployment.
- Configures webhook services with proper TLS settings.
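The Service CA Operator works through annotations. As a minimal sketch with illustrative names, a webhook service annotated as follows gets a serving certificate and key generated into the named secret:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-operator-webhook       # hypothetical webhook service name
  namespace: example-extension         # hypothetical install namespace
  annotations:
    # Tells the OpenShift Service CA Operator to generate a TLS
    # certificate and key into the secret named below.
    service.beta.openshift.io/serving-cert-secret-name: example-operator-webhook-certs
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    app: example-operator
```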
5.2. Managing cluster extensions
You use catalogs to access the versions, patches, and over-the-air updates for extensions and Operators. You use custom resources (CRs) to manage extensions declaratively from the CLI.
For OpenShift Container Platform 4.21, documented procedures for OLM v1 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing Software Catalog and Installed Operators pages do not yet display OLM v1 components.
5.2.1. Finding Operators to install from a catalog
After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install.
Currently, you cannot query on-cluster catalogs managed by catalogd in Operator Lifecycle Manager (OLM) v1. Instead, you must use the opm and jq CLI tools to query the catalog registry.
Prerequisites
- You have added a catalog to your cluster.
- You have installed the jq CLI tool.
- You have installed the opm CLI tool.
Procedure
To return a list of extensions that support the AllNamespaces install mode and do not use webhooks, enter the following command:

$ opm render <catalog_registry_url>:<tag> \
  | jq -cs '[.[] | select(.schema == "olm.bundle" \
    and (.properties[] | select(.type == "olm.csv.metadata").value.installModes[] \
    | select(.type == "AllNamespaces" and .supported == true)) \
    and .spec.webhookdefinitions == null) | .package] | unique[]'

where:
catalog_registry_url
- Specifies the URL of the catalog registry, such as registry.redhat.io/redhat/redhat-operator-index.
tag
- Specifies the tag or version of the catalog, such as v4.21 or latest.

Example 5.1. Example command

$ opm render \
  registry.redhat.io/redhat/redhat-operator-index:v4.21 \
  | jq -cs '[.[] | select(.schema == "olm.bundle" \
    and (.properties[] | select(.type == "olm.csv.metadata").value.installModes[] \
    | select(.type == "AllNamespaces" and .supported == true)) \
    and .spec.webhookdefinitions == null) | .package] | unique[]'

Example 5.2. Example output

"3scale-operator"
"amq-broker-rhel8"
"amq-online"
"amq-streams"
"amq-streams-console"
"ansible-automation-platform-operator"
"ansible-cloud-addons-operator"
"apicast-operator"
"authorino-operator"
"aws-load-balancer-operator"
"bamoe-kogito-operator"
"cephcsi-operator"
"cincinnati-operator"
"cluster-logging"
"cluster-observability-operator"
"compliance-operator"
"container-security-operator"
"cryostat-operator"
"datagrid"
"devspaces"
...
Inspect the contents of an extension’s metadata by running the following command:
$ opm render <catalog_registry_url>:<tag> \
  | jq -s '.[] | select( .schema == "olm.package") \
  | select( .name == "<package_name>")'

Example 5.3. Example command

$ opm render \
  registry.redhat.io/redhat/redhat-operator-index:v4.21 \
  | jq -s '.[] | select( .schema == "olm.package") \
  | select( .name == "openshift-pipelines-operator-rh")'

Example 5.4. Example output

{
  "schema": "olm.package",
  "name": "openshift-pipelines-operator-rh",
  "defaultChannel": "latest",
  "icon": {
    "base64data": "iVBORw0KGgoAAAANSUhE...",
    "mediatype": "image/png"
  }
}
5.2.1.1. Common catalog queries
You can query catalogs by using the opm and jq CLI tools. The following tables show common catalog queries that you can use when installing, updating, and managing the lifecycle of extensions.
Command syntax
$ opm render <catalog_registry_url>:<tag> | <jq_request>
where:
catalog_registry_url
- Specifies the URL of the catalog registry, such as registry.redhat.io/redhat/redhat-operator-index.
tag
- Specifies the tag or version of the catalog, such as v4.21 or latest.
jq_request
- Specifies the query that you want to run on the catalog.
Example 5.5. Example command
$ opm render \
registry.redhat.io/redhat/redhat-operator-index:v4.21 \
| jq -cs '[.[] | select(.schema == "olm.bundle" and (.properties[] \
| select(.type == "olm.csv.metadata").value.installModes[] \
| select(.type == "AllNamespaces" and .supported == true)) \
and .spec.webhookdefinitions == null) \
| .package] | unique[]'
| Query | Request |
|---|---|
| Available packages in a catalog | jq -s '.[] \| select( .schema == "olm.package") \| .name' |
| Packages that support AllNamespaces install mode and do not use webhooks | jq -cs '[.[] \| select(.schema == "olm.bundle" and (.properties[] \| select(.type == "olm.csv.metadata").value.installModes[] \| select(.type == "AllNamespaces" and .supported == true)) and .spec.webhookdefinitions == null) \| .package] \| unique[]' |
| Package metadata | jq -s '.[] \| select( .schema == "olm.package") \| select( .name == "<package_name>")' |
| Catalog blobs in a package | jq -s '.[] \| select( .package == "<package_name>")' |

| Query | Request |
|---|---|
| Channels in a package | jq -s '.[] \| select( .schema == "olm.channel" ) \| select( .package == "<package_name>") \| .name' |
| Versions in a channel | jq -s '.[] \| select( .schema == "olm.channel" ) \| select( .name == "<channel_name>") \| .entries[].name' |

| Query | Request |
|---|---|
| Bundles in a package | jq -s '.[] \| select( .schema == "olm.bundle" ) \| select( .package == "<package_name>") \| .name' |
5.2.2. Cluster extension permissions
In Operator Lifecycle Manager (OLM) Classic, a single service account with cluster administrator privileges manages all cluster extensions.
OLM v1 is designed to be more secure than OLM (Classic) by default. OLM v1 manages a cluster extension by using the service account specified in the extension's custom resource (CR). Cluster administrators can create a dedicated service account for each cluster extension. As a result, administrators can follow the principle of least privilege and assign only the role-based access control (RBAC) permissions required to install and manage that extension.
You must add each permission to either a cluster role or role. Then you must bind the cluster role or role to the service account with a cluster role binding or role binding.
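For example, binding a cluster role to the installer service account can look like the following sketch; the binding, role, service account, and namespace names are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pipelines-installer-binding       # hypothetical binding name
subjects:
- kind: ServiceAccount
  name: pipelines-installer               # the extension's installer service account
  namespace: pipelines                    # namespace where the service account lives
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pipelines-installer-clusterrole   # cluster role holding the installer RBAC
```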
You can scope the RBAC to either the cluster or to a namespace. Use cluster roles and cluster role bindings to scope permissions to the cluster. Use roles and role bindings to scope permissions to a namespace. Whether you scope the permissions to the cluster or to a namespace depends on the design of the extension you want to install and manage.
To simplify the following procedure and improve readability, the following example manifest uses permissions that are scoped to the cluster. You can further restrict some of the permissions by scoping them to the namespace of the extension instead of the cluster.
If a new version of an installed extension requires additional permissions, OLM v1 halts the update process until a cluster administrator grants those permissions.
5.2.2.1. Creating a namespace
Before you create a service account to install and manage your cluster extension, you must create a namespace.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
Create a new namespace for the service account of the extension that you want to install by running the following command:
$ oc adm new-project <new_namespace>
5.2.2.2. Creating a service account for an extension
You must create a service account to install, manage, and update a cluster extension.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
Create a service account, similar to the following example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <extension>-installer
  namespace: <namespace>

Example 5.6. Example extension-service-account.yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipelines-installer
  namespace: pipelines

Apply the service account by running the following command:
$ oc apply -f extension-service-account.yaml
5.2.2.3. Downloading the bundle manifests of an extension
Use the opm CLI tool to download the bundle manifests of the extension that you want to install. Use the CLI tool or text editor of your choice to view the manifests and find the required permissions to install and manage the extension.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have decided which extension you want to install.
- You have installed the opm CLI tool.
Procedure
Inspect the available versions and images of the extension you want to install by running the following command:
$ opm render <registry_url>:<tag_or_version> | \
  jq -cs '.[] | select( .schema == "olm.bundle" ) | \
  select( .package == "<extension_name>") | \
  {"name":.name, "image":.image}'

Example 5.7. Example command

$ opm render registry.redhat.io/redhat/redhat-operator-index:v4.21 | \
  jq -cs '.[] | select( .schema == "olm.bundle" ) | \
  select( .package == "openshift-pipelines-operator-rh") | \
  {"name":.name, "image":.image}'

Example 5.8. Example output

{"name":"openshift-pipelines-operator-rh.v1.14.3","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:3f64b29f6903981470d0917b2557f49d84067bccdba0544bfe874ec4412f45b0"}
{"name":"openshift-pipelines-operator-rh.v1.14.4","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec"}
{"name":"openshift-pipelines-operator-rh.v1.14.5","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838"}
{"name":"openshift-pipelines-operator-rh.v1.15.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:22be152950501a933fe6e1df0e663c8056ca910a89dab3ea801c3bb2dc2bf1e6"}
{"name":"openshift-pipelines-operator-rh.v1.15.1","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:64afb32e3640bb5968904b3d1a317e9dfb307970f6fda0243e2018417207fd75"}
{"name":"openshift-pipelines-operator-rh.v1.15.2","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd"}
{"name":"openshift-pipelines-operator-rh.v1.16.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:a46b7990c0ad07dae78f43334c9bd5e6cba7b50ca60d3f880099b71e77bed214"}
{"name":"openshift-pipelines-operator-rh.v1.16.1","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:29f27245e93b3f605647993884751c490c4a44070d3857a878d2aee87d43f85b"}
{"name":"openshift-pipelines-operator-rh.v1.16.2","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2037004666526c90329f4791f14cb6cc06e8775cb84ba107a24cc4c2cf944649"}
{"name":"openshift-pipelines-operator-rh.v1.17.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:d75065e999826d38408049aa1fde674cd1e45e384bfdc96523f6bad58a0e0dbc"}

Make a directory to extract the image of the bundle that you want to install by running the following command:

$ mkdir <new_dir>

Change into the directory by running the following command:

$ cd <new_dir>

Find the image reference of the version that you want to install and run the following command:

$ oc image extract <full_path_to_registry_image>@sha256:<sha>

Example command

$ oc image extract registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838

Change into the manifests directory by running the following command:

$ cd manifests

View the contents of the manifests directory by entering the following command. The output lists the manifests of the resources required to install, manage, and operate your extension.

$ tree

Example 5.9. Example output

.
├── manifests
│   ├── config-logging_v1_configmap.yaml
│   ├── openshift-pipelines-operator-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
│   ├── openshift-pipelines-operator-prometheus-k8s-read-binding_rbac.authorization.k8s.io_v1_rolebinding.yaml
│   ├── openshift-pipelines-operator-read_rbac.authorization.k8s.io_v1_role.yaml
│   ├── openshift-pipelines-operator-rh.clusterserviceversion.yaml
│   ├── operator.tekton.dev_manualapprovalgates.yaml
│   ├── operator.tekton.dev_openshiftpipelinesascodes.yaml
│   ├── operator.tekton.dev_tektonaddons.yaml
│   ├── operator.tekton.dev_tektonchains.yaml
│   ├── operator.tekton.dev_tektonconfigs.yaml
│   ├── operator.tekton.dev_tektonhubs.yaml
│   ├── operator.tekton.dev_tektoninstallersets.yaml
│   ├── operator.tekton.dev_tektonpipelines.yaml
│   ├── operator.tekton.dev_tektonresults.yaml
│   ├── operator.tekton.dev_tektontriggers.yaml
│   ├── tekton-config-defaults_v1_configmap.yaml
│   ├── tekton-config-observability_v1_configmap.yaml
│   ├── tekton-config-read-rolebinding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
│   ├── tekton-config-read-role_rbac.authorization.k8s.io_v1_clusterrole.yaml
│   ├── tekton-operator-controller-config-leader-election_v1_configmap.yaml
│   ├── tekton-operator-info_rbac.authorization.k8s.io_v1_rolebinding.yaml
│   ├── tekton-operator-info_rbac.authorization.k8s.io_v1_role.yaml
│   ├── tekton-operator-info_v1_configmap.yaml
│   ├── tekton-operator_v1_service.yaml
│   ├── tekton-operator-webhook-certs_v1_secret.yaml
│   ├── tekton-operator-webhook-config-leader-election_v1_configmap.yaml
│   ├── tekton-operator-webhook_v1_service.yaml
│   ├── tekton-result-read-rolebinding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
│   └── tekton-result-read-role_rbac.authorization.k8s.io_v1_clusterrole.yaml
├── metadata
│   ├── annotations.yaml
│   └── properties.yaml
└── root
    └── buildinfo
        ├── content_manifests
        │   └── openshift-pipelines-operator-bundle-container-v1.16.2-3.json
        └── Dockerfile-openshift-pipelines-pipelines-operator-bundle-container-v1.16.2-3
Next steps
- View the contents of the install.spec.clusterpermissions stanza of the cluster service version (CSV) file in the manifests directory using your preferred CLI tool or text editor. The following examples reference the openshift-pipelines-operator-rh.clusterserviceversion.yaml file of the Red Hat OpenShift Pipelines Operator.
- Keep this file open as a reference while assigning permissions to the cluster role file in the following procedure.
5.2.2.4. Required permissions to install and manage a cluster extension
You must inspect the manifests included in the bundle image of a cluster extension to assign the necessary permissions. The service account requires enough role-based access controls (RBAC) to create and manage the following resources.
Follow the principle of least privilege and scope permissions to specific resource names with the least RBAC required to run.
- Admission plugins
- Because OpenShift Container Platform clusters use the OwnerReferencesPermissionEnforcement admission plugin, cluster extensions must have permissions to update the blockOwnerDeletion and ownerReferences finalizers.
- Cluster roles and cluster role bindings for the controllers of the extension
- You must define RBAC so that the installation service account can create and manage cluster roles and cluster role bindings for the extension controllers.
- Cluster service version (CSV)
- You must define RBAC for the resources defined in the CSV of the cluster extension.
- Cluster-scoped bundle resources
- You must define RBAC to create and manage any cluster-scoped resources included in the bundle. If a cluster-scoped resource matches another resource type, such as a ClusterRole, you can add the resource to the pre-existing rule under the resources or resourceNames field.
- Custom resource definitions (CRDs)
- You must define RBAC so that the installation service account can create and manage the CRDs for the extension. Also, you must grant the service account for the controller of the extension the RBAC to manage its CRDs.
- Deployments
- You must define RBAC for the installation service account to create and manage the deployments needed by the extension controller, such as services and config maps.
- Extension permissions
- You must include RBAC for the permissions and cluster permissions defined in the CSV. The installation service account needs the ability to grant these permissions to the extension controller, which needs these permissions to run.
- Namespace-scoped bundle resources
- You must define RBAC for any namespace-scoped bundle resources. The installation service account requires permission to create and manage resources, such as config maps or services.
- Roles and role bindings
- You must define RBAC for any roles or role bindings defined in the CSV. The installation service account needs permission to create and manage those roles and role bindings.
- Service accounts
- You must define RBAC so that the installation service account can create and manage the service accounts for the extension controllers.
5.2.2.5. Creating a cluster role for an extension
You must review the install.spec.clusterpermissions stanza of the cluster service version (CSV) and the manifests of an extension carefully to define the required role-based access controls (RBAC) of the extension that you want to install. You must create a cluster role by copying the required RBAC from the CSV to the new manifest.
If you want to test the process for installing and updating an extension in OLM v1, you can use the following cluster role to grant cluster administrator permissions. This manifest is for testing purposes only. It should not be used in production clusters.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: <extension>-installer-clusterrole
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
The following procedure uses the openshift-pipelines-operator-rh.clusterserviceversion.yaml file of the Red Hat OpenShift Pipelines Operator as an example. The examples include excerpts of the RBAC required to install and manage the OpenShift Pipelines Operator. For a complete manifest, see "Example cluster role for the Red Hat OpenShift Pipelines Operator".
To simplify the following procedure and improve readability, the following example manifest uses permissions that are scoped to the cluster. You can further restrict some of the permissions by scoping them to the namespace of the extension instead of the cluster.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have downloaded the manifests in the image reference of the extension that you want to install.
Procedure
Create a new cluster role manifest, similar to the following example:
Example <extension>-cluster-role.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <extension>-installer-clusterrole

Edit your cluster role manifest to include permission to update finalizers on the extension, similar to the following example:

Example <extension>-cluster-role.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-installer-clusterrole
rules:
- apiGroups:
  - olm.operatorframework.io
  resources:
  - clusterextensions/finalizers
  verbs:
  - update
  # Scoped to the name of the ClusterExtension
  resourceNames:
  - <metadata_name> 1

1 Specifies the value from the metadata.name field from the custom resource (CR) of the extension.

Search for the clusterrole and clusterrolebindings values in the rules.resources field in the extension's CSV file. Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:
Example cluster role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-installer-clusterrole
rules:
# ...
# ClusterRoles and ClusterRoleBindings for the controllers of the extension
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  verbs:
  - create 1
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  verbs:
  - get
  - update
  - patch
  - delete
  resourceNames: 2
  - "*"
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  verbs:
  - create
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  verbs:
  - get
  - update
  - patch
  - delete
  resourceNames:
  - "*"
# ...

1 You cannot scope create, list, and watch permissions to specific resource names (the resourceNames field). You must scope these permissions to their resources (the resources field).
2 Some resource names are generated by using the following format: <package_name>.<hash>. After you install the extension, look up the resource names for the cluster roles and cluster role bindings for the controller of the extension. Replace the wildcard characters in this example with the generated names and follow the principle of least privilege.
Search for the customresourcedefinitions value in the rules.resources field in the extension's CSV file. Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-installer-clusterrole
rules:
# ...
# Custom resource definitions of the extension
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - create
  - list
  - watch
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - get
  - update
  - patch
  - delete
  resourceNames:
  - manualapprovalgates.operator.tekton.dev
  - openshiftpipelinesascodes.operator.tekton.dev
  - tektonaddons.operator.tekton.dev
  - tektonchains.operator.tekton.dev
  - tektonconfigs.operator.tekton.dev
  - tektonhubs.operator.tekton.dev
  - tektoninstallersets.operator.tekton.dev
  - tektonpipelines.operator.tekton.dev
  - tektonresults.operator.tekton.dev
  - tektontriggers.operator.tekton.dev
# ...
Search the CSV file for stanzas with the permissions and clusterPermissions values in the rules.resources spec. Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-installer-clusterrole
rules:
# ...
# Excerpt from install.spec.clusterPermissions
- apiGroups:
  - ''
  resources:
  - nodes
  - pods
  - services
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - pods/log
  - limitranges
  verbs:
  - create
  - list
  - watch
  - delete
  - deletecollection
  - patch
  - get
  - update
- apiGroups:
  - extensions
  - apps
  resources:
  - ingresses
  - ingresses/status
  verbs:
  - create
  - list
  - watch
  - delete
  - patch
  - get
  - update
# ...
Search the CSV file for resources under the install.spec.deployments stanza. Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-installer-clusterrole
rules:
# ...
# Excerpt from install.spec.deployments
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - update
  - patch
  - delete
  # scoped to the extension controller deployment name
  resourceNames:
  - openshift-pipelines-operator
  - tekton-operator-webhook
# ...
Search for the services and configmaps values in the rules.resources field in the extension's CSV file. Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-installer-clusterrole
rules:
# ...
# Services
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
  - update
  - patch
  - delete
  # scoped to the service name
  resourceNames:
  - openshift-pipelines-operator-monitor
  - tekton-operator
  - tekton-operator-webhook
# configmaps
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - update
  - patch
  - delete
  # scoped to the configmap name
  resourceNames:
  - config-logging
  - tekton-config-defaults
  - tekton-config-observability
  - tekton-operator-controller-config-leader-election
  - tekton-operator-info
  - tekton-operator-webhook-config-leader-election
- apiGroups:
  - operator.tekton.dev
  resources:
  - tekton-config-read-role
  - tekton-result-read-role
  verbs:
  - get
  - watch
  - list
Add the cluster role manifest to the cluster by running the following command:
$ oc apply -f <extension>-installer-clusterrole.yaml

Example command
$ oc apply -f pipelines-installer-clusterrole.yaml
5.2.2.6. Example cluster role for the Red Hat OpenShift Pipelines Operator
See the following example for a complete cluster role manifest for the OpenShift Pipelines Operator.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pipelines-installer-clusterrole
rules:
- apiGroups:
- olm.operatorframework.io
resources:
- clusterextensions/finalizers
verbs:
- update
# Scoped to the name of the ClusterExtension
resourceNames:
- pipes # the value from <metadata.name> from the extension's custom resource (CR)
# ClusterRoles and ClusterRoleBindings for the controllers of the extension
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
verbs:
- create
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
verbs:
- get
- update
- patch
- delete
resourceNames:
- "*"
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
verbs:
- create
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
verbs:
- get
- update
- patch
- delete
resourceNames:
- "*"
# Extension's custom resource definitions
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- list
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- update
- patch
- delete
resourceNames:
- manualapprovalgates.operator.tekton.dev
- openshiftpipelinesascodes.operator.tekton.dev
- tektonaddons.operator.tekton.dev
- tektonchains.operator.tekton.dev
- tektonconfigs.operator.tekton.dev
- tektonhubs.operator.tekton.dev
- tektoninstallersets.operator.tekton.dev
- tektonpipelines.operator.tekton.dev
- tektonresults.operator.tekton.dev
- tektontriggers.operator.tekton.dev
- apiGroups:
- ''
resources:
- nodes
- pods
- services
- endpoints
- persistentvolumeclaims
- events
- configmaps
- secrets
- pods/log
- limitranges
verbs:
- create
- list
- watch
- delete
- deletecollection
- patch
- get
- update
- apiGroups:
- extensions
- apps
resources:
- ingresses
- ingresses/status
verbs:
- create
- list
- watch
- delete
- patch
- get
- update
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- list
- create
- update
- delete
- patch
- watch
- apiGroups:
- apps
resources:
- deployments
- daemonsets
- replicasets
- statefulsets
- deployments/finalizers
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- get
- create
- delete
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
- roles
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- bind
- escalate
- apiGroups:
- ''
resources:
- serviceaccounts
verbs:
- get
- list
- create
- update
- delete
- patch
- watch
- impersonate
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- rolebindings
verbs:
- get
- update
- delete
- patch
- create
- list
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
- customresourcedefinitions/status
verbs:
- get
- create
- update
- delete
- list
- patch
- watch
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- get
- list
- create
- update
- delete
- patch
- watch
- apiGroups:
- build.knative.dev
resources:
- builds
- buildtemplates
- clusterbuildtemplates
verbs:
- get
- list
- create
- update
- delete
- patch
- watch
- apiGroups:
- extensions
resources:
- deployments
verbs:
- get
- list
- create
- update
- delete
- patch
- watch
- apiGroups:
- extensions
resources:
- deployments/finalizers
verbs:
- get
- list
- create
- update
- delete
- patch
- watch
- apiGroups:
- operator.tekton.dev
resources:
- '*'
- tektonaddons
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- tekton.dev
- triggers.tekton.dev
- operator.tekton.dev
- pipelinesascode.tekton.dev
resources:
- '*'
verbs:
- add
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- dashboard.tekton.dev
resources:
- '*'
- tektonaddons
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
verbs:
- use
- get
- list
- create
- update
- delete
- apiGroups:
- events.k8s.io
resources:
- events
verbs:
- create
- apiGroups:
- route.openshift.io
resources:
- routes
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- list
- create
- update
- delete
- patch
- watch
- apiGroups:
- console.openshift.io
resources:
- consoleyamlsamples
- consoleclidownloads
- consolequickstarts
- consolelinks
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- delete
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- batch
resources:
- jobs
- cronjobs
verbs:
- delete
- deletecollection
- create
- patch
- get
- list
- update
- watch
- apiGroups:
- ''
resources:
- namespaces/finalizers
verbs:
- update
- apiGroups:
- resolution.tekton.dev
resources:
- resolutionrequests
- resolutionrequests/status
verbs:
- get
- list
- watch
- create
- delete
- update
- patch
- apiGroups:
- console.openshift.io
resources:
- consoleplugins
verbs:
- get
- list
- watch
- create
- delete
- update
- patch
# Deployments specified in install.spec.deployments
- apiGroups:
- apps
resources:
- deployments
verbs:
- create
- list
- watch
- apiGroups:
- apps
resources:
- deployments
verbs:
- get
- update
- patch
- delete
# scoped to the extension controller deployment name
resourceNames:
- openshift-pipelines-operator
- tekton-operator-webhook
# Service accounts in the CSV
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- get
- update
- patch
- delete
# scoped to the extension controller's deployment service account
resourceNames:
- openshift-pipelines-operator
# Services
- apiGroups:
- ""
resources:
- services
verbs:
- create
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- update
- patch
- delete
# scoped to the service name
resourceNames:
- openshift-pipelines-operator-monitor
- tekton-operator
- tekton-operator-webhook
# configmaps
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- update
- patch
- delete
# scoped to the configmap name
resourceNames:
- config-logging
- tekton-config-defaults
- tekton-config-observability
- tekton-operator-controller-config-leader-election
- tekton-operator-info
- tekton-operator-webhook-config-leader-election
- apiGroups:
- operator.tekton.dev
resources:
- tekton-config-read-role
- tekton-result-read-role
verbs:
- get
- watch
- list
---
5.2.2.7. Creating a cluster role binding for an extension
After you have created a service account and cluster role, you must bind the cluster role to the service account with a cluster role binding manifest.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have created and applied the following resources for the extension you want to install:
  - Namespace
  - Service account
  - Cluster role
Procedure
Create a cluster role binding to bind the cluster role to the service account, similar to the following example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <extension>-installer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <extension>-installer-clusterrole
subjects:
- kind: ServiceAccount
  name: <extension>-installer
  namespace: <namespace>

Example 5.10. Example pipelines-cluster-role-binding.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pipelines-installer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pipelines-installer-clusterrole
subjects:
- kind: ServiceAccount
  name: pipelines-installer
  namespace: pipelines

Apply the cluster role binding by running the following command:
$ oc apply -f pipelines-cluster-role-binding.yaml
5.2.3. Installing a cluster extension in all namespaces
You can install an extension from a catalog by creating a custom resource (CR) and applying it to the cluster. Operator Lifecycle Manager (OLM) v1 supports installing cluster extensions, including OLM (Classic) Operators in the registry+v1 bundle format, that are scoped to the cluster. For more information, see Supported extensions.
For OpenShift Container Platform 4.21, documented procedures for OLM v1 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing Software Catalog and Installed Operators pages do not yet display OLM v1 components.
Prerequisites
- You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension that you want to install. For more information, see "Cluster extension permissions".
Procedure
Create a CR, similar to the following example:

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace> 1
  serviceAccount:
    name: <service_account_installer_name> 2
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      channels:
      - <channel_name> 3
      version: <version_or_version_range> 4
      upgradeConstraintPolicy: CatalogProvided 5

1 Specifies the namespace where you want the bundle installed, such as pipelines or my-extension. Extensions are still cluster-scoped and might contain resources that are installed in different namespaces.
2 Specifies the name of the service account that you created to install, update, and manage your extension.
3 Optional: Specifies channel names as an array, such as pipelines-1.14 or latest.
4 Optional: Specifies the version or version range, such as 1.14.0, 1.14.x, or >=1.16, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges".
5 Optional: Specifies the upgrade constraint policy. If unspecified, the default setting is CatalogProvided. The CatalogProvided setting only updates if the new version satisfies the upgrade constraints set by the package author. To force an update or rollback, set the field to SelfCertified. For more information, see "Forcing an update or rollback".
Example pipelines-operator.yaml CR
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
name: pipelines-operator
spec:
namespace: pipelines
serviceAccount:
name: pipelines-installer
source:
sourceType: Catalog
catalog:
packageName: openshift-pipelines-operator-rh
version: "1.14.x"
Apply the CR to the cluster by running the following command:
$ oc apply -f pipelines-operator.yaml

Example output
clusterextension.olm.operatorframework.io/pipelines-operator created
Verification
View the Operator or extension’s CR in the YAML format by running the following command:
$ oc get clusterextension pipelines-operator -o yaml

Example 5.11. Example output
apiVersion: v1
items:
- apiVersion: olm.operatorframework.io/v1
  kind: ClusterExtension
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"1.14.x"},"sourceType":"Catalog"}}}
    creationTimestamp: "2025-02-18T21:48:13Z"
    finalizers:
    - olm.operatorframework.io/cleanup-unpack-cache
    - olm.operatorframework.io/cleanup-contentmanager-cache
    generation: 1
    name: pipelines-operator
    resourceVersion: "72725"
    uid: e18b13fb-a96d-436f-be75-a9a0f2b07993
  spec:
    namespace: pipelines
    serviceAccount:
      name: pipelines-installer
    source:
      catalog:
        packageName: openshift-pipelines-operator-rh
        upgradeConstraintPolicy: CatalogProvided
        version: 1.14.x
      sourceType: Catalog
  status:
    conditions:
    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: Deprecated
    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: PackageDeprecated
    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: ChannelDeprecated
    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: BundleDeprecated
    - lastTransitionTime: "2025-02-18T21:48:16Z"
      message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838
        successfully
      observedGeneration: 1
      reason: Succeeded
      status: "True"
      type: Installed
    - lastTransitionTime: "2025-02-18T21:48:16Z"
      message: desired state reached
      observedGeneration: 1
      reason: Succeeded
      status: "True"
      type: Progressing
    install:
      bundle:
        name: openshift-pipelines-operator-rh.v1.14.5
        version: 1.14.5
kind: List
metadata:
  resourceVersion: ""

where:
spec.channel - Displays the channel defined in the CR of the extension.
spec.version - Displays the version or version range defined in the CR of the extension.
status.conditions - Displays information about the status and health of the extension.
type: Deprecated - Displays whether one or more of the following are deprecated:
  type: PackageDeprecated - Displays whether the resolved package is deprecated.
  type: ChannelDeprecated - Displays whether the resolved channel is deprecated.
  type: BundleDeprecated - Displays whether the resolved bundle is deprecated.
A value of False in the status field indicates that the condition is not deprecated. A value of True in the status field indicates that the condition is deprecated.
installedBundle.name - Displays the name of the bundle installed.
installedBundle.version - Displays the version of the bundle installed.
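The status conditions shown in the output can also be evaluated programmatically, for example in an automation script that waits for an install to complete. The following is a minimal sketch, not part of OLM v1; the sample status data is invented for illustration and mirrors the shape of the output above.

```python
# Sketch: check whether a ClusterExtension reports a successful install
# by inspecting its status.conditions list (sample data is illustrative).

def condition(status, cond_type):
    """Return the condition of the given type, or None if absent."""
    for cond in status.get("conditions", []):
        if cond.get("type") == cond_type:
            return cond
    return None

def is_installed(status):
    """True when the Installed condition exists and its status is "True"."""
    cond = condition(status, "Installed")
    return cond is not None and cond.get("status") == "True"

# Hypothetical status block, shaped like the example output above
status = {
    "conditions": [
        {"type": "Deprecated", "status": "False", "reason": "Deprecated"},
        {"type": "Installed", "status": "True", "reason": "Succeeded"},
        {"type": "Progressing", "status": "True", "reason": "Succeeded"},
    ]
}

print(is_installed(status))  # True
```

In practice you would feed this the parsed output of oc get clusterextension <name> -o json.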
5.2.4. Configuring a watch namespace for a cluster extension (Technology Preview)
You can configure the watch namespace for extensions that support namespace-scoped resource watching.
Configuring watch namespace for a cluster extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have enabled the TechPreviewNoUpgrade feature set on the cluster.
- You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension. For more information, see "Cluster extension permissions".
- You have verified the supported install modes for the extension and determined the required watchNamespace configuration.
Procedure
Create a custom resource (CR) based on where you want the extension to watch for resources:
To configure the extension to watch its own installation namespace:

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <extension_name>
spec:
  namespace: <installation_namespace>
  config:
    configType: Inline
    inline:
      watchNamespace: <installation_namespace>
  serviceAccount:
    name: <service_account>
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      version: <version>
      upgradeConstraintPolicy: CatalogProvided

where:
config.inline.watchNamespace - Specifies the namespace to watch for resources. For requirements and valid values, see "Extension configuration".

To configure the extension to watch a different namespace:

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <extension_name>
spec:
  namespace: <installation_namespace>
  config:
    configType: Inline
    inline:
      watchNamespace: <watched_namespace>
  serviceAccount:
    name: <service_account>
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      version: <version>
      upgradeConstraintPolicy: CatalogProvided
Apply the CR to the cluster by running the following command:
$ oc apply -f <cluster_extension_cr>.yaml
Verification
Verify that the extension installed successfully by running the following command:
$ oc get clusterextension <extension_name> -o yaml

Example output

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <extension_name>
spec:
  namespace: <installation_namespace>
  config:
    configType: Inline
    inline:
      watchNamespace: <installation_namespace>
status:
  conditions:
  - type: Installed
    status: "True"
    reason: Succeeded
5.2.5. Preflight permissions check for cluster extensions (Technology Preview)
When you try to install an extension, the Operator Controller performs a dry run of the installation process. This dry run verifies that the specified service account can perform all the actions required to install the extension. This includes creating all the Kubernetes objects in the bundle and the role-based access control (RBAC) rules for the roles and bindings defined by the bundle.
The preflight permissions check for cluster extensions is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
If the service account is missing any required RBAC rules, the preflight check fails before the actual installation proceeds. If the preflight check fails, the Operator Controller reports the errors in the status conditions of the extension and in the logs of the Operator Controller.
To proceed with the installation, update the roles and bindings to grant the missing permissions to the service account and apply the changes. If there are no errors, the Operator Controller reconciles the updated permissions and completes the installation.
5.2.5.1. Example report from the preflight permissions check
The following report indicates that the service account requires the following missing permissions:
- RBAC rules to perform list and watch actions for the services resource in the core API group for the entire cluster
- RBAC rules to perform create actions for deployments resources in the apps API group for the pipelines namespace
You can access the reports from the preflight permissions check in the status conditions of the cluster extension. The oc describe clusterextension command prints information about a cluster extension, including the status conditions.
Example command
$ oc describe clusterextension <extension_name>
Example report
apiVersion: v1
items:
- apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
...
Conditions:
Type: Progressing
Status: False
Reason: Retrying
Message: pre-authorization failed: service account requires the following permissions to manage cluster extension:
Namespace:"" APIGroups:[] Resources:[services] Verbs:[list,watch]
Namespace:"pipelines" APIGroups:["apps"] Resources:[deployments] Verbs:[create]
Namespace - Specifies the scope of the required RBAC rules at the namespace level, for example the pipelines namespace. An empty namespace value, "", indicates that you must scope the permission to the cluster.
APIGroups - Specifies the name of the API group that the required permissions apply to. An empty API group value, [], indicates that the permissions apply to the core API group. For example, services, secrets, and config maps are all core resources. If a resource belongs to a named API group, the report lists the name between the brackets. For example, the value APIGroups:[apps] indicates that the extension requires RBAC rules to act on resources in the apps API group.
Resources - Specifies the resource types that require permissions. For example, services, secrets, and custom resource definitions are common resource types.
Verbs - Specifies the actions, or verbs, that the service account needs permission to perform. If the report lists several verbs, all of the listed verbs require RBAC rules.
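Because the report lines follow a regular shape, you can turn them back into RBAC rule fragments when drafting the missing roles. The following is a rough sketch of such a parser; the line format is inferred from the example report above and the helper is hypothetical, not an OLM v1 tool.

```python
import re

# Sketch: parse one preflight report line such as
#   Namespace:"pipelines" APIGroups:["apps"] Resources:[deployments] Verbs:[create]
# into the fields of an RBAC policy rule. The format is inferred from the
# example report in this section.
LINE_RE = re.compile(
    r'Namespace:"(?P<ns>[^"]*)"\s+'
    r'APIGroups:\[(?P<groups>[^\]]*)\]\s+'
    r'Resources:\[(?P<resources>[^\]]*)\]\s+'
    r'Verbs:\[(?P<verbs>[^\]]*)\]'
)

def parse_report_line(line):
    m = LINE_RE.search(line)
    if not m:
        raise ValueError(f"unrecognized report line: {line!r}")
    split = lambda s: [item.strip().strip('"') for item in s.split(",") if item.strip()]
    return {
        "namespace": m.group("ns"),                     # "" means cluster scope
        "apiGroups": split(m.group("groups")) or [""],  # [] means the core group
        "resources": split(m.group("resources")),
        "verbs": split(m.group("verbs")),
    }

rule = parse_report_line(
    'Namespace:"pipelines" APIGroups:["apps"] Resources:[deployments] Verbs:[create]'
)
print(rule["apiGroups"], rule["verbs"])
```

A cluster-scoped line (empty Namespace) maps to a ClusterRole rule; a namespaced line maps to a Role rule in that namespace.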
5.2.5.2. Common permission errors
- Missing verbs
- The service account does not have permission to perform a required action. To resolve this issue, update or create a role and binding to grant the necessary permissions. Roles and role bindings define resource permissions for a namespace. Cluster roles and cluster role bindings define resource permissions for the cluster.
- Privilege escalation
- The service account does not have enough permission to create a role or cluster role that the extension needs. When this happens, the preflight check reports the verbs as missing to prevent privilege escalation. To resolve this issue, grant enough permission to the service account so that it can create the roles.
- Missing role references
- The extension references a role or cluster role that the Operator Controller cannot find. When this happens, the preflight check lists the missing role and reports an authorization evaluation error. To resolve the issue, create or update the roles and cluster roles to ensure that all role references exist.
5.2.6. Updating a cluster extension
You can update your cluster extension or Operator by manually editing the custom resource (CR) and applying the changes.
Prerequisites
- You have an Operator or extension installed.
- You have installed the jq CLI tool.
- You have installed the opm CLI tool.
Procedure
Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps:
Get a list of channels from a selected package by running the following command:

$ opm render <catalog_registry_url>:<tag> \
    | jq -s '.[] | select( .schema == "olm.channel" ) \
    | select( .package == "<package_name>") | .name'

Example 5.12. Example command

$ opm render registry.redhat.io/redhat/redhat-operator-index:v4.21 \
    | jq -s '.[] | select( .schema == "olm.channel" ) \
    | select( .package == "openshift-pipelines-operator-rh") | .name'

Example 5.13. Example output

"latest"
"pipelines-1.14"
"pipelines-1.15"
"pipelines-1.16"
"pipelines-1.17"

Get a list of the versions published in a channel by running the following command:

$ opm render <catalog_registry_url>:<tag> \
    | jq -s '.[] | select( .package == "<package_name>" ) \
    | select( .schema == "olm.channel" ) \
    | select( .name == "<channel_name>" ) | .entries \
    | .[] | .name'

Example 5.14. Example command

$ opm render registry.redhat.io/redhat/redhat-operator-index:v4.21 \
    | jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) \
    | select( .schema == "olm.channel" ) | select( .name == "latest" ) \
    | .entries | .[] | .name'

Example 5.15. Example output

"openshift-pipelines-operator-rh.v1.15.0"
"openshift-pipelines-operator-rh.v1.16.0"
"openshift-pipelines-operator-rh.v1.17.0"
"openshift-pipelines-operator-rh.v1.17.1"
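The jq filters above select olm.channel blobs by package and channel name. The same selection logic can be mirrored in ordinary code; the following sketch runs against a hypothetical file-based catalog snippet (the blobs below are invented for illustration, not real catalog content).

```python
# Sketch: replicate the jq channel/version filters over a small, invented
# file-based catalog snippet (a list of schema blobs).
catalog = [
    {"schema": "olm.package", "name": "openshift-pipelines-operator-rh"},
    {"schema": "olm.channel", "package": "openshift-pipelines-operator-rh",
     "name": "latest",
     "entries": [{"name": "openshift-pipelines-operator-rh.v1.16.0"},
                 {"name": "openshift-pipelines-operator-rh.v1.17.0"}]},
    {"schema": "olm.channel", "package": "openshift-pipelines-operator-rh",
     "name": "pipelines-1.16",
     "entries": [{"name": "openshift-pipelines-operator-rh.v1.16.0"}]},
]

def channels(blobs, package):
    """Channel names for a package (mirrors the first jq filter)."""
    return [b["name"] for b in blobs
            if b.get("schema") == "olm.channel" and b.get("package") == package]

def channel_entries(blobs, package, channel):
    """Bundle names published in one channel (mirrors the second jq filter)."""
    for b in blobs:
        if (b.get("schema") == "olm.channel"
                and b.get("package") == package and b.get("name") == channel):
            return [e["name"] for e in b.get("entries", [])]
    return []

print(channels(catalog, "openshift-pipelines-operator-rh"))
print(channel_entries(catalog, "openshift-pipelines-operator-rh", "latest"))
```

To use this against a real catalog, you would load the opm render output as a stream of JSON objects first.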
Find out what version or channel is specified in your Operator or extension’s CR by running the following command:
$ oc get clusterextension <operator_name> -o yaml

Example command

$ oc get clusterextension pipelines-operator -o yaml

Example 5.16. Example output

apiVersion: v1
items:
- apiVersion: olm.operatorframework.io/v1
  kind: ClusterExtension
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"1.14.x"},"sourceType":"Catalog"}}}
    creationTimestamp: "2025-02-18T21:48:13Z"
    finalizers:
    - olm.operatorframework.io/cleanup-unpack-cache
    - olm.operatorframework.io/cleanup-contentmanager-cache
    generation: 1
    name: pipelines-operator
    resourceVersion: "72725"
    uid: e18b13fb-a96d-436f-be75-a9a0f2b07993
  spec:
    namespace: pipelines
    serviceAccount:
      name: pipelines-installer
    source:
      catalog:
        packageName: openshift-pipelines-operator-rh
        upgradeConstraintPolicy: CatalogProvided
        version: 1.14.x
      sourceType: Catalog
  status:
    conditions:
    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: Deprecated
    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: PackageDeprecated
    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: ChannelDeprecated
    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: BundleDeprecated
    - lastTransitionTime: "2025-02-18T21:48:16Z"
      message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838
        successfully
      observedGeneration: 1
      reason: Succeeded
      status: "True"
      type: Installed
    - lastTransitionTime: "2025-02-18T21:48:16Z"
      message: desired state reached
      observedGeneration: 1
      reason: Succeeded
      status: "True"
      type: Progressing
    install:
      bundle:
        name: openshift-pipelines-operator-rh.v1.14.5
        version: 1.14.5
kind: List
metadata:
  resourceVersion: ""

Edit your CR by using one of the following methods:
If you want to pin your Operator or extension to a specific version, such as 1.15.0, edit your CR similar to the following example:

Example pipelines-operator.yaml CR

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      version: "1.15.0" 1

1 Update the version from 1.14.x to 1.15.0.
If you want to define a range of acceptable update versions, edit your CR similar to the following example:

Example CR with a version range specified

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      version: ">1.15, <1.17" 1

1 Specifies that the desired version range is greater than version 1.15 and less than 1.17. For more information, see "Support for version ranges" and "Version comparison strings".
If you want to update to the latest version that can be resolved from a channel, edit your CR similar to the following example:

Example CR with a specified channel

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      channels:
      - latest 1

1 Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. Enter values as an array.
If you want to specify a channel and version or version range, edit your CR similar to the following example:

Example CR with a specified channel and version range

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      channels:
      - latest
      version: "<1.16"

For more information, see "Example custom resources (CRs) that specify a target version".
Apply the update to the cluster by running the following command:
$ oc apply -f pipelines-operator.yaml

Example output
clusterextension.olm.operatorframework.io/pipelines-operator configured
Verification
Verify that the channel and version updates have been applied by running the following command:
$ oc get clusterextension pipelines-operator -o yaml

Example 5.17. Example output

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"\u003c1.16"},"sourceType":"Catalog"}}}
  creationTimestamp: "2025-02-18T21:48:13Z"
  finalizers:
  - olm.operatorframework.io/cleanup-unpack-cache
  - olm.operatorframework.io/cleanup-contentmanager-cache
  generation: 2
  name: pipes
  resourceVersion: "90693"
  uid: e18b13fb-a96d-436f-be75-a9a0f2b07993
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    catalog:
      packageName: openshift-pipelines-operator-rh
      upgradeConstraintPolicy: CatalogProvided
      version: <1.16
    sourceType: Catalog
status:
  conditions:
  - lastTransitionTime: "2025-02-18T21:48:13Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: Deprecated
  - lastTransitionTime: "2025-02-18T21:48:13Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: PackageDeprecated
  - lastTransitionTime: "2025-02-18T21:48:13Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: ChannelDeprecated
  - lastTransitionTime: "2025-02-18T21:48:13Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: BundleDeprecated
  - lastTransitionTime: "2025-02-18T21:48:16Z"
    message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd
      successfully
    observedGeneration: 2
    reason: Succeeded
    status: "True"
    type: Installed
  - lastTransitionTime: "2025-02-18T21:48:16Z"
    message: desired state reached
    observedGeneration: 2
    reason: Succeeded
    status: "True"
    type: Progressing
  install:
    bundle:
      name: openshift-pipelines-operator-rh.v1.15.2
      version: 1.15.2
Troubleshooting
If you specify a target version or channel that is deprecated or does not exist, you can run the following command to check the status of your extension:
$ oc get clusterextension <operator_name> -o yaml

Example 5.18. Example output for a version that does not exist

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"9.x"},"sourceType":"Catalog"}}}
  creationTimestamp: "2025-02-18T21:48:13Z"
  finalizers:
  - olm.operatorframework.io/cleanup-unpack-cache
  - olm.operatorframework.io/cleanup-contentmanager-cache
  generation: 3
  name: pipes
  resourceVersion: "93334"
  uid: e18b13fb-a96d-436f-be75-a9a0f2b07993
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    catalog:
      packageName: openshift-pipelines-operator-rh
      upgradeConstraintPolicy: CatalogProvided
      version: 9.x
    sourceType: Catalog
status:
  conditions:
  - lastTransitionTime: "2025-02-18T21:48:13Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: Deprecated
  - lastTransitionTime: "2025-02-18T21:48:13Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: PackageDeprecated
  - lastTransitionTime: "2025-02-18T21:48:13Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: ChannelDeprecated
  - lastTransitionTime: "2025-02-18T21:48:13Z"
    message: ""
    observedGeneration: 2
    reason: Deprecated
    status: "False"
    type: BundleDeprecated
  - lastTransitionTime: "2025-02-18T21:48:16Z"
    message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd
      successfully
    observedGeneration: 3
    reason: Succeeded
    status: "True"
    type: Installed
  - lastTransitionTime: "2025-02-18T21:48:16Z"
    message: 'error upgrading from currently installed version "1.15.2": no bundles
      found for package "openshift-pipelines-operator-rh" matching version "9.x"'
    observedGeneration: 3
    reason: Retrying
    status: "True"
    type: Progressing
  install:
    bundle:
      name: openshift-pipelines-operator-rh.v1.15.2
      version: 1.15.2
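The version fields above accept either an exact pin (1.15.0), a wildcard (1.14.x), or a comparison range (>1.15, <1.17), and OLM v1 resolves the highest available version that satisfies the constraint. The following toy sketch illustrates those semantics under simplifying assumptions: it handles only plain MAJOR.MINOR.PATCH strings and is not the real resolver, which uses full semver constraint parsing.

```python
# Sketch: simplified resolution of version constraints such as "1.14.x"
# and ">1.15, <1.17" against a list of available versions.
# Toy model only; real OLM v1 resolution uses full semver semantics.

def parse(v):
    """Turn "1.15.2" into (1, 15, 2) for tuple comparison."""
    return tuple(int(p) for p in v.split("."))

def matches(version, constraint):
    """True when the version satisfies every comma-separated clause."""
    ver = parse(version)
    for clause in (c.strip() for c in constraint.split(",")):
        if clause.endswith(".x"):               # wildcard, e.g. "1.14.x"
            if ver[:2] != parse(clause[:-2]):
                return False
        elif clause.startswith(">="):
            if not ver >= parse(clause[2:]): return False
        elif clause.startswith("<="):
            if not ver <= parse(clause[2:]): return False
        elif clause.startswith(">"):
            if not ver > parse(clause[1:]): return False
        elif clause.startswith("<"):
            if not ver < parse(clause[1:]): return False
        else:                                   # exact pin, e.g. "1.15.0"
            if ver != parse(clause): return False
    return True

def resolve(available, constraint):
    """Pick the highest available version satisfying the constraint."""
    candidates = [v for v in available if matches(v, constraint)]
    return max(candidates, key=parse) if candidates else None

available = ["1.14.5", "1.15.0", "1.15.2", "1.16.0", "1.17.1"]
print(resolve(available, "<1.16"))  # "1.15.2", matching the example above
```

When no bundle matches, as with the 9.x range in the troubleshooting output, resolution yields nothing and the extension reports a Retrying condition.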
5.2.7. Deleting an Operator
You can delete an Operator and its custom resource definitions (CRDs) by deleting the ClusterExtension custom resource (CR).
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Delete an Operator and its CRDs by running the following command:
$ oc delete clusterextension <operator_name>

Example output

clusterextension.olm.operatorframework.io "<operator_name>" deleted

Verification
Run the following commands to verify that your Operator and its resources were deleted:
Verify that the Operator is deleted by running the following command:

$ oc get clusterextensions

Example output

No resources found

Verify that the Operator’s system namespace is deleted by running the following command:

$ oc get ns <operator_name>-system

Example output

Error from server (NotFound): namespaces "<operator_name>-system" not found
5.3. Configuring cluster extensions
In Operator Lifecycle Manager (OLM) v1, extensions watch all namespaces by default. Some Operators support only namespace-scoped watching based on OLM (Classic) install modes. To install these Operators, configure the watch namespace for the extension. For more information, see "Discovering bundle install modes".
Configuring a watch namespace for a cluster extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
5.3.1. Extension configuration
Configure the namespace an extension watches by using the .spec.config field in the ClusterExtension resource.
The OLM v1 configuration API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Extensions watch all namespaces by default. Some Operators support only namespace-scoped watching based on OLM (Classic) install modes. Configure the .spec.config.inline.watchNamespace field to install these Operators.
Whether you must configure this field depends on the install modes supported by the bundle.
5.3.1.1. Configuration API structure
The configuration API uses an opaque structure. The bundle validates the configuration values, not OLM v1. Operator authors can define their own configuration requirements.
Currently, the Inline configuration type is the only supported type:
Example inline configuration
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <extension_name>
  ...
spec:
  namespace: <installation_namespace>
  config:
    configType: Inline
    inline:
      watchNamespace: <watch_namespace>
where:

- <installation_namespace>: Specifies the namespace where the extension components run.
- config.configType: Specifies the configuration type. Currently, Inline is the only supported type.
- <watch_namespace>: Specifies the namespace where the extension watches for custom resources. The watch namespace can match or differ from the installation namespace, depending on the install modes supported by the bundle.
5.3.2. Watch namespace configuration requirements
Avoid installation failures by using the correct watchNamespace value for the install modes supported by your bundle. Requirements vary based on whether the bundle supports AllNamespaces, OwnNamespace, and SingleNamespace install modes.
OLM (Classic) registry+v1 bundles declare the install modes they support. These install modes control whether watchNamespace configuration is required or optional, and what values are valid.
OLM v1 does not support multi-tenancy. You cannot install the same extension more than once on a cluster. As a result, the MultiNamespace install mode is not supported.
- AllNamespaces: Watches resources across all namespaces in the cluster.
- OwnNamespace: Watches resources only in the installation namespace.
- SingleNamespace: Watches resources in a single namespace that differs from the installation namespace.
Whether the .spec.config.inline.watchNamespace field is required depends on the install modes that the bundle supports.
| Bundle install mode support | watchNamespace field | Valid values |
|---|---|---|
| AllNamespaces only | Not applicable | The watchNamespace field is not used; the extension watches all namespaces |
| OwnNamespace only | Required | Must match the spec.namespace value |
| SingleNamespace only | Required | Must differ from the spec.namespace value |
| Both OwnNamespace and SingleNamespace | Required | Can match or differ from the spec.namespace value |
| AllNamespaces and a namespace-scoped mode | Optional | Omit to watch all namespaces, or specify a namespace to watch only that namespace |
OLM v1 validates the watchNamespace value based on the install mode support that is declared by the bundle. The installation fails with a validation error if you specify an invalid value or omit a required field.
5.3.3. Discovering bundle install modes
You can render the bundle metadata to find which install modes a bundle supports.
Prerequisites
- You have installed the jq CLI tool.
- You have installed the opm CLI tool.
Procedure
1. Render the bundle metadata by running the following command:

   $ opm render <bundle_image> -o json | \
       jq 'select(.schema == "olm.bundle") | .properties[] | select(.type == "olm.bundle.object")'

   Example output

   {
     "type": "olm.bundle.object",
     "value": {
       "data": "...",
       "ref": "olm.csv"
     }
   }

2. Decode the base64-encoded CSV data to view the install mode declarations:

   $ echo "<base64_data>" | base64 -d | jq '.spec.installModes'

   Example output

   [
     { "type": "OwnNamespace", "supported": true },
     { "type": "SingleNamespace", "supported": true },
     { "type": "MultiNamespace", "supported": false },
     { "type": "AllNamespaces", "supported": false }
   ]

   In this example, the bundle supports both the OwnNamespace and SingleNamespace install modes. The .spec.config.inline.watchNamespace field is required and can match or differ from the .spec.namespace field.
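The decoding and inspection steps can also be sketched in code. The sample document below mirrors the example output above; the helper function is an illustration, not part of any OLM tooling:

```python
import base64
import json

def supported_install_modes(csv):
    """Return the set of install modes a CSV declares as supported."""
    return {m["type"] for m in csv.get("spec", {}).get("installModes", []) if m.get("supported")}

# Sample CSV fragment mirroring the example output above.
csv_doc = {"spec": {"installModes": [
    {"type": "OwnNamespace", "supported": True},
    {"type": "SingleNamespace", "supported": True},
    {"type": "MultiNamespace", "supported": False},
    {"type": "AllNamespaces", "supported": False},
]}}

# In a rendered bundle, the CSV arrives base64-encoded in the property's "data" field:
encoded = base64.b64encode(json.dumps(csv_doc).encode())
decoded = json.loads(base64.b64decode(encoded))
print(supported_install_modes(decoded))
```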
5.3.4. Configuring a watch namespace for a cluster extension (Technology Preview)
You can configure the watch namespace for extensions that support namespace-scoped resource watching.
Configuring watch namespace for a cluster extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have enabled the TechPreviewNoUpgrade feature set on the cluster.
- You have created a service account and assigned enough role-based access control (RBAC) to install, update, and manage the extension. For more information, see "Cluster extension permissions".
- You have verified the supported install modes for the extension and determined the required watchNamespace configuration.
Procedure
1. Create a custom resource (CR) based on where you want the extension to watch for resources:

   - To configure the extension to watch its own installation namespace:

     apiVersion: olm.operatorframework.io/v1
     kind: ClusterExtension
     metadata:
       name: <extension_name>
     spec:
       namespace: <installation_namespace>
       config:
         configType: Inline
         inline:
           watchNamespace: <installation_namespace>
       serviceAccount:
         name: <service_account>
       source:
         sourceType: Catalog
         catalog:
           packageName: <package_name>
           version: <version>
           upgradeConstraintPolicy: CatalogProvided

     where:

     - config.inline.watchNamespace: Specifies the namespace to watch for resources. For requirements and valid values, see "Extension configuration".

   - To configure the extension to watch a different namespace:

     apiVersion: olm.operatorframework.io/v1
     kind: ClusterExtension
     metadata:
       name: <extension_name>
     spec:
       namespace: <installation_namespace>
       config:
         configType: Inline
         inline:
           watchNamespace: <watched_namespace>
       serviceAccount:
         name: <service_account>
       source:
         sourceType: Catalog
         catalog:
           packageName: <package_name>
           version: <version>
           upgradeConstraintPolicy: CatalogProvided

2. Apply the CR to the cluster by running the following command:

   $ oc apply -f <cluster_extension_cr>.yaml
Verification
Verify that the extension installed successfully by running the following command:

  $ oc get clusterextension <extension_name> -o yaml

Example output

  apiVersion: olm.operatorframework.io/v1
  kind: ClusterExtension
  metadata:
    name: <extension_name>
  spec:
    namespace: <installation_namespace>
    config:
      configType: Inline
      inline:
        watchNamespace: <installation_namespace>
  status:
    conditions:
    - type: Installed
      status: "True"
      reason: Succeeded
5.3.4.1. Watch namespace configuration examples
To configure the watchNamespace field correctly for your bundle’s install mode, see the following examples. These show valid configurations for Operators that support the AllNamespaces, OwnNamespace, and SingleNamespace install modes.
Example AllNamespaces install mode
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-extension
spec:
  namespace: openshift-operators
  serviceAccount:
    name: example-sa
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator
- The config field is omitted. The extension watches all namespaces by default.
Example OwnNamespace install mode
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-extension
spec:
  namespace: example-operators
  config:
    configType: Inline
    inline:
      watchNamespace: example-operators
  serviceAccount:
    name: example-sa
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator
- You must set the watchNamespace field to use the OwnNamespace install mode.
- The watchNamespace value must match the spec.namespace field value.
Example SingleNamespace install mode
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-extension
spec:
  namespace: example-operators
  config:
    configType: Inline
    inline:
      watchNamespace: production
  serviceAccount:
    name: example-sa
  source:
    sourceType: Catalog
    catalog:
      packageName: example-operator
- You must set the watchNamespace field to use the SingleNamespace install mode.
- The watchNamespace value must differ from the spec.namespace field value.
- In this example, the extension runs in the example-operators namespace but watches resources in the production namespace.
5.3.4.2. Watch namespace validation errors
Validation errors occur when the watchNamespace field is omitted or contains an invalid value for the install modes supported by the bundle.
| Error | Cause | Resolution |
|---|---|---|
| Required field missing | The bundle requires the watchNamespace field, but the field is omitted | Add the watchNamespace field under .spec.config.inline |
| Watch namespace must match the installation namespace | The bundle supports only the OwnNamespace install mode | Set the watchNamespace field to the same value as the spec.namespace field |
| Watch namespace must differ from the installation namespace | The bundle supports only the SingleNamespace install mode | Set the watchNamespace field to a namespace other than the spec.namespace value |
| Invalid configuration | The config field does not follow the expected API structure | Verify that the configuration follows the correct API structure with configType: Inline |
5.4. User access to extension resources
After a cluster extension has been installed and is being managed by Operator Lifecycle Manager (OLM) v1, the extension can often provide CustomResourceDefinition objects (CRDs) that expose new API resources on the cluster. Cluster administrators typically have full management access to these resources by default, whereas non-cluster administrator users, or regular users, might lack sufficient permissions.
OLM v1 does not automatically configure or manage role-based access control (RBAC) for regular users to interact with the APIs provided by installed extensions. Cluster administrators must define the required RBAC policy to create, view, or edit these custom resources (CRs) for such users.
The RBAC permissions described for user access to extension resources are different from the permissions that must be added to a service account to enable OLM v1-based initial installation of a cluster extension itself. For more on RBAC requirements while installing an extension, see "Cluster extension permissions" in "Managing extensions".
5.4.1. Common default cluster roles for users
An installed cluster extension might include default cluster roles to determine role-based access control (RBAC) for regular users to API resources provided by the extension. A common set of cluster roles can resemble the following policies:
- view cluster role: Grants read-only access to all custom resource (CR) objects of specified API resources across the cluster. Intended for regular users who require visibility into the resources without any permissions to modify them. Ideal for monitoring purposes and limited-access viewing.
- edit cluster role: Allows users to modify all CR objects within the cluster. Enables users to create, update, and delete resources, making it suitable for team members who must manage resources but should not control RBAC or manage permissions for others.
- admin cluster role: Provides full permissions, including the create, update, and delete verbs, over all CR objects for the specified API resources across the cluster.
5.4.2. Finding API groups and resources exposed by a cluster extension
To create appropriate RBAC policies for granting user access to cluster extension resources, you must know which API groups and resources are exposed by the installed extension. As an administrator, you can inspect custom resource definitions (CRDs) installed on the cluster by using OpenShift CLI (oc).
Prerequisites
- A cluster extension has been installed on your cluster.
Procedure
- To list installed CRDs, specifying a label selector that targets a specific cluster extension by name so that only CRDs owned by that extension are returned, run the following command:

  $ oc get crds -l 'olm.operatorframework.io/owner-kind=ClusterExtension,olm.operatorframework.io/owner-name=<cluster_extension_name>'

- Alternatively, you can search through all installed CRDs and inspect them individually by CRD name:

  1. List all available custom resource definitions (CRDs) currently installed on the cluster by running the following command:

     $ oc get crds

  2. Find the CRD you are looking for in the output.

  3. Inspect the individual CRD further to find its API groups by running the following command:

     $ oc get crd <crd_name> -o yaml
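The values you need from the CRD output are the API group and the plural resource name, because those are what RBAC rules reference. The following sketch pulls them from a pared-down CRD manifest; the dictionary below is a hypothetical fragment, with field paths matching the CRD API:

```python
def rbac_targets(crd):
    """Return the (apiGroup, resource) pair used in ClusterRole rules."""
    return crd["spec"]["group"], crd["spec"]["names"]["plural"]

# Hypothetical CRD fragment for illustration.
crd = {
    "metadata": {"name": "examples.example.operatorframework.io"},
    "spec": {
        "group": "example.operatorframework.io",
        "names": {"plural": "examples", "kind": "Example"},
    },
}
print(rbac_targets(crd))  # ('example.operatorframework.io', 'examples')
```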
5.4.3. Granting user access to extension resources by using custom role bindings
As a cluster administrator, you can manually create and configure role-based access control (RBAC) policies to grant user access to extension resources by using custom role bindings.
Prerequisites
- A cluster extension has been installed on your cluster.
- You have a list of API groups and resource names, as described in "Finding API groups and resources exposed by a cluster extension".
Procedure
If the installed cluster extension does not provide default cluster roles, manually create one or more roles:

- Consider the use cases for the set of roles described in "Common default cluster roles for users".
- For example, create one or more of the following ClusterRole object definitions, replacing <cluster_extension_api_group> and <cluster_extension_custom_resources> with the actual API group and resource names provided by the installed cluster extension:

Example view-custom-resource.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-custom-resource
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resources>
  verbs:
  - get
  - list
  - watch

Example edit-custom-resource.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: edit-custom-resource
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resources>
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

Example admin-custom-resource.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: admin-custom-resource
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resources>
  verbs:
  - '*' 1

1 Setting a wildcard (*) in verbs allows all actions on the specified resources.

Create the cluster roles by running the following command for any YAML files you created:

$ oc create -f <filename>.yaml
Associate a cluster role with specific users or groups to grant them the necessary permissions for the resource by binding the cluster roles to individual user or group names:

Create an object definition for either a cluster role binding, to grant access across all namespaces, or a role binding, to grant access within a specific namespace:

The following example cluster role bindings grant read-only view access to the custom resource across all namespaces:

Example ClusterRoleBinding object for a user

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-custom-resource-binding
subjects:
- kind: User
  name: <user_name>
roleRef:
  kind: ClusterRole
  name: view-custom-resource
  apiGroup: rbac.authorization.k8s.io

Example ClusterRoleBinding object for a group

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-custom-resource-binding
subjects:
- kind: Group
  name: <group_name>
roleRef:
  kind: ClusterRole
  name: view-custom-resource
  apiGroup: rbac.authorization.k8s.io

The following role binding restricts edit permissions to a specific namespace:

Example RoleBinding object for a user

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edit-custom-resource-edit-binding
  namespace: <namespace>
subjects:
- kind: User
  name: <username>
roleRef:
  kind: Role
  name: custom-resource-edit
  apiGroup: rbac.authorization.k8s.io

Save your object definition to a YAML file.

Create the object by running the following command:

$ oc create -f <filename>.yaml
5.4.4. Granting user access to extension resources by using aggregated cluster roles
As a cluster administrator, you can configure role-based access control (RBAC) policies to grant user access to extension resources by using aggregated cluster roles.
To automatically extend existing default cluster roles, you can add aggregation labels by adding one or more of the following labels to a ClusterRole object:
Aggregation labels in a ClusterRole object
# ...
metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
# ...
This allows users who already have view, edit, or admin roles to interact with the custom resource specified by the ClusterRole object without requiring additional role or cluster role bindings to specific users or groups.
Prerequisites
- A cluster extension has been installed on your cluster.
- You have a list of API groups and resource names, as described in "Finding API groups and resources exposed by a cluster extension".
Procedure
Create an object definition for a cluster role that specifies the API groups and resources provided by the cluster extension and add an aggregation label to extend one or more existing default cluster roles:
Example ClusterRole object with an aggregation label

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-custom-resource-aggregated
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resource>
  verbs:
  - get
  - list
  - watch

You can create similar ClusterRole objects for edit and admin with appropriate verbs, such as create, update, and delete. By using aggregation labels, the permissions for the custom resources are added to the default roles.

Save your object definition to a YAML file.

Create the object by running the following command:

$ oc create -f <filename>.yaml
5.5. Update paths
When determining update paths, also known as upgrade edges or upgrade constraints, for an installed cluster extension, Operator Lifecycle Manager (OLM) v1 supports OLM (Classic) semantics starting in OpenShift Container Platform 4.16. This support follows the behavior from OLM (Classic), including replaces, skips, and skipRange directives, with a few noted differences.
By supporting OLM (Classic) semantics, OLM v1 accurately reflects the update graph from catalogs.
Differences from original OLM (Classic) implementation
If there are multiple possible successors, OLM v1 behavior differs in the following ways:
- In OLM (Classic), the successor closest to the channel head is chosen.
- In OLM v1, the successor with the highest semantic version (semver) is chosen.
Consider the following set of file-based catalog (FBC) channel entries:
# ...
- name: example.v3.0.0
  skips: ["example.v2.0.0"]
- name: example.v2.0.0
  skipRange: >=1.0.0 <2.0.0

If version 1.0.0 is installed, OLM v1 behavior differs in the following ways:

- OLM (Classic) does not detect an update path to v2.0.0 because v2.0.0 is skipped and not on the replaces chain.
- OLM v1 detects the update path because OLM v1 does not have a concept of a replaces chain. OLM v1 finds all entries that have a replaces, skips, or skipRange value that covers the currently installed version.
5.5.1. Support for version ranges
In Operator Lifecycle Manager (OLM) v1, you can specify a version range by using a comparison string in an Operator or extension’s custom resource (CR). If you specify a version range in the CR, OLM v1 installs or updates to the latest version of the Operator that can be resolved within the version range.
Resolved version workflow
- The resolved version is the latest version of the Operator that satisfies the constraints of the Operator and the environment.
- An Operator update within the specified range is automatically installed if it is resolved successfully.
- An update is not installed if it is outside of the specified range or if it cannot be resolved successfully.
5.5.2. Version comparison strings
You can define a version range by adding a comparison string to the spec.version field in an Operator or extension’s custom resource (CR). A comparison string is a list of space- or comma-separated values and one or more comparison operators enclosed in double quotation marks ("). You can add another comparison string by including an OR, or double vertical bar (||), comparison operator between the strings.
| Comparison operator | Definition |
|---|---|
| = | Equal to |
| != | Not equal to |
| > | Greater than |
| < | Less than |
| >= | Greater than or equal to |
| <= | Less than or equal to |
You can specify a version range in an Operator or extension’s CR by using a range comparison similar to the following example:
Example version range comparison
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace>
  serviceAccount:
    name: <service_account_installer_name>
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      version: ">=1.11, <1.13"
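Evaluating a comparison string like the one above can be sketched in a few lines. This is a deliberately simplified model (numeric versions only, no pre-release or build metadata); OLM v1 itself resolves ranges with a full semver library, so treat the parsing logic as an assumption for illustration:

```python
import re

def parse_clause(clause):
    """Split a clause such as '>=1.11' into an operator and a (major, minor, patch) tuple."""
    op, ver = re.match(r"(>=|<=|!=|>|<|=)?\s*v?(\d+(?:\.\d+){0,2})", clause.strip()).groups()
    parts = tuple(int(p) for p in ver.split("."))
    return op or "=", parts + (0,) * (3 - len(parts))

OPS = {"=": lambda a, b: a == b, "!=": lambda a, b: a != b,
       ">": lambda a, b: a > b, "<": lambda a, b: a < b,
       ">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}

def matches(version, comparison):
    """Return True if version satisfies the comparison string, e.g. '>=1.11, <1.13'."""
    v = parse_clause(version)[1]
    for alternative in comparison.split("||"):  # || separates OR-ed comparison strings
        clauses = re.split(r"[,\s]+", alternative.strip())
        if all(OPS[op](v, bound) for op, bound in map(parse_clause, clauses)):
            return True
    return False

print(matches("1.12.0", ">=1.11, <1.13"))  # True
print(matches("1.13.0", ">=1.11, <1.13"))  # False
```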
You can use wildcard characters in all types of comparison strings. OLM v1 accepts x, X, and asterisks (*) as wildcard characters. When you use a wildcard character with the equal sign (=) comparison operator, you define a comparison at the patch or minor version level.
| Wildcard comparison | Matching string |
|---|---|
| 1.11.x | >=1.11.0, <1.12.0 |
| >=1.11.x | >=1.11.0 |
| <=2.x | <3 |
You can make patch release comparisons by using the tilde (~) comparison operator. A patch release comparison allows patch-level changes when you specify a minor version, and allows minor-level changes when you specify only a major version.
| Patch release comparison | Matching string |
|---|---|
| ~1.11.0 | >=1.11.0, <1.12.0 |
| ~1 | >=1, <2 |
| ~1.12 | >=1.12, <1.13 |
| ~1.12.x | >=1.12.0, <1.13.0 |
You can use the caret (^) comparison operator to make a comparison for a major release. If you make a major release comparison before the first stable release is published, the minor versions define the API’s level of stability. In the semantic versioning (semver) specification, the first stable release is published as the 1.0.0 version.
| Major release comparison | Matching string |
|---|---|
| ^0 | >=0.0.0, <1.0.0 |
| ^0.0 | >=0.0.0, <0.1.0 |
| ^0.0.3 | >=0.0.3, <0.0.4 |
| ^0.2 | >=0.2.0, <0.3.0 |
| ^0.2.3 | >=0.2.3, <0.3.0 |
| ^1.2.x | >=1.2.0, <2.0.0 |
| ^1.2.3 | >=1.2.3, <2.0.0 |
| ^2.x | >=2.0.0, <3.0.0 |
| ^2.3 | >=2.3, <3 |
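The tilde and caret shorthands can be expanded mechanically into explicit ranges. The following sketch illustrates the semantics rather than reimplementing the semver library that OLM v1 uses; for simplicity, the caret helper accepts only full X.Y.Z operands:

```python
def expand_tilde(v):
    """~X.Y.Z pins the minor version: >=X.Y.Z, <X.(Y+1).0; ~X pins the major version."""
    nums = [int(p) for p in v.split(".")]
    if len(nums) == 1:
        return f">={nums[0]}.0.0, <{nums[0] + 1}.0.0"
    x, y = nums[0], nums[1]
    z = nums[2] if len(nums) > 2 else 0
    return f">={x}.{y}.{z}, <{x}.{y + 1}.0"

def expand_caret(v):
    """^X.Y.Z allows changes that keep the left-most non-zero component fixed."""
    x, y, z = (int(p) for p in v.split("."))  # full X.Y.Z only, for simplicity
    if x > 0:
        return f">={x}.{y}.{z}, <{x + 1}.0.0"
    if y > 0:
        return f">=0.{y}.{z}, <0.{y + 1}.0"
    return f">=0.0.{z}, <0.0.{z + 1}"

print(expand_tilde("1.11.0"))  # >=1.11.0, <1.12.0
print(expand_caret("0.2.3"))   # >=0.2.3, <0.3.0
```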
5.5.3. Example custom resources (CRs) that specify a target version
In Operator Lifecycle Manager (OLM) v1, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR).
You can define a target version by specifying any of the following fields:
- Channel
- Version number
- Version range
If you specify a channel in the CR, OLM v1 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM v1 automatically updates to the latest release that can be resolved from the channel.
Example CR with a specified channel
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace>
  serviceAccount:
    name: <service_account_installer_name>
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      channels:
      - latest 1

1 Optional: Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. Specify the value of the channels parameter as an array.
If you specify the Operator or extension’s target version in the CR, OLM v1 installs the specified version. When the target version is specified in the CR, OLM v1 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace>
  serviceAccount:
    name: <service_account_installer_name>
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      version: "1.11.1" 1

1 Optional: Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version.
If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM v1 installs the latest version of an Operator or extension that can be resolved by the Operator Controller.
Example CR with a version range specified
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace>
  serviceAccount:
    name: <service_account_installer_name>
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      version: ">1.11.1" 1

1 Optional: Specifies that the desired version range is greater than version 1.11.1. For more information, see "Support for version ranges".
After you create or update a CR, apply the configuration file by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
5.5.4. Forcing an update or rollback
OLM v1 does not support automatic updates to the next major version or rollbacks to an earlier version. If you want to perform a major version update or rollback, you must verify and force the update manually.
You must verify the consequences of forcing a manual update or rollback. Failure to verify a forced update or rollback might have catastrophic consequences such as data loss.
Prerequisites
- You have a catalog installed.
- You have an Operator or extension installed.
- You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension you want to install. For more information, see Creating a service account.
Procedure
Edit the custom resource (CR) of your Operator or extension as shown in the following example:
Example CR

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace> 1
  serviceAccount:
    name: <service_account_installer_name> 2
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      channels:
      - <channel_name> 3
      version: <version_or_version_range> 4
      upgradeConstraintPolicy: SelfCertified 5

1 Specifies the namespace where you want the bundle installed, such as pipelines or my-extension. Extensions are still cluster-scoped and might contain resources that are installed in different namespaces.
2 Specifies the name of the service account that you created to install, update, and manage your extension.
3 Optional: Specifies channel names as an array, such as pipelines-1.14 or latest.
4 Optional: Specifies the version or version range, such as 1.14.0, 1.14.x, or >=1.16, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges".
5 Optional: Specifies the upgrade constraint policy. To force an update or rollback, set the field to SelfCertified. If unspecified, the default setting is CatalogProvided. The CatalogProvided setting only updates if the new version satisfies the upgrade constraints set by the package author.
Apply the changes to your Operator or extension's CR by running the following command:
$ oc apply -f <extension_name>.yaml
5.5.5. Compatibility with OpenShift Container Platform versions
Before cluster administrators can update their OpenShift Container Platform cluster to its next minor version, they must ensure that all installed Operators are updated to a bundle version that is compatible with the cluster’s next minor version (4.y+1).
For example, Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If an extension is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is updated to the Kubernetes version where the API has been removed.
If an Operator author knows that a specific bundle version is not supported and will not work correctly, for any reason, on OpenShift Container Platform later than a certain cluster minor version, they can configure the maximum version of OpenShift Container Platform that their Operator is compatible with.
In the Operator project’s cluster service version (CSV), authors can set the olm.maxOpenShiftVersion annotation to prevent administrators from updating the cluster before updating the installed Operator to a compatible version.
Example CSV with olm.maxOpenShiftVersion annotation
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]' 1

1 Specifies the latest minor version of OpenShift Container Platform (4.y) that an Operator is compatible with. For example, setting value to 4.21 prevents cluster updates to minor versions later than 4.21 when this bundle is installed on a cluster. If the olm.maxOpenShiftVersion field is omitted, cluster updates are not blocked by this Operator.
When determining a cluster’s next minor version (4.y+1), OLM v1 only considers major and minor versions (x and y) for comparisons. It ignores any z-stream versions (4.y.z), also known as patch releases, or pre-release versions.
For example, if the cluster’s current version is 4.21.0, the next minor version is 4.22. If the current version is 4.21.0-rc1, the next minor version is still 4.22.
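The comparison described above can be sketched as follows. This is a simplified illustration, not the actual OLM v1 code; only the major and minor components participate, and z-stream or pre-release suffixes are ignored:

```python
import re

def next_minor(cluster_version):
    """'4.21.0' or '4.21.0-rc1' -> (4, 22); z-stream and pre-release parts are ignored."""
    major, minor = re.match(r"(\d+)\.(\d+)", cluster_version).groups()
    return int(major), int(minor) + 1

def update_blocked(cluster_version, max_openshift_version):
    """True if an installed Operator's olm.maxOpenShiftVersion blocks the next minor update."""
    limit = tuple(int(p) for p in max_openshift_version.split(".")[:2])
    return next_minor(cluster_version) > limit

print(next_minor("4.21.0-rc1"))          # (4, 22)
print(update_blocked("4.21.0", "4.21"))  # True
```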
5.5.5.1. Cluster updates blocked by olm cluster Operator
If an installed Operator’s olm.maxOpenShiftVersion field is set and a cluster administrator attempts to update their cluster to a version that the Operator does not provide a valid update path for, the cluster update fails and the Upgradeable status for the olm cluster Operator is set to False.
To resolve the issue, the cluster administrator must either update the installed Operator to a version with a valid update path, if one is available, or they must uninstall the Operator. Then, they can attempt the cluster update again.
5.6. Custom resource definition (CRD) upgrade safety
When you update a custom resource definition (CRD) that is provided by a cluster extension, Operator Lifecycle Manager (OLM) v1 runs a CRD upgrade safety preflight check to ensure backwards compatibility with previous versions of that CRD. The CRD update must pass the validation checks before the change is allowed to progress on a cluster.
5.6.1. Prohibited CRD upgrade changes
The following changes to an existing custom resource definition (CRD) are caught by the CRD upgrade safety preflight check and prevent the upgrade:
- A new required field is added to an existing version of the CRD
- An existing field is removed from an existing version of the CRD
- An existing field type is changed in an existing version of the CRD
- A new default value is added to a field that did not previously have a default value
- The default value of a field is changed
- An existing default value of a field is removed
- New enum restrictions are added to an existing field which did not previously have enum restrictions
- Existing enum values from an existing field are removed
- The minimum value of an existing field is increased in an existing version
- The maximum value of an existing field is decreased in an existing version
- Minimum or maximum field constraints are added to a field that did not previously have constraints
The rules for changes to minimum and maximum values apply to minimum, minLength, minProperties, minItems, maximum, maxLength, maxProperties, and maxItems constraints.
The following changes to an existing CRD are reported by the CRD upgrade safety preflight check and prevent the upgrade, though the operations are technically handled by the Kubernetes API server:
- The scope changes from Cluster to Namespaced or from Namespaced to Cluster
- An existing stored version of the CRD is removed
If the CRD upgrade safety preflight check encounters one of the prohibited upgrade changes, it logs an error for each prohibited change detected in the CRD upgrade.
If a change to the CRD does not fall into one of the prohibited categories but also cannot be positively determined to be allowed, the CRD upgrade safety preflight check prevents the upgrade and logs an error for an "unknown change".
5.6.2. Allowed CRD upgrade changes
The following changes to an existing custom resource definition (CRD) are safe for backwards compatibility and will not cause the CRD upgrade safety preflight check to halt the upgrade:
- Adding new enum values to the list of allowed enum values in a field
- An existing required field is changed to optional in an existing version
- The minimum value of an existing field is decreased in an existing version
- The maximum value of an existing field is increased in an existing version
- A new version of the CRD is added with no modifications to existing versions
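The enum rules listed in the previous sections follow the same compatibility logic: widening the set of accepted values is safe, narrowing it is not. The following Python sketch shows one way such a check could work; it is a hypothetical illustration, not the OLM v1 implementation, and the function name is an assumption.

```python
# Hypothetical sketch of the enum rules: adding new enum values is
# allowed, but removing existing values, or adding an enum restriction
# to a previously unrestricted field, is prohibited.
def check_enum(old_field: dict, new_field: dict) -> list[str]:
    old_enum = old_field.get("enum")
    new_enum = new_field.get("enum")
    if old_enum is None and new_enum is not None:
        return ["enum restriction added to an unrestricted field"]
    if old_enum is not None and new_enum is not None:
        removed = [v for v in old_enum if v not in new_enum]
        if removed:
            return [f"enum values removed: {removed}"]
    return []

# Widening the enum passes; narrowing it is flagged.
print(check_enum({"enum": ["a", "b"]}, {"enum": ["a", "b", "c"]}))
print(check_enum({"enum": ["a", "b"]}, {"enum": ["a"]}))
```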
5.6.3. Disabling the CRD upgrade safety preflight check
You can disable the custom resource definition (CRD) upgrade safety preflight check. In the ClusterExtension object that provides the CRD, set the install.preflight.crdUpgradeSafety.enforcement field to None.
Disabling the CRD upgrade safety preflight check could break backwards compatibility with stored versions of the CRD and cause other unintended consequences on the cluster.
You cannot disable individual field validators. If you disable the CRD upgrade safety preflight check, you disable all field validators.
If you disable the CRD upgrade safety preflight check in Operator Lifecycle Manager (OLM) v1, the Kubernetes API server still prevents the following operations:
- Changing the scope from Cluster to Namespaced or from Namespaced to Cluster
- Removing an existing stored version of the CRD
Prerequisites
- You have a cluster extension installed.
Procedure
1. Edit the ClusterExtension object of the CRD:

   $ oc edit clusterextension <clusterextension_name>

2. Set the install.preflight.crdUpgradeSafety.enforcement field to None:

   Example ClusterExtension object

   apiVersion: olm.operatorframework.io/v1
   kind: ClusterExtension
   metadata:
     name: clusterextension-sample
   spec:
     namespace: default
     serviceAccount:
       name: sa-example
     source:
       sourceType: "Catalog"
       catalog:
         packageName: argocd-operator
         version: 0.6.0
     install:
       preflight:
         crdUpgradeSafety:
           enforcement: None
5.6.4. Examples of unsafe CRD changes
The following examples demonstrate specific changes to sections of an example custom resource definition (CRD) that would be caught by the CRD upgrade safety preflight check.
For the following examples, consider a CRD object in the following starting state:
Example 5.19. Example CRD object
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.13.0
name: example.test.example.com
spec:
group: test.example.com
names:
kind: Sample
listKind: SampleList
plural: samples
singular: sample
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
status:
type: object
pollInterval:
type: string
type: object
served: true
storage: true
subresources:
status: {}
5.6.4.1. Scope change
In the following custom resource definition (CRD) example, the scope field is changed from Namespaced to Cluster:
Example 5.20. Example scope change in a CRD
spec:
group: test.example.com
names:
kind: Sample
listKind: SampleList
plural: samples
singular: sample
scope: Cluster
versions:
- name: v1alpha1
Example 5.21. Example error output
validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoScopeChange" validation failed: scope changed from "Namespaced" to "Cluster"
5.6.4.2. Removal of a stored version
In the following custom resource definition (CRD) example, the existing stored version, v1alpha1, is removed:
Example 5.22. Example removal of a stored version in a CRD
versions:
- name: v1alpha2
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
status:
type: object
pollInterval:
type: string
type: object
Example 5.23. Example error output
validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoStoredVersionRemoved" validation failed: stored version "v1alpha1" removed
5.6.4.3. Removal of an existing field
In the following custom resource definition (CRD) example, the pollInterval property field is removed from the v1alpha1 schema:
Example 5.24. Example removal of an existing field in a CRD
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
status:
type: object
type: object
Example 5.25. Example error output
validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoExistingFieldRemoved" validation failed: crd/test.example.com version/v1alpha1 field/^.spec.pollInterval may not be removed
5.6.4.4. Addition of a required field
In the following custom resource definition (CRD) example, the pollInterval property has been changed to a required field:
Example 5.26. Example addition of a required field in a CRD
versions:
- name: v1alpha2
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
status:
type: object
pollInterval:
type: string
type: object
required:
- pollInterval
Example 5.27. Example error output
validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "ChangeValidator" validation failed: version "v1alpha1", field "^": new required fields added: [pollInterval]
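The "new required fields added" error in this last example comes from diffing the required list of the schema between the old and new versions of the CRD. The following Python sketch shows that diff in isolation; it is a hypothetical illustration, not the OLM v1 "ChangeValidator" code, and the function name is an assumption.

```python
# Hypothetical sketch: flag properties that are newly listed as
# required in a version of the schema, which is a prohibited change
# because existing stored objects may lack the new required field.
def new_required_fields(old_schema: dict, new_schema: dict) -> list[str]:
    old_required = set(old_schema.get("required", []))
    new_required = set(new_schema.get("required", []))
    return sorted(new_required - old_required)

# Mirrors the example above: pollInterval becomes required.
old = {"properties": {"pollInterval": {"type": "string"}}}
new = {"properties": {"pollInterval": {"type": "string"}},
       "required": ["pollInterval"]}
print(new_required_fields(old, new))  # ['pollInterval']
```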