Chapter 10. Cluster extensions
10.1. Managing cluster extensions
After a catalog has been added to your cluster, you have access to the versions, patches, and over-the-air updates of the extensions and Operators that are published to the catalog.
You can use custom resources (CRs) to manage extensions declaratively from the CLI.
10.1.1. Supported extensions
Currently, Operator Lifecycle Manager (OLM) v1 supports installing cluster extensions that meet all of the following criteria:
- The extension must use the registry+v1 bundle format introduced in OLM (Classic).
- The extension must support installation via the AllNamespaces install mode.
- The extension must not use webhooks.
- The extension must not declare dependencies by using any of the following file-based catalog properties:
  - olm.gvk.required
  - olm.package.required
  - olm.constraint
OLM v1 checks that the extension you want to install meets these constraints. If the extension that you want to install does not meet these constraints, an error message is printed in the cluster extension’s conditions.
Operator Lifecycle Manager (OLM) v1 does not support the OperatorConditions API introduced in OLM (Classic).
If an extension relies solely on the OperatorConditions API to manage updates, the extension might not install correctly. Most extensions that rely on this API fail at start time, but some might fail during reconciliation.
As a workaround, you can pin your extension to a specific version. When you want to update your extension, consult the extension's documentation to find out when it is safe to pin the extension to a new version.
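For reference, version pinning is configured through the version field of the ClusterExtension custom resource, as described later in "Installing a cluster extension from a catalog". The following minimal sketch uses placeholder values:

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace>
  serviceAccount:
    name: <service_account_installer_name>
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      # Pin to an exact version; change this value only when the extension's
      # documentation confirms that the new version is safe.
      version: "<fixed_version>"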
10.1.2. Finding Operators to install from a catalog
After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install.
Currently in Operator Lifecycle Manager (OLM) v1, you cannot query on-cluster catalogs managed by catalogd. In OLM v1, you must use the opm and jq CLI tools to query the catalog registry.
Prerequisites
- You have added a catalog to your cluster.
- You have installed the jq CLI tool.
- You have installed the opm CLI tool.
Procedure
To return a list of extensions that support the AllNamespaces install mode and do not use webhooks, enter the following command:

$ opm render <catalog_registry_url>:<tag> \
  | jq -cs '[.[] | select(.schema == "olm.bundle" \
  and (.properties[] | select(.type == "olm.csv.metadata").value.installModes[] \
  | select(.type == "AllNamespaces" and .supported == true)) \
  and .spec.webhookdefinitions == null) | .package] | unique[]'
where:
catalog_registry_url
- Specifies the URL of the catalog registry, such as registry.redhat.io/redhat/redhat-operator-index.
tag
- Specifies the tag or version of the catalog, such as v4.18 or latest.

Example 10.1. Example command
$ opm render \
  registry.redhat.io/redhat/redhat-operator-index:v4.18 \
  | jq -cs '[.[] | select(.schema == "olm.bundle" \
  and (.properties[] | select(.type == "olm.csv.metadata").value.installModes[] \
  | select(.type == "AllNamespaces" and .supported == true)) \
  and .spec.webhookdefinitions == null) | .package] | unique[]'
Example 10.2. Example output
"3scale-operator" "amq-broker-rhel8" "amq-online" "amq-streams" "amq-streams-console" "ansible-automation-platform-operator" "ansible-cloud-addons-operator" "apicast-operator" "authorino-operator" "aws-load-balancer-operator" "bamoe-kogito-operator" "cephcsi-operator" "cincinnati-operator" "cluster-logging" "cluster-observability-operator" "compliance-operator" "container-security-operator" "cryostat-operator" "datagrid" "devspaces" ...
Inspect the contents of an extension’s metadata by running the following command:
$ opm render <catalog_registry_url>:<tag> \
  | jq -s '.[] | select( .schema == "olm.package") \
  | select( .name == "<package_name>")'
Example 10.3. Example command
$ opm render \
  registry.redhat.io/redhat/redhat-operator-index:v4.18 \
  | jq -s '.[] | select( .schema == "olm.package") \
  | select( .name == "openshift-pipelines-operator-rh")'
Example 10.4. Example output
{ "schema": "olm.package", "name": "openshift-pipelines-operator-rh", "defaultChannel": "latest", "icon": { "base64data": "iVBORw0KGgoAAAANSUhE...", "mediatype": "image/png" } }
10.1.2.1. Common catalog queries
You can query catalogs by using the opm and jq CLI tools. The following tables show common catalog queries that you can use when installing, updating, and managing the lifecycle of extensions.
Command syntax
$ opm render <catalog_registry_url>:<tag> | <jq_request>
where:
catalog_registry_url
- Specifies the URL of the catalog registry, such as registry.redhat.io/redhat/redhat-operator-index.
tag
- Specifies the tag or version of the catalog, such as v4.18 or latest.
jq_request
- Specifies the query you want to run on the catalog.
Example 10.5. Example command
$ opm render \
  registry.redhat.io/redhat/redhat-operator-index:v4.18 \
  | jq -cs '[.[] | select(.schema == "olm.bundle" and (.properties[] \
  | select(.type == "olm.csv.metadata").value.installModes[] \
  | select(.type == "AllNamespaces" and .supported == true)) \
  and .spec.webhookdefinitions == null) \
  | .package] | unique[]'
Query | Request |
---|---|
Available packages in a catalog |
$ opm render <catalog_registry_url>:<tag> \ | jq -s '.[] | select( .schema == "olm.package")' |
Packages that support AllNamespaces install mode and do not use webhooks |
$ opm render <catalog_registry_url>:<tag> \ | jq -cs '[.[] | select(.schema == "olm.bundle" and (.properties[] \ | select(.type == "olm.csv.metadata").value.installModes[] \ | select(.type == "AllNamespaces" and .supported == true)) \ and .spec.webhookdefinitions == null) \ | .package] | unique[]' |
Package metadata |
$ opm render <catalog_registry_url>:<tag> \ | jq -s '.[] | select( .schema == "olm.package") \ | select( .name == "<package_name>")' |
Catalog blobs in a package |
$ opm render <catalog_registry_url>:<tag> \ | jq -s '.[] | select( .package == "<package_name>")' |
Query | Request |
---|---|
Channels in a package |
$ opm render <catalog_registry_url>:<tag> \ | jq -s '.[] | select( .schema == "olm.channel" ) \ | select( .package == "<package_name>") | .name' |
Versions in a channel |
$ opm render <catalog_registry_url>:<tag> \ | jq -s '.[] | select( .package == "<package_name>" ) \ | select( .schema == "olm.channel" ) \ | select( .name == "<channel_name>" ) .entries \ | .[] | .name' |
|
$ opm render <catalog_registry_url>:<tag> \ | jq -s '.[] | select( .schema == "olm.channel" ) \ | select ( .name == "<channel_name>") \ | select( .package == "<package_name>")' |
Query | Request |
---|---|
Bundles in a package |
$ opm render <catalog_registry_url>:<tag> \ | jq -s '.[] | select( .schema == "olm.bundle" ) \ | select( .package == "<package_name>") | .name' |
|
$ opm render <catalog_registry_url>:<tag> \ | jq -s '.[] | select( .schema == "olm.bundle" ) \ | select ( .name == "<bundle_name>") \ | select( .package == "<package_name>")' |
10.1.3. Cluster extension permissions
In Operator Lifecycle Manager (OLM) Classic, a single service account with cluster administrator privileges manages all cluster extensions.
OLM v1 is designed to be more secure than OLM (Classic) by default. OLM v1 manages a cluster extension by using the service account specified in an extension’s custom resource (CR). Cluster administrators can create a service account for each cluster extension. As a result, administrators can follow the principle of least privilege and assign only the role-based access controls (RBAC) to install and manage that extension.
You must add each permission to either a cluster role or role. Then you must bind the cluster role or role to the service account with a cluster role binding or role binding.
You can scope the RBAC to either the cluster or to a namespace. Use cluster roles and cluster role bindings to scope permissions to the cluster. Use roles and role bindings to scope permissions to a namespace. Whether you scope the permissions to the cluster or to a namespace depends on the design of the extension you want to install and manage.
To simplify the following procedure and improve readability, the example manifests use permissions that are scoped to the cluster. You can further restrict some of the permissions by scoping them to the namespace of the extension instead of the cluster.
If a new version of an installed extension requires additional permissions, OLM v1 halts the update process until a cluster administrator grants those permissions.
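If an update stalls because of missing permissions, you can inspect the extension's status conditions to see the reported error. The exact message depends on the RBAC that is missing, so treat the following check as a general sketch:

$ oc get clusterextension <clusterextension_name> -o yaml

Review the Progressing condition in the status output for details about why the update cannot proceed.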
10.1.3.1. Creating a namespace
Before you create a service account to install and manage your cluster extension, you must create a namespace.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
Create a new namespace for the service account of the extension that you want to install by running the following command:
$ oc adm new-project <new_namespace>
10.1.3.2. Creating a service account for an extension
You must create a service account to install, manage, and update a cluster extension.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
Create a service account, similar to the following example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <extension>-installer
  namespace: <namespace>

Example 10.6. Example extension-service-account.yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipelines-installer
  namespace: pipelines
Apply the service account by running the following command:
$ oc apply -f extension-service-account.yaml
10.1.3.3. Downloading the bundle manifests of an extension
Use the opm CLI tool to download the bundle manifests of the extension that you want to install. Use the CLI tool or text editor of your choice to view the manifests and find the required permissions to install and manage the extension.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have decided which extension you want to install.
- You have installed the opm CLI tool.
Procedure
Inspect the available versions and images of the extension you want to install by running the following command:
$ opm render <registry_url>:<tag_or_version> | \
  jq -cs '.[] | select( .schema == "olm.bundle" ) | \
  select( .package == "<extension_name>") | \
  {"name":.name, "image":.image}'
Example 10.7. Example command
$ opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 | \
  jq -cs '.[] | select( .schema == "olm.bundle" ) | \
  select( .package == "openshift-pipelines-operator-rh") | \
  {"name":.name, "image":.image}'
Example 10.8. Example output
{"name":"openshift-pipelines-operator-rh.v1.14.3","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:3f64b29f6903981470d0917b2557f49d84067bccdba0544bfe874ec4412f45b0"} {"name":"openshift-pipelines-operator-rh.v1.14.4","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec"} {"name":"openshift-pipelines-operator-rh.v1.14.5","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838"} {"name":"openshift-pipelines-operator-rh.v1.15.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:22be152950501a933fe6e1df0e663c8056ca910a89dab3ea801c3bb2dc2bf1e6"} {"name":"openshift-pipelines-operator-rh.v1.15.1","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:64afb32e3640bb5968904b3d1a317e9dfb307970f6fda0243e2018417207fd75"} {"name":"openshift-pipelines-operator-rh.v1.15.2","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd"} {"name":"openshift-pipelines-operator-rh.v1.16.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:a46b7990c0ad07dae78f43334c9bd5e6cba7b50ca60d3f880099b71e77bed214"} {"name":"openshift-pipelines-operator-rh.v1.16.1","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:29f27245e93b3f605647993884751c490c4a44070d3857a878d2aee87d43f85b"} {"name":"openshift-pipelines-operator-rh.v1.16.2","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2037004666526c90329f4791f14cb6cc06e8775cb84ba107a24cc4c2cf944649"} {"name":"openshift-pipelines-operator-rh.v1.17.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:d75065e999826d38408049aa1fde674cd1e45e384bfdc96523f6bad58a0e0dbc"}
Make a directory to extract the image of the bundle that you want to install by running the following command:
$ mkdir <new_dir>
Change into the directory by running the following command:
$ cd <new_dir>
Find the image reference of the version that you want to install and run the following command:
$ oc image extract <full_path_to_registry_image>@sha256:<sha>
Example command
$ oc image extract registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838
Change into the manifests directory by running the following command:

$ cd manifests
View the contents of the manifests directory by entering the following command. The output lists the manifests of the resources required to install, manage, and operate your extension.
$ tree
Example 10.9. Example output
. ├── manifests │ ├── config-logging_v1_configmap.yaml │ ├── openshift-pipelines-operator-monitor_monitoring.coreos.com_v1_servicemonitor.yaml │ ├── openshift-pipelines-operator-prometheus-k8s-read-binding_rbac.authorization.k8s.io_v1_rolebinding.yaml │ ├── openshift-pipelines-operator-read_rbac.authorization.k8s.io_v1_role.yaml │ ├── openshift-pipelines-operator-rh.clusterserviceversion.yaml │ ├── operator.tekton.dev_manualapprovalgates.yaml │ ├── operator.tekton.dev_openshiftpipelinesascodes.yaml │ ├── operator.tekton.dev_tektonaddons.yaml │ ├── operator.tekton.dev_tektonchains.yaml │ ├── operator.tekton.dev_tektonconfigs.yaml │ ├── operator.tekton.dev_tektonhubs.yaml │ ├── operator.tekton.dev_tektoninstallersets.yaml │ ├── operator.tekton.dev_tektonpipelines.yaml │ ├── operator.tekton.dev_tektonresults.yaml │ ├── operator.tekton.dev_tektontriggers.yaml │ ├── tekton-config-defaults_v1_configmap.yaml │ ├── tekton-config-observability_v1_configmap.yaml │ ├── tekton-config-read-rolebinding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml │ ├── tekton-config-read-role_rbac.authorization.k8s.io_v1_clusterrole.yaml │ ├── tekton-operator-controller-config-leader-election_v1_configmap.yaml │ ├── tekton-operator-info_rbac.authorization.k8s.io_v1_rolebinding.yaml │ ├── tekton-operator-info_rbac.authorization.k8s.io_v1_role.yaml │ ├── tekton-operator-info_v1_configmap.yaml │ ├── tekton-operator_v1_service.yaml │ ├── tekton-operator-webhook-certs_v1_secret.yaml │ ├── tekton-operator-webhook-config-leader-election_v1_configmap.yaml │ ├── tekton-operator-webhook_v1_service.yaml │ ├── tekton-result-read-rolebinding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml │ └── tekton-result-read-role_rbac.authorization.k8s.io_v1_clusterrole.yaml ├── metadata │ ├── annotations.yaml │ └── properties.yaml └── root └── buildinfo ├── content_manifests │ └── openshift-pipelines-operator-bundle-container-v1.16.2-3.json └── Dockerfile-openshift-pipelines-pipelines-operator-bundle-container-v1.16.2-3
Next steps
- View the contents of the install.spec.clusterpermissions stanza of the cluster service version (CSV) file in the manifests directory using your preferred CLI tool or text editor. The following examples reference the openshift-pipelines-operator-rh.clusterserviceversion.yaml file of the Red Hat OpenShift Pipelines Operator.
- Keep this file open as a reference while assigning permissions to the cluster role file in the following procedure.
10.1.3.4. Required permissions to install and manage a cluster extension
You must inspect the manifests included in the bundle image of a cluster extension to assign the necessary permissions. The service account requires enough role-based access controls (RBAC) to create and manage the following resources.
Follow the principle of least privilege and scope permissions to specific resource names with the least RBAC required to run.
- Admission plugins
- Because OpenShift Container Platform clusters use the OwnerReferencesPermissionEnforcement admission plugin, cluster extensions must have permissions to update the blockOwnerDeletion and ownerReferences finalizers.
- Cluster role and cluster role bindings for the controllers of the extension
- You must define RBAC so that the installation service account can create and manage cluster roles and cluster role bindings for the extension controllers.
- Cluster service version (CSV)
- You must define RBAC for the resources defined in the CSV of the cluster extension.
- Cluster-scoped bundle resources
- You must define RBAC to create and manage any cluster-scoped resources included in the bundle. If a cluster-scoped resource matches another resource type, such as a ClusterRole, you can add the resource to the pre-existing rule under the resources or resourceNames field.
- Custom resource definitions (CRDs)
- You must define RBAC so that the installation service account can create and manage the CRDs for the extension. Also, you must grant the service account for the controller of the extension the RBAC to manage its CRDs.
- Deployments
- You must define RBAC for the installation service account to create and manage the deployments needed by the extension controller, such as services and config maps.
- Extension permissions
- You must include RBAC for the permissions and cluster permissions defined in the CSV. The installation service account needs the ability to grant these permissions to the extension controller, which needs these permissions to run.
- Namespace-scoped bundle resources
- You must define RBAC for any namespace-scoped bundle resources. The installation service account requires permission to create and manage resources, such as config maps or services.
- Roles and role bindings
- You must define RBAC for any roles or role bindings defined in the CSV. The installation service account needs permission to create and manage those roles and role bindings.
- Service accounts
- You must define RBAC so that the installation service account can create and manage the service accounts for the extension controllers.
10.1.3.5. Creating a cluster role for an extension
You must review the install.spec.clusterpermissions stanza of the cluster service version (CSV) and the manifests of an extension carefully to define the required role-based access controls (RBAC) of the extension that you want to install. You must create a cluster role by copying the required RBAC from the CSV to the new manifest.
If you want to test the process for installing and updating an extension in OLM v1, you can use the following cluster role to grant cluster administrator permissions. This manifest is for testing purposes only. It should not be used in production clusters.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <extension>-installer-clusterrole
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
The following procedure uses the openshift-pipelines-operator-rh.clusterserviceversion.yaml file of the Red Hat OpenShift Pipelines Operator as an example. The examples include excerpts of the RBAC required to install and manage the OpenShift Pipelines Operator. For a complete manifest, see "Example cluster role for the Red Hat OpenShift Pipelines Operator".
To simplify the following procedure and improve readability, the example manifests use permissions that are scoped to the cluster. You can further restrict some of the permissions by scoping them to the namespace of the extension instead of the cluster.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have downloaded the manifests in the image reference of the extension that you want to install.
Procedure
Create a new cluster role manifest, similar to the following example:
Example <extension>-cluster-role.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <extension>-installer-clusterrole
Edit your cluster role manifest to include permission to update finalizers on the extension, similar to the following example:
Example <extension>-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-installer-clusterrole
rules:
- apiGroups:
  - olm.operatorframework.io
  resources:
  - clusterextensions/finalizers
  verbs:
  - update
  # Scoped to the name of the ClusterExtension
  resourceNames:
  - <metadata_name> 1
- 1
- Specifies the value from the metadata.name field from the custom resource (CR) of the extension.
Search for the clusterrole and clusterrolebindings values in the rules.resources field in the extension’s CSV file.
Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:
Example cluster role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipelines-installer-clusterrole
rules:
# ...
# ClusterRoles and ClusterRoleBindings for the controllers of the extension
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  verbs:
  - create 1
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  verbs:
  - get
  - update
  - patch
  - delete
  resourceNames: 2
  - "*"
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  verbs:
  - create
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  verbs:
  - get
  - update
  - patch
  - delete
  resourceNames:
  - "*"
# ...
- 1
- You cannot scope create, list, and watch permissions to specific resource names (the resourceNames field). You must scope these permissions to their resources (the resources field).
- 2
- Some resource names are generated by using the following format: <package_name>.<hash>. After you install the extension, look up the resource names for the cluster roles and cluster role bindings for the controller of the extension. Replace the wildcard characters in this example with the generated names and follow the principle of least privilege.
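For example, after the extension is installed, you can look up the generated cluster roles and cluster role bindings for its controller with a command similar to the following; the grep pattern is only illustrative:

$ oc get clusterroles,clusterrolebindings | grep <package_name>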
Search for the customresourcedefinitions value in the rules.resources field in the extension’s CSV file.
Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-installer-clusterrole rules: # ... # Custom resource definitions of the extension - apiGroups: - apiextensions.k8s.io resources: - customresourcedefinitions verbs: - create - list - watch - apiGroups: - apiextensions.k8s.io resources: - customresourcedefinitions verbs: - get - update - patch - delete resourceNames: - manualapprovalgates.operator.tekton.dev - openshiftpipelinesascodes.operator.tekton.dev - tektonaddons.operator.tekton.dev - tektonchains.operator.tekton.dev - tektonconfigs.operator.tekton.dev - tektonhubs.operator.tekton.dev - tektoninstallersets.operator.tekton.dev - tektonpipelines.operator.tekton.dev - tektonresults.operator.tekton.dev - tektontriggers.operator.tekton.dev # ...
Search the CSV file for stanzas with the permissions and clusterPermissions values in the rules.resources spec.
Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-installer-clusterrole rules: # ... # Excerpt from install.spec.clusterPermissions - apiGroups: - '' resources: - nodes - pods - services - endpoints - persistentvolumeclaims - events - configmaps - secrets - pods/log - limitranges verbs: - create - list - watch - delete - deletecollection - patch - get - update - apiGroups: - extensions - apps resources: - ingresses - ingresses/status verbs: - create - list - watch - delete - patch - get - update # ...
Search the CSV file for resources under the install.spec.deployments stanza.
Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-installer-clusterrole rules: # ... # Excerpt from install.spec.deployments - apiGroups: - apps resources: - deployments verbs: - create - list - watch - apiGroups: - apps resources: - deployments verbs: - get - update - patch - delete # scoped to the extension controller deployment name resourceNames: - openshift-pipelines-operator - tekton-operator-webhook # ...
Search for the services and configmaps values in the rules.resources field in the extension’s CSV file.
Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-installer-clusterrole rules: # ... # Services - apiGroups: - "" resources: - services verbs: - create - apiGroups: - "" resources: - services verbs: - get - list - watch - update - patch - delete # scoped to the service name resourceNames: - openshift-pipelines-operator-monitor - tekton-operator - tekton-operator-webhook # configmaps - apiGroups: - "" resources: - configmaps verbs: - create - apiGroups: - "" resources: - configmaps verbs: - get - list - watch - update - patch - delete # scoped to the configmap name resourceNames: - config-logging - tekton-config-defaults - tekton-config-observability - tekton-operator-controller-config-leader-election - tekton-operator-info - tekton-operator-webhook-config-leader-election - apiGroups: - operator.tekton.dev resources: - tekton-config-read-role - tekton-result-read-role verbs: - get - watch - list
Add the cluster role manifest to the cluster by running the following command:
$ oc apply -f <extension>-installer-clusterrole.yaml
Example command
$ oc apply -f pipelines-installer-clusterrole.yaml
10.1.3.6. Example cluster role for the Red Hat OpenShift Pipelines Operator
See the following example for a complete cluster role manifest for the OpenShift Pipelines Operator.
--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-installer-clusterrole rules: - apiGroups: - olm.operatorframework.io resources: - clusterextensions/finalizers verbs: - update # Scoped to the name of the ClusterExtension resourceNames: - pipes # the value from <metadata.name> from the extension's custom resource (CR) # ClusterRoles and ClusterRoleBindings for the controllers of the extension - apiGroups: - rbac.authorization.k8s.io resources: - clusterroles verbs: - create - list - watch - apiGroups: - rbac.authorization.k8s.io resources: - clusterroles verbs: - get - update - patch - delete resourceNames: - "*" - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings verbs: - create - list - watch - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings verbs: - get - update - patch - delete resourceNames: - "*" # Extension's custom resource definitions - apiGroups: - apiextensions.k8s.io resources: - customresourcedefinitions verbs: - create - list - watch - apiGroups: - apiextensions.k8s.io resources: - customresourcedefinitions verbs: - get - update - patch - delete resourceNames: - manualapprovalgates.operator.tekton.dev - openshiftpipelinesascodes.operator.tekton.dev - tektonaddons.operator.tekton.dev - tektonchains.operator.tekton.dev - tektonconfigs.operator.tekton.dev - tektonhubs.operator.tekton.dev - tektoninstallersets.operator.tekton.dev - tektonpipelines.operator.tekton.dev - tektonresults.operator.tekton.dev - tektontriggers.operator.tekton.dev - apiGroups: - '' resources: - nodes - pods - services - endpoints - persistentvolumeclaims - events - configmaps - secrets - pods/log - limitranges verbs: - create - list - watch - delete - deletecollection - patch - get - update - apiGroups: - extensions - apps resources: - ingresses - ingresses/status verbs: - create - list - watch - delete - patch - get - update - apiGroups: - '' resources: - namespaces verbs: - get - list - create - update - delete - patch - watch - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets - deployments/finalizers verbs: - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - monitoring.coreos.com resources: - servicemonitors verbs: - get - create - delete - apiGroups: - rbac.authorization.k8s.io resources: - clusterroles - roles verbs: - delete - deletecollection - create - patch - get - list - update - watch - bind - escalate - apiGroups: - '' resources: - serviceaccounts verbs: - get - list - create - update - delete - patch - watch - impersonate - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - rolebindings verbs: - get - update - delete - patch - create - list - watch - apiGroups: - apiextensions.k8s.io resources: - customresourcedefinitions - customresourcedefinitions/status verbs: - get - create - update - delete - list - patch - watch - apiGroups: - admissionregistration.k8s.io resources: - mutatingwebhookconfigurations - validatingwebhookconfigurations verbs: - get - list - create - update - delete - patch - watch - apiGroups: - build.knative.dev resources: - builds - buildtemplates - clusterbuildtemplates verbs: - get - list - create - update - delete - patch - watch - apiGroups: - extensions resources: - deployments verbs: - get - list - create - update - delete - patch - watch - apiGroups: - extensions resources: - deployments/finalizers verbs: - get - list - create - update - delete - patch - watch - apiGroups: - 
operator.tekton.dev resources: - '*' - tektonaddons verbs: - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - tekton.dev - triggers.tekton.dev - operator.tekton.dev - pipelinesascode.tekton.dev resources: - '*' verbs: - add - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - dashboard.tekton.dev resources: - '*' - tektonaddons verbs: - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use - get - list - create - update - delete - apiGroups: - events.k8s.io resources: - events verbs: - create - apiGroups: - route.openshift.io resources: - routes verbs: - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - coordination.k8s.io resources: - leases verbs: - get - list - create - update - delete - patch - watch - apiGroups: - console.openshift.io resources: - consoleyamlsamples - consoleclidownloads - consolequickstarts - consolelinks verbs: - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - delete - create - patch - get - list - update - watch - apiGroups: - policy resources: - poddisruptionbudgets verbs: - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - monitoring.coreos.com resources: - servicemonitors verbs: - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - delete - deletecollection - create - patch - get - list - update - watch - apiGroups: - '' resources: - namespaces/finalizers verbs: - update - apiGroups: - resolution.tekton.dev resources: - resolutionrequests - resolutionrequests/status verbs: - get - list - watch - create - delete - update - patch - apiGroups: - console.openshift.io resources: - consoleplugins verbs: - get - list - watch - create - delete - update - patch # Deployments specified in install.spec.deployments - apiGroups: - apps resources: - deployments verbs: - create - list - watch - apiGroups: - apps resources: - deployments verbs: - get - update - patch - delete # scoped to the extension controller deployment name resourceNames: - openshift-pipelines-operator - tekton-operator-webhook # Service accounts in the CSV - apiGroups: - "" resources: - serviceaccounts verbs: - create - list - watch - apiGroups: - "" resources: - serviceaccounts verbs: - get - update - patch - delete # scoped to the extension controller's deployment service account resourceNames: - openshift-pipelines-operator # Services - apiGroups: - "" resources: - services verbs: - create - apiGroups: - "" resources: - services verbs: - get - list - watch - update - patch - delete # scoped to the service name resourceNames: - openshift-pipelines-operator-monitor - tekton-operator - tekton-operator-webhook # configmaps - apiGroups: - "" resources: - configmaps verbs: - create - apiGroups: - "" resources: - configmaps verbs: - get - list - watch - update - patch - delete # scoped to the configmap name resourceNames: - config-logging - tekton-config-defaults - tekton-config-observability - tekton-operator-controller-config-leader-election - tekton-operator-info - tekton-operator-webhook-config-leader-election - apiGroups: - operator.tekton.dev resources: - tekton-config-read-role - tekton-result-read-role verbs: - get - watch - list ---
10.1.3.7. Creating a cluster role binding for an extension
After you have created a service account and cluster role, you must bind the cluster role to the service account with a cluster role binding manifest.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have created and applied the following resources for the extension you want to install:
  - Namespace
  - Service account
  - Cluster role
Procedure
Create a cluster role binding to bind the cluster role to the service account, similar to the following example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <extension>-installer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <extension>-installer-clusterrole
subjects:
- kind: ServiceAccount
  name: <extension>-installer
  namespace: <namespace>
Example 10.10. Example pipelines-cluster-role-binding.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pipelines-installer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pipelines-installer-clusterrole
subjects:
- kind: ServiceAccount
  name: pipelines-installer
  namespace: pipelines
Apply the cluster role binding by running the following command:
$ oc apply -f pipelines-cluster-role-binding.yaml
10.1.4. Installing a cluster extension from a catalog
You can install an extension from a catalog by creating a custom resource (CR) and applying it to the cluster. Operator Lifecycle Manager (OLM) v1 supports installing cluster extensions, including OLM (Classic) Operators in the registry+v1 bundle format, that are scoped to the cluster. For more information, see Supported extensions.
Prerequisites
- You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension that you want to install. For more information, see "Cluster extension permissions".
Procedure
Create a CR, similar to the following example:
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace> 1
  serviceAccount:
    name: <service_account_installer_name> 2
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      channels:
      - <channel_name> 3
      version: <version_or_version_range> 4
      upgradeConstraintPolicy: CatalogProvided 5
- 1
- Specifies the namespace where you want the bundle installed, such as pipelines or my-extension. Extensions are still cluster-scoped and might contain resources that are installed in different namespaces.
- 2
- Specifies the name of the service account you created to install, update, and manage your extension.
- 3
- Optional: Specifies channel names as an array, such as pipelines-1.14 or latest.
- 4
- Optional: Specifies the version or version range, such as 1.14.0, 1.14.x, or >=1.16, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges".
- 5
- Optional: Specifies the upgrade constraint policy. If unspecified, the default setting is CatalogProvided. The CatalogProvided setting only updates if the new version satisfies the upgrade constraints set by the package author. To force an update or rollback, set the field to SelfCertified. For more information, see "Forcing an update or rollback".
Example pipelines-operator.yaml CR

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      version: "1.14.x"
Apply the CR to the cluster by running the following command:
$ oc apply -f pipelines-operator.yaml
Example output
clusterextension.olm.operatorframework.io/pipelines-operator created
Verification
View the Operator or extension’s CR in the YAML format by running the following command:
$ oc get clusterextension pipelines-operator -o yaml
Example 10.11. Example output
apiVersion: v1 items: - apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"1.14.x"},"sourceType":"Catalog"}}} creationTimestamp: "2025-02-18T21:48:13Z" finalizers: - olm.operatorframework.io/cleanup-unpack-cache - olm.operatorframework.io/cleanup-contentmanager-cache generation: 1 name: pipelines-operator resourceVersion: "72725" uid: e18b13fb-a96d-436f-be75-a9a0f2b07993 spec: namespace: pipelines serviceAccount: name: pipelines-installer source: catalog: packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: CatalogProvided version: 1.14.x sourceType: Catalog status: conditions: - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: Deprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: PackageDeprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: ChannelDeprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: BundleDeprecated - lastTransitionTime: "2025-02-18T21:48:16Z" message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838 successfully observedGeneration: 1 reason: Succeeded status: "True" type: Installed - lastTransitionTime: "2025-02-18T21:48:16Z" message: desired state reached observedGeneration: 1 reason: Succeeded status: "True" type: Progressing install: bundle: name: openshift-pipelines-operator-rh.v1.14.5 version: 1.14.5 kind: List metadata: resourceVersion: ""
where:
spec.channel
- Displays the channel defined in the CR of the extension.
spec.version
- Displays the version or version range defined in the CR of the extension.
status.conditions
- Displays information about the status and health of the extension.
type: Deprecated
- Displays whether one or more of the following are deprecated:
type: PackageDeprecated
- Displays whether the resolved package is deprecated.
type: ChannelDeprecated
- Displays whether the resolved channel is deprecated.
type: BundleDeprecated
- Displays whether the resolved bundle is deprecated.
The value of False in the status field indicates that the reason: Deprecated condition is not deprecated. The value of True in the status field indicates that the reason: Deprecated condition is deprecated.
installedBundle.name
- Displays the name of the bundle installed.
installedBundle.version
- Displays the version of the bundle installed.
10.1.5. Updating a cluster extension
You can update your cluster extension or Operator by manually editing the custom resource (CR) and applying the changes.
Prerequisites
- You have an Operator or extension installed.
- You have installed the jq CLI tool.
- You have installed the opm CLI tool.
Procedure
Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps:
Get a list of channels from a selected package by running the following command:
$ opm render <catalog_registry_url>:<tag> \
  | jq -s '.[] | select( .schema == "olm.channel" ) \
  | select( .package == "openshift-pipelines-operator-rh") | .name'
Example 10.12. Example command
$ opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 \
  | jq -s '.[] | select( .schema == "olm.channel" ) \
  | select( .package == "openshift-pipelines-operator-rh") | .name'
Example 10.13. Example output
"latest" "pipelines-1.14" "pipelines-1.15" "pipelines-1.16" "pipelines-1.17"
Get a list of the versions published in a channel by running the following command:
$ opm render <catalog_registry_url>:<tag> \
  | jq -s '.[] | select( .package == "<package_name>" ) \
  | select( .schema == "olm.channel" ) \
  | select( .name == "<channel_name>" ) | .entries \
  | .[] | .name'
Example 10.14. Example command
$ opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 \
  | jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) \
  | select( .schema == "olm.channel" ) | select( .name == "latest" ) \
  | .entries | .[] | .name'
Example 10.15. Example output
"openshift-pipelines-operator-rh.v1.15.0" "openshift-pipelines-operator-rh.v1.16.0" "openshift-pipelines-operator-rh.v1.17.0" "openshift-pipelines-operator-rh.v1.17.1"
Find out what version or channel is specified in your Operator or extension’s CR by running the following command:
$ oc get clusterextension <operator_name> -o yaml
Example command
$ oc get clusterextension pipelines-operator -o yaml
Example 10.16. Example output
apiVersion: v1 items: - apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"1.14.x"},"sourceType":"Catalog"}}} creationTimestamp: "2025-02-18T21:48:13Z" finalizers: - olm.operatorframework.io/cleanup-unpack-cache - olm.operatorframework.io/cleanup-contentmanager-cache generation: 1 name: pipelines-operator resourceVersion: "72725" uid: e18b13fb-a96d-436f-be75-a9a0f2b07993 spec: namespace: pipelines serviceAccount: name: pipelines-installer source: catalog: packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: CatalogProvided version: 1.14.x sourceType: Catalog status: conditions: - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: Deprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: PackageDeprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: ChannelDeprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: BundleDeprecated - lastTransitionTime: "2025-02-18T21:48:16Z" message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838 successfully observedGeneration: 1 reason: Succeeded status: "True" type: Installed - lastTransitionTime: "2025-02-18T21:48:16Z" message: desired state reached observedGeneration: 1 reason: Succeeded status: "True" type: Progressing install: bundle: name: openshift-pipelines-operator-rh.v1.14.5 version: 1.14.5 kind: List metadata: resourceVersion: ""
Edit your CR by using one of the following methods:
If you want to pin your Operator or extension to a specific version, such as 1.15.0, edit your CR similar to the following example:

Example pipelines-operator.yaml CR

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      version: "1.15.0" 1
- 1
- Update the version from 1.14.x to 1.15.0.
If you want to define a range of acceptable update versions, edit your CR similar to the following example:
Example CR with a version range specified
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      version: ">1.15, <1.17" 1
- 1
- Specifies that the desired version range is greater than version 1.15 and less than 1.17. For more information, see "Support for version ranges" and "Version comparison strings".
If you want to update to the latest version that can be resolved from a channel, edit your CR similar to the following example:
Example CR with a specified channel
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      channels:
      - latest 1
- 1
- Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. Enter values as an array.
If you want to specify a channel and version or version range, edit your CR similar to the following example:
Example CR with a specified channel and version range
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      channels:
      - latest
      version: "<1.16"
For more information, see "Example custom resources (CRs) that specify a target version".
Apply the update to the cluster by running the following command:
$ oc apply -f pipelines-operator.yaml
Example output
clusterextension.olm.operatorframework.io/pipelines-operator configured
Verification
Verify that the channel and version updates have been applied by running the following command:
$ oc get clusterextension pipelines-operator -o yaml
Example 10.17. Example output
apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"\u003c1.16"},"sourceType":"Catalog"}}} creationTimestamp: "2025-02-18T21:48:13Z" finalizers: - olm.operatorframework.io/cleanup-unpack-cache - olm.operatorframework.io/cleanup-contentmanager-cache generation: 2 name: pipes resourceVersion: "90693" uid: e18b13fb-a96d-436f-be75-a9a0f2b07993 spec: namespace: pipelines serviceAccount: name: pipelines-installer source: catalog: packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: CatalogProvided version: <1.16 sourceType: Catalog status: conditions: - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: Deprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: PackageDeprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: ChannelDeprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: BundleDeprecated - lastTransitionTime: "2025-02-18T21:48:16Z" message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd successfully observedGeneration: 2 reason: Succeeded status: "True" type: Installed - lastTransitionTime: "2025-02-18T21:48:16Z" message: desired state reached observedGeneration: 2 reason: Succeeded status: "True" type: Progressing install: bundle: name: openshift-pipelines-operator-rh.v1.15.2 version: 1.15.2
Troubleshooting
If you specify a target version or channel that is deprecated or does not exist, you can run the following command to check the status of your extension:
$ oc get clusterextension <operator_name> -o yaml
Example 10.18. Example output for a version that does not exist
apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"9.x"},"sourceType":"Catalog"}}} creationTimestamp: "2025-02-18T21:48:13Z" finalizers: - olm.operatorframework.io/cleanup-unpack-cache - olm.operatorframework.io/cleanup-contentmanager-cache generation: 3 name: pipes resourceVersion: "93334" uid: e18b13fb-a96d-436f-be75-a9a0f2b07993 spec: namespace: pipelines serviceAccount: name: pipelines-installer source: catalog: packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: CatalogProvided version: 9.x sourceType: Catalog status: conditions: - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: Deprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: PackageDeprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: ChannelDeprecated - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: BundleDeprecated - lastTransitionTime: "2025-02-18T21:48:16Z" message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd successfully observedGeneration: 3 reason: Succeeded status: "True" type: Installed - lastTransitionTime: "2025-02-18T21:48:16Z" message: 'error upgrading from currently installed version "1.15.2": no bundles found for package "openshift-pipelines-operator-rh" matching version "9.x"' observedGeneration: 3 reason: Retrying status: "True" type: Progressing install: bundle: name: openshift-pipelines-operator-rh.v1.15.2 version: 1.15.2
10.1.6. Deleting an Operator
You can delete an Operator and its custom resource definitions (CRDs) by deleting the ClusterExtension custom resource (CR).
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Delete an Operator and its CRDs by running the following command:
$ oc delete clusterextension <operator_name>
Example output
clusterextension.olm.operatorframework.io "<operator_name>" deleted
Verification
Run the following commands to verify that your Operator and its resources were deleted:
Verify the Operator is deleted by running the following command:
$ oc get clusterextensions
Example output
No resources found
Verify that the Operator’s system namespace is deleted by running the following command:
$ oc get ns <operator_name>-system
Example output
Error from server (NotFound): namespaces "<operator_name>-system" not found
10.2. User access to extension resources
After a cluster extension has been installed and is being managed by Operator Lifecycle Manager (OLM) v1, the extension can often provide CustomResourceDefinition objects (CRDs) that expose new API resources on the cluster. Cluster administrators typically have full management access to these resources by default, whereas non-cluster administrator users, or regular users, might lack sufficient permissions.
OLM v1 does not automatically configure or manage role-based access control (RBAC) for regular users to interact with the APIs provided by installed extensions. Cluster administrators must define the required RBAC policy to create, view, or edit these custom resources (CRs) for such users.
The RBAC permissions described for user access to extension resources are different from the permissions that must be added to a service account to enable OLM v1-based initial installation of a cluster extension itself. For more on RBAC requirements while installing an extension, see "Cluster extension permissions" in "Managing extensions".
10.2.1. Common default cluster roles for users
An installed cluster extension might include default cluster roles to determine role-based access control (RBAC) for regular users to API resources provided by the extension. A common set of cluster roles can resemble the following policies:
view cluster role
- Grants read-only access to all custom resource (CR) objects of specified API resources across the cluster. Intended for regular users who require visibility into the resources without any permissions to modify them. Ideal for monitoring purposes and limited access viewing.
edit cluster role
- Allows users to modify all CR objects within the cluster. Enables users to create, update, and delete resources, making it suitable for team members who must manage resources but should not control RBAC or manage permissions for others.
admin cluster role
- Provides full permissions, including create, update, and delete verbs, over all custom resource objects for the specified API resources across the cluster.
Additional resources
- User-facing roles (Kubernetes documentation)
10.2.2. Finding API groups and resources exposed by a cluster extension
To create appropriate RBAC policies for granting user access to cluster extension resources, you must know which API groups and resources are exposed by the installed extension. As an administrator, you can inspect custom resource definitions (CRDs) installed on the cluster by using the OpenShift CLI (oc).
Prerequisites
- A cluster extension has been installed on your cluster.
Procedure
To list installed CRDs while specifying a label selector targeting a specific cluster extension by name to find only CRDs owned by that extension, run the following command:
$ oc get crds -l 'olm.operatorframework.io/owner-kind=ClusterExtension,olm.operatorframework.io/owner-name=<cluster_extension_name>'
Alternatively, you can search through all installed CRDs and individually inspect them by CRD name:
List all available custom resource definitions (CRDs) currently installed on the cluster by running the following command:
$ oc get crds
Find the CRD you are looking for in the output.
Inspect the individual CRD further to find its API groups by running the following command:
$ oc get crd <crd_name> -o yaml
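In the CRD output, the spec.group field and the spec.names.plural field identify the values to use in your RBAC rules. The following excerpt is a generic sketch of the relevant fields; the placeholder values are illustrative, and the scope can be Namespaced or Cluster depending on the extension:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: <crd_name>
spec:
  group: <cluster_extension_api_group>  # use this value in the apiGroups field of your rules
  names:
    plural: <cluster_extension_custom_resources>  # use this value in the resources field of your rules
    kind: <CustomResourceKind>
  scope: Namespaced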
10.2.3. Granting user access to extension resources by using custom role bindings
As a cluster administrator, you can manually create and configure role-based access control (RBAC) policies to grant user access to extension resources by using custom role bindings.
Prerequisites
- A cluster extension has been installed on your cluster.
- You have a list of API groups and resource names, as described in "Finding API groups and resources exposed by a cluster extension".
Procedure
If the installed cluster extension does not provide default cluster roles, manually create one or more roles:
Consider the use cases for the set of roles described in "Common default cluster roles for users".
For example, create one or more of the following ClusterRole object definitions, replacing <cluster_extension_api_group> and <cluster_extension_custom_resource> with the actual API group and resource names provided by the installed cluster extension:

Example view-custom-resource.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-custom-resource
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resources>
  verbs:
  - get
  - list
  - watch
Example edit-custom-resource.yaml file
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: edit-custom-resource
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resources>
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
Example admin-custom-resource.yaml file
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: admin-custom-resource
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resources>
  verbs:
  - '*' 1
- 1
- Setting a wildcard (*) in verbs allows all actions on the specified resources.
Create the cluster roles by running the following command for any YAML files you created:
$ oc create -f <filename>.yaml
Bind the cluster roles to individual user or group names to grant them the necessary permissions for the resource:
Create an object definition for either a cluster role binding to grant access across all namespaces or a role binding to grant access within a specific namespace:
The following example cluster role bindings grant read-only view access to the custom resource across all namespaces:
Example ClusterRoleBinding object for a user
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-custom-resource-binding
subjects:
- kind: User
  name: <user_name>
roleRef:
  kind: ClusterRole
  name: view-custom-resource
  apiGroup: rbac.authorization.k8s.io
Example ClusterRoleBinding object for a group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-custom-resource-group-binding
subjects:
- kind: Group
  name: <group_name>
roleRef:
  kind: ClusterRole
  name: view-custom-resource
  apiGroup: rbac.authorization.k8s.io
The following role binding restricts edit permissions to a specific namespace by referencing the edit-custom-resource cluster role created earlier:
Example RoleBinding object for a user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edit-custom-resource-binding
  namespace: <namespace>
subjects:
- kind: User
  name: <user_name>
roleRef:
  kind: ClusterRole
  name: edit-custom-resource
  apiGroup: rbac.authorization.k8s.io
- Save your object definition to a YAML file.
Create the object by running the following command:
$ oc create -f <filename>.yaml
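To confirm that a binding grants the expected access, you can impersonate the user and check a representative verb. This is a quick sketch that reuses the placeholder names from the previous examples:
$ oc auth can-i list <cluster_extension_custom_resources> --as=<user_name>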
10.2.4. Granting user access to extension resources by using aggregated cluster roles
As a cluster administrator, you can configure role-based access control (RBAC) policies to grant user access to extension resources by using aggregated cluster roles.
To automatically extend existing default cluster roles, add one or more of the following aggregation labels to a ClusterRole object:
Aggregation labels in a ClusterRole object
# ...
metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
# ...
This allows users who already have view
, edit
, or admin
roles to interact with the custom resource specified by the ClusterRole
object without requiring additional role or cluster role bindings to specific users or groups.
Prerequisites
- A cluster extension has been installed on your cluster.
- You have a list of API groups and resource names, as described in "Finding API groups and resources exposed by a cluster extension".
Procedure
Create an object definition for a cluster role that specifies the API groups and resources provided by the cluster extension and add an aggregation label to extend one or more existing default cluster roles:
Example ClusterRole object with an aggregation label
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-custom-resource-aggregated
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resource>
  verbs:
  - get
  - list
  - watch
You can create similar ClusterRole objects for edit and admin with appropriate verbs, such as create, update, and delete; a sketch of an edit-level aggregated cluster role follows this procedure. By using aggregation labels, the permissions for the custom resources are added to the default roles.
- Save your object definition to a YAML file.
Create the object by running the following command:
$ oc create -f <filename>.yaml
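For reference, an edit-level aggregated cluster role might resemble the following sketch. The object name and the placeholder API group and resource are assumptions that follow the earlier examples in this section:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: edit-custom-resource-aggregated
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups:
  - <cluster_extension_api_group>
  resources:
  - <cluster_extension_custom_resource>
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete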
Additional resources
- Aggregated ClusterRoles (Kubernetes documentation)
10.3. Update paths
When determining update paths, also known as upgrade edges or upgrade constraints, for an installed cluster extension, Operator Lifecycle Manager (OLM) v1 supports OLM (Classic) semantics starting in OpenShift Container Platform 4.16. This support follows the behavior from OLM (Classic), including replaces
, skips
, and skipRange
directives, with a few noted differences.
By supporting OLM (Classic) semantics, OLM v1 accurately reflects the update graph from catalogs.
Differences from original OLM (Classic) implementation
If there are multiple possible successors, OLM v1 behavior differs in the following ways:
- In OLM (Classic), the successor closest to the channel head is chosen.
- In OLM v1, the successor with the highest semantic version (semver) is chosen.
Consider the following set of file-based catalog (FBC) channel entries:
# ...
- name: example.v3.0.0
  skips: ["example.v2.0.0"]
- name: example.v2.0.0
  skipRange: >=1.0.0 <2.0.0
If 1.0.0 is installed, OLM v1 behavior differs in the following ways:
- OLM (Classic) will not detect an update path to v2.0.0 because v2.0.0 is skipped and is not on the replaces chain.
- OLM v1 will detect the update path because OLM v1 does not have a concept of a replaces chain. OLM v1 finds all entries that have a replaces, skips, or skipRange value that covers the currently installed version.
Additional resources
10.3.1. Support for version ranges
In Operator Lifecycle Manager (OLM) v1, you can specify a version range by using a comparison string in an Operator or extension’s custom resource (CR). If you specify a version range in the CR, OLM v1 installs or updates to the latest version of the Operator that can be resolved within the version range.
Resolved version workflow
- The resolved version is the latest version of the Operator that satisfies the constraints of the Operator and the environment.
- An Operator update within the specified range is automatically installed if it is resolved successfully.
- An update is not installed if it is outside of the specified range or if it cannot be resolved successfully.
10.3.2. Version comparison strings
You can define a version range by adding a comparison string to the spec.version
field in an Operator or extension’s custom resource (CR). A comparison string is a list of space- or comma-separated values and one or more comparison operators enclosed in double quotation marks ("
). You can add another comparison string by including an OR
, or double vertical bar (||
), comparison operator between the strings.
| Comparison operator | Definition |
| --- | --- |
| = | Equal to |
| != | Not equal to |
| > | Greater than |
| < | Less than |
| >= | Greater than or equal to |
| <= | Less than or equal to |
You can specify a version range in an Operator or extension’s CR by using a range comparison similar to the following example:
Example version range comparison
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace>
  serviceAccount:
    name: <service_account_installer_name>
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      version: ">=1.11, <1.13"
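If you need to accept versions from disjoint ranges, you can combine comparison strings with the OR (||) operator. The following value is an illustrative sketch only and replaces the version field shown in the previous example:
      version: ">=1.11, <1.12 || >=1.13"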
You can use wildcard characters in all types of comparison strings. OLM v1 accepts x
, X
, and asterisks (*
) as wildcard characters. When you use a wildcard character with the equal sign (=
) comparison operator, you define a comparison at the patch or minor version level.
| Wildcard comparison | Matching string |
| --- | --- |
| 1.11.x | >=1.11.0, <1.12.0 |
| >=1.11.x | >=1.11.0 |
| <=2.x | <3 |
| * | >=0.0.0 |
You can make patch release comparisons by using the tilde (~) comparison operator. When you specify a minor version, a patch release comparison matches patch releases within that minor version; when you specify only a major version, it matches releases up to the next major version.
| Patch release comparison | Matching string |
| --- | --- |
| ~1.11.0 | >=1.11.0, <1.12.0 |
| ~1 | >=1, <2 |
| ~1.12 | >=1.12, <1.13 |
| ~1.12.x | >=1.12.0, <1.13.0 |
| ~1.x | >=1, <2 |
You can use the caret (^
) comparison operator to make a comparison for a major release. If you make a major release comparison before the first stable release is published, the minor versions define the API’s level of stability. In the semantic versioning (semver) specification, the first stable release is published as the 1.0.0
version.
| Major release comparison | Matching string |
| --- | --- |
| ^0 | >=0.0.0, <1.0.0 |
| ^0.0 | >=0.0.0, <0.1.0 |
| ^0.0.3 | >=0.0.3, <0.0.4 |
| ^0.2 | >=0.2.0, <0.3.0 |
| ^0.2.3 | >=0.2.3, <0.3.0 |
| ^1.2.x | >=1.2.0, <2.0.0 |
| ^1.2.3 | >=1.2.3, <2.0.0 |
| ^2.3 | >=2.3, <3 |
| ^2.x | >=2.0, <3 |
10.3.3. Example custom resources (CRs) that specify a target version
In Operator Lifecycle Manager (OLM) v1, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR).
You can define a target version by specifying any of the following fields:
- Channel
- Version number
- Version range
If you specify a channel in the CR, OLM v1 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM v1 automatically updates to the latest release that can be resolved from the channel.
Example CR with a specified channel
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
name: <clusterextension_name>
spec:
namespace: <installed_namespace>
serviceAccount:
name: <service_account_installer_name>
source:
sourceType: Catalog
catalog:
packageName: <package_name>
channels:
- latest 1
- 1
- Optional: Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. Specify the value of the
channels
parameter as an array.
If you specify the Operator or extension’s target version in the CR, OLM v1 installs the specified version. When the target version is specified in the CR, OLM v1 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
name: <clusterextension_name>
spec:
namespace: <installed_namespace>
serviceAccount:
name: <service_account_installer_name>
source:
sourceType: Catalog
catalog:
packageName: <package_name>
version: "1.11.1" 1
- 1
- Optional: Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version.
If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM v1 installs the latest version of an Operator or extension that can be resolved by the Operator Controller.
Example CR with a version range specified
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
name: <clusterextension_name>
spec:
namespace: <installed_namespace>
serviceAccount:
name: <service_account_installer_name>
source:
sourceType: Catalog
catalog:
packageName: <package_name>
version: ">1.11.1" 1
- 1
- Optional: Specifies that the desired version range is greater than version
1.11.1
. For more information, see "Support for version ranges".
After you create or update a CR, apply the configuration file by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
10.3.4. Forcing an update or rollback
OLM v1 does not support automatic updates to the next major version or rollbacks to an earlier version. If you want to perform a major version update or rollback, you must verify and force the update manually.
You must verify the consequences of forcing a manual update or rollback. Failure to verify a forced update or rollback might have catastrophic consequences such as data loss.
Prerequisites
- You have a catalog installed.
- You have an Operator or extension installed.
- You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension you want to install. For more information, see Creating a service account.
Procedure
Edit the custom resource (CR) of your Operator or extension as shown in the following example:
Example CR
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: <clusterextension_name>
spec:
  namespace: <installed_namespace> 1
  serviceAccount:
    name: <service_account_installer_name> 2
  source:
    sourceType: Catalog
    catalog:
      packageName: <package_name>
      channels:
      - <channel_name> 3
      version: <version_or_version_range> 4
      upgradeConstraintPolicy: SelfCertified 5
- 1
- Specifies the namespace where you want the bundle installed, such as
pipelines
ormy-extension
. Extensions are still cluster-scoped and might contain resources that are installed in different namespaces. - 2
- Specifies the name of the service account you created to install, update, and manage your extension.
- 3
- Optional: Specifies channel names as an array, such as
pipelines-1.14
orlatest
. - 4
- Optional: Specifies the version or version range, such as
1.14.0
,1.14.x
, or>=1.16
, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges". - 5
- Optional: Specifies the upgrade constraint policy. To force an update or rollback, set the field to
SelfCertified
. If unspecified, the default setting isCatalogProvided
. TheCatalogProvided
setting only updates if the new version satisfies the upgrade constraints set by the package author.
Apply the changes to your Operator or extension's CR by running the following command:
$ oc apply -f <extension_name>.yaml
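After applying the change, you can watch the resource until the forced update or rollback completes. This is a minimal check; the exact status conditions reported depend on your OLM v1 release:
$ oc get clusterextension <clusterextension_name> -w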
Additional resources
10.3.5. Compatibility with OpenShift Container Platform versions
Before cluster administrators can update their OpenShift Container Platform cluster to its next minor version, they must ensure that all installed Operators are updated to a bundle version that is compatible with the cluster’s next minor version (4.y+1).
For example, Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If an extension is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is updated to the Kubernetes version where the API has been removed.
If an Operator author knows that a specific bundle version is not supported and will not work correctly, for any reason, on OpenShift Container Platform later than a certain cluster minor version, they can configure the maximum version of OpenShift Container Platform that their Operator is compatible with.
In the Operator project’s cluster service version (CSV), authors can set the olm.maxOpenShiftVersion
annotation to prevent administrators from updating the cluster before updating the installed Operator to a compatible version.
Example CSV with olm.maxOpenShiftVersion
annotation
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
annotations:
"olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]' 1
- 1
- Specifies the latest minor version of OpenShift Container Platform (4.y) that an Operator is compatible with. For example, setting value to 4.18 prevents cluster updates to minor versions later than 4.18 when this bundle is installed on a cluster.
If the olm.maxOpenShiftVersion field is omitted, cluster updates are not blocked by this Operator.
When determining a cluster’s next minor version (4.y+1), OLM v1 only considers major and minor versions (x and y) for comparisons. It ignores any z-stream versions (4.y.z), also known as patch releases, or pre-release versions.
For example, if the cluster’s current version is 4.18.0
, the next minor version is 4.19
. If the current version is 4.18.0-rc1
, the next minor version is still 4.19
.
Additional resources
- Deprecated API Migration Guide (Kubernetes documentation)
10.3.5.1. Cluster updates blocked by olm cluster Operator
If an installed Operator’s olm.maxOpenShiftVersion
field is set and a cluster administrator attempts to update their cluster to a version that the Operator does not provide a valid update path for, the cluster update fails and the Upgradeable
status for the olm
cluster Operator is set to False
.
To resolve the issue, the cluster administrator must either update the installed Operator to a version with a valid update path, if one is available, or they must uninstall the Operator. Then, they can attempt the cluster update again.
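To identify which Operator is blocking the update, you can inspect the Upgradeable condition on the olm cluster Operator. The following jsonpath expression is a sketch; the condition message typically names the offending Operator and bundle:
$ oc get clusteroperator olm -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}{"\n"}'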
10.4. Custom resource definition (CRD) upgrade safety
When you update a custom resource definition (CRD) that is provided by a cluster extension, Operator Lifecycle Manager (OLM) v1 runs a CRD upgrade safety preflight check to ensure backwards compatibility with previous versions of that CRD. The CRD update must pass the validation checks before the change is allowed to progress on a cluster.
Additional resources
10.4.1. Prohibited CRD upgrade changes
The following changes to an existing custom resource definition (CRD) are caught by the CRD upgrade safety preflight check and prevent the upgrade:
- A new required field is added to an existing version of the CRD
- An existing field is removed from an existing version of the CRD
- An existing field type is changed in an existing version of the CRD
- A new default value is added to a field that did not previously have a default value
- The default value of a field is changed
- An existing default value of a field is removed
- New enum restrictions are added to an existing field which did not previously have enum restrictions
- Existing enum values from an existing field are removed
- The minimum value of an existing field is increased in an existing version
- The maximum value of an existing field is decreased in an existing version
- Minimum or maximum field constraints are added to a field that did not previously have constraints
The rules for changes to minimum and maximum values apply to minimum
, minLength
, minProperties
, minItems
, maximum
, maxLength
, maxProperties
, and maxItems
constraints.
The following changes to an existing CRD are reported by the CRD upgrade safety preflight check and prevent the upgrade, though the operations are technically handled by the Kubernetes API server:
-
The scope changes from
Cluster
toNamespace
or fromNamespace
toCluster
- An existing stored version of the CRD is removed
If the CRD upgrade safety preflight check encounters one of the prohibited upgrade changes, it logs an error for each prohibited change detected in the CRD upgrade.
In cases where a change to the CRD does not fall into one of the prohibited change categories, but is also unable to be properly detected as allowed, the CRD upgrade safety preflight check will prevent the upgrade and log an error for an "unknown change".
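The errors reported by the preflight check surface in the status conditions of the ClusterExtension object. For example, you can review them with a command similar to the following:
$ oc describe clusterextension <clusterextension_name>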
10.4.2. Allowed CRD upgrade changes
The following changes to an existing custom resource definition (CRD) are safe for backwards compatibility and will not cause the CRD upgrade safety preflight check to halt the upgrade:
- Adding new enum values to the list of allowed enum values in a field
- An existing required field is changed to optional in an existing version
- The minimum value of an existing field is decreased in an existing version
- The maximum value of an existing field is increased in an existing version
- A new version of the CRD is added with no modifications to existing versions
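For example, adding a value to an existing enum list is backwards compatible and passes the check. The following schema fragment is an illustrative sketch; the logLevel field is not part of the sample CRD used later in this section:
versions:
- name: v1alpha1
  schema:
    openAPIV3Schema:
      properties:
        logLevel:
          type: string
          enum:
          - info
          - debug
          - trace # newly added enum value; additions to an enum list are allowed
      type: object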
10.4.3. Disabling CRD upgrade safety preflight check
The custom resource definition (CRD) upgrade safety preflight check can be disabled by adding the preflight.crdUpgradeSafety.disabled
field with a value of true
to the ClusterExtension
object that provides the CRD.
Disabling the CRD upgrade safety preflight check could break backwards compatibility with stored versions of the CRD and cause other unintended consequences on the cluster.
You cannot disable individual field validators. If you disable the CRD upgrade safety preflight check, all field validators are disabled.
The following checks are handled by the Kubernetes API server:
-
The scope changes from
Cluster
toNamespace
or fromNamespace
toCluster
- An existing stored version of the CRD is removed
After disabling the CRD upgrade safety preflight check via Operator Lifecycle Manager (OLM) v1, these two operations are still prevented by Kubernetes.
Prerequisites
- You have a cluster extension installed.
Procedure
Edit the
ClusterExtension
object of the CRD:$ oc edit clusterextension <clusterextension_name>
Set the
preflight.crdUpgradeSafety.disabled
field totrue
:Example 10.19. Example
ClusterExtension
objectapiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: clusterextension-sample spec: installNamespace: default packageName: argocd-operator version: 0.6.0 preflight: crdUpgradeSafety: disabled: true 1
- 1
- Set to true.
10.4.4. Examples of unsafe CRD changes
The following examples demonstrate specific changes to sections of an example custom resource definition (CRD) that would be caught by the CRD upgrade safety preflight check.
For the following examples, consider a CRD object in the following starting state:
Example 10.20. Example CRD object
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.13.0
  name: example.test.example.com
spec:
  group: test.example.com
  names:
    kind: Sample
    listKind: SampleList
    plural: samples
    singular: sample
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        properties:
          apiVersion:
            type: string
          kind:
            type: string
          metadata:
            type: object
          spec:
            type: object
          status:
            type: object
          pollInterval:
            type: string
        type: object
    served: true
    storage: true
    subresources:
      status: {}
10.4.4.1. Scope change
In the following custom resource definition (CRD) example, the scope
field is changed from Namespaced
to Cluster
:
Example 10.21. Example scope change in a CRD
spec:
  group: test.example.com
  names:
    kind: Sample
    listKind: SampleList
    plural: samples
    singular: sample
  scope: Cluster
  versions:
  - name: v1alpha1
Example 10.22. Example error output
validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoScopeChange" validation failed: scope changed from "Namespaced" to "Cluster"
10.4.4.2. Removal of a stored version
In the following custom resource definition (CRD) example, the existing stored version, v1alpha1
, is removed:
Example 10.23. Example removal of a stored version in a CRD
versions:
- name: v1alpha2
  schema:
    openAPIV3Schema:
      properties:
        apiVersion:
          type: string
        kind:
          type: string
        metadata:
          type: object
        spec:
          type: object
        status:
          type: object
        pollInterval:
          type: string
      type: object
Example 10.24. Example error output
validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoStoredVersionRemoved" validation failed: stored version "v1alpha1" removed
10.4.4.3. Removal of an existing field
In the following custom resource definition (CRD) example, the pollInterval
property field is removed from the v1alpha1
schema:
Example 10.25. Example removal of an existing field in a CRD
versions:
- name: v1alpha1
  schema:
    openAPIV3Schema:
      properties:
        apiVersion:
          type: string
        kind:
          type: string
        metadata:
          type: object
        spec:
          type: object
        status:
          type: object
      type: object
Example 10.26. Example error output
validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoExistingFieldRemoved" validation failed: crd/test.example.com version/v1alpha1 field/^.spec.pollInterval may not be removed
10.4.4.4. Addition of a required field
In the following custom resource definition (CRD) example, the pollInterval
property has been changed to a required field:
Example 10.27. Example addition of a required field in a CRD
versions:
- name: v1alpha2
  schema:
    openAPIV3Schema:
      properties:
        apiVersion:
          type: string
        kind:
          type: string
        metadata:
          type: object
        spec:
          type: object
        status:
          type: object
        pollInterval:
          type: string
      type: object
      required:
      - pollInterval
Example 10.28. Example error output
validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "ChangeValidator" validation failed: version "v1alpha1", field "^": new required fields added: [pollInterval]