Chapter 4. Administrator tasks
4.1. Adding Operators to a cluster
Using Operator Lifecycle Manager (OLM), cluster administrators can install OLM-based Operators to an OpenShift Container Platform cluster.
For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.
4.1.1. About Operator installation with OperatorHub
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI.
During installation, you must determine the following initial settings for the Operator:
- Installation Mode
- Choose a specific namespace in which to install the Operator.
- Update Channel
- If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
- Approval Strategy
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
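These initial settings map to fields on the Subscription object that OLM creates for the installation. The following is a rough sketch only; the subscription name, package name, channel, and namespace are placeholders, not values from this document:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator                  # placeholder subscription name
  namespace: openshift-operators     # installation mode: the namespace the Operator is installed into
spec:
  channel: stable                    # update channel
  installPlanApproval: Automatic     # approval strategy: Automatic or Manual
  name: my-operator                  # placeholder Operator package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```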
4.1.2. Installing from OperatorHub using the web console
You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
- Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.
  You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
- Select the Operator to display additional information.
  Note: Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
- Read the information about the Operator and click Install.
On the Install Operator page, select one of the following installation modes:

- All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
- A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
- Select an Update Channel (if more than one is available).
- Select Automatic or Manual approval strategy, as described earlier.
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
- If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.

Note: For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

If it does not:

- Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
4.1.3. Installing from OperatorHub by using the CLI
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- You have installed the OpenShift CLI (oc).
Procedure
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace

Example output

NAME                             CATALOG               AGE
3scale-operator                  Red Hat Operators     91m
advanced-cluster-management      Red Hat Operators     91m
amq7-cert-manager                Red Hat Operators     91m
...
couchbase-enterprise-certified   Certified Operators   91m
crunchy-postgres-operator        Certified Operators   91m
mongodb-enterprise               Certified Operators   91m
...
etcd                             Community Operators   91m
jaeger                           Community Operators   91m
kubefed                          Community Operators   91m
...

Note the catalog for your desired Operator.
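If you only want the channel information for one package, jsonpath output can narrow it down. This is a sketch based on the packagemanifest status fields as I understand them; verify the field paths against your cluster:

```
$ oc get packagemanifests <operator_name> -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'
$ oc get packagemanifests <operator_name> -n openshift-marketplace -o jsonpath='{.status.channels[*].name}'
```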
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace

An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place.

However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.

Note:
- The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.
- You can only have one Operator group per namespace. For more information, see "Operator groups".
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

Example OperatorGroup object

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>

Warning: Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:

- <operatorgroup_name>-admin
- <operatorgroup_name>-edit
- <operatorgroup_name>-view

When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.

Create the OperatorGroup object:

$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

Example Subscription object

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators 1
spec:
  channel: <channel_name> 2
  name: <operator_name> 3
  source: redhat-operators 4
  sourceNamespace: openshift-marketplace 5
  config:
    env: 6
    - name: ARGS
      value: "-v=10"
    envFrom: 7
    - secretRef:
        name: license-secret
    volumes: 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: 10
    - operator: "Exists"
    resources: 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: 12
      foo: bar

1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
2 Name of the channel to subscribe to.
3 Name of the Operator to subscribe to.
4 Name of the catalog source that provides the Operator.
5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
7 The envFrom parameter defines a list of sources to populate environment variables in the container.
8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10 The tolerations parameter defines a list of tolerations for the pod created by OLM.
11 The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12 The nodeSelector parameter defines a node selector for the pod created by OLM.
Create the Subscription object:

$ oc apply -f sub.yaml

At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
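To confirm the result from the CLI, you can watch for the CSV to reach the Succeeded phase. A sketch; replace <namespace> with the namespace you subscribed:

```
$ oc get csv -n <namespace>
$ oc get csv -n <namespace> -o jsonpath='{.items[*].status.phase}'
```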
4.1.4. Installing a specific version of an Operator
You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions
- OpenShift CLI (oc) installed
Procedure
Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.

For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0:

Subscription with a specific starting Operator version

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: quay
spec:
  channel: quay-v3.4
  installPlanApproval: Manual 1
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: quay-operator.v3.4.0 2

1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
2 Set a specific version of an Operator CSV.

Create the Subscription object:

$ oc apply -f sub.yaml

Manually approve the pending install plan to complete the Operator installation.
4.1.5. Preparing for multiple instances of an Operator for multitenant clusters
As a cluster administrator, you can add multiple instances of an Operator for use in multitenant clusters. This is an alternative solution to either using the standard All namespaces install mode, which can be considered to violate the principle of least privilege, or the Multinamespace mode, which is not widely adopted. For more information, see "Operators in multitenant clusters".
In the following procedure, the tenant is a user or group of users that share common access and privileges for a set of deployed workloads. The tenant Operator is the instance of an Operator that is intended for use by only that tenant.
Prerequisites
- All instances of the Operator you want to install must be the same version across a given cluster.

Important: For more information on this and other limitations, see "Operators in multitenant clusters".
Procedure
Before installing the Operator, create a namespace for the tenant Operator that is separate from the tenant's namespace. For example, if the tenant's namespace is team1, you might create a team1-operator namespace:

Define a Namespace resource and save the YAML file, for example, team1-operator.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: team1-operator

Create the namespace by running the following command:

$ oc create -f team1-operator.yaml
Create an Operator group for the tenant Operator scoped to the tenant's namespace, with only that one namespace entry in the spec.targetNamespaces list:

Define an OperatorGroup resource and save the YAML file, for example, team1-operatorgroup.yaml:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: team1-operatorgroup
  namespace: team1-operator
spec:
  targetNamespaces:
  - team1 1

1 Define only the tenant's namespace in the spec.targetNamespaces list.

Create the Operator group by running the following command:

$ oc create -f team1-operatorgroup.yaml
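If you install the tenant Operator with the CLI rather than the web console, the Subscription is created in the team1-operator namespace. A sketch, assuming a hypothetical package name and channel (replace these with the Operator you actually install):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: team1-operator-sub       # hypothetical subscription name
  namespace: team1-operator      # the tenant Operator namespace from this procedure
spec:
  channel: stable                # placeholder channel
  name: example-operator         # hypothetical package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```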
Next steps
Install the Operator in the tenant Operator namespace. This task is more easily performed by using the OperatorHub in the web console instead of the CLI; for a detailed procedure, see "Installing from OperatorHub using the web console".
Note: After completing the Operator installation, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator's pod nor its service account are visible or usable by the tenant.
4.1.6. Installing global Operators in custom namespaces
When installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators namespace.
As a cluster administrator, you can bypass this default behavior manually by creating a custom global namespace and using that namespace to install your individual or scoped set of Operators and their dependencies.
Procedure
Before installing the Operator, create a namespace for the installation of your desired Operator. This installation namespace will become the custom global namespace:

Define a Namespace resource and save the YAML file, for example, global-operators.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: global-operators

Create the namespace by running the following command:

$ oc create -f global-operators.yaml
Create a custom global Operator group, which is an Operator group that watches all namespaces:
Define an OperatorGroup resource and save the YAML file, for example, global-operatorgroup.yaml. Omit both the spec.selector and spec.targetNamespaces fields to make it a global Operator group, which selects all namespaces:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-operatorgroup
  namespace: global-operators

Note: The status.namespaces of a created global Operator group contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces.

Create the Operator group by running the following command:

$ oc create -f global-operatorgroup.yaml
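A Subscription created for an Operator in this custom global namespace then simply references the namespace in its metadata. A minimal sketch with a hypothetical package name and channel:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator         # hypothetical subscription name
  namespace: global-operators    # the custom global namespace from this procedure
spec:
  channel: stable                # placeholder channel
  name: example-operator         # hypothetical package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```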
Next steps
Install the desired Operator in your custom global namespace. Because the web console does not populate the Installed Namespace menu during Operator installation with custom global namespaces, this task can only be performed with the OpenShift CLI (oc). For a detailed installation procedure, see "Installing from OperatorHub by using the CLI".

Note: When you initiate the Operator installation, if the Operator has dependencies, the dependencies are also automatically installed in the custom global namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.
4.1.7. Pod placement of Operator workloads
By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.
Controlling pod placement of Operator and Operand workloads has the following prerequisites:
- Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a compute machine set or editing the node directly. You will use this label in a later step as the node selector on your project.
- If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a compute machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain.
myoperator:NoSchedule - Create a project that is configured with a default node selector and, if you added a taint, a matching toleration.
At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:
- For Operator pods
- Administrators can create a Subscription object in the project as described in the following section. As a result, the Operator pods are placed on the specified nodes.
- Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.
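The project from the prerequisites can be sketched as a namespace carrying a default node selector and, if you added the example taint, a default toleration. The annotation keys below are the standard OpenShift and Kubernetes ones as I understand them, and the project name, label, and taint key reuse the examples from this section; treat this as an illustrative sketch, not a complete procedure:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myoperator-project       # hypothetical project name
  annotations:
    # steers pods created in this project to nodes carrying the myoperator label
    openshift.io/node-selector: "myoperator=true"
    # default toleration matching the example myoperator:NoSchedule taint
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Exists", "key": "myoperator", "effect": "NoSchedule"}]'
```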
4.1.8. Controlling where an Operator is installed
By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.
The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:
- If an Operator requires a particular platform, such as amd64 or arm64
amd64arm64 - If an Operator requires a particular operating system, such as Linux or Windows
- If you want Operators that work together scheduled on the same host or on hosts located on the same rack
- If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues
You can control where an Operator pod is installed by adding node affinity, pod affinity, or pod anti-affinity constraints to the Operator's Subscription object.
The following examples show how to use node affinity or pod anti-affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster:
Node affinity example that places the Operator pod on a specific node
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ip-10-0-163-94.us-west-2.compute.internal
#...
1 A node affinity that requires the Operator's pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal.
Node affinity example that places the Operator pod on a node with a specific platform
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
- key: kubernetes.io/os
operator: In
values:
- linux
#...
1 A node affinity that requires the Operator's pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels.
Pod affinity example that places the Operator pod on one or more specific nodes
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- test
topologyKey: kubernetes.io/hostname
#...
1 A pod affinity that places the Operator's pod on a node that has pods with the app=test label.
Pod anti-affinity example that prevents the Operator pod from being scheduled on one or more specific nodes
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: openshift-custom-metrics-autoscaler-operator
namespace: openshift-keda
spec:
name: my-package
source: my-operators
sourceNamespace: operator-registries
config:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: cpu
operator: In
values:
- high
topologyKey: kubernetes.io/hostname
#...
1 A pod anti-affinity that prevents the Operator's pod from being scheduled on a node that has pods with the cpu=high label.
Procedure
To control the placement of an Operator pod, complete the following steps:
- Install the Operator as usual.
- If needed, ensure that your nodes are labeled to properly respond to the affinity.
Edit the Operator Subscription object to add an affinity:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity: 1
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-185-229.ec2.internal
#...

1 Add a nodeAffinity, podAffinity, or podAntiAffinity. See the Additional resources section that follows for information about creating the affinity.
Verification
To ensure that the pod is deployed on the specific node, run the following command:
$ oc get pods -o wide

Example output

NAME                                                  READY   STATUS    RESTARTS   AGE   IP            NODE                           NOMINATED NODE   READINESS GATES
custom-metrics-autoscaler-operator-5dcc45d656-bhshg   1/1     Running   0          50s   10.131.0.20   ip-10-0-185-229.ec2.internal   <none>           <none>
4.2. Updating installed Operators
As a cluster administrator, you can update Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.
For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.
4.2.1. Preparing for an Operator update
The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel.
The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a release frequency (stable, fast).
You cannot change installed Operators to a channel that is older than the current channel.
Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators:
You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included.
4.2.2. Changing the update channel for an Operator
You can change the update channel for an Operator by using the OpenShift Container Platform web console.
If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the web console, navigate to Operators → Installed Operators.
- Click the name of the Operator you want to change the update channel for.
- Click the Subscription tab.
- Click the name of the update channel under Channel.
- Click the newer update channel that you want to change to, then click Save.
For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.

For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab.
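If you prefer the CLI, the same channel change can be made by patching the subscription directly. A sketch with placeholder names:

```
$ oc patch subscription <subscription_name> -n <namespace> \
    --type merge --patch '{"spec":{"channel":"<new_channel_name>"}}'
```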
4.2.3. Manually approving a pending Operator update
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
- Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for update. When satisfied, click Approve.
- Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
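The same approval can be performed from the CLI by setting spec.approved on the pending install plan. A sketch with placeholder names:

```
$ oc get installplan -n <namespace>
$ oc patch installplan <installplan_name> -n <namespace> \
    --type merge --patch '{"spec":{"approved":true}}'
```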
4.3. Deleting Operators from a cluster
The following describes how to delete, or uninstall, Operators that were previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.
You must completely uninstall an Operator before attempting to reinstall it. Failing to fully uninstall the Operator can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages when you try to reinstall the Operator. For more information, see Reinstalling Operators after failed uninstallation.
4.3.1. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
An Uninstall Operator? dialog box is displayed.
Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual cleanup. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
4.3.2. Deleting Operators from a cluster using the CLI
Cluster administrators can delete installed Operators from a selected namespace by using the CLI.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- The oc command installed on your workstation.
Procedure
Ensure the latest version of the subscribed Operator (for example, serverless-operator) is identified in the currentCSV field:

$ oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV

Example output

  currentCSV: serverless-operator.v1.28.0

Delete the subscription (for example, serverless-operator):

$ oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless

Example output

subscription.operators.coreos.com "serverless-operator" deleted

Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:

$ oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless

Example output

clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted
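The steps above remove the subscription and CSV but, as with the web console flow, leave CRDs and CRs owned by the Operator in place. If you also want those removed, you can list and delete them manually. A hedged sketch; the grep keyword is illustrative, and deleting a CRD deletes all CRs of that type:

```
$ oc get crd | grep <operator_keyword>
$ oc delete crd <crd_name>
```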
4.3.3. Refreshing failing subscriptions
In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors:
Example output
ImagePullBackOff for
Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"
Example output
rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.
Prerequisites
- You have a failing subscription that is unable to pull an inaccessible bundle image.
- You have confirmed that the correct bundle image is accessible.
Procedure
Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed:
$ oc get sub,csv -n <namespace>
Example output
NAME                                                       PACKAGE                  SOURCE             CHANNEL
subscription.operators.coreos.com/elasticsearch-operator   elasticsearch-operator   redhat-operators   5.0

NAME                                                                          DISPLAY                            VERSION    REPLACES   PHASE
clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65    OpenShift Elasticsearch Operator   5.0.0-65              Succeeded
Delete the subscription:
$ oc delete subscription <subscription_name> -n <namespace>
Delete the cluster service version:
$ oc delete csv <csv_name> -n <namespace>
Get the names of any failing jobs and related config maps in the openshift-marketplace namespace:
$ oc get job,configmap -n openshift-marketplace
Example output
NAME                                                                        COMPLETIONS   DURATION   AGE
job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s

NAME                                                                        DATA   AGE
configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s
Delete the job:
$ oc delete job <job_name> -n openshift-marketplace
This ensures pods that try to pull the inaccessible image are not recreated.
Delete the config map:
$ oc delete configmap <configmap_name> -n openshift-marketplace
- Reinstall the Operator using OperatorHub in the web console.
Verification
Check that the Operator has been reinstalled successfully:
$ oc get sub,csv,installplan -n <namespace>
4.4. Configuring Operator Lifecycle Manager features Link kopierenLink in die Zwischenablage kopiert!
The Operator Lifecycle Manager (OLM) controller is configured by an OLMConfig custom resource named cluster.
This document outlines the features currently supported by OLM that are configured by the OLMConfig custom resource.
4.4.1. Disabling copied CSVs Link kopierenLink in die Zwischenablage kopiert!
When an Operator is installed by Operator Lifecycle Manager (OLM), a simplified copy of its cluster service version (CSV) is created in every namespace that the Operator is configured to watch. These CSVs are known as copied CSVs and communicate to users which controllers are actively reconciling resource events in a given namespace.
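For illustration, a copied CSV is a stripped-down copy of the original CSV that appears in each watched namespace. The following is a hypothetical sketch, not taken from this document; the names and the olm.copiedFrom label are illustrative assumptions:

```yaml
# Hypothetical sketch of a copied CSV as seen in a watched namespace.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v1.0.0        # same name as the original CSV
  namespace: my-app-namespace     # a watched namespace, not the install namespace
  labels:
    olm.copiedFrom: operators     # assumed label recording the install namespace
status:
  reason: Copied                  # marks this CSV as a copy
```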
When Operators are configured to use the AllNamespaces install mode, a copy of the CSV is created in every namespace on the cluster. On very large clusters, with namespaces and installed Operators potentially in the hundreds or thousands, copied CSVs consume an excessive amount of resources, such as OLM's memory usage, cluster etcd limits, and networking.
To support these larger clusters, cluster administrators can disable copied CSVs for Operators installed with the AllNamespaces mode.
If you disable copied CSVs, a user’s ability to discover Operators in the OperatorHub and CLI is limited to Operators installed directly in the user’s namespace.
If an Operator is configured to reconcile events in the user’s namespace but is installed in a different namespace, the user cannot view the Operator in the OperatorHub or CLI. Operators affected by this limitation are still available and continue to reconcile events in the user’s namespace.
This behavior occurs for the following reasons:
- Copied CSVs identify the Operators available for a given namespace.
- Role-based access control (RBAC) scopes the user’s ability to view and discover Operators in the OperatorHub and CLI.
Procedure
Edit the OLMConfig object named cluster and set the spec.features.disableCopiedCSVs field to true:
$ oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true 1
EOF
1 Disabled copied CSVs for AllNamespaces install mode Operators.
Verification
When copied CSVs are disabled, OLM captures this information in an event in the Operator’s namespace:
$ oc get events
Example output
LAST SEEN   TYPE      REASON               OBJECT                                MESSAGE
85s         Warning   DisabledCopiedCSVs   clusterserviceversion/my-csv.v1.0.0   CSV copying disabled for operators/my-csv.v1.0.0
When the spec.features.disableCopiedCSVs field is missing or set to false, OLM recreates the copied CSVs for all Operators installed with the AllNamespaces mode and deletes the previously mentioned events.
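Conversely, setting the field back to false causes OLM to recreate the copied CSVs. A sketch of the corresponding OLMConfig object, mirroring the apply step above:

```yaml
apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: false
```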
4.5. Configuring proxy support in Operator Lifecycle Manager Link kopierenLink in die Zwischenablage kopiert!
If a global proxy is configured on the OpenShift Container Platform cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate.
4.5.1. Overriding proxy settings of an Operator Link kopierenLink in die Zwischenablage kopiert!
If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Cluster administrators can also override these proxy settings by configuring the subscription of an Operator.
Operators must handle setting environment variables for proxy settings in the pods for any managed Operands.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
- Select the Operator and click Install.
On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
For example:
Subscription object with proxy setting overrides
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: openshift-operators
spec:
  config:
    env:
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: etcdoperator.v0.9.4-clusterwide
Note
These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings.
OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator.
- Click Install to make the Operator available to the selected namespaces.
After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI:
$ oc get deployment -n openshift-operators \
    etcd-operator -o yaml \
    | grep -i "PROXY" -A 2
Example output
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c
...
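As the note above mentions, empty values remove any previously set cluster-wide or custom proxy settings. A Subscription config fragment doing that might look like the following sketch:

```yaml
# Sketch: empty values unset previously applied proxy settings.
spec:
  config:
    env:
    - name: HTTP_PROXY
      value: ""
    - name: HTTPS_PROXY
      value: ""
    - name: NO_PROXY
      value: ""
```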
4.5.2. Injecting a custom CA certificate Link kopierenLink in die Zwischenablage kopiert!
When a cluster administrator adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Custom CA certificate added to the cluster using a config map.
- Desired Operator installed and running on OLM.
Procedure
Create an empty config map in the namespace where the subscription for your Operator exists and include the following label:
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"
After creating this config map, it is immediately populated with the certificate contents of the merged bundle.
Update the Subscription object for your Operator to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
spec:
  package: etcd
  channel: alpha
  config:
    selector:
      matchLabels:
        <labels_for_pods>
    volumes:
    - name: trusted-ca
      configMap:
        name: trusted-ca
        items:
          - key: ca-bundle.crt
            path: tls-ca-bundle.pem
    volumeMounts:
    - name: trusted-ca
      mountPath: /etc/pki/ca-trust/extracted/pem
      readOnly: true
Note
Deployments of an Operator can fail to validate the authority and display an x509 certificate signed by unknown authority error. This error can occur even after injecting a custom CA when using the subscription of an Operator. In this case, you can set the mountPath for trusted-ca as /etc/ssl/certs by using the subscription of an Operator.
4.6. Viewing Operator status Link kopierenLink in die Zwischenablage kopiert!
Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the healthiness of their Operators.
4.6.1. Operator subscription condition types Link kopierenLink in die Zwischenablage kopiert!
Subscriptions can report the following condition types:
| Condition | Description |
|---|---|
| CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy. |
| InstallPlanMissing | An install plan for a subscription is missing. |
| InstallPlanPending | An install plan for a subscription is pending installation. |
| InstallPlanFailed | An install plan for a subscription has failed. |
| ResolutionFailed | The dependency resolution for a subscription has failed. |
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.6.2. Viewing Operator subscription status by using the CLI Link kopierenLink in die Zwischenablage kopiert!
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List Operator subscriptions:
$ oc get subs -n <operator_namespace>
Use the oc describe command to inspect a Subscription resource:
$ oc describe sub <subscription_name> -n <operator_namespace>
In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:
Example output
Name:         cluster-logging
Namespace:    openshift-logging
Labels:       operators.coreos.com/cluster-logging.openshift-logging=
Annotations:  <none>
API Version:  operators.coreos.com/v1alpha1
Kind:         Subscription
# ...
Conditions:
  Last Transition Time:  2019-07-29T13:42:57Z
  Message:               all available catalogsources are healthy
  Reason:                AllCatalogSourcesHealthy
  Status:                False
  Type:                  CatalogSourcesUnhealthy
# ...
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.6.3. Viewing Operator catalog source status by using the CLI Link kopierenLink in die Zwischenablage kopiert!
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:
$ oc get catalogsources -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER     AGE
certified-operators   Certified Operators   grpc   Red Hat       55m
community-operators   Community Operators   grpc   Red Hat       55m
example-catalog       Example Catalog       grpc   Example Org   2m25s
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat       55m
redhat-operators      Red Hat Operators     grpc   Red Hat       55m
Use the oc describe command to get more details and status about a catalog source:
$ oc describe catalogsource example-catalog -n openshift-marketplace
Example output
Name:         example-catalog
Namespace:    openshift-marketplace
Labels:       <none>
Annotations:  operatorframework.io/managed-by: marketplace-operator
              target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
API Version:  operators.coreos.com/v1alpha1
Kind:         CatalogSource
# ...
Status:
  Connection State:
    Address:              example-catalog.openshift-marketplace.svc:50051
    Last Connect:         2021-09-09T17:07:35Z
    Last Observed State:  TRANSIENT_FAILURE
  Registry Service:
    Created At:         2021-09-09T17:05:45Z
    Port:               50051
    Protocol:           grpc
    Service Name:       example-catalog
    Service Namespace:  openshift-marketplace
# ...
In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.
List the pods in the namespace where your catalog source was created:
$ oc get pods -n openshift-marketplaceExample output
NAME                                    READY   STATUS             RESTARTS   AGE
certified-operators-cv9nn               1/1     Running            0          36m
community-operators-6v8lp               1/1     Running            0          36m
marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
redhat-marketplace-57p8c                1/1     Running            0          36m
redhat-operators-smxx8                  1/1     Running            0          36m
When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source's index image.
Use the oc describe command to inspect a pod for more detailed information:
$ oc describe pod example-catalog-bwt8z -n openshift-marketplace
Example output
Name:         example-catalog-bwt8z
Namespace:    openshift-marketplace
Priority:     0
Node:         ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
...
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
  Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
  Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
  Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull
In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
4.7. Managing Operator conditions Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you can manage Operator conditions by using Operator Lifecycle Manager (OLM).
4.7.1. Overriding Operator conditions Link kopierenLink in die Zwischenablage kopiert!
As a cluster administrator, you might want to ignore a supported Operator condition reported by an Operator. When present, Operator conditions in the Spec.Overrides array override the conditions in the Spec.Conditions array, allowing cluster administrators to deal with situations where an Operator is incorrectly reporting a state to Operator Lifecycle Manager (OLM).
Note
By default, the Spec.Overrides array is not present in an OperatorCondition object until it is added by a cluster administrator. The Spec.Conditions array is also not present until it is either added by a user or as a result of custom Operator logic.
For example, consider a known version of an Operator that always communicates that it is not upgradeable. In this instance, you might want to upgrade the Operator despite the Operator communicating that it is not upgradeable. This could be accomplished by overriding the Operator condition by adding the condition type and status to the Spec.Overrides array in the OperatorCondition object.
Prerequisites
- An Operator with an OperatorCondition object, installed using OLM.
Procedure
Edit the OperatorCondition object for the Operator:
$ oc edit operatorcondition <name>
Add a Spec.Overrides array to the object:
Example Operator condition override
apiVersion: operators.coreos.com/v1
kind: OperatorCondition
metadata:
  name: my-operator
  namespace: operators
spec:
  overrides:
  - type: Upgradeable 1
    status: "True"
    reason: "upgradeIsSafe"
    message: "This is a known issue with the Operator where it always reports that it cannot be upgraded."
  conditions:
  - type: Upgradeable
    status: "False"
    reason: "migration"
    message: "The operator is performing a migration."
    lastTransitionTime: "2020-08-24T23:15:55Z"
1 Allows the cluster administrator to change the upgrade readiness to True.
4.7.2. Updating your Operator to use Operator conditions Link kopierenLink in die Zwischenablage kopiert!
Operator Lifecycle Manager (OLM) automatically creates an OperatorCondition resource for each ClusterServiceVersion resource that it reconciles. All service accounts in the CSV are granted the RBAC to interact with the OperatorCondition owned by the Operator.
An Operator author can develop their Operator to use the operator-lib library so that, after the Operator has been deployed by OLM, it can communicate conditions, such as upgrade readiness, to OLM.
4.7.2.1. Setting defaults Link kopierenLink in die Zwischenablage kopiert!
In an effort to remain backwards compatible, OLM treats the absence of an OperatorCondition resource as opting out of the condition. Therefore, an Operator that opts in to using Operator conditions should set default conditions before the ready probe for the pod is set to true.
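A sketch of what setting such a default might produce in the OperatorCondition object; the values are hypothetical and the field layout follows the override example earlier in this chapter:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorCondition
metadata:
  name: my-operator
  namespace: operators
spec:
  conditions:
  - type: Upgradeable
    status: "True"               # default set before the ready probe reports true
    reason: "OperatorReady"      # hypothetical reason
    message: "No migrations in progress."
    lastTransitionTime: "2020-08-24T23:15:55Z"
```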
4.8. Allowing non-cluster administrators to install Operators Link kopierenLink in die Zwischenablage kopiert!
Cluster administrators can use Operator groups to allow regular users to install Operators.
4.8.1. Understanding Operator installation policy Link kopierenLink in die Zwischenablage kopiert!
Operators can require wide privileges to run, and the required privileges can change between versions. Operator Lifecycle Manager (OLM) runs with cluster-admin privileges. By default, Operator authors can specify any set of permissions in the cluster service version (CSV), and OLM consequently grants it to the Operator.
To ensure that an Operator cannot achieve cluster-scoped privileges and that users cannot escalate privileges using OLM, cluster administrators can manually audit Operators before they are added to the cluster. Cluster administrators are also provided tools for determining and constraining which actions are allowed during an Operator installation or upgrade using service accounts.
Cluster administrators can associate an Operator group with a service account that has a set of privileges granted to it. The service account sets policy on Operators to ensure they only run within predetermined boundaries by using role-based access control (RBAC) rules. As a result, the Operator is unable to do anything that is not explicitly permitted by those rules.
By employing Operator groups, users with enough privileges can install Operators with a limited scope. As a result, more of the Operator Framework tools can safely be made available to more users, providing a richer experience for building applications with Operators.
Role-based access control (RBAC) for Subscription objects is automatically granted to every user with the edit or admin role in a namespace. However, RBAC does not exist on OperatorGroup objects; this absence is what prevents regular users from installing Operators. Preinstalling Operator groups is effectively what gives installation privileges.
Keep the following points in mind when associating an Operator group with a service account:
- The APIService and CustomResourceDefinition resources are always created by OLM using the cluster-admin role. A service account associated with an Operator group should never be granted privileges to write these resources.
4.8.1.1. Installation scenarios Link kopierenLink in die Zwischenablage kopiert!
When determining whether an Operator can be installed or upgraded on a cluster, Operator Lifecycle Manager (OLM) considers the following scenarios:
- A cluster administrator creates a new Operator group and specifies a service account. All Operator(s) associated with this Operator group are installed and run against the privileges granted to the service account.
- A cluster administrator creates a new Operator group and does not specify any service account. OpenShift Container Platform maintains backward compatibility, so the default behavior remains and Operator installs and upgrades are permitted.
- For existing Operator groups that do not specify a service account, the default behavior remains and Operator installs and upgrades are permitted.
- A cluster administrator updates an existing Operator group and specifies a service account. OLM allows the existing Operator to continue to run with their current privileges. When such an existing Operator is going through an upgrade, it is reinstalled and run against the privileges granted to the service account like any new Operator.
- A service account specified by an Operator group changes by adding or removing permissions, or the existing service account is swapped with a new one. When existing Operators go through an upgrade, they are reinstalled and run against the privileges granted to the updated service account, like any new Operator.
- A cluster administrator removes the service account from an Operator group. The default behavior remains and Operator installs and upgrades are permitted.
4.8.1.2. Installation workflow Link kopierenLink in die Zwischenablage kopiert!
When an Operator group is tied to a service account and an Operator is installed or upgraded, Operator Lifecycle Manager (OLM) uses the following workflow:
- The given Subscription object is picked up by OLM.
- OLM fetches the Operator group tied to this subscription.
- OLM determines that the Operator group has a service account specified.
- OLM creates a client scoped to the service account and uses the scoped client to install the Operator. This ensures that any permission requested by the Operator is always confined to that of the service account in the Operator group.
- OLM creates a new service account with the set of permissions specified in the CSV and assigns it to the Operator. The Operator runs as the assigned service account.
4.8.2. Scoping Operator installations Link kopierenLink in die Zwischenablage kopiert!
To provide scoping rules to Operator installations and upgrades on Operator Lifecycle Manager (OLM), associate a service account with an Operator group.
Using this example, a cluster administrator can confine a set of Operators to a designated namespace.
Procedure
Create a new namespace:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: scoped
EOF
Allocate permissions that you want the Operator(s) to be confined to. This involves creating a new service account, relevant role(s), and role binding(s).
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: scoped
  namespace: scoped
EOF
The following example grants the service account permissions to do anything in the designated namespace for simplicity. In a production environment, you should create a more fine-grained set of permissions:
$ cat <<EOF | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: scoped
  namespace: scoped
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: scoped-bindings
  namespace: scoped
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: scoped
subjects:
- kind: ServiceAccount
  name: scoped
  namespace: scoped
EOF
Create an OperatorGroup object in the designated namespace. This Operator group targets the designated namespace to ensure that its tenancy is confined to it.
In addition, Operator groups allow a user to specify a service account. Specify the service account created in the previous step:
$ cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: scoped
  namespace: scoped
spec:
  serviceAccountName: scoped
  targetNamespaces:
  - scoped
EOF
Any Operator installed in the designated namespace is tied to this Operator group and therefore to the service account specified.
Warning
Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
- <operatorgroup_name>-admin
- <operatorgroup_name>-edit
- <operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
Create a Subscription object in the designated namespace to install an Operator:
$ cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: scoped
spec:
  channel: singlenamespace-alpha
  name: etcd
  source: <catalog_source_name>
  sourceNamespace: <catalog_source_namespace>
EOF
Any Operator tied to this Operator group is confined to the permissions granted to the specified service account. If the Operator requests permissions that are outside the scope of the service account, the installation fails with relevant errors.
4.8.2.1. Fine-grained permissions Link kopierenLink in die Zwischenablage kopiert!
Operator Lifecycle Manager (OLM) uses the service account specified in an Operator group to create or update the following resources related to the Operator being installed:
- ClusterServiceVersion
- Subscription
- Secret
- ServiceAccount
- Service
- ClusterRole and ClusterRoleBinding
- Role and RoleBinding
To confine Operators to a designated namespace, cluster administrators can start by granting the following permissions to the service account:
The following role is a generic example and additional rules might be required based on the specific Operator.
kind: Role
rules:
- apiGroups: ["operators.coreos.com"]
resources: ["subscriptions", "clusterserviceversions"]
verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
resources: ["services", "serviceaccounts"]
verbs: ["get", "create", "update", "patch"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings"]
verbs: ["get", "create", "update", "patch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
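A Role alone grants nothing until it is bound to the service account. A hypothetical binding to the Operator group's service account might look like the following sketch; the Role and binding names are placeholders, and the scoped service account and namespace follow the earlier example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operator-install-binding   # hypothetical name
  namespace: scoped                # namespace targeted by the Operator group
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: operator-install-role      # hypothetical name for the Role above
subjects:
- kind: ServiceAccount
  name: scoped
  namespace: scoped
```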
In addition, if any Operator specifies a pull secret, the following permissions must also be added:
kind: ClusterRole 1
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
kind: Role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "update", "patch"]
1 Required to get the secret from the OLM namespace.
4.8.3. Operator catalog access control Link kopierenLink in die Zwischenablage kopiert!
When an Operator catalog is created in the global catalog namespace openshift-marketplace, the catalog's Operators are made available cluster-wide to all namespaces. Catalogs created in other namespaces only make their Operators available in that same namespace.
On clusters where non-cluster administrator users have been delegated Operator installation privileges, cluster administrators might want to further control or restrict the set of Operators those users are allowed to install. This can be achieved with the following actions:
- Disable all of the default global catalogs.
- Enable custom, curated catalogs in the same namespace where the relevant Operator groups have been preinstalled.
4.8.4. Troubleshooting permission failures Link kopierenLink in die Zwischenablage kopiert!
If an Operator installation fails due to lack of permissions, identify the errors using the following procedure.
Procedure
Review the Subscription object. Its status has an object reference installPlanRef that points to the InstallPlan object that attempted to create the necessary [Cluster]Role[Binding] object(s) for the Operator:
apiVersion: operators.coreos.com/v1
kind: Subscription
metadata:
  name: etcd
  namespace: scoped
status:
  installPlanRef:
    apiVersion: operators.coreos.com/v1
    kind: InstallPlan
    name: install-4plp8
    namespace: scoped
    resourceVersion: "117359"
    uid: 2c1df80e-afea-11e9-bce3-5254009c9c23
Check the status of the InstallPlan object for any errors:
apiVersion: operators.coreos.com/v1
kind: InstallPlan
status:
  conditions:
  - lastTransitionTime: "2019-07-26T21:13:10Z"
    lastUpdateTime: "2019-07-26T21:13:10Z"
    message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:scoped:scoped" cannot create resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope'
    reason: InstallComponentFailed
    status: "False"
    type: Installed
  phase: Failed
The error message tells you:
- The type of resource it failed to create, including the API group of the resource. In this case, it was clusterroles in the rbac.authorization.k8s.io group.
-
The type of error: tells you that the user does not have enough permission to do the operation.
is forbidden - The name of the user who attempted to create or update the resource. In this case, it refers to the service account specified in the Operator group.
The scope of the operation:
or not.cluster scopeThe user can add the missing permission to the service account and then iterate.
Note
Operator Lifecycle Manager (OLM) does not currently provide the complete list of errors on the first try.
4.9. Managing custom catalogs Link kopierenLink in die Zwischenablage kopiert!
Cluster administrators and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API.
If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.
4.9.1. Prerequisites Link kopierenLink in die Zwischenablage kopiert!
- Install the opm CLI.
4.9.2. File-based catalogs Link kopierenLink in die Zwischenablage kopiert!
File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible.
As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format.
The opm CLI supports working with the file-based catalog format.
Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format.
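The file-based format stores a catalog as a stream of declarative config blobs. A minimal hypothetical olm.package blob looks like the following; the field names follow the olm.package schema and the values are illustrative:

```yaml
---
schema: olm.package
name: example-operator          # hypothetical package name
defaultChannel: preview         # channel that subscriptions use by default
```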
4.9.2.1. Creating a file-based catalog image Link kopierenLink in die Zwischenablage kopiert!
You can use the opm CLI to create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format.
Prerequisites
- opm
- podman version 1.9.3+
- A bundle image built and pushed to a registry that supports Docker v2-2
Procedure
Initialize the catalog:
Create a directory for the catalog by running the following command:
$ mkdir <catalog_dir>
Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command:
$ opm generate dockerfile <catalog_dir> \
    -i registry.redhat.io/openshift4/ose-operator-registry:v4.12 1
1 Specify the official Red Hat base image by using the -i flag; otherwise, the Dockerfile uses the default upstream image.
The Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
Example directory structure
.
├── <catalog_dir>
└── <catalog_dir>.Dockerfile
Populate the catalog with the package definition for your Operator by running the opm init command:
$ opm init <operator_name> \
    --default-channel=preview \
    --description=./README.md \
    --icon=./operator-icon.svg \
    --output yaml \
    > <catalog_dir>/index.yaml
This command generates an olm.package declarative config blob in the specified catalog configuration file.
Add a bundle to the catalog by running the opm render command:
$ opm render <registry>/<namespace>/<bundle_image_name>:<tag> \
    --output=yaml \
    >> <catalog_dir>/index.yaml
Note
Channels must contain at least one bundle.
Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <catalog_dir>/index.yaml file:

Example channel entry

---
schema: olm.channel
package: <operator_name>
name: preview
entries:
  - name: <operator_name>.v0.1.0 1

1 Ensure that you include the period (.) after <operator_name> but before the v in the version. Otherwise, the entry fails to pass the opm validate command.
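Taken together, the opm init output and a channel entry like the one above combine into a single index.yaml file. The following sketch assembles a minimal hand-written equivalent (example-operator and preview are illustrative placeholders) and approximates the entry-name check that opm validate enforces:

```shell
# Sketch: a minimal file-based catalog with one olm.package and one
# olm.channel blob, written by hand for illustration.
catalog_dir=$(mktemp -d)

cat > "$catalog_dir/index.yaml" <<'EOF'
---
schema: olm.package
name: example-operator
defaultChannel: preview
---
schema: olm.channel
package: example-operator
name: preview
entries:
  - name: example-operator.v0.1.0
EOF

# Rough stand-in for one of the checks `opm validate` performs: every
# channel entry name should look like <package>.v<version>.
grep -E '^  - name: example-operator\.v[0-9]+\.[0-9]+\.[0-9]+$' \
    "$catalog_dir/index.yaml" && echo "entry name format OK"
```

From here, the real workflow replaces the hand-written blobs with opm init and opm render output.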
Validate the file-based catalog:

Run the opm validate command against the catalog directory:

$ opm validate <catalog_dir>

Check that the error code is 0:

$ echo $?

Example output

0
Build the catalog image by running the podman build command:

$ podman build . \
    -f <catalog_dir>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>

Push the catalog image to a registry:

If required, authenticate with your target registry by running the podman login command:

$ podman login <registry>

Push the catalog image by running the podman push command:

$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
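Because podman build runs from the parent directory with -f pointing at the generated Dockerfile, it can be worth sanity-checking the layout first. A minimal sketch, using the illustrative name my-catalog in a throwaway directory:

```shell
# Sketch: verify the layout `podman build` expects — the Dockerfile sits
# next to the catalog directory, not inside it. "my-catalog" stands in
# for <catalog_dir>.
workdir=$(mktemp -d)
cd "$workdir"
mkdir my-catalog
touch my-catalog/index.yaml            # catalog configuration file
touch my-catalog.Dockerfile            # created by `opm generate dockerfile`

# The build context (.) must contain both entries as siblings:
[ -d my-catalog ] && [ -f my-catalog.Dockerfile ] \
    && echo "layout OK: podman build . -f my-catalog.Dockerfile"
```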
4.9.2.2. Updating or filtering a file-based catalog image
You can use the opm CLI to update or filter a catalog image that uses the file-based catalog format.
Alternatively, if you already have a catalog image on a mirror registry, you can use the oc-mirror CLI plugin to automatically prune any removed images from an updated source version of that catalog image while mirroring it to the target registry.
For more information about the oc-mirror plugin and this use case, see the "Keeping your mirror registry content updated" section, and specifically the "Pruning images" subsection, of "Mirroring images for a disconnected installation using the oc-mirror plugin".
Prerequisites
- opm CLI
- podman version 1.9.3+
- A file-based catalog image
- A catalog directory structure recently initialized on your workstation related to this catalog
If you do not have an initialized catalog directory, create the directory and generate the Dockerfile. For more information, see the "Initialize the catalog" step from the "Creating a file-based catalog image" procedure.
Procedure
Extract the contents of the catalog image in YAML format to an index.yaml file in your catalog directory:

$ opm render <registry>/<namespace>/<catalog_image_name>:<tag> \
    -o yaml > <catalog_dir>/index.yaml

Note: Alternatively, you can use the -o json flag to output in JSON format.

Modify the contents of the resulting index.yaml file to your specifications by updating, adding, or removing one or more Operator package entries.

Important: After a bundle has been published in a catalog, assume that one of your users has installed it. Ensure that all previously published bundles in a catalog have an update path to the current or newer channel head to avoid stranding users that have that version installed.
For example, if you wanted to remove an Operator package, the following example lists a set of olm.package, olm.channel, and olm.bundle blobs which must be deleted to remove the package from the catalog:

Example 4.1. Example removed entries

---
defaultChannel: release-2.7
icon:
  base64data: <base64_string>
  mediatype: image/svg+xml
name: example-operator
schema: olm.package
---
entries:
- name: example-operator.v2.7.0
  skipRange: '>=2.6.0 <2.7.0'
- name: example-operator.v2.7.1
  replaces: example-operator.v2.7.0
  skipRange: '>=2.6.0 <2.7.1'
- name: example-operator.v2.7.2
  replaces: example-operator.v2.7.1
  skipRange: '>=2.6.0 <2.7.2'
- name: example-operator.v2.7.3
  replaces: example-operator.v2.7.2
  skipRange: '>=2.6.0 <2.7.3'
- name: example-operator.v2.7.4
  replaces: example-operator.v2.7.3
  skipRange: '>=2.6.0 <2.7.4'
name: release-2.7
package: example-operator
schema: olm.channel
---
image: example.com/example-inc/example-operator-bundle@sha256:<digest>
name: example-operator.v2.7.0
package: example-operator
properties:
- type: olm.gvk
  value:
    group: example-group.example.io
    kind: MyObject
    version: v1alpha1
- type: olm.gvk
  value:
    group: example-group.example.io
    kind: MyOtherObject
    version: v1beta1
- type: olm.package
  value:
    packageName: example-operator
    version: 2.7.0
- type: olm.bundle.object
  value:
    data: <base64_string>
- type: olm.bundle.object
  value:
    data: <base64_string>
relatedImages:
- image: example.com/example-inc/example-related-image@sha256:<digest>
  name: example-related-image
schema: olm.bundle
---
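Because stranding users is easy to do when deleting blobs by hand, a quick local check helps before rebuilding: every olm.bundle left in index.yaml should still be referenced by some olm.channel entry. A rough grep-based sketch, under the assumption that the YAML keeps the field layout that opm render emits (a tiny hand-made index.yaml stands in for real data):

```shell
# Sketch: detect orphaned bundles — olm.bundle blobs that no channel
# entry references — after hand-editing a rendered index.yaml.
index=$(mktemp)
cat > "$index" <<'EOF'
---
schema: olm.channel
package: example-operator
name: release-2.8
entries:
- name: example-operator.v2.8.0
---
image: example.com/example-inc/example-operator-bundle@sha256:abc
name: example-operator.v2.8.0
package: example-operator
schema: olm.bundle
---
image: example.com/example-inc/example-operator-bundle@sha256:def
name: example-operator.v2.8.1
package: example-operator
schema: olm.bundle
EOF

# Collect top-level bundle names, then confirm each one appears as a
# channel entry ("- name: <bundle>") somewhere in the file.
bundles=$(awk '/^name: /{name=$2} /^schema: olm.bundle/{print name}' "$index")
for b in $bundles; do
  grep -q -- "- name: $b" "$index" \
    && echo "OK: $b is referenced by a channel" \
    || echo "ORPHANED: $b has no channel entry"
done
```

In this sample, v2.8.1 is reported as orphaned because it was left in the catalog while its channel entry was removed. opm validate performs the authoritative checks; this is only a fast pre-flight.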
Save your changes to the index.yaml file.

Validate the catalog:

$ opm validate <catalog_dir>

Rebuild the catalog:

$ podman build . \
    -f <catalog_dir>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>

Push the updated catalog image to a registry:

$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
Verification
- In the web console, navigate to the OperatorHub configuration resource on the Administration → Cluster Settings → Configuration page. Add the catalog source or update the existing catalog source to use the pull spec for your updated catalog image.

  For more information, see "Adding a catalog source to a cluster" in the "Additional resources" of this section.

- After the catalog source is in a READY state, navigate to the Operators → OperatorHub page and check that the changes you made are reflected in the list of Operators.
4.9.3. SQLite-based catalogs
The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
4.9.3.1. Creating a SQLite-based index image
You can create an index image based on the SQLite database format by using the opm CLI.

Prerequisites

- opm
- podman version 1.9.3+
- A bundle image built and pushed to a registry that supports Docker v2-2
Procedure
Start a new index:

$ opm index add \
    --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \ 1
    --tag <registry>/<namespace>/<index_image_name>:<tag> \ 2
    [--binary-image <registry_base_image>] 3

1 Comma-separated list of bundle images to add to the index
2 Image tag that you want the index image to have
3 Optional: Alternative registry base image to use for serving the catalog

Push the index image to a registry.

If required, authenticate with your target registry:

$ podman login <registry>

Push the index image:

$ podman push <registry>/<namespace>/<index_image_name>:<tag>
4.9.3.2. Updating a SQLite-based index image
After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image.
You can update an existing index image using the opm index add command.

Prerequisites

- opm
- podman version 1.9.3+
- An index image built and pushed to a registry.
- An existing catalog source referencing the index image.
Procedure
Update the existing index by adding bundle images:

$ opm index add \
    --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1
    --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2
    --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3
    --pull-tool podman 4

1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index.
2 The --from-index flag specifies the previously pushed index.
3 The --tag flag specifies the image tag to apply to the updated index image.
4 The --pull-tool flag specifies the tool used to pull container images.
where:

<registry>
  Specifies the hostname of the registry, such as quay.io or mirror.example.com.
<namespace>
  Specifies the namespace of the registry, such as ocs-dev or abc.
<new_bundle_image>
  Specifies the new bundle image to add to the registry, such as ocs-operator.
<digest>
  Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41.
<existing_index_image>
  Specifies the previously pushed image, such as abc-redhat-operator-index.
<existing_tag>
  Specifies a previously pushed image tag, such as 4.12.
<updated_tag>
  Specifies the image tag to apply to the updated index image, such as 4.12.1.
Example command
$ opm index add \
    --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \
    --from-index mirror.example.com/abc/abc-redhat-operator-index:4.12 \
    --tag mirror.example.com/abc/abc-redhat-operator-index:4.12.1 \
    --pull-tool podman

Push the updated index image:

$ podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>

After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added:
$ oc get packagemanifests -n openshift-marketplace
4.9.3.3. Filtering a SQLite-based index image
An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune, an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want.
Prerequisites
- podman version 1.9.3+
- grpcurl (third-party command-line tool)
- opm
- Access to a registry that supports Docker v2-2
Procedure
Authenticate with your target registry:

$ podman login <target_registry>

Determine the list of packages you want to include in your pruned index.

Run the source index image that you want to prune in a container. For example:

$ podman run -p50051:50051 \
    -it registry.redhat.io/redhat/redhat-operator-index:v4.12

Example output

Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.12...
Getting image source signatures
Copying blob ae8a0c23f5b1 done
...
INFO[0000] serving registry database=/database/index.db port=50051

In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out

Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

Example snippets of packages list

...
{
  "name": "advanced-cluster-management"
}
...
{
  "name": "jaeger-product"
}
...
{
  "name": "quay-operator"
}
...
In the terminal session where you executed the podman run command, press Ctrl+C to stop the container process.

Run the following command to prune the source index of all but the specified packages:

$ opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4.12 \ 1
    -p advanced-cluster-management,jaeger-product,quay-operator \ 2
    [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \ 3
    -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.12 4

1 Index to prune
2 Comma-separated list of packages to keep
3 Required only for IBM Power and IBM Z images: Operator Registry base image with the tag that matches the target OpenShift Container Platform cluster major and minor version
4 Custom tag for the new index image that is being built

Run the following command to push the new index image to your target registry:

$ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.12

where <namespace> is any existing namespace on the registry.
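The -p flag wants a single comma-separated string, which is tedious to assemble by hand for long package lists. If you keep the chosen package names one per line in a file, standard coreutils can build the value; a sketch (the package names match the example above, and the file is a throwaway stand-in):

```shell
# Sketch: turn a newline-separated list of packages to keep into the
# comma-separated argument for `opm index prune -p`.
keep_file=$(mktemp)
printf '%s\n' advanced-cluster-management jaeger-product quay-operator > "$keep_file"

# Join the lines with commas.
p_arg=$(paste -sd, "$keep_file")
echo "$p_arg"   # advanced-cluster-management,jaeger-product,quay-operator
```

The result can then be passed directly: opm index prune ... -p "$p_arg" ...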
4.9.4. Catalog sources and pod security admission
Pod security admission was introduced in OpenShift Container Platform 4.11 to ensure pod security standards. Catalog sources built using the SQLite-based catalog format and a version of the opm CLI tool released before OpenShift Container Platform 4.11 cannot run under restricted pod security enforcement.
In OpenShift Container Platform 4.12, namespaces do not have restricted pod security enforcement by default and the default catalog source security mode is set to legacy.
Default restricted enforcement for all namespaces is planned for inclusion in a future OpenShift Container Platform release. When restricted enforcement occurs, the security context of the pod specification for catalog source pods must match the restricted pod security standard. If your catalog source image requires a different pod security standard, the pod security admissions label for the namespace must be explicitly set.
If you do not want to run your SQLite-based catalog source pods as restricted, you do not need to update your catalog source in OpenShift Container Platform 4.12.
However, it is recommended that you take action now to ensure your catalog sources run under restricted pod security enforcement. If you do not take action to ensure your catalog sources run under restricted pod security enforcement, your catalog sources might not run in future OpenShift Container Platform releases.
As a catalog author, you can enable compatibility with restricted pod security enforcement by completing either of the following actions:
- Migrate your catalog to the file-based catalog format.
- Update your catalog image with a version of the opm CLI tool released with OpenShift Container Platform 4.11 or later.
The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.
If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can configure your catalog to run with elevated permissions.
4.9.4.1. Migrating SQLite database catalogs to the file-based catalog format
You can update your deprecated SQLite database format catalogs to the file-based catalog format.
Prerequisites
- SQLite database catalog source
- Cluster administrator permissions
- Latest version of the opm CLI tool released with OpenShift Container Platform 4.12 on your workstation
Procedure
Migrate your SQLite database catalog to a file-based catalog by running the following command:

$ opm migrate <registry_image> <fbc_directory>

Generate a Dockerfile for your file-based catalog by running the following command:

$ opm generate dockerfile <fbc_directory> \
    --binary-image \
    registry.redhat.io/openshift4/ose-operator-registry:v4.12
Next steps
- The generated Dockerfile can be built, tagged, and pushed to your registry.
4.9.4.2. Rebuilding SQLite database catalog images
You can rebuild your SQLite database catalog image with the latest version of the opm CLI tool.
Prerequisites
- SQLite database catalog source
- Cluster administrator permissions
- Latest version of the opm CLI tool released with OpenShift Container Platform 4.12 on your workstation
Procedure
Run the following command to rebuild your catalog with a more recent version of the opm CLI tool:

$ opm index add --binary-image \
    registry.redhat.io/openshift4/ose-operator-registry:v4.12 \
    --from-index <your_registry_image> \
    --bundles "" -t <your_registry_image>
4.9.4.3. Configuring catalogs to run with elevated permissions
If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can perform the following actions to ensure your catalog source runs when the default pod security enforcement changes to restricted:
- Manually set the catalog security mode to legacy in your catalog source definition. This action ensures your catalog runs with legacy permissions even if the default catalog security mode changes to restricted.
- Label the catalog source namespace for baseline or privileged pod security enforcement.
The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.
Prerequisites
- SQLite database catalog source
- Cluster administrator permissions
- Target namespace that supports running pods with the elevated pod security admission standard of baseline or privileged
Procedure
Edit the CatalogSource definition by setting the spec.grpcPodConfig.securityContextConfig field to legacy, as shown in the following example:

Example CatalogSource definition

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catsrc
  namespace: my-ns
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: legacy
  image: my-image:latest

Tip: In OpenShift Container Platform 4.12, the spec.grpcPodConfig.securityContextConfig field is set to legacy by default. In a future release of OpenShift Container Platform, it is planned that the default setting will change to restricted. If your catalog cannot run under restricted enforcement, it is recommended that you manually set this field to legacy.

Edit your <namespace>.yaml file to add elevated pod security admission standards to your catalog source namespace, as shown in the following example:

Example <namespace>.yaml file

apiVersion: v1
kind: Namespace
metadata:
  ...
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false" 1
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: baseline 2
  name: "<namespace_name>"

1 Turn off pod security label synchronization by adding the security.openshift.io/scc.podSecurityLabelSync=false label to the namespace.
2 Apply the pod security admission pod-security.kubernetes.io/enforce label. Set the label to baseline or privileged. Use the baseline pod security profile unless other workloads in the namespace require a privileged profile.
4.9.5. Adding a catalog source to a cluster
Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
Prerequisites
- An index image built and pushed to a registry.
Procedure
Create a CatalogSource object that references your index image.

Modify the following to your specifications and save it as a catalogSource.yaml file:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace 1
  annotations:
    olm.catalogImageTemplate: 2
      "<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode> 3
  image: <registry>/<namespace>/<index_image_name>:<tag> 4
  displayName: My Operator Catalog
  publisher: <publisher_name> 5
  updateStrategy:
    registryPoll: 6
      interval: 30m

1 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
2 Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag.
3 Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted.

  Note: If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.

4 Specify your index image. If you specify a tag after the image name, for example :v4.12, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
5 Specify your name or an organization name publishing the catalog.
6 Catalog sources can automatically check for new versions to keep up to date.

Use the file to create the CatalogSource object:

$ oc apply -f catalogSource.yaml
Verify the following resources are created successfully.
Check the pods:

$ oc get pods -n openshift-marketplace

Example output

NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h

Check the catalog source:

$ oc get catalogsource -n openshift-marketplace

Example output

NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s

Check the package manifest:

$ oc get packagemanifest -n openshift-marketplace

Example output

NAME             CATALOG               AGE
jaeger-product   My Operator Catalog   93s
You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
4.9.6. Accessing images for Operators from private registries
If certain images relevant to Operators managed by Operator Lifecycle Manager (OLM) are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default. To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more pull secrets in a catalog source, OLM can handle placing the secrets in the Operator and catalog namespace to allow installation.
Other images required by an Operator or its Operands might require access to private registries as well. OLM does not handle placing the secrets in target tenant namespaces for this scenario, but authentication credentials can be added to the global cluster pull secret or individual namespace service accounts to enable the required access.
The following types of images should be considered when determining whether Operators managed by OLM have appropriate pull access:
- Index images
- A CatalogSource object can reference an index image, which uses the Operator bundle format and is a catalog packaged as a container image hosted in an image registry. If an index image is hosted in a private registry, a secret can be used to enable pull access.
- Bundle images
- Operator bundle images are metadata and manifests packaged as container images that represent a unique version of an Operator. If any bundle images referenced in a catalog source are hosted in one or more private registries, a secret can be used to enable pull access.
- Operator and Operand images
If an Operator installed from a catalog source uses a private image, either for the Operator image itself or one of the Operand images it watches, the Operator will fail to install because the deployment will not have access to the required registry authentication. Referencing secrets in a catalog source does not enable OLM to place the secrets in target tenant namespaces in which Operands are installed.
Instead, the authentication details can be added to the global cluster pull secret in the openshift-config namespace, which provides access to all namespaces on the cluster. Alternatively, if providing access to the entire cluster is not permissible, the pull secret can be added to the default service accounts of the target tenant namespaces.
Prerequisites
At least one of the following hosted in a private registry:
- An index image or catalog image.
- An Operator bundle image.
- An Operator or Operand image.
Procedure
Create a secret for each required private registry.
Log in to the private registry to create or update your registry credentials file:

$ podman login <registry>:<port>

Note: The file path of your registry credentials can be different depending on the container tool used to log in to the registry. For the podman CLI, the default location is ${XDG_RUNTIME_DIR}/containers/auth.json. For the docker CLI, the default location is /root/.docker/config.json.

It is recommended to include credentials for only one registry per secret, and manage credentials for multiple registries in separate secrets. Multiple secrets can be included in a CatalogSource object in later steps, and OpenShift Container Platform will merge the secrets into a single virtual credentials file for use during an image pull.

A registry credentials file can, by default, store details for more than one registry or for multiple repositories in one registry. Verify the current contents of your file. For example:

File storing credentials for multiple registries

{
  "auths": {
    "registry.redhat.io": {
      "auth": "FrNHNydQXdzclNqdg=="
    },
    "quay.io": {
      "auth": "fegdsRib21iMQ=="
    },
    "https://quay.io/my-namespace/my-user/my-image": {
      "auth": "eWfjwsDdfsa221=="
    },
    "https://quay.io/my-namespace/my-user": {
      "auth": "feFweDdscw34rR=="
    },
    "https://quay.io/my-namespace": {
      "auth": "frwEews4fescyq=="
    }
  }
}

Because this file is used to create secrets in later steps, ensure that you are storing details for only one registry per file. This can be accomplished by using either of the following methods:
- Use the podman logout <registry> command to remove credentials for additional registries until only the one registry you want remains.
- Edit your registry credentials file and separate the registry details to be stored in multiple files. For example:

  File storing credentials for one registry

  {
    "auths": {
      "registry.redhat.io": {
        "auth": "FrNHNydQXdzclNqdg=="
      }
    }
  }

  File storing credentials for another registry

  {
    "auths": {
      "quay.io": {
        "auth": "Xd2lhdsbnRib21iMQ=="
      }
    }
  }
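Editing the credentials file by hand scales poorly past a couple of registries. As an alternative sketch, a short python3 script embedded in shell splits each registry into its own single-registry file; the auth.json and creds-*.json file names and the auth values are illustrative placeholders, not real credentials:

```shell
# Sketch: split a multi-registry credentials file into one file per
# registry so each file can back its own secret.
workdir=$(mktemp -d)
cd "$workdir"
cat > auth.json <<'EOF'
{
  "auths": {
    "registry.redhat.io": { "auth": "FrNHNydQXdzclNqdg==" },
    "quay.io": { "auth": "fegdsRib21iMQ==" }
  }
}
EOF

python3 - <<'EOF'
import json

with open("auth.json") as f:
    auths = json.load(f)["auths"]

# Write each registry's entry to its own single-registry credentials file.
for registry, entry in auths.items():
    out = "creds-" + registry.replace("/", "_") + ".json"
    with open(out, "w") as f:
        json.dump({"auths": {registry: entry}}, f, indent=2)
    print("wrote", out)
EOF
```

Each resulting file can then be passed to oc create secret with the --from-file flag, as in the next step.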
Create a secret in the openshift-marketplace namespace that contains the authentication credentials for a private registry:

$ oc create secret generic <secret_name> \
    -n openshift-marketplace \
    --from-file=.dockerconfigjson=<path/to/registry/credentials> \
    --type=kubernetes.io/dockerconfigjson

Repeat this step to create additional secrets for any other required private registries, updating the --from-file flag to specify another registry credentials file path.
Create or update an existing CatalogSource object to reference one or more secrets:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  secrets: 1
  - "<secret_name_1>"
  - "<secret_name_2>"
  grpcPodConfig:
    securityContextConfig: <security_mode> 2
  image: <registry>:<port>/<namespace>/<image>:<tag>
  displayName: My Operator Catalog
  publisher: <publisher_name>
  updateStrategy:
    registryPoll:
      interval: 30m

1 Add a spec.secrets section and specify any required secrets.
2 Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted.

  Note: If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
If any Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either provide access to all namespaces in the cluster, or individual target tenant namespaces.
To provide access to all namespaces in the cluster, add authentication details to the global cluster pull secret in the openshift-config namespace.

Warning: Cluster resources must adjust to the new global pull secret, which can temporarily limit the usability of the cluster.

Extract the .dockerconfigjson file from the global pull secret:

$ oc extract secret/pull-secret -n openshift-config --confirm

Update the .dockerconfigjson file with your authentication credentials for the required private registry or registries and save it as a new file:

$ cat .dockerconfigjson | \
    jq --compact-output '.auths["<registry>:<port>/<namespace>/"] |= . + {"auth":"<token>"}' \ 1
    > new_dockerconfigjson

1 Replace <registry>:<port>/<namespace> with the private registry details and <token> with your authentication credentials.

Update the global pull secret with the new file:

$ oc set data secret/pull-secret -n openshift-config \
    --from-file=.dockerconfigjson=new_dockerconfigjson
To update an individual namespace, add a pull secret to the service account for the Operator that requires access in the target tenant namespace.
Recreate the secret that you created for the openshift-marketplace namespace in the tenant namespace:

$ oc create secret generic <secret_name> \
    -n <tenant_namespace> \
    --from-file=.dockerconfigjson=<path/to/registry/credentials> \
    --type=kubernetes.io/dockerconfigjson

Verify the name of the service account for the Operator by searching the tenant namespace:

$ oc get sa -n <tenant_namespace> 1

1 If the Operator was installed in an individual namespace, search that namespace. If the Operator was installed for all namespaces, search the openshift-operators namespace.

Example output

NAME            SECRETS   AGE
builder         2         6m1s
default         2         6m1s
deployer        2         6m1s
etcd-operator   2         5m18s 1

1 Service account for an installed etcd Operator.

Link the secret to the service account for the Operator:

$ oc secrets link <operator_sa> \
    -n <tenant_namespace> \
    <secret_name> \
    --for=pull
4.9.7. Disabling the default OperatorHub catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. As a cluster administrator, you can disable the set of default catalogs.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
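The same change can be made declaratively. A sketch of the cluster OperatorHub object with all default sources disabled, which you could save to a file and apply with oc apply -f:

```yaml
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
```

This form is convenient when the cluster configuration is kept under version control.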
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
4.9.8. Removing custom catalogs
As a cluster administrator, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
- Click the Configuration tab, and then click OperatorHub.
- Click the Sources tab.
- Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource.
4.10. Using Operator Lifecycle Manager on restricted networks
For OpenShift Container Platform clusters that are installed on restricted networks, also known as disconnected clusters, Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity.
However, as a cluster administrator you can still enable your cluster to use OLM in a restricted network if you have a workstation that has full internet access. The workstation, which requires full internet access to pull the remote OperatorHub content, is used to prepare local mirrors of the remote sources, and push the content to a mirror registry.
The mirror registry can be located on a bastion host, which requires connectivity to both your workstation and the disconnected cluster, or a completely disconnected, or airgapped, host, which requires removable media to physically move the mirrored content to the disconnected environment.
This guide describes the following process that is required to enable OLM in restricted networks:
- Disable the default remote OperatorHub sources for OLM.
- Use a workstation with full internet access to create and push local mirrors of the OperatorHub content to a mirror registry.
- Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources.
After enabling OLM in a restricted network, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released.
While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a restricted network still depends on the Operator itself meeting the following criteria:
- List any related images, or other container images that the Operator might require to perform their functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object.
relatedImages(CSV) object.ClusterServiceVersion - Reference all specified images by a digest (SHA) and not by a tag.
You can search software on the Red Hat Ecosystem Catalog for a list of Red Hat Operators that support running in disconnected mode by filtering with the following selections:
| Type | Containerized application |
| Deployment method | Operator |
| Infrastructure features | Disconnected |
4.10.1. Prerequisites
- Log in to your OpenShift Container Platform cluster as a user with `cluster-admin` privileges.
If you are using OLM in a restricted network on IBM Z, you must have at least 12 GB allocated to the directory where you place your registry.
4.10.2. Disabling the default OperatorHub catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. You can then configure OperatorHub to use local catalog sources.
Procedure
Disable the sources for the default catalogs by adding `disableAllDefaultSources: true` to the `OperatorHub` object:

$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
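After the patch is applied, the cluster's `OperatorHub` object resembles the following sketch:

```yaml
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  # All default catalog sources provided at installation time
  # are disabled; local catalog sources can now take their place.
  disableAllDefaultSources: true
```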
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
4.10.3. Mirroring an Operator catalog
For instructions about mirroring Operator catalogs for use with disconnected clusters, see Installing → Mirroring images for a disconnected installation.
As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format.
The `opm` subcommands, flags, and functionality related to the SQLite database format are deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.

Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format.
4.10.4. Adding a catalog source to a cluster
Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a `CatalogSource` object that references an index image.
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
Prerequisites
- An index image built and pushed to a registry.
Procedure
Create a `CatalogSource` object that references your index image. If you used the `oc adm catalog mirror` command to mirror your catalog to a target registry, you can use the generated `catalogSource.yaml` file in your manifests directory as a starting point.

Modify the following to your specifications and save it as a `catalogSource.yaml` file:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog 1
  namespace: openshift-marketplace 2
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode> 3
  image: <registry>/<namespace>/redhat-operator-index:v4.12 4
  displayName: My Operator Catalog
  publisher: <publisher_name> 5
  updateStrategy:
    registryPoll: 6
      interval: 30m

1 - If you mirrored content to local files before uploading to a registry, remove any slash (/) characters from the `metadata.name` field to avoid an "invalid resource name" error when you create the object.
2 - If you want the catalog source to be available globally to users in all namespaces, specify the `openshift-marketplace` namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
3 - Specify the value of `legacy` or `restricted`. If the field is not set, the default value is `legacy`. In a future OpenShift Container Platform release, it is planned that the default value will be `restricted`. Note: If your catalog cannot run with `restricted` permissions, it is recommended that you manually set this field to `legacy`.
4 - Specify your index image. If you specify a tag after the image name, for example `:v4.12`, the catalog source pod uses an image pull policy of `Always`, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example `@sha256:<id>`, the image pull policy is `IfNotPresent`, meaning the pod pulls the image only if it does not already exist on the node.
5 - Specify your name or an organization name publishing the catalog.
6 - Catalog sources can automatically check for new versions to keep up to date.
Use the file to create the `CatalogSource` object:

$ oc apply -f catalogSource.yaml
Verify the following resources are created successfully.
Check the pods:

$ oc get pods -n openshift-marketplace

Example output

NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h

Check the catalog source:

$ oc get catalogsource -n openshift-marketplace

Example output

NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s

Check the package manifest:

$ oc get packagemanifest -n openshift-marketplace

Example output

NAME             CATALOG               AGE
jaeger-product   My Operator Catalog   93s
You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
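Once the package manifests resolve, users can also subscribe to an Operator from the custom catalog with a `Subscription` object; the following sketch uses the `jaeger-product` package shown above, and the channel name is an assumption to verify against the package manifest:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger-product
  namespace: openshift-operators
spec:
  channel: stable                         # assumed channel; confirm with
                                          # `oc describe packagemanifest jaeger-product`
  name: jaeger-product
  source: my-operator-catalog             # the CatalogSource created above
  sourceNamespace: openshift-marketplace
```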
4.11. Catalog source pod scheduling
When an Operator Lifecycle Manager (OLM) catalog source of source type `grpc` defines a `spec.image`, the Catalog Operator creates a pod that serves the defined image content. By default, this pod defines the following in its specification:

- Only the `kubernetes.io/os=linux` node selector
- No priority class name
- No tolerations

As an administrator, you can override these values by modifying fields in the `CatalogSource` object's optional `spec.grpcPodConfig` section.
4.11.1. Overriding the node selector for catalog source pods
Prerequisites

- `CatalogSource` object of source type `grpc` with `spec.image` defined
Procedure
Edit the `CatalogSource` object and add or modify the `spec.grpcPodConfig` section to include the following:

grpcPodConfig:
  nodeSelector:
    custom_label: <label>

where `<label>` is the label for the node selector that you want catalog source pods to use for scheduling.
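As a concrete sketch, a catalog source pinned to infrastructure nodes might look like the following; the `node-role.kubernetes.io/infra` label is an assumption about how your nodes are labeled:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <registry>/<namespace>/redhat-operator-index:v4.12
  grpcPodConfig:
    nodeSelector:
      # Schedule catalog source pods only onto nodes carrying this label
      node-role.kubernetes.io/infra: ""
```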
4.11.2. Overriding the priority class name for catalog source pods
Prerequisites

- `CatalogSource` object of source type `grpc` with `spec.image` defined
Procedure
Edit the `CatalogSource` object and add or modify the `spec.grpcPodConfig` section to include the following:

grpcPodConfig:
  priorityClassName: <priority_class>

where `<priority_class>` is one of the following:

- One of the default priority classes provided by Kubernetes: `system-cluster-critical` or `system-node-critical`
- An empty set (`""`) to assign the default priority
- A pre-existing and custom defined priority class
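For example, to run catalog source pods with one of the Kubernetes-provided classes, the section might look like this sketch:

```yaml
spec:
  grpcPodConfig:
    # Kubernetes-provided class; protects the pod from eviction
    # ahead of lower-priority workloads
    priorityClassName: system-cluster-critical
```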
Previously, the only pod scheduling parameter that could be overridden was `priorityClassName`. This was done by adding the `operatorframework.io/priorityclass` annotation to the `CatalogSource` object. For example:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
  annotations:
    operatorframework.io/priorityclass: system-cluster-critical
If a `CatalogSource` object defines both the annotation and `spec.grpcPodConfig.priorityClassName`, the annotation takes precedence over the configuration parameter.
4.11.3. Overriding tolerations for catalog source pods
Prerequisites

- `CatalogSource` object of source type `grpc` with `spec.image` defined
Procedure
Edit the `CatalogSource` object and add or modify the `spec.grpcPodConfig` section to include the following:

grpcPodConfig:
  tolerations:
  - key: "<key_name>"
    operator: "<operator_type>"
    value: "<value>"
    effect: "<effect>"
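A filled-in sketch, assuming dedicated nodes carry a hypothetical `catalog=true:NoSchedule` taint; the key, value, and effect are illustrative, not values from the product:

```yaml
grpcPodConfig:
  tolerations:
  # Allow catalog source pods onto nodes tainted catalog=true:NoSchedule
  - key: "catalog"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```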
4.12. Managing platform Operators (Technology Preview)
A platform Operator is an OLM-based Operator that can be installed during or after an OpenShift Container Platform cluster’s Day 0 operations and participates in the cluster’s lifecycle. As a cluster administrator, you can manage platform Operators by using the `PlatformOperator` API.
The platform Operator type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.12.1. About platform Operators
Operator Lifecycle Manager (OLM) introduces a new type of Operator called platform Operators. A platform Operator is an OLM-based Operator that can be installed during or after an OpenShift Container Platform cluster’s Day 0 operations and participates in the cluster’s lifecycle. As a cluster administrator, you can use platform Operators to further customize your OpenShift Container Platform installation to meet your requirements and use cases.
Using the existing cluster capabilities feature in OpenShift Container Platform, cluster administrators can already disable a subset of Cluster Version Operator-based (CVO) components considered non-essential to the initial payload prior to cluster installation. Platform Operators iterate on this model by providing additional customization options. Through the platform Operator mechanism, which relies on resources from the RukPak component, OLM-based Operators can now be installed at cluster installation time and can block cluster rollout if the Operator fails to install successfully.
In OpenShift Container Platform 4.12, this Technology Preview release focuses on the basic platform Operator mechanism and builds a foundation for expanding the concept in upcoming releases. You can use the cluster-wide `PlatformOperator` API to configure Operators before or after cluster creation on clusters that have enabled the `TechPreviewNoUpgrade` feature set.
4.12.1.1. Technology Preview restrictions for platform Operators
During the Technology Preview release of the platform Operators feature in OpenShift Container Platform 4.12, the following restrictions determine whether an Operator can be installed through the platform Operators mechanism:
- Kubernetes manifests must be packaged using the Operator Lifecycle Manager (OLM) `registry+v1` bundle format.
- The Operator cannot declare package or group/version/kind (GVK) dependencies.
- The Operator cannot specify cluster service version (CSV) install modes other than `AllNamespaces`.
- The Operator cannot specify any `Webhook` or `APIService` definitions.
- All package bundles must be in the `redhat-operators` catalog source.
After considering these restrictions, the following Operators can be successfully installed:
| 3scale-operator | amq-broker-rhel8 |
| amq-online | amq-streams |
| ansible-cloud-addons-operator | apicast-operator |
| container-security-operator | eap |
| file-integrity-operator | gatekeeper-operator-product |
| integration-operator | jws-operator |
| kiali-ossm | node-healthcheck-operator |
| odf-csi-addons-operator | odr-hub-operator |
| openshift-custom-metrics-autoscaler-operator | openshift-gitops-operator |
| openshift-pipelines-operator-rh | quay-operator |
| red-hat-camel-k | rhpam-kogito-operator |
| service-registry-operator | servicemeshoperator |
| skupper-operator |
The following features are not available during this Technology Preview release:
- Automatically upgrading platform Operator packages after cluster rollout
- Extending the platform Operator mechanism to support any optional, CVO-based components
4.12.2. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with `cluster-admin` permissions.
- The `TechPreviewNoUpgrade` feature set enabled on the cluster.

  Warning: Enabling the `TechPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.

- Only the `redhat-operators` catalog source enabled on the cluster. This is a restriction during the Technology Preview release.
- The `oc` command installed on your workstation.
4.12.3. Installing platform Operators during cluster creation
As a cluster administrator, you can install platform Operators by providing `FeatureGate` and `PlatformOperator` manifests during cluster creation.
Procedure
- Choose a platform Operator from the supported set of OLM-based Operators. For the list of this set and details on current limitations, see "Technology Preview restrictions for platform Operators".
- Select a cluster installation method and follow the instructions through creating an `install-config.yaml` file. For more details on preparing for a cluster installation, see "Selecting a cluster installation method and preparing it for users".

After you have created the `install-config.yaml` file and completed any modifications to it, change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir <installation_directory> 1

1 - For `<installation_directory>`, specify the name of the directory that contains the `install-config.yaml` file for your cluster.
Create a `FeatureGate` object YAML file in the `<installation_directory>/manifests/` directory that enables the `TechPreviewNoUpgrade` feature set, for example a `feature-gate.yaml` file:

Example `feature-gate.yaml` file

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    release.openshift.io/create-only: "true"
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade 1

1 - Enable the `TechPreviewNoUpgrade` feature set.
Create a `PlatformOperator` object YAML file for your chosen platform Operator in the `<installation_directory>/manifests/` directory, for example a `service-mesh-po.yaml` file for the Red Hat OpenShift Service Mesh Operator:

Example `service-mesh-po.yaml` file

apiVersion: platform.openshift.io/v1alpha1
kind: PlatformOperator
metadata:
  name: service-mesh-po
spec:
  package:
    name: servicemeshoperator

When you are ready to complete the cluster install, refer to your chosen installation method and continue through running the `openshift-install create cluster` command.

During cluster creation, your provided manifests are used to enable the `TechPreviewNoUpgrade` feature set and install your chosen platform Operator.

Important: Failure of the platform Operator to successfully install will block the cluster installation process.
Verification
Check the status of the `service-mesh-po` platform Operator by running the following command:

$ oc get platformoperator service-mesh-po -o yaml

Example output

...
status:
  activeBundleDeployment:
    name: service-mesh-po
  conditions:
  - lastTransitionTime: "2022-10-24T17:24:40Z"
    message: Successfully applied the service-mesh-po BundleDeployment resource
    reason: InstallSuccessful
    status: "True" 1
    type: Installed

1 - Wait until the `Installed` status condition reports `True`.
Verify that the `platform-operators-aggregated` cluster Operator is reporting an `Available=True` status:

$ oc get clusteroperator platform-operators-aggregated -o yaml

Example output

...
status:
  conditions:
  - lastTransitionTime: "2022-10-24T17:43:26Z"
    message: All platform operators are in a successful state
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2022-10-24T17:43:26Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2022-10-24T17:43:26Z"
    message: All platform operators are in a successful state
    reason: AsExpected
    status: "True"
    type: Available
4.12.4. Installing platform Operators after cluster creation
As a cluster administrator, you can install platform Operators after cluster creation on clusters that have enabled the `TechPreviewNoUpgrade` feature set by using the `PlatformOperator` API.
Procedure
- Choose a platform Operator from the supported set of OLM-based Operators. For the list of this set and details on current limitations, see "Technology Preview restrictions for platform Operators".
Create a `PlatformOperator` object YAML file for your chosen platform Operator, for example a `service-mesh-po.yaml` file for the Red Hat OpenShift Service Mesh Operator:

Example `service-mesh-po.yaml` file

apiVersion: platform.openshift.io/v1alpha1
kind: PlatformOperator
metadata:
  name: service-mesh-po
spec:
  package:
    name: servicemeshoperator

Create the `PlatformOperator` object by running the following command:

$ oc apply -f service-mesh-po.yaml

Note: If your cluster does not have the `TechPreviewNoUpgrade` feature set enabled, the object creation fails with the following message:

error: resource mapping not found for name: "service-mesh-po" namespace: "" from "service-mesh-po.yaml": no matches for kind "PlatformOperator" in version "platform.openshift.io/v1alpha1"
ensure CRDs are installed first
Verification
Check the status of the `service-mesh-po` platform Operator by running the following command:

$ oc get platformoperator service-mesh-po -o yaml

Example output

...
status:
  activeBundleDeployment:
    name: service-mesh-po
  conditions:
  - lastTransitionTime: "2022-10-24T17:24:40Z"
    message: Successfully applied the service-mesh-po BundleDeployment resource
    reason: InstallSuccessful
    status: "True" 1
    type: Installed

1 - Wait until the `Installed` status condition reports `True`.
Verify that the `platform-operators-aggregated` cluster Operator is reporting an `Available=True` status:

$ oc get clusteroperator platform-operators-aggregated -o yaml

Example output

...
status:
  conditions:
  - lastTransitionTime: "2022-10-24T17:43:26Z"
    message: All platform operators are in a successful state
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2022-10-24T17:43:26Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2022-10-24T17:43:26Z"
    message: All platform operators are in a successful state
    reason: AsExpected
    status: "True"
    type: Available
4.12.5. Deleting platform Operators
As a cluster administrator, you can delete existing platform Operators. Operator Lifecycle Manager (OLM) performs a cascading deletion. First, OLM removes the bundle deployment for the platform Operator, which then deletes any objects referenced in the `registry+v1` bundle.
The platform Operator manager and bundle deployment provisioner only manage objects that are referenced in the bundle, but not objects subsequently deployed by any bundle workloads themselves. For example, if a bundle workload creates a namespace and the Operator is not configured to clean it up before the Operator is removed, it is outside of the scope of OLM to remove the namespace during platform Operator deletion.
Procedure
Get a list of installed platform Operators and find the name for the Operator you want to delete:

$ oc get platformoperator

Delete the `PlatformOperator` resource for the chosen Operator, for example, for the Quay Operator:

$ oc delete platformoperator quay-operator

Example output

platformoperator.platform.openshift.io "quay-operator" deleted
Verification
Verify the namespace for the platform Operator is eventually deleted, for example, for the Quay Operator:

$ oc get ns quay-operator-system

Example output

Error from server (NotFound): namespaces "quay-operator-system" not found

Verify the `platform-operators-aggregated` cluster Operator continues to report an `Available=True` status:

$ oc get co platform-operators-aggregated

Example output

NAME                            VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
platform-operators-aggregated   4.12.0-0   True        False         False      70s