Chapter 4. Administrator tasks
4.1. Adding Operators to a cluster
Cluster administrators can install Operators to an OpenShift Container Platform cluster by subscribing Operators to namespaces with OperatorHub.
4.1.1. About Operator installation with OperatorHub
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI.
During installation, you must determine the following initial settings for the Operator:
- Installation Mode: Choose a specific namespace in which to install the Operator.
- Update Channel: If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
- Approval Strategy: You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
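These settings are also visible from the CLI once a subscription exists; a minimal sketch, assuming an Operator already subscribed in the openshift-operators namespace (the subscription name is a placeholder):
# Show the approval strategy recorded in the subscription
$ oc get subscription <subscription_name> -n openshift-operators \
    -o jsonpath='{.spec.installPlanApproval}{"\n"}'

# List install plans; with Manual approval, new plans wait with APPROVED set to false
$ oc get installplan -n openshift-operators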
4.1.2. Installing from OperatorHub using the web console
You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
- Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
- Select the Operator to display additional information.
  Note: Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
- Read the information about the Operator and click Install.
On the Install Operator page:
- Select one of the following installation modes:
  - All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
  - A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator only watches and is made available for use in this single namespace.
- Select an Update Channel (if more than one is available).
- Select Automatic or Manual approval strategy, as described earlier.
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.
- If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
- If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
- After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.
  Note: For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.
  If it does not:
  - Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
4.1.3. Installing from OperatorHub using the CLI
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- The oc command installed on your local system.
Procedure
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
Example output
NAME                              CATALOG               AGE
3scale-operator                   Red Hat Operators     91m
advanced-cluster-management       Red Hat Operators     91m
amq7-cert-manager                 Red Hat Operators     91m
...
couchbase-enterprise-certified    Certified Operators   91m
crunchy-postgres-operator         Certified Operators   91m
mongodb-enterprise                Certified Operators   91m
...
etcd                              Community Operators   91m
jaeger                            Community Operators   91m
kubefed                           Community Operators   91m
...
Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
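If you only need the channel names or the default channel rather than the full describe output, the same details can be read from the package manifest status with jsonpath; a minimal sketch (the exact field paths follow the PackageManifest API and are stated here as an assumption):
# List the channels the package provides
$ oc get packagemanifests <operator_name> -n openshift-marketplace \
    -o jsonpath='{.status.channels[*].name}{"\n"}'

# Show the package's default channel
$ oc get packagemanifests <operator_name> -n openshift-marketplace \
    -o jsonpath='{.status.defaultChannel}{"\n"}'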
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.
However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.
Note: The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:
Example Subscription object
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators 1
spec:
  channel: <channel_name> 2
  name: <operator_name> 3
  source: redhat-operators 4
  sourceNamespace: openshift-marketplace 5
  config:
    env: 6
    - name: ARGS
      value: "-v=10"
    envFrom: 7
    - secretRef:
        name: license-secret
    volumes: 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: 10
    - operator: "Exists"
    resources: 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: 12
      foo: bar
1. For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
2. Name of the channel to subscribe to.
3. Name of the Operator to subscribe to.
4. Name of the catalog source that provides the Operator.
5. Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6. The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
7. The envFrom parameter defines a list of sources to populate environment variables in the container.
8. The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
9. The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10. The tolerations parameter defines a list of tolerations for the pod created by OLM.
11. The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12. The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
Create the Subscription object:
$ oc apply -f sub.yaml
At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
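A quick CLI check can confirm this; a minimal sketch, assuming the AllNamespaces example values from the callouts above:
# The subscription should reference a current CSV once resolution completes
$ oc get subscription <subscription_name> -n openshift-operators

# The CSV phase should eventually reach Succeeded
$ oc get csv -n openshift-operators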
4.1.4. Installing a specific version of an Operator
You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- OpenShift CLI (oc) installed.
Procedure
Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0:
Subscription with a specific starting Operator version
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: quay
spec:
  channel: quay-v3.4
  installPlanApproval: Manual 1
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: quay-operator.v3.4.0 2
1. Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
2. Set a specific version of an Operator CSV.
Create the Subscription object:
$ oc apply -f sub.yaml
- Manually approve the pending install plan to complete the Operator installation, as shown in the sketch below.
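If you prefer the CLI to the web console for this approval, patching the install plan is one way to do it; a minimal sketch, assuming the quay namespace from the example (the install plan name is a placeholder discovered by the first command):
# Find the install plan waiting for approval (APPROVED shows false)
$ oc get installplan -n quay

# Approve it so installation of the pinned CSV can proceed
$ oc patch installplan <install_plan_name> -n quay \
    --type merge -p '{"spec":{"approved":true}}'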
4.1.5. Pod placement of Operator workloads
By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.
Controlling pod placement of Operator and Operand workloads has the following prerequisites:
- Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a machine set or editing the node directly. You will use this label in a later step as the node selector on your project.
- If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain.
- Create a project that is configured with a default node selector and, if you added a taint, a matching toleration. A CLI sketch of these steps follows this list.
At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:
- For Operator pods: Administrators can create a Subscription object in the project. As a result, the Operator pods are placed on the specified nodes.
- For Operand pods: Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.
Additional resources
- Adding taints and tolerations manually to nodes or with machine sets
- Creating project-wide node selectors
- Creating a project with a node selector and toleration
4.2. Updating installed Operators
As a cluster administrator, you can update Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.
4.2.1. Preparing for an Operator update
The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel.
The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a release frequency (stable, fast).
You cannot change installed Operators to a channel that is older than the current channel.
Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators:
You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included.
4.2.2. Changing the update channel for an Operator
You can change the update channel for an Operator by using the OpenShift Container Platform web console.
If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the web console, navigate to Operators → Installed Operators.
- Click the name of the Operator you want to change the update channel for.
- Click the Subscription tab.
- Click the name of the update channel under Channel.
- Click the newer update channel that you want to change to, then click Save.
- For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date. For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab.
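The same channel change can be made from the CLI by patching the subscription; a minimal sketch (names are placeholders):
$ oc patch subscription <subscription_name> -n <namespace> \
    --type merge -p '{"spec":{"channel":"<new_channel_name>"}}'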
4.2.3. Manually approving a pending Operator update
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators. Operators that have a pending update display a status with Upgrade available.
- Click the name of the Operator you want to update.
- Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for update. When satisfied, click Approve.
- Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
4.3. Deleting Operators from a cluster
The following describes how to delete, or uninstall, Operators that were previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.
You must successfully and completely uninstall an Operator prior to attempting to reinstall the same Operator. Failure to fully uninstall the Operator properly can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages to be observed when trying to reinstall the Operator. For more information, see Reinstalling Operators after failed uninstallation.
4.3.1. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
- On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed.
- Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
  Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual cleanup. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
4.3.2. Deleting Operators from a cluster using the CLI
Cluster administrators can delete installed Operators from a selected namespace by using the CLI.
Prerequisites
-
Access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions. -
oc
command installed on workstation.
Procedure
Check the current version of the subscribed Operator (for example, jaeger) in the currentCSV field:
$ oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV
Example output
currentCSV: jaeger-operator.v1.8.2
Delete the subscription (for example, jaeger):
$ oc delete subscription jaeger -n openshift-operators
Example output
subscription.operators.coreos.com "jaeger" deleted
Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:
$ oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators
Example output
clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted
4.3.3. Refreshing failing subscriptions
In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace
namespace that are failing with the following errors:
Example output
ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"
Example output
rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.
Prerequisites
- You have a failing subscription that is unable to pull an inaccessible bundle image.
- You have confirmed that the correct bundle image is accessible.
Procedure
Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed:
$ oc get sub,csv -n <namespace>
Example output
NAME                                                        PACKAGE                  SOURCE             CHANNEL
subscription.operators.coreos.com/elasticsearch-operator    elasticsearch-operator   redhat-operators   5.0

NAME                                                                          DISPLAY                            VERSION    REPLACES   PHASE
clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   OpenShift Elasticsearch Operator   5.0.0-65              Succeeded
Delete the subscription:
$ oc delete subscription <subscription_name> -n <namespace>
Delete the cluster service version:
$ oc delete csv <csv_name> -n <namespace>
Get the names of any failing jobs and related config maps in the openshift-marketplace namespace:
$ oc get job,configmap -n openshift-marketplace
Example output
NAME                                                                        COMPLETIONS   DURATION   AGE
job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s

NAME                                                                        DATA   AGE
configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s
Delete the job:
$ oc delete job <job_name> -n openshift-marketplace
This ensures pods that try to pull the inaccessible image are not recreated.
Delete the config map:
$ oc delete configmap <configmap_name> -n openshift-marketplace
- Reinstall the Operator using OperatorHub in the web console.
Verification
Check that the Operator has been reinstalled successfully:
$ oc get sub,csv,installplan -n <namespace>
4.4. Configuring Operator Lifecycle Manager features
The Operator Lifecycle Manager (OLM) controller is configured by an OLMConfig custom resource (CR) named cluster. Cluster administrators can modify this resource to enable or disable certain features.
This document outlines the features currently supported by OLM that are configured by the OLMConfig resource.
4.4.1. Disabling copied CSVs
When an Operator is installed by Operator Lifecycle Manager (OLM), a simplified copy of its cluster service version (CSV) is created in every namespace that the Operator is configured to watch. These CSVs are known as copied CSVs and communicate to users which controllers are actively reconciling resource events in a given namespace.
When Operators are configured to use the AllNamespaces install mode, versus targeting a single or specified set of namespaces, a copied CSV is created in every namespace on the cluster. On especially large clusters, with namespaces and installed Operators potentially in the hundreds or thousands, copied CSVs consume an untenable amount of resources, such as OLM’s memory usage, cluster etcd limits, and networking.
To support these larger clusters, cluster administrators can disable copied CSVs for Operators installed with the AllNamespaces mode.
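To gauge how many CSV entries, copied or otherwise, currently exist across the cluster before disabling the feature, a rough count is enough; a minimal sketch:
$ oc get clusterserviceversions --all-namespaces --no-headers | wc -l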
If you disable copied CSVs, a user’s ability to discover Operators in the OperatorHub and CLI is limited to Operators installed directly in the user’s namespace.
If an Operator is configured to reconcile events in the user’s namespace but is installed in a different namespace, the user cannot view the Operator in the OperatorHub or CLI. Operators affected by this limitation are still available and continue to reconcile events in the user’s namespace.
This behavior occurs for the following reasons:
- Copied CSVs identify the Operators available for a given namespace.
- Role-based access control (RBAC) scopes the user’s ability to view and discover Operators in the OperatorHub and CLI.
Procedure
Edit the OLMConfig object named cluster and set the spec.features.disableCopiedCSVs field to true:
$ oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true 1
EOF
1. Disables copied CSVs for AllNamespaces install mode Operators.
Verification
When copied CSVs are disabled, OLM captures this information in an event in the Operator’s namespace:
$ oc get events
Example output
LAST SEEN   TYPE      REASON               OBJECT                                MESSAGE
85s         Warning   DisabledCopiedCSVs   clusterserviceversion/my-csv.v1.0.0   CSV copying disabled for operators/my-csv.v1.0.0
When the spec.features.disableCopiedCSVs field is missing or set to false, OLM recreates the copied CSVs for all Operators installed with the AllNamespaces mode and deletes the previously mentioned events.
4.5. Configuring proxy support in Operator Lifecycle Manager
If a global proxy is configured on the OpenShift Container Platform cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate.
Additional resources
- Configuring the cluster-wide proxy
- Configuring a custom PKI (custom CA certificate)
- Developing Operators that support proxy settings for Go, Ansible, and Helm
4.5.1. Overriding proxy settings of an Operator
If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Cluster administrators can also override these proxy settings by configuring the subscription of an Operator.
Operators must handle setting environment variables for proxy settings in the pods for any managed Operands.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
- Select the Operator and click Install.
- On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section:
  - HTTP_PROXY
  - HTTPS_PROXY
  - NO_PROXY
For example:
Subscription object with proxy setting overrides
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: openshift-operators
spec:
  config:
    env:
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: etcdoperator.v0.9.4-clusterwide
Note: These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings. OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator.
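As the note describes, empty values clear the proxy configuration entirely for the Operator's deployments; a minimal sketch of the relevant spec.config fragment:
spec:
  config:
    env:
    - name: HTTP_PROXY
      value: ""
    - name: HTTPS_PROXY
      value: ""
    - name: NO_PROXY
      value: ""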
- Click Install to make the Operator available to the selected namespaces.
After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI:
$ oc get deployment -n openshift-operators \
    etcd-operator -o yaml \
    | grep -i "PROXY" -A 2
Example output
- name: HTTP_PROXY
  value: test_http
- name: HTTPS_PROXY
  value: test_https
- name: NO_PROXY
  value: test
image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c
...
4.5.2. Injecting a custom CA certificate
When a cluster administrator adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Custom CA certificate added to the cluster using a config map.
- Desired Operator installed and running on OLM.
Procedure
Create an empty config map in the namespace where the subscription for your Operator exists and include the following label:
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca 1
  labels:
    config.openshift.io/inject-trusted-cabundle: "true" 2
1. Name of the config map.
2. Requests the Cluster Network Operator to inject the merged bundle.
After creating this config map, it is immediately populated with the certificate contents of the merged bundle.
Update the Subscription object to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
spec:
  package: etcd
  channel: alpha
  config: 1
    selector:
      matchLabels:
        <labels_for_pods> 2
    volumes: 3
    - name: trusted-ca
      configMap:
        name: trusted-ca
        items:
          - key: ca-bundle.crt 4
            path: tls-ca-bundle.pem 5
    volumeMounts: 6
    - name: trusted-ca
      mountPath: /etc/pki/ca-trust/extracted/pem
      readOnly: true
Note: Deployments of an Operator can fail to validate the authority and display a x509 certificate signed by unknown authority error. This error can occur even after injecting a custom CA when using the subscription of an Operator. In this case, you can set the mountPath of trusted-ca to /etc/ssl/certs by using the subscription of the Operator.
4.6. Viewing Operator status
Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the health of their Operators.
4.6.1. Operator subscription condition types
Subscriptions can report the following condition types:
Condition | Description |
---|---|
CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy. |
InstallPlanMissing | An install plan for a subscription is missing. |
InstallPlanPending | An install plan for a subscription is pending installation. |
InstallPlanFailed | An install plan for a subscription has failed. |
ResolutionFailed | The dependency resolution for a subscription has failed. |
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.6.2. Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List Operator subscriptions:
$ oc get subs -n <operator_namespace>
Use the oc describe command to inspect a Subscription resource:
$ oc describe sub <subscription_name> -n <operator_namespace>
In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:
Example output
Conditions:
   Last Transition Time:  2019-07-29T13:42:57Z
   Message:               all available catalogsources are healthy
   Reason:                AllCatalogSourcesHealthy
   Status:                False
   Type:                  CatalogSourcesUnhealthy
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.6.3. Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:
$ oc get catalogsources -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER     AGE
certified-operators   Certified Operators   grpc   Red Hat       55m
community-operators   Community Operators   grpc   Red Hat       55m
example-catalog       Example Catalog       grpc   Example Org   2m25s
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat       55m
redhat-operators      Red Hat Operators     grpc   Red Hat       55m
Use the oc describe command to get more details and status about a catalog source:
$ oc describe catalogsource example-catalog -n openshift-marketplace
Example output
Name:         example-catalog
Namespace:    openshift-marketplace
...
Status:
  Connection State:
    Address:              example-catalog.openshift-marketplace.svc:50051
    Last Connect:         2021-09-09T17:07:35Z
    Last Observed State:  TRANSIENT_FAILURE
  Registry Service:
    Created At:         2021-09-09T17:05:45Z
    Port:               50051
    Protocol:           grpc
    Service Name:       example-catalog
    Service Namespace:  openshift-marketplace
In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.
List the pods in the namespace where your catalog source was created:
$ oc get pods -n openshift-marketplace
Example output
NAME                                    READY   STATUS             RESTARTS   AGE
certified-operators-cv9nn               1/1     Running            0          36m
community-operators-6v8lp               1/1     Running            0          36m
marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
redhat-marketplace-57p8c                1/1     Running            0          36m
redhat-operators-smxx8                  1/1     Running            0          36m
When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source’s index image.
Use the oc describe command to inspect a pod for more detailed information:
$ oc describe pod example-catalog-bwt8z -n openshift-marketplace
Example output
Name:           example-catalog-bwt8z
Namespace:      openshift-marketplace
Priority:       0
Node:           ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
...
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
  Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
  Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
  Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull
In the preceding example output, the error messages indicate that the catalog source’s index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
4.7. Managing Operator conditions
As a cluster administrator, you can manage Operator conditions by using Operator Lifecycle Manager (OLM).
4.7.1. Overriding Operator conditions
As a cluster administrator, you might want to ignore a supported Operator condition reported by an Operator. When present, Operator conditions in the Spec.Overrides array override the conditions in the Spec.Conditions array, allowing cluster administrators to deal with situations where an Operator is incorrectly reporting a state to Operator Lifecycle Manager (OLM).
By default, the Spec.Overrides array is not present in an OperatorCondition object until it is added by a cluster administrator. The Spec.Conditions array is also not present until it is either added by a user or as a result of custom Operator logic.
For example, consider a known version of an Operator that always communicates that it is not upgradeable. In this instance, you might want to upgrade the Operator despite the Operator communicating that it is not upgradeable. This could be accomplished by overriding the Operator condition by adding the condition type and status to the Spec.Overrides array in the OperatorCondition object.
Prerequisites
- An Operator with an OperatorCondition object, installed using OLM.
Procedure
Edit the OperatorCondition object for the Operator:
$ oc edit operatorcondition <name>
Add a Spec.Overrides array to the object:
Example Operator condition override
apiVersion: operators.coreos.com/v1
kind: OperatorCondition
metadata:
  name: my-operator
  namespace: operators
spec:
  overrides:
  - type: Upgradeable 1
    status: "True"
    reason: "upgradeIsSafe"
    message: "This is a known issue with the Operator where it always reports that it cannot be upgraded."
  conditions:
  - type: Upgradeable
    status: "False"
    reason: "migration"
    message: "The operator is performing a migration."
    lastTransitionTime: "2020-08-24T23:15:55Z"
1. Allows the cluster administrator to change the upgrade readiness to True.
4.7.2. Updating your Operator to use Operator conditions
Operator Lifecycle Manager (OLM) automatically creates an OperatorCondition resource for each ClusterServiceVersion resource that it reconciles. All service accounts in the CSV are granted the RBAC to interact with the OperatorCondition owned by the Operator.
An Operator author can develop their Operator to use the operator-lib library such that, after the Operator has been deployed by OLM, it can set its own conditions. For more resources about setting Operator conditions as an Operator author, see the Enabling Operator conditions page.
4.7.2.1. Setting defaults
In an effort to remain backwards compatible, OLM treats the absence of an OperatorCondition resource as opting out of the condition. Therefore, an Operator that opts in to using Operator conditions should set default conditions before the ready probe for the pod is set to true. This provides the Operator with a grace period to update the condition to the correct state.
4.7.3. Additional resources
4.8. Allowing non-cluster administrators to install Operators
Cluster administrators can use Operator groups to allow regular users to install Operators.
4.8.1. Understanding Operator installation policy
Operators can require wide privileges to run, and the required privileges can change between versions. Operator Lifecycle Manager (OLM) runs with cluster-admin privileges. By default, Operator authors can specify any set of permissions in the cluster service version (CSV), and OLM consequently grants it to the Operator.
To ensure that an Operator cannot achieve cluster-scoped privileges and that users cannot escalate privileges using OLM, cluster administrators can manually audit Operators before they are added to the cluster. Cluster administrators are also provided tools for determining and constraining which actions are allowed during an Operator installation or upgrade using service accounts.
Cluster administrators can associate an Operator group with a service account that has a set of privileges granted to it. The service account sets policy on Operators to ensure they only run within predetermined boundaries by using role-based access control (RBAC) rules. As a result, the Operator is unable to do anything that is not explicitly permitted by those rules.
By employing Operator groups, users with enough privileges can install Operators with a limited scope. As a result, more of the Operator Framework tools can safely be made available to more users, providing a richer experience for building applications with Operators.
Role-based access control (RBAC) for Subscription objects is automatically granted to every user with the edit or admin role in a namespace. However, RBAC does not exist on OperatorGroup objects; this absence is what prevents regular users from installing Operators. Pre-installing Operator groups is effectively what gives installation privileges.
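In practice, pre-installing an Operator group means a cluster administrator creates the OperatorGroup object in the target namespace once; afterward, users holding the edit or admin role there can create Subscription objects themselves. A minimal sketch, assuming a hypothetical namespace named team-apps:
$ cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: team-apps-og
  namespace: team-apps
spec:
  targetNamespaces:
  - team-apps
EOF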
Keep the following points in mind when associating an Operator group with a service account:
- The APIService and CustomResourceDefinition resources are always created by OLM using the cluster-admin role. A service account associated with an Operator group should never be granted privileges to write these resources.
- Any Operator tied to this Operator group is now confined to the permissions granted to the specified service account. If the Operator asks for permissions that are outside the scope of the service account, the install fails with appropriate errors so the cluster administrator can troubleshoot and resolve the issue.
4.8.1.1. Installation scenarios
When determining whether an Operator can be installed or upgraded on a cluster, Operator Lifecycle Manager (OLM) considers the following scenarios:
- A cluster administrator creates a new Operator group and specifies a service account. All Operator(s) associated with this Operator group are installed and run against the privileges granted to the service account.
- A cluster administrator creates a new Operator group and does not specify any service account. OpenShift Container Platform maintains backward compatibility, so the default behavior remains and Operator installs and upgrades are permitted.
- For existing Operator groups that do not specify a service account, the default behavior remains and Operator installs and upgrades are permitted.
- A cluster administrator updates an existing Operator group and specifies a service account. OLM allows the existing Operator to continue to run with its current privileges. When such an existing Operator is going through an upgrade, it is reinstalled and run against the privileges granted to the service account, like any new Operator.
- A service account specified by an Operator group changes by adding or removing permissions, or the existing service account is swapped with a new one. When existing Operators go through an upgrade, they are reinstalled and run against the privileges granted to the updated service account, like any new Operator.
- A cluster administrator removes the service account from an Operator group. The default behavior remains and Operator installs and upgrades are permitted.
4.8.1.2. Installation workflow
When an Operator group is tied to a service account and an Operator is installed or upgraded, Operator Lifecycle Manager (OLM) uses the following workflow:
- The given Subscription object is picked up by OLM.
- OLM fetches the Operator group tied to this subscription.
- OLM determines that the Operator group has a service account specified.
- OLM creates a client scoped to the service account and uses the scoped client to install the Operator. This ensures that any permission requested by the Operator is always confined to that of the service account in the Operator group.
- OLM creates a new service account with the set of permissions specified in the CSV and assigns it to the Operator. The Operator runs as the assigned service account.
4.8.2. Scoping Operator installations
To provide scoping rules to Operator installations and upgrades on Operator Lifecycle Manager (OLM), associate a service account with an Operator group.
Using this example, a cluster administrator can confine a set of Operators to a designated namespace.
Procedure
Create a new namespace:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: scoped
EOF
Allocate permissions that you want the Operator(s) to be confined to. This involves creating a new service account, relevant role(s), and role binding(s).
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: scoped
  namespace: scoped
EOF
The following example grants the service account permissions to do anything in the designated namespace for simplicity. In a production environment, you should create a more fine-grained set of permissions:
$ cat <<EOF | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: scoped
  namespace: scoped
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: scoped-bindings
  namespace: scoped
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: scoped
subjects:
- kind: ServiceAccount
  name: scoped
  namespace: scoped
EOF
Create an OperatorGroup object in the designated namespace. This Operator group targets the designated namespace to ensure that its tenancy is confined to it.
In addition, Operator groups allow a user to specify a service account. Specify the service account created in the previous step:
$ cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: scoped
  namespace: scoped
spec:
  serviceAccountName: scoped
  targetNamespaces:
  - scoped
EOF
Any Operator installed in the designated namespace is tied to this Operator group and therefore to the service account specified.
Create a Subscription object in the designated namespace to install an Operator:
$ cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: scoped
spec:
  channel: singlenamespace-alpha
  name: etcd
  source: <catalog_source_name> 1
  sourceNamespace: <catalog_source_namespace> 2
EOF
1. Name of the catalog source that provides the Operator.
2. Namespace of the catalog source.
Any Operator tied to this Operator group is confined to the permissions granted to the specified service account. If the Operator requests permissions that are outside the scope of the service account, the installation fails with relevant errors.
4.8.2.1. Fine-grained permissions
Operator Lifecycle Manager (OLM) uses the service account specified in an Operator group to create or update the following resources related to the Operator being installed:
- ClusterServiceVersion
- Subscription
- Secret
- ServiceAccount
- Service
- ClusterRole and ClusterRoleBinding
- Role and RoleBinding
To confine Operators to a designated namespace, cluster administrators can start by granting the following permissions to the service account:
The following role is a generic example and additional rules might be required based on the specific Operator.
kind: Role
rules:
- apiGroups: ["operators.coreos.com"]
  resources: ["subscriptions", "clusterserviceversions"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "serviceaccounts"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["apps"] 1
  resources: ["deployments"]
  verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
- apiGroups: [""] 2
  resources: ["pods"]
  verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
1 2. Add permissions to create other resources, such as the deployments and pods shown here.
In addition, if any Operator specifies a pull secret, the following permissions must also be added:
kind: ClusterRole 1
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
kind: Role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "update", "patch"]
1. Required to get the secret from the OLM namespace.
4.8.3. Operator catalog access control
When an Operator catalog is created in the global catalog namespace openshift-marketplace, the catalog’s Operators are made available cluster-wide to all namespaces. A catalog created in other namespaces only makes its Operators available in that same namespace of the catalog.
On clusters where non-cluster administrator users have been delegated Operator installation privileges, cluster administrators might want to further control or restrict the set of Operators those users are allowed to install. This can be achieved with the following actions:
- Disable all of the default global catalogs, as shown in the sketch after this list.
- Enable custom, curated catalogs in the same namespace where the relevant Operator groups have been pre-installed.
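The first action is controlled through the OperatorHub cluster resource; a minimal sketch of disabling all default sources:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'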
4.8.4. Troubleshooting permission failures
If an Operator installation fails due to lack of permissions, identify the errors using the following procedure.
Procedure
Review the Subscription object. Its status has an object reference installPlanRef that points to the InstallPlan object that attempted to create the necessary [Cluster]Role[Binding] objects for the Operator:
apiVersion: operators.coreos.com/v1
kind: Subscription
metadata:
  name: etcd
  namespace: scoped
status:
  installPlanRef:
    apiVersion: operators.coreos.com/v1
    kind: InstallPlan
    name: install-4plp8
    namespace: scoped
    resourceVersion: "117359"
    uid: 2c1df80e-afea-11e9-bce3-5254009c9c23
Check the status of the InstallPlan object for any errors:
apiVersion: operators.coreos.com/v1
kind: InstallPlan
status:
  conditions:
  - lastTransitionTime: "2019-07-26T21:13:10Z"
    lastUpdateTime: "2019-07-26T21:13:10Z"
    message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:scoped:scoped" cannot create resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope'
    reason: InstallComponentFailed
    status: "False"
    type: Installed
  phase: Failed
The error message tells you:
- The type of resource it failed to create, including the API group of the resource. In this case, it was clusterroles in the rbac.authorization.k8s.io group.
- The name of the resource.
- The type of error: is forbidden tells you that the user does not have enough permission to do the operation.
- The name of the user who attempted to create or update the resource. In this case, it refers to the service account specified in the Operator group.
- The scope of the operation: cluster scope or not.
The user can add the missing permission to the service account and then iterate, as in the sketch below.
Note: Operator Lifecycle Manager (OLM) does not currently provide the complete list of errors on the first try.
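For the specific failure shown above, adding the missing permission means letting the scoped service account create cluster roles; a minimal sketch (the role name is illustrative, and production rules should be scoped more tightly than this):
$ cat <<EOF | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scoped-clusterrole-writer
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scoped-clusterrole-writer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: scoped-clusterrole-writer
subjects:
- kind: ServiceAccount
  name: scoped
  namespace: scoped
EOF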
4.9. Managing custom catalogs
Cluster administrators and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API.
If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.
4.9.1. Prerequisites
-
Install the
opm
CLI.
4.9.2. File-based catalogs
File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible.
For more details about the file-based catalog specification, see Operator Framework packaging format.
4.9.2.1. Creating a file-based catalog image
You can create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format. The opm CLI provides tooling that helps initialize a catalog in the file-based format, render new records into it, and validate that the catalog is valid.
Prerequisites
- opm
- podman version 1.9.3+
- A bundle image built and pushed to a registry that supports Docker v2-2
Procedure
Initialize a catalog for a file-based catalog:
Create a directory for the catalog:
$ mkdir <operator_name>-index
Create a Dockerfile that can build a catalog image:
Example <operator_name>-index.Dockerfile
# The base image is expected to contain
# /bin/opm (with a serve subcommand) and /bin/grpc_health_probe
FROM registry.redhat.io/openshift4/ose-operator-registry:v4.9

# Configure the entrypoint and command
ENTRYPOINT ["/bin/opm"]
CMD ["serve", "/configs"]

# Copy declarative config root into image at /configs
ADD <operator_name>-index /configs

# Set DC-specific label for the location of the DC root directory
# in the image
LABEL operators.operatorframework.io.index.configs.v1=/configs
The Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
Example directory structure
.
├── <operator_name>-index
└── <operator_name>-index.Dockerfile
Populate the catalog with your package definition:
$ opm init <operator_name> \ 1
    --default-channel=preview \ 2
    --description=./README.md \ 3
    --icon=./operator-icon.svg \ 4
    --output yaml \ 5
    > <operator_name>-index/index.yaml 6
1. Name of the Operator package.
2. Channel that subscriptions default to if unspecified.
3. Path to the Operator's README.md or other documentation.
4. Path to the Operator's icon.
5. Output format: yaml or json.
6. Path for creating the catalog configuration file.
This command generates an olm.package declarative config blob in the specified catalog configuration file.
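For reference, the generated blob looks roughly like the following; the values here are hypothetical and mirror the flags passed to opm init:
schema: olm.package
name: <operator_name>
defaultChannel: preview
icon:
  base64data: <base64_encoded_icon>
  mediatype: image/svg+xml
description: |
  <contents_of_readme>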
Add a bundle to the catalog:
$ opm render <registry>/<namespace>/<bundle_image_name>:<tag> \ 1
    --output=yaml \
    >> <operator_name>-index/index.yaml 2
1. Pull spec for the bundle image to render.
2. Append the generated blob to the catalog configuration file.
The opm render command generates a declarative config blob from the provided catalog images and bundle images.
Note: Channels must contain at least one bundle.
Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <operator_name>-index/index.yaml file:
Example channel entry
---
schema: olm.channel
package: <operator_name>
name: preview
entries:
- name: <operator_name>.v0.1.0 1
1. Ensure that you include the period (.) after <operator_name> but before the v in the version. Otherwise, the entry will fail to pass the opm validate command.
Validate the file-based catalog:
Run the opm validate command against the catalog directory:
$ opm validate <operator_name>-index
Check that the error code is 0:
$ echo $?
Example output
0
Build the catalog image:
$ podman build . \
    -f <operator_name>-index.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>
Push the catalog image to a registry:
If required, authenticate with your target registry:
$ podman login <registry>
Push the catalog image:
$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
4.9.3. SQLite-based catalogs
The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
4.9.3.1. Creating a SQLite-based index image
You can create an index image based on the SQLite database format by using the opm CLI.
Prerequisites
- opm
- podman version 1.9.3+
- A bundle image built and pushed to a registry that supports Docker v2-2
Procedure
Start a new index:
$ opm index add \
    --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \ 1
    --tag <registry>/<namespace>/<index_image_name>:<tag> \ 2
    [--binary-image <registry_base_image>] 3
1. Comma-separated list of bundle images to add to the index.
2. Image tag that you want the index image to have.
3. Optional: alternative registry base image to use for serving the catalog.
Push the index image to a registry.
If required, authenticate with your target registry:
$ podman login <registry>
Push the index image:
$ podman push <registry>/<namespace>/<index_image_name>:<tag>
4.9.3.2. Updating a SQLite-based index image
After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image.
You can update an existing index image using the opm index add command.
Prerequisites
- opm
- podman version 1.9.3+
- An index image built and pushed to a registry.
- An existing catalog source referencing the index image.
Procedure
Update the existing index by adding bundle images:
$ opm index add \
    --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1
    --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2
    --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3
    --pull-tool podman 4
1. The --bundles flag specifies a comma-separated list of additional bundle images to add to the index.
2. The --from-index flag specifies the previously pushed index.
3. The --tag flag specifies the image tag to apply to the updated index image.
4. The --pull-tool flag specifies the tool used to pull container images.
where:

<registry>
  Specifies the hostname of the registry, such as quay.io or mirror.example.com.
<namespace>
  Specifies the namespace of the registry, such as ocs-dev or abc.
<new_bundle_image>
  Specifies the new bundle image to add to the registry, such as ocs-operator.
<digest>
  Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41.
<existing_index_image>
  Specifies the previously pushed image, such as abc-redhat-operator-index.
<existing_tag>
  Specifies a previously pushed image tag, such as 4.10.
<updated_tag>
  Specifies the image tag to apply to the updated index image, such as 4.10.1.
Example command
$ opm index add \
    --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \
    --from-index mirror.example.com/abc/abc-redhat-operator-index:4.10 \
    --tag mirror.example.com/abc/abc-redhat-operator-index:4.10.1 \
    --pull-tool podman
Push the updated index image:
$ podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>
After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added:
$ oc get packagemanifests -n openshift-marketplace
4.9.3.3. Filtering a SQLite-based index image
An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune, an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want.
Prerequisites
- podman version 1.9.3+
- grpcurl (third-party command-line tool)
- opm
- Access to a registry that supports Docker v2-2
Procedure
Authenticate with your target registry:
$ podman login <target_registry>
Determine the list of packages you want to include in your pruned index.
Run the source index image that you want to prune in a container. For example:
$ podman run -p50051:50051 \
    -it registry.redhat.io/redhat/redhat-operator-index:v4.10
Example output
Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.10...
Getting image source signatures
Copying blob ae8a0c23f5b1 done
...
INFO[0000] serving registry database=/database/index.db port=50051
In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

Example snippets of packages list

...
{
  "name": "advanced-cluster-management"
}
...
{
  "name": "jaeger-product"
}
...
{
  "name": "quay-operator"
}
...
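If jq is available on your workstation, a minimal sketch for pulling out only the package names from the raw output is the following; the use of jq is an assumption about your local tooling, not part of the documented procedure, and the output shown is illustrative:

$ jq -r '.name' packages.out

advanced-cluster-management
jaeger-product
quay-operator
...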
In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process.
Run the following command to prune the source index of all but the specified packages:
$ opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4.10 \
    -p advanced-cluster-management,jaeger-product,quay-operator \
    [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \
    -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10
Run the following command to push the new index image to your target registry:
$ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10
where <namespace> is any existing namespace on the registry.
4.9.4. Adding a catalog source to a cluster
Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.
Prerequisites
- An index image built and pushed to a registry.
Procedure
Create a CatalogSource object that references your index image.

Modify the following to your specifications and save it as a catalogSource.yaml file:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace 1
  annotations:
    olm.catalogImageTemplate: 2
      "<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
spec:
  sourceType: grpc
  image: <registry>/<namespace>/<index_image_name>:<tag> 3
  displayName: My Operator Catalog
  publisher: <publisher_name> 4
  updateStrategy:
    registryPoll: 5
      interval: 30m
1 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
2 Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag.
3 Specify your index image. If you specify a tag after the image name, for example :v4.10, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
4 Specify your name or an organization name publishing the catalog.
5 Catalog sources can automatically check for new versions to keep up to date.
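As a sketch of how the olm.catalogImageTemplate annotation resolves, assume a hypothetical cluster running Kubernetes version 1.23.0; the template shown above would then expand to the following image reference, where the registry, namespace, and image name remain your own placeholders:

<registry>/<namespace>/<index_image_name>:v1.23.0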
Use the file to create the CatalogSource object:

$ oc apply -f catalogSource.yaml
Verify the following resources are created successfully.
Check the pods:
$ oc get pods -n openshift-marketplace
Example output
NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h
Check the catalog source:
$ oc get catalogsource -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s
Check the package manifest:
$ oc get packagemanifest -n openshift-marketplace
Example output
NAME             CATALOG               AGE
jaeger-product   My Operator Catalog   93s
You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
4.9.5. Accessing images for Operators from private registries
If certain images relevant to Operators managed by Operator Lifecycle Manager (OLM) are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default. To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more pull secrets in a catalog source, OLM can handle placing the secrets in the Operator and catalog namespace to allow installation.
Other images required by an Operator or its Operands might require access to private registries as well. OLM does not handle placing the secrets in target tenant namespaces for this scenario, but authentication credentials can be added to the global cluster pull secret or individual namespace service accounts to enable the required access.
The following types of images should be considered when determining whether Operators managed by OLM have appropriate pull access:
- Index images
  A CatalogSource object can reference an index image, which uses the Operator bundle format and is a catalog packaged as a container image hosted in an image registry. If an index image is hosted in a private registry, a secret can be used to enable pull access.
- Bundle images
- Operator bundle images are metadata and manifests packaged as container images that represent a unique version of an Operator. If any bundle images referenced in a catalog source are hosted in one or more private registries, a secret can be used to enable pull access.
- Operator and Operand images
If an Operator installed from a catalog source uses a private image, either for the Operator image itself or one of the Operand images it watches, the Operator will fail to install because the deployment will not have access to the required registry authentication. Referencing secrets in a catalog source does not enable OLM to place the secrets in target tenant namespaces in which Operands are installed.
Instead, the authentication details can be added to the global cluster pull secret in the openshift-config namespace, which provides access to all namespaces on the cluster. Alternatively, if providing access to the entire cluster is not permissible, the pull secret can be added to the default service accounts of the target tenant namespaces.
Prerequisites
At least one of the following hosted in a private registry:
- An index image or catalog image.
- An Operator bundle image.
- An Operator or Operand image.
Procedure
Create a secret for each required private registry.
Log in to the private registry to create or update your registry credentials file:
$ podman login <registry>:<port>
Note: The file path of your registry credentials can be different depending on the container tool used to log in to the registry. For the podman CLI, the default location is ${XDG_RUNTIME_DIR}/containers/auth.json. For the docker CLI, the default location is /root/.docker/config.json.

It is recommended to include credentials for only one registry per secret, and manage credentials for multiple registries in separate secrets. Multiple secrets can be included in a CatalogSource object in later steps, and OpenShift Container Platform will merge the secrets into a single virtual credentials file for use during an image pull.

A registry credentials file can, by default, store details for more than one registry or for multiple repositories in one registry. Verify the current contents of your file. For example:
File storing credentials for multiple registries
{ "auths": { "registry.redhat.io": { "auth": "FrNHNydQXdzclNqdg==" }, "quay.io": { "auth": "fegdsRib21iMQ==" }, "https://quay.io/my-namespace/my-user/my-image": { "auth": "eWfjwsDdfsa221==" }, "https://quay.io/my-namespace/my-user": { "auth": "feFweDdscw34rR==" }, "https://quay.io/my-namespace": { "auth": "frwEews4fescyq==" } } }
Because this file is used to create secrets in later steps, ensure that you are storing details for only one registry per file. This can be accomplished by using either of the following methods:
- Use the podman logout <registry> command to remove credentials for additional registries until only the one registry you want remains.
- Edit your registry credentials file and separate the registry details to be stored in multiple files. For example:

File storing credentials for one registry

{
  "auths": {
    "registry.redhat.io": {
      "auth": "FrNHNydQXdzclNqdg=="
    }
  }
}

File storing credentials for another registry

{
  "auths": {
    "quay.io": {
      "auth": "Xd2lhdsbnRib21iMQ=="
    }
  }
}
Create a secret in the openshift-marketplace namespace that contains the authentication credentials for a private registry:

$ oc create secret generic <secret_name> \
    -n openshift-marketplace \
    --from-file=.dockerconfigjson=<path/to/registry/credentials> \
    --type=kubernetes.io/dockerconfigjson
Repeat this step to create additional secrets for any other required private registries, updating the --from-file flag to specify another registry credentials file path.
Create or update an existing CatalogSource object to reference one or more secrets:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  secrets: 1
  - "<secret_name_1>"
  - "<secret_name_2>"
  image: <registry>:<port>/<namespace>/<image>:<tag>
  displayName: My Operator Catalog
  publisher: <publisher_name>
  updateStrategy:
    registryPoll:
      interval: 30m
1 Add a spec.secrets section and specify any required secrets.
If any Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either provide access to all namespaces in the cluster or to individual target tenant namespaces.
To provide access to all namespaces in the cluster, add authentication details to the global cluster pull secret in the openshift-config namespace.

Warning: Cluster resources must adjust to the new global pull secret, which can temporarily limit the usability of the cluster.
Extract the .dockerconfigjson file from the global pull secret:

$ oc extract secret/pull-secret -n openshift-config --confirm
Update the .dockerconfigjson file with your authentication credentials for the required private registry or registries and save it as a new file:

$ cat .dockerconfigjson | \
    jq --compact-output '.auths["<registry>:<port>/<namespace>/"] |= . + {"auth":"<token>"}' \ 1
    > new_dockerconfigjson
1 Replace <registry>:<port>/<namespace> with the private registry details and <token> with your authentication credentials.
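For illustration, with a hypothetical registry mirror.example.com:5000/olm-mirror and a base64-encoded token, the command might look like the following; the registry details and the token value (base64 of user:password) are assumptions for this sketch only:

$ cat .dockerconfigjson | \
    jq --compact-output '.auths["mirror.example.com:5000/olm-mirror/"] |= . + {"auth":"dXNlcjpwYXNzd29yZA=="}' \
    > new_dockerconfigjson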
Update the global pull secret with the new file:
$ oc set data secret/pull-secret -n openshift-config \
    --from-file=.dockerconfigjson=new_dockerconfigjson
To update an individual namespace, add a pull secret to the service account for the Operator that requires access in the target tenant namespace.
Recreate the secret that you created for the openshift-marketplace namespace in the tenant namespace:

$ oc create secret generic <secret_name> \
    -n <tenant_namespace> \
    --from-file=.dockerconfigjson=<path/to/registry/credentials> \
    --type=kubernetes.io/dockerconfigjson
Verify the name of the service account for the Operator by searching the tenant namespace:
$ oc get sa -n <tenant_namespace> 1
1 If the Operator was installed in an individual namespace, search that namespace. If the Operator was installed for all namespaces, search the openshift-operators namespace.
Example output
NAME            SECRETS   AGE
builder         2         6m1s
default         2         6m1s
deployer        2         6m1s
etcd-operator   2         5m18s 1
1 Service account for an installed etcd Operator.
Link the secret to the service account for the Operator:
$ oc secrets link <operator_sa> \
    -n <tenant_namespace> \
    <secret_name> \
    --for=pull
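To confirm the link took effect, you can inspect the service account; as a hedged sketch, the secret name should appear under imagePullSecrets in the output of a command like the following:

$ oc get sa <operator_sa> -n <tenant_namespace> -o yaml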
Additional resources
- See What is a secret? for more information on the types of secrets, including those used for registry credentials.
- See Updating the global cluster pull secret for more details on the impact of changing this secret.
- See Allowing pods to reference images from other secured registries for more details on linking pull secrets to service accounts per namespace.
4.9.6. Disabling the default OperatorHub sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. As a cluster administrator, you can disable the set of default catalogs.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
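As a hedged verification sketch, after the patch takes effect the default catalog sources should no longer be served, and a command like the following can confirm that only custom sources remain listed:

$ oc get catalogsources -n openshift-marketplace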
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
4.9.7. Removing custom catalogs
As a cluster administrator, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
- Click the Configuration tab, and then click OperatorHub.
- Click the Sources tab.
- Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource.
4.10. Using Operator Lifecycle Manager on restricted networks
For OpenShift Container Platform clusters that are installed on restricted networks, also known as disconnected clusters, Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity.
However, as a cluster administrator you can still enable your cluster to use OLM in a restricted network if you have a workstation that has full internet access. The workstation, which requires full internet access to pull the remote OperatorHub content, is used to prepare local mirrors of the remote sources, and push the content to a mirror registry.
The mirror registry can be located on a bastion host, which requires connectivity to both your workstation and the disconnected cluster, or on a completely disconnected, or air-gapped, host, which requires removable media to physically move the mirrored content to the disconnected environment.
This guide describes the following process that is required to enable OLM in restricted networks:
- Disable the default remote OperatorHub sources for OLM.
- Use a workstation with full internet access to create and push local mirrors of the OperatorHub content to a mirror registry.
- Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources.
After enabling OLM in a restricted network, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released.
While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a restricted network still depends on the Operator itself meeting the following criteria:
- List any related images, or other container images that the Operator might require to perform their functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object.
- Reference all specified images by a digest (SHA) and not by a tag. A hedged CSV sketch illustrating both criteria follows this list.
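The following is a minimal, hypothetical sketch of what these two criteria look like in a CSV; the Operator name, image repositories, and digests are placeholders invented for illustration, not values from a real Operator:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.0
spec:
  # Every image is referenced by digest, never by tag.
  relatedImages:
  - name: example-operator
    image: registry.example.com/example-inc/example-operator@sha256:f28f7a3a63c7c8c0f9b1c21dcbbd4c7c9fd3a2e1c5b6a7d8e9f0a1b2c3d4e5f6
  - name: example-operand
    image: registry.example.com/example-inc/example-operand@sha256:0a1b2c3d4e5f66778899aabbccddeeff00112233445566778899aabbccddeeff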
You can search software on the Red Hat Ecosystem Catalog for a list of Red Hat Operators that support running in disconnected mode by filtering with the following selections:
Type | Containerized application |
Deployment method | Operator |
Infrastructure features | Disconnected |
Additional resources
4.10.1. Prerequisites
- Log in to your OpenShift Container Platform cluster as a user with cluster-admin privileges.
- If you want to prune the default catalog and selectively mirror only a subset of Operators, install the opm CLI.
If you are using OLM in a restricted network on IBM Z, you must have at least 12 GB allocated to the directory where you place your registry.
4.10.2. Disabling the default OperatorHub sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. You can then configure OperatorHub to use local catalog sources.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
4.10.3. Filtering a SQLite-based index image
An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune, an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want.
When configuring Operator Lifecycle Manager (OLM) to use mirrored content on restricted network OpenShift Container Platform clusters, use this pruning method if you want to only mirror a subset of Operators from the default catalogs.
For the steps in this procedure, the target registry is an existing mirror registry that is accessible by your workstation with unrestricted network access. This example also shows pruning the index image for the default redhat-operators catalog, but the process is the same for any index image.
Prerequisites
- Workstation with unrestricted network access
- podman version 1.9.3+
- grpcurl (third-party command-line tool)
- opm
- Access to a registry that supports Docker v2-2
Procedure
Authenticate with registry.redhat.io:

$ podman login registry.redhat.io
Authenticate with your target registry:
$ podman login <target_registry>
Determine the list of packages you want to include in your pruned index.
Run the source index image that you want to prune in a container. For example:
$ podman run -p50051:50051 \
    -it registry.redhat.io/redhat/redhat-operator-index:v4.10
Example output
Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.10...
Getting image source signatures
Copying blob ae8a0c23f5b1 done
...
INFO[0000] serving registry database=/database/index.db port=50051
In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

Example snippets of packages list

...
{
  "name": "advanced-cluster-management"
}
...
{
  "name": "jaeger-product"
}
...
{
  "name": "quay-operator"
}
...
In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process.
Run the following command to prune the source index of all but the specified packages:
$ opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4.10 \
    -p advanced-cluster-management,jaeger-product,quay-operator \
    [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \
    -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10
Run the following command to push the new index image to your target registry:
$ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10
where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to.
4.10.4. Mirroring an Operator catalog
For instructions about mirroring Operator catalogs for use with disconnected clusters, see Installing → Mirroring images for a disconnected installation.
4.10.5. Adding a catalog source to a cluster
Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.
Prerequisites
- An index image built and pushed to a registry.
Procedure
Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point.

Modify the following to your specifications and save it as a catalogSource.yaml file:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog 1
  namespace: openshift-marketplace 2
spec:
  sourceType: grpc
  image: <registry>/<namespace>/redhat-operator-index:v4.10 3
  displayName: My Operator Catalog
  publisher: <publisher_name> 4
  updateStrategy:
    registryPoll: 5
      interval: 30m
1 If you mirrored content to local files before uploading to a registry, remove any slash (/) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object.
2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
3 Specify your index image. If you specify a tag after the image name, for example :v4.10, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
4 Specify your name or an organization name publishing the catalog.
5 Catalog sources can automatically check for new versions to keep up to date.
Use the file to create the CatalogSource object:

$ oc apply -f catalogSource.yaml
Verify the following resources are created successfully.
Check the pods:
$ oc get pods -n openshift-marketplace
Example output
NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h
Check the catalog source:
$ oc get catalogsource -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s
Check the package manifest:
$ oc get packagemanifest -n openshift-marketplace
Example output
NAME             CATALOG               AGE
jaeger-product   My Operator Catalog   93s
You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
4.10.6. Updating a SQLite-based index image
After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image.
You can update an existing index image using the opm index add command. For restricted networks, the updated content must also be mirrored again to the cluster.
Prerequisites
- opm
- podman version 1.9.3+
- An index image built and pushed to a registry.
- An existing catalog source referencing the index image.
Procedure
Update the existing index by adding bundle images:
$ opm index add \
    --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1
    --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2
    --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3
    --pull-tool podman 4

1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index.
2 The --from-index flag specifies the previously pushed index.
3 The --tag flag specifies the image tag to apply to the updated index image.
4 The --pull-tool flag specifies the tool used to pull container images.
where:

<registry>
  Specifies the hostname of the registry, such as quay.io or mirror.example.com.
<namespace>
  Specifies the namespace of the registry, such as ocs-dev or abc.
<new_bundle_image>
  Specifies the new bundle image to add to the registry, such as ocs-operator.
<digest>
  Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41.
<existing_index_image>
  Specifies the previously pushed image, such as abc-redhat-operator-index.
<existing_tag>
  Specifies a previously pushed image tag, such as 4.10.
<updated_tag>
  Specifies the image tag to apply to the updated index image, such as 4.10.1.
Example command
$ opm index add \
    --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \
    --from-index mirror.example.com/abc/abc-redhat-operator-index:4.10 \
    --tag mirror.example.com/abc/abc-redhat-operator-index:4.10.1 \
    --pull-tool podman
Push the updated index image:
$ podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>
Follow the steps in the Mirroring an Operator catalog procedure again to mirror the updated content. However, when you get to the step about creating the ImageContentSourcePolicy (ICSP) object, use the oc replace command instead of the oc create command. For example:

$ oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml
This change is required because the object already exists and must be updated.
Note: Normally, the oc apply command can be used to update existing objects that were previously created using oc apply. However, due to a known issue regarding the size of the metadata.annotations field in ICSP objects, the oc replace command must be used for this step currently.

After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added:
$ oc get packagemanifests -n openshift-marketplace
Additional resources
4.11. Catalog source pod scheduling
When an Operator Lifecycle Manager (OLM) catalog source of source type grpc defines a spec.image, the Catalog Operator creates a pod that serves the defined image content. By default, this pod defines the following in its spec:
- Only the kubernetes.io/os=linux node selector
node selector - No priority class name
- No tolerations
As an administrator, you can override these values by modifying fields in the CatalogSource object's optional spec.grpcPodConfig section.
Additional resources
4.11.1. Overriding the node selector for catalog source pods
Prerequisites
- CatalogSource object of source type grpc with spec.image defined
Procedure
Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:

grpcPodConfig:
  nodeSelector:
    custom_label: <label>
where <label> is the label for the node selector that you want catalog source pods to use for scheduling.
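As a minimal sketch, assuming a hypothetical node label node-role.example.com/catalog="" applied to the nodes you want to target, a complete CatalogSource might look like the following; the catalog name, image, and label are assumptions for illustration:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example-inc/example-operator-index:v1.0.0
  # Schedule the catalog source pod only on nodes carrying the assumed label.
  grpcPodConfig:
    nodeSelector:
      node-role.example.com/catalog: ""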
Additional resources
4.11.2. Overriding the priority class name for catalog source pods
Prerequisites
- CatalogSource object of source type grpc with spec.image defined
Procedure
Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:

grpcPodConfig:
  priorityClassName: <priority_class>
where <priority_class> is one of the following:

- One of the default priority classes provided by Kubernetes: system-cluster-critical or system-node-critical
- An empty set ("") to assign the default priority
- A pre-existing and custom defined priority class
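If you use a custom defined priority class, it must already exist on the cluster. The following is a hedged sketch of such a class; the name and value are assumptions invented for illustration only:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: catalog-source-priority
value: 1000000
globalDefault: false
description: "Hypothetical priority class for catalog source pods."

You would then set spec.grpcPodConfig.priorityClassName to catalog-source-priority in the CatalogSource object.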
Previously, the only pod scheduling parameter that could be overridden was priorityClassName. This was done by adding the operatorframework.io/priorityclass annotation to the CatalogSource object. For example:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
  annotations:
    operatorframework.io/priorityclass: system-cluster-critical
If a CatalogSource object defines both the annotation and spec.grpcPodConfig.priorityClassName, the annotation takes precedence over the configuration parameter.
Additional resources
4.11.3. Overriding tolerations for catalog source pods
Prerequisites
-
CatalogSource
object of source typegrpc
withspec.image
defined
Procedure
Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:

grpcPodConfig:
  tolerations:
  - key: "<key_name>"
    operator: "<operator_type>"
    value: "<value>"
    effect: "<effect>"
Additional resources