Chapter 4. Administrator tasks
4.1. Adding Operators to a cluster
Using Operator Lifecycle Manager (OLM), administrators with the dedicated-admin
role can install OLM-based Operators to an OpenShift Dedicated cluster.
For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.
4.1.1. About Operator installation with OperatorHub
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a dedicated-admin
, you can install an Operator from OperatorHub by using the OpenShift Dedicated web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.
During installation, you must determine the following initial settings for the Operator:
- Installation Mode
- Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.
- Update Channel
- If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
- Approval Strategy
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a
dedicated-admin
, you must then manually approve that update request to have the Operator updated to the new version.
Additional resources
4.1.2. Installing from OperatorHub by using the web console
You can install and subscribe to an Operator from OperatorHub by using the OpenShift Dedicated web console.
Prerequisites
- Access to an OpenShift Dedicated cluster using an account with the dedicated-admin role.
Procedure
- Navigate in the web console to the Operators → OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
Note: Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
- Read the information about the Operator and click Install.
On the Install Operator page, configure your Operator installation:
If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install.
Note: The version selection defaults to the latest version for the selected channel. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, the Manual approval strategy is required because you are not installing the latest version for the selected channel.
Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. If you want to update Operators independently, install Operators into separate namespaces.
Confirm the installation mode for the Operator:
- All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
- A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
For clusters on cloud providers with token authentication enabled:
- If the cluster uses AWS Security Token Service (STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role’s ARN, follow the procedure described in Preparing AWS account.
- If the cluster uses Microsoft Entra Workload ID (Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate fields.
- If the cluster uses Google Cloud Platform Workload Identity (GCP Workload Identity / Federated Identity Mode in the web console), add the project number, pool ID, provider ID, and service account email in the appropriate fields.
For Update approval, select either the Automatic or Manual approval strategy.
Important: If the web console shows that the cluster uses AWS STS, Microsoft Entra Workload ID, or GCP Workload Identity, you must set Update approval to Manual.
Subscriptions with automatic approvals for updates are not recommended because there might be permission changes to make before updating. Subscriptions with manual approvals for updates ensure that administrators have the opportunity to verify the permissions of the later version, take any necessary steps, and then update.
Click Install to make the Operator available to the selected namespaces on this OpenShift Dedicated cluster:
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
- If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
Verification
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should eventually resolve to Succeeded in the relevant namespace.
Note: For the All namespaces… installation mode, the status resolves to Succeeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.
If it does not:
- Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
When the Operator is installed, the metadata indicates which channel and version are installed.
Note: The Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context.
Additional resources
4.1.3. Installing from OperatorHub by using the CLI
Instead of using the OpenShift Dedicated web console, you can install an Operator from OperatorHub by using the CLI. Use the oc
command to create or update a Subscription
object.
For SingleNamespace
install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup
object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.
In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as handling the creation of OperatorGroup
and Subscription
objects automatically when choosing SingleNamespace
mode.
Prerequisites
- Access to an OpenShift Dedicated cluster using an account with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
Example 4.1. Example output
NAME                             CATALOG               AGE
3scale-operator                  Red Hat Operators     91m
advanced-cluster-management      Red Hat Operators     91m
amq7-cert-manager                Red Hat Operators     91m
# ...
couchbase-enterprise-certified   Certified Operators   91m
crunchy-postgres-operator        Certified Operators   91m
mongodb-enterprise               Certified Operators   91m
# ...
etcd                             Community Operators   91m
jaeger                           Community Operators   91m
kubefed                          Community Operators   91m
# ...
Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
Example 4.2. Example output
# ...
Kind:         PackageManifest
# ...
Install Modes: 1
  Supported:  true
  Type:       OwnNamespace
  Supported:  true
  Type:       SingleNamespace
  Supported:  false
  Type:       MultiNamespace
  Supported:  true
  Type:       AllNamespaces
# ...
Entries:
  Name:       example-operator.v3.7.11
  Version:    3.7.11
  Name:       example-operator.v3.7.10
  Version:    3.7.10
Name:         stable-3.7 2
# ...
Entries:
  Name:       example-operator.v3.8.5
  Version:    3.8.5
  Name:       example-operator.v3.8.4
  Version:    3.8.4
Name:         stable-3.8 3
Default Channel:  stable-3.8 4
Tip: You can print an Operator’s version and channel information in YAML format by running the following command:
$ oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:
$ oc get packagemanifest \
    --selector=catalog=<catalogsource_name> \
    --field-selector metadata.name=<operator_name> \
    -n <catalog_namespace> -o yaml
Important: If you do not specify the Operator’s catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met:
- Multiple catalogs are installed in the same namespace.
- The catalogs contain the same Operators or Operators with the same name.
If the Operator you intend to install supports the AllNamespaces install mode, and you choose to use this mode, skip this step, because the openshift-operators namespace already has an appropriate Operator group in place by default, called global-operators.
If the Operator you intend to install supports the SingleNamespace install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create one by following these steps:
Important: You can only have one Operator group per namespace. For more information, see "Operator groups".
Create an OperatorGroup object YAML file, for example operatorgroup.yaml, for SingleNamespace install mode:
Example OperatorGroup object for SingleNamespace install mode
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace> 1
spec:
  targetNamespaces:
  - <namespace> 2
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object to subscribe a namespace to an Operator:
Create a YAML file for the Subscription object, for example subscription.yaml:
Note: If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version and set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For details, see the following "Example Subscription object with a specific starting Operator version".
Example 4.3. Example Subscription object
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: <namespace_per_install_mode> 1
spec:
  channel: <channel_name> 2
  name: <operator_name> 3
  source: <catalog_name> 4
  sourceNamespace: <catalog_source_namespace> 5
  config:
    env: 6
    - name: ARGS
      value: "-v=10"
    envFrom: 7
    - secretRef:
        name: license-secret
    volumes: 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: 10
    - operator: "Exists"
    resources: 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: 12
      foo: bar
- 1
- For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace.
- 2
- Name of the channel to subscribe to.
- 3
- Name of the Operator to subscribe to.
- 4
- Name of the catalog source that provides the Operator.
- 5
- Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
- 6
- The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
- 7
- The envFrom parameter defines a list of sources to populate environment variables in the container.
- 8
- The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
- 9
- The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
- 10
- The tolerations parameter defines a list of tolerations for the pod created by OLM.
- 11
- The resources parameter defines resource constraints for all the containers in the pod created by OLM.
- 12
- The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
Example 4.4. Example Subscription object with a specific starting Operator version
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-operator
spec:
  channel: stable-3.7
  installPlanApproval: Manual 1
  name: example-operator
  source: custom-operators
  sourceNamespace: openshift-marketplace
  startingCSV: example-operator.v3.7.10 2
- 1
- Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This setting prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
- 2
- Set a specific version of an Operator CSV.
For clusters on cloud providers with token authentication enabled, such as Amazon Web Services (AWS) Security Token Service (STS), Microsoft Entra Workload ID, or Google Cloud Platform Workload Identity, configure your Subscription object by following these steps:
Ensure the Subscription object is set to manual update approvals:
Example 4.5. Example Subscription object with manual update approvals
kind: Subscription
# ...
spec:
  installPlanApproval: Manual 1
- 1
- Subscriptions with automatic approvals for updates are not recommended because there might be permission changes to make before updating. Subscriptions with manual approvals for updates ensure that administrators have the opportunity to verify the permissions of the later version, take any necessary steps, and then update.
Include the relevant cloud provider-specific fields in the Subscription object’s config section:
If the cluster is in AWS STS mode, include the following fields:
Example 4.6. Example Subscription object with AWS STS variables
kind: Subscription
# ...
spec:
  config:
    env:
    - name: ROLEARN
      value: "<role_arn>" 1
- 1
- Include the role ARN details.
If the cluster is in Workload ID mode, include the following fields:
Example 4.7. Example Subscription object with Workload ID variables
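The example body for Workload ID is not included in this extract; the following minimal sketch is an assumption, using environment variable names (CLIENTID, TENANTID, SUBSCRIPTIONID) that mirror the client ID, tenant ID, and subscription ID fields described earlier for the web console:
kind: Subscription
# ...
spec:
  config:
    env:
    - name: CLIENTID
      value: "<client_id>"
    - name: TENANTID
      value: "<tenant_id>"
    - name: SUBSCRIPTIONID
      value: "<subscription_id>"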
If the cluster is in GCP Workload Identity mode, include the following fields:
Example 4.8. Example Subscription object with GCP Workload Identity variables
kind: Subscription
# ...
spec:
  config:
    env:
    - name: AUDIENCE
      value: "<audience_url>" 1
    - name: SERVICE_ACCOUNT_EMAIL
      value: "<service_account_email>" 2
where:
<audience>
Created in GCP by the administrator when they set up GCP Workload Identity, the AUDIENCE value must be a preformatted URL in the following format:
//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>
<service_account_email>
The SERVICE_ACCOUNT_EMAIL value is a GCP service account email that is impersonated during Operator operation, for example:
<service_account_name>@<project_id>.iam.gserviceaccount.com
Create the Subscription object by running the following command:
$ oc apply -f subscription.yaml
- If you set the installPlanApproval field to Manual, manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update".
At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
Verification
Check the status of the Subscription object for your installed Operator by running the following command:
$ oc describe subscription <subscription_name> -n <namespace>
If you created an Operator group for SingleNamespace install mode, check the status of the OperatorGroup object by running the following command:
$ oc describe operatorgroup <operatorgroup_name> -n <namespace>
4.1.4. Preparing for multiple instances of an Operator for multitenant clusters
As an administrator with the dedicated-admin
role, you can add multiple instances of an Operator for use in multitenant clusters. This is an alternative solution to either using the standard All namespaces install mode, which can be considered to violate the principle of least privilege, or the Multinamespace mode, which is not widely adopted. For more information, see "Operators in multitenant clusters".
In the following procedure, the tenant is a user or group of users that share common access and privileges for a set of deployed workloads. The tenant Operator is the instance of an Operator that is intended for use by only that tenant.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- All instances of the Operator you want to install must be the same version across a given cluster.
Important: For more information on this and other limitations, see "Operators in multitenant clusters".
Procedure
Before installing the Operator, create a namespace for the tenant Operator that is separate from the tenant’s namespace. You can do this by creating a project. For example, if the tenant’s namespace is team1, you might create a team1-operator project:
$ oc new-project team1-operator
Create an Operator group for the tenant Operator scoped to the tenant’s namespace, with only that one namespace entry in the spec.targetNamespaces list:
Define an OperatorGroup resource and save the YAML file, for example, team1-operatorgroup.yaml:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: team1-operatorgroup
  namespace: team1-operator
spec:
  targetNamespaces:
  - team1 1
Create the Operator group by running the following command:
$ oc create -f team1-operatorgroup.yaml
Next steps
Install the Operator in the tenant Operator namespace. This task is more easily performed by using the OperatorHub in the web console instead of the CLI; for a detailed procedure, see Installing from OperatorHub using the web console.
Note: After completing the Operator installation, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator’s pod nor its service account is visible or usable by the tenant.
Additional resources
4.1.5. Installing global Operators in custom namespaces
When installing Operators with the OpenShift Dedicated web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators
global namespace. This can cause issues related to shared install plans and update policies between all Operators in the namespace. For more details on these limitations, see "Multitenancy and Operator colocation".
As an administrator with the dedicated-admin
role, you can bypass this default behavior manually by creating a custom global namespace and using that namespace to install your individual or scoped set of Operators and their dependencies.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
Procedure
Before installing the Operator, create a namespace for the installation of your desired Operator. You can do this by creating a project. The namespace for this project will become the custom global namespace:
$ oc new-project global-operators
Create a custom global Operator group, which is an Operator group that watches all namespaces:
Define an OperatorGroup resource and save the YAML file, for example, global-operatorgroup.yaml. Omit both the spec.selector and spec.targetNamespaces fields to make it a global Operator group, which selects all namespaces:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-operatorgroup
  namespace: global-operators
Note: The status.namespaces of a created global Operator group contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces.
Create the Operator group by running the following command:
$ oc create -f global-operatorgroup.yaml
Next steps
Install the desired Operator in your custom global namespace. Because the web console does not populate the Installed Namespace menu during Operator installation with custom global namespaces, this task can only be performed with the OpenShift CLI (oc). For a detailed procedure, see Installing from OperatorHub using the CLI.
Note: When you initiate the Operator installation, if the Operator has dependencies, the dependencies are also automatically installed in the custom global namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.
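As an illustration only, a Subscription created in the custom global namespace might look like the following sketch; the channel, package, and catalog names are placeholders:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <operator_name>
  namespace: global-operators
spec:
  channel: <channel_name>
  name: <operator_name>
  source: redhat-operators
  sourceNamespace: openshift-marketplace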
Additional resources
4.1.6. Pod placement of Operator workloads
By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.
Controlling pod placement of Operator and Operand workloads has the following prerequisites:
- Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a compute machine set or editing the node directly. You will use this label in a later step as the node selector on your project.
- If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a compute machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain.
- Create a project that is configured with a default node selector and, if you added a taint, a matching toleration.
At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:
- For Operator pods
- Administrators can create a Subscription object in the project as described in the following section. As a result, the Operator pods are placed on the specified nodes.
- For Operand pods
- Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.
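As a hedged illustration of the preparation steps described above, the node label, taint, and project could be created with commands along these lines; the node name, label, and project name are placeholders:
# Label the node so it can be targeted by a node selector
$ oc label node <node_name> myoperator=true

# Taint the node so that unrelated workloads are not scheduled onto it
$ oc adm taint nodes <node_name> myoperator:NoSchedule

# Create a project whose pods default to that node selector
$ oc adm new-project <project_name> --node-selector='myoperator=true'
A matching toleration can then be supplied for the Operator pod through the spec.config.tolerations field of the Subscription object, as shown in the earlier example.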
Additional resources
4.1.7. Controlling where an Operator is installed
By default, when you install an Operator, OpenShift Dedicated installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.
The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:
- If you want Operators that work together scheduled on the same host or on hosts located on the same rack
- If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues
You can control where an Operator pod is installed by adding node affinity, pod affinity, or pod anti-affinity constraints to the Operator’s Subscription
object. Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. Pod affinity enables you to ensure that related pods are scheduled to the same node. Pod anti-affinity allows you to prevent a pod from being scheduled on a node.
The following examples show how to use node affinity or pod anti-affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster:
Node affinity example that places the Operator pod on a specific node
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      nodeAffinity: 1
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-163-94.us-west-2.compute.internal
#...
- 1
- A node affinity that requires the Operator’s pod to be scheduled on a node named
ip-10-0-163-94.us-west-2.compute.internal
.
Node affinity example that places the Operator pod on a node with a specific platform
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      nodeAffinity: 1
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
              - arm64
            - key: kubernetes.io/os
              operator: In
              values:
              - linux
#...
- 1
- A node affinity that requires the Operator’s pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels.
Pod affinity example that places the Operator pod on one or more specific nodes
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      podAffinity: 1
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - test
          topologyKey: kubernetes.io/hostname
#...
- 1
- A pod affinity that places the Operator’s pod on a node that has pods with the
app=test
label.
Pod anti-affinity example that prevents the Operator pod from being scheduled on one or more specific nodes
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      podAntiAffinity: 1
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: cpu
              operator: In
              values:
              - high
          topologyKey: kubernetes.io/hostname
#...
- 1
- A pod anti-affinity that prevents the Operator’s pod from being scheduled on a node that has pods with the
cpu=high
label.
Procedure
To control the placement of an Operator pod, complete the following steps:
- Install the Operator as usual.
- If needed, ensure that your nodes are labeled to properly respond to the affinity.
Edit the Operator Subscription object to add an affinity:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity: 1
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-185-229.ec2.internal
#...
- 1
- Add a nodeAffinity, podAffinity, or podAntiAffinity. See the Additional resources section that follows for information about creating the affinity.
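For example, you might open the subscription for editing with the following command, reusing the names from the example above:
$ oc edit subscription openshift-custom-metrics-autoscaler-operator -n openshift-keda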
Verification
To ensure that the pod is deployed on the specific node, run the following command:
$ oc get pods -o wide
Example output
NAME                                                  READY   STATUS    RESTARTS   AGE   IP            NODE                           NOMINATED NODE   READINESS GATES
custom-metrics-autoscaler-operator-5dcc45d656-bhshg   1/1     Running   0          50s   10.131.0.20   ip-10-0-185-229.ec2.internal   <none>           <none>
Additional resources
4.2. Updating installed Operators
As an administrator with the dedicated-admin
role, you can update Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Dedicated cluster.
For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.
4.2.1. Preparing for an Operator update
The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel.
The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2
, 1.3
) or a release frequency (stable
, fast
).
You cannot change installed Operators to a channel that is older than the current channel.
Red Hat Customer Portal Labs include an application that helps administrators prepare to update their Operators.
You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Dedicated. Cluster Version Operator-based Operators are not included.
4.2.2. Changing the update channel for an Operator
You can change the update channel for an Operator by using the OpenShift Dedicated web console.
If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
-
In the Administrator perspective of the web console, navigate to Operators
Installed Operators. - Click the name of the Operator you want to change the update channel for.
- Click the Subscription tab.
- Click the name of the update channel under Update channel.
- Click the newer update channel that you want to change to, then click Save.
For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators
Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date. For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab.
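If you prefer the CLI, the same change can be made by patching the subscription; a minimal sketch with placeholder names:
$ oc patch subscription.operators.coreos.com <subscription_name> \
    -n <namespace> \
    --type merge \
    --patch '{"spec":{"channel":"<new_channel_name>"}}'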
4.2.3. Manually approving a pending Operator update
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
-
In the Administrator perspective of the OpenShift Dedicated web console, navigate to Operators
Installed Operators. - Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
- Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status. For example, it might display 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for update. When satisfied, click Approve.
-
Navigate back to the Operators
Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
4.3. Deleting Operators from a cluster
The following describes how to delete, or uninstall, Operators that were previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Dedicated cluster.
You must completely uninstall an Operator before attempting to reinstall the same Operator. Failing to fully uninstall the Operator can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages to be observed when trying to reinstall the Operator.
4.3.1. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- You have access to an OpenShift Dedicated cluster web console using an account with dedicated-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
An Uninstall Operator? dialog box is displayed.
Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
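As a hedged illustration, leftover CRDs can be reviewed and removed from the CLI after the uninstall; the CRD name here is a placeholder:
$ oc get crds
$ oc delete crd <crd_name>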
4.3.2. Deleting Operators from a cluster using the CLI
Cluster administrators can delete installed Operators from a selected namespace by using the CLI.
Prerequisites
- You have access to an OpenShift Dedicated cluster using an account with dedicated-admin permissions.
- The OpenShift CLI (oc) is installed on your workstation.
Procedure
Ensure the latest version of the subscribed Operator (for example, serverless-operator) is identified in the currentCSV field:
$ oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV
Example output
currentCSV: serverless-operator.v1.28.0
Delete the subscription (for example, serverless-operator):
$ oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless
Example output
subscription.operators.coreos.com "serverless-operator" deleted
Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:
$ oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless
Example output
clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted
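As an optional check that is not part of the original procedure, you can confirm that the subscription and CSV are gone; the command should return no resources:
$ oc get sub,csv -n openshift-serverless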
4.3.3. Refreshing failing subscriptions
In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace
namespace that are failing with the following errors:
Example output
ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"
Example output
rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.
Prerequisites
- You have a failing subscription that is unable to pull an inaccessible bundle image.
- You have confirmed that the correct bundle image is accessible.
Procedure
Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed:
$ oc get sub,csv -n <namespace>
Example output
NAME                                                        PACKAGE                  SOURCE             CHANNEL
subscription.operators.coreos.com/elasticsearch-operator    elasticsearch-operator   redhat-operators   5.0

NAME                                                                          DISPLAY                            VERSION    REPLACES   PHASE
clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   OpenShift Elasticsearch Operator   5.0.0-65              Succeeded
Delete the subscription:
$ oc delete subscription <subscription_name> -n <namespace>
Delete the cluster service version:
$ oc delete csv <csv_name> -n <namespace>
Get the names of any failing jobs and related config maps in the openshift-marketplace namespace:
$ oc get job,configmap -n openshift-marketplace
Example output
NAME                                                                        COMPLETIONS   DURATION   AGE
job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s

NAME                                                                        DATA   AGE
configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s
Delete the job:
$ oc delete job <job_name> -n openshift-marketplace
This ensures pods that try to pull the inaccessible image are not recreated.
Delete the config map:
$ oc delete configmap <configmap_name> -n openshift-marketplace
- Reinstall the Operator using OperatorHub in the web console.
Verification
Check that the Operator has been reinstalled successfully:
$ oc get sub,csv,installplan -n <namespace>
4.4. Configuring proxy support in Operator Lifecycle Manager
If a global proxy is configured on the OpenShift Dedicated cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate.
Additional resources
4.4.1. Overriding proxy settings of an Operator
If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Administrators with the dedicated-admin
role can also override these proxy settings by configuring the subscription of an Operator.
Operators must handle setting environment variables for proxy settings in the pods for any managed Operands.
Prerequisites
- Access to an OpenShift Dedicated cluster as a user with the dedicated-admin role.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
- Select the Operator and click Install.
On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
For example:
Subscription object with proxy setting overrides
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: openshift-operators
spec:
  config:
    env:
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: etcdoperator.v0.9.4-clusterwide
Note: These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings.
OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator.
- Click Install to make the Operator available to the selected namespaces.
After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI:
$ oc get deployment -n openshift-operators \
    etcd-operator -o yaml \
    | grep -i "PROXY" -A 2
Example output
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c
...
4.4.2. Injecting a custom CA certificate
When an administrator with the dedicated-admin
role adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy.
Prerequisites
- Access to an OpenShift Dedicated cluster as a user with the dedicated-admin role.
- Custom CA certificate added to the cluster using a config map.
- Desired Operator installed and running on OLM.
Procedure
Create an empty config map in the namespace where the subscription for your Operator exists and include the following label:
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca 1
  labels:
    config.openshift.io/inject-trusted-cabundle: "true" 2
After creating this config map, it is immediately populated with the certificate contents of the merged bundle.
Update the Subscription object to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
spec:
  package: etcd
  channel: alpha
  config: 1
    selector:
      matchLabels:
        <labels_for_pods> 2
    volumes: 3
    - name: trusted-ca
      configMap:
        name: trusted-ca
        items:
        - key: ca-bundle.crt 4
          path: tls-ca-bundle.pem 5
    volumeMounts: 6
    - name: trusted-ca
      mountPath: /etc/pki/ca-trust/extracted/pem
      readOnly: true
Note: Deployments of an Operator can fail to validate the authority and display an x509 certificate signed by unknown authority error. This error can occur even after injecting a custom CA when using the subscription of an Operator. In this case, you can set the mountPath as /etc/ssl/certs for trusted-ca by using the subscription of an Operator.
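A minimal sketch of that alternative mount path, reusing the trusted-ca volume from the example above:
    volumeMounts:
    - name: trusted-ca
      mountPath: /etc/ssl/certs
      readOnly: true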
4.5. Viewing Operator status
Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the healthiness of their Operators.
4.5.1. Operator subscription condition types
Subscriptions can report the following condition types:
Condition | Description
---|---
CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy.
InstallPlanMissing | An install plan for a subscription is missing.
InstallPlanPending | An install plan for a subscription is pending installation.
InstallPlanFailed | An install plan for a subscription has failed.
ResolutionFailed | The dependency resolution for a subscription has failed.
Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription
object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription
object.
Additional resources
4.5.2. Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List Operator subscriptions:
$ oc get subs -n <operator_namespace>
Use the oc describe command to inspect a Subscription resource:
$ oc describe sub <subscription_name> -n <operator_namespace>
In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:
Example output
Name:         cluster-logging
Namespace:    openshift-logging
Labels:       operators.coreos.com/cluster-logging.openshift-logging=
Annotations:  <none>
API Version:  operators.coreos.com/v1alpha1
Kind:         Subscription
# ...
Conditions:
  Last Transition Time:  2019-07-29T13:42:57Z
  Message:               all available catalogsources are healthy
  Reason:                AllCatalogSourcesHealthy
  Status:                False
  Type:                  CatalogSourcesUnhealthy
# ...
Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription
object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription
object.
4.5.3. Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:
$ oc get catalogsources -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER     AGE
certified-operators   Certified Operators   grpc   Red Hat       55m
community-operators   Community Operators   grpc   Red Hat       55m
example-catalog       Example Catalog       grpc   Example Org   2m25s
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat       55m
redhat-operators      Red Hat Operators     grpc   Red Hat       55m
Use the oc describe command to get more details and status about a catalog source:
$ oc describe catalogsource example-catalog -n openshift-marketplace
Example output
Name:         example-catalog
Namespace:    openshift-marketplace
Labels:       <none>
Annotations:  operatorframework.io/managed-by: marketplace-operator
              target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
API Version:  operators.coreos.com/v1alpha1
Kind:         CatalogSource
# ...
Status:
  Connection State:
    Address:              example-catalog.openshift-marketplace.svc:50051
    Last Connect:         2021-09-09T17:07:35Z
    Last Observed State:  TRANSIENT_FAILURE
  Registry Service:
    Created At:         2021-09-09T17:05:45Z
    Port:               50051
    Protocol:           grpc
    Service Name:       example-catalog
    Service Namespace:  openshift-marketplace
# ...
In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.
List the pods in the namespace where your catalog source was created:
$ oc get pods -n openshift-marketplace
Example output
NAME                                    READY   STATUS             RESTARTS   AGE
certified-operators-cv9nn               1/1     Running            0          36m
community-operators-6v8lp               1/1     Running            0          36m
marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
redhat-marketplace-57p8c                1/1     Running            0          36m
redhat-operators-smxx8                  1/1     Running            0          36m
When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source’s index image.
Use the oc describe command to inspect a pod for more detailed information:
$ oc describe pod example-catalog-bwt8z -n openshift-marketplace
Example output
Name:         example-catalog-bwt8z
Namespace:    openshift-marketplace
Priority:     0
Node:         ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
...
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
  Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
  Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
  Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull
In the preceding example output, the error messages indicate that the catalog source’s index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
Additional resources
4.6. Managing Operator conditions
As an administrator with the dedicated-admin
role, you can manage Operator conditions by using Operator Lifecycle Manager (OLM).
4.6.1. Overriding Operator conditions
As an administrator with the dedicated-admin
role, you might want to ignore a supported Operator condition reported by an Operator. When present, Operator conditions in the Spec.Overrides
array override the conditions in the Spec.Conditions
array, allowing dedicated-admin
administrators to deal with situations where an Operator is incorrectly reporting a state to Operator Lifecycle Manager (OLM).
By default, the Spec.Overrides
array is not present in an OperatorCondition
object until it is added by an administrator with the dedicated-admin
role. The Spec.Conditions
array is also not present until it is either added by a user or as a result of custom Operator logic.
For example, consider a known version of an Operator that always communicates that it is not upgradeable. In this instance, you might want to upgrade the Operator despite the Operator communicating that it is not upgradeable. This could be accomplished by overriding the Operator condition by adding the condition type
and status
to the Spec.Overrides
array in the OperatorCondition
object.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- An Operator with an OperatorCondition object, installed using OLM.
Procedure
Edit the OperatorCondition object for the Operator:
$ oc edit operatorcondition <name>
Add a Spec.Overrides array to the object:
Example Operator condition override
apiVersion: operators.coreos.com/v2
kind: OperatorCondition
metadata:
  name: my-operator
  namespace: operators
spec:
  overrides:
  - type: Upgradeable 1
    status: "True"
    reason: "upgradeIsSafe"
    message: "This is a known issue with the Operator where it always reports that it cannot be upgraded."
  conditions:
  - type: Upgradeable
    status: "False"
    reason: "migration"
    message: "The operator is performing a migration."
    lastTransitionTime: "2020-08-24T23:15:55Z"
- 1
- Allows the dedicated-admin user to change the upgrade readiness to True.
4.6.2. Updating your Operator to use Operator conditions
Operator Lifecycle Manager (OLM) automatically creates an OperatorCondition
resource for each ClusterServiceVersion
resource that it reconciles. All service accounts in the CSV are granted the RBAC to interact with the OperatorCondition
owned by the Operator.
An Operator author can develop their Operator to use the operator-lib
library such that, after the Operator has been deployed by OLM, it can set its own conditions. For more resources about setting Operator conditions as an Operator author, see the Enabling Operator conditions page.
4.6.2.1. Setting defaults
In an effort to remain backwards compatible, OLM treats the absence of an OperatorCondition
resource as opting out of the condition. Therefore, an Operator that opts in to using Operator conditions should set default conditions before the ready probe for the pod is set to true
. This provides the Operator with a grace period to update the condition to the correct state.
4.6.3. Additional resources
4.7. Managing custom catalogs
Administrators with the dedicated-admin
role and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in OpenShift Dedicated.
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Dedicated that uses the Kubernetes version that removed the API.
If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Dedicated versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.
Additional resources
4.7.1. Prerequisites
- You have installed the opm CLI.
4.7.2. File-based catalogs
File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible.
As of OpenShift Dedicated 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Dedicated 4.6 through 4.10 released in the deprecated SQLite database format.
The opm
subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
Many of the opm
subcommands and flags for working with the SQLite database format, such as opm index prune
, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Operator Framework packaging format.
4.7.2.1. Creating a file-based catalog image
You can use the opm
CLI to create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format.
Prerequisites
- You have installed the opm CLI.
- You have podman version 1.9.3+.
- A bundle image is built and pushed to a registry that supports Docker v2-2.
Procedure
Initialize the catalog:
Create a directory for the catalog by running the following command:
$ mkdir <catalog_dir>
Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command:
$ opm generate dockerfile <catalog_dir> \
    -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4 1
- 1
- Specify the official Red Hat base image by using the
-i
flag, otherwise the Dockerfile uses the default upstream image.
The Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
Example directory structure
. 1
├── <catalog_dir> 2
└── <catalog_dir>.Dockerfile 3
Populate the catalog with the package definition for your Operator by running the opm init command:
$ opm init <operator_name> \ 1
    --default-channel=preview \ 2
    --description=./README.md \ 3
    --icon=./operator-icon.svg \ 4
    --output yaml \ 5
    > <catalog_dir>/index.yaml 6
This command generates an
olm.package
declarative config blob in the specified catalog configuration file.
Add a bundle to the catalog by running the opm render command:
$ opm render <registry>/<namespace>/<bundle_image_name>:<tag> \ 1
    --output=yaml \
    >> <catalog_dir>/index.yaml 2
Note: Channels must contain at least one bundle.
Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <catalog_dir>/index.yaml file:
Example channel entry
---
schema: olm.channel
package: <operator_name>
name: preview
entries:
- name: <operator_name>.v0.1.0 1
- 1
- Ensure that you include the period (.) after <operator_name> but before the v in the version. Otherwise, the entry fails to pass the opm validate command.
Validate the file-based catalog:
Run the opm validate command against the catalog directory:
$ opm validate <catalog_dir>
Check that the error code is
0
:$ echo $?
Example output
0
Build the catalog image by running the podman build command:
$ podman build . \
    -f <catalog_dir>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>
Push the catalog image to a registry:
If required, authenticate with your target registry by running the podman login command:
$ podman login <registry>
Push the catalog image by running the podman push command:
$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
Additional resources
4.7.2.2. Updating or filtering a file-based catalog image
You can use the opm CLI to update or filter a catalog image that uses the file-based catalog format. By extracting the contents of an existing catalog image, you can modify the catalog as needed, for example:
- Adding packages
- Removing packages
- Updating existing package entries
- Detailing deprecation messages per package, channel, and bundle
You can then rebuild the image as an updated version of the catalog.
Prerequisites
You have the following on your workstation:
- The opm CLI.
- podman version 1.9.3+.
- A file-based catalog image.
- A catalog directory structure recently initialized on your workstation related to this catalog.
If you do not have an initialized catalog directory, create the directory and generate the Dockerfile. For more information, see the "Initialize the catalog" step from the "Creating a file-based catalog image" procedure.
Procedure
Extract the contents of the catalog image in YAML format to an index.yaml file in your catalog directory:
$ opm render <registry>/<namespace>/<catalog_image_name>:<tag> \
    -o yaml > <catalog_dir>/index.yaml
Note: Alternatively, you can use the -o json flag to output in JSON format.
Modify the contents of the resulting index.yaml file to your specifications:
Important: After a bundle has been published in a catalog, assume that one of your users has installed it. Ensure that all previously published bundles in a catalog have an update path to the current or newer channel head to avoid stranding users that have that version installed.
- To add an Operator, follow the steps for creating package, bundle, and channel entries in the "Creating a file-based catalog image" procedure.
- To remove an Operator, delete the set of olm.package, olm.channel, and olm.bundle blobs that relate to the package. The following example shows a set that must be deleted to remove the example-operator package from the catalog:
Example 4.9. Example removed entries
---
defaultChannel: release-2.7
icon:
  base64data: <base64_string>
  mediatype: image/svg+xml
name: example-operator
schema: olm.package
---
entries:
- name: example-operator.v2.7.0
  skipRange: '>=2.6.0 <2.7.0'
- name: example-operator.v2.7.1
  replaces: example-operator.v2.7.0
  skipRange: '>=2.6.0 <2.7.1'
- name: example-operator.v2.7.2
  replaces: example-operator.v2.7.1
  skipRange: '>=2.6.0 <2.7.2'
- name: example-operator.v2.7.3
  replaces: example-operator.v2.7.2
  skipRange: '>=2.6.0 <2.7.3'
- name: example-operator.v2.7.4
  replaces: example-operator.v2.7.3
  skipRange: '>=2.6.0 <2.7.4'
name: release-2.7
package: example-operator
schema: olm.channel
---
image: example.com/example-inc/example-operator-bundle@sha256:<digest>
name: example-operator.v2.7.0
package: example-operator
properties:
- type: olm.gvk
  value:
    group: example-group.example.io
    kind: MyObject
    version: v1alpha1
- type: olm.gvk
  value:
    group: example-group.example.io
    kind: MyOtherObject
    version: v1beta1
- type: olm.package
  value:
    packageName: example-operator
    version: 2.7.0
- type: olm.bundle.object
  value:
    data: <base64_string>
- type: olm.bundle.object
  value:
    data: <base64_string>
relatedImages:
- image: example.com/example-inc/example-related-image@sha256:<digest>
  name: example-related-image
schema: olm.bundle
---
- To add or update deprecation messages for an Operator, ensure there is a deprecations.yaml file in the same directory as the package's index.yaml file. For information on the deprecations.yaml file format, see "olm.deprecations schema".
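For illustration only, a deprecations.yaml file might look like the following rough sketch. The package, channel, and bundle names are hypothetical, and you should verify the exact field names against the "olm.deprecations schema" reference:
---
schema: olm.deprecations
package: example-operator
entries:
- reference:
    schema: olm.channel
    name: alpha
  message: |
    The alpha channel is no longer supported. Switch to the stable channel.
- reference:
    schema: olm.bundle
    name: example-operator.v1.0.0
  message: |
    example-operator.v1.0.0 is deprecated. Upgrade to a newer version.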
- Save your changes.
Validate the catalog:
$ opm validate <catalog_dir>
Rebuild the catalog:
$ podman build . \
    -f <catalog_dir>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>
Push the updated catalog image to a registry:
$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
Verification
- In the web console, navigate to the OperatorHub configuration resource on the Administration → Cluster Settings → Configuration page. Add the catalog source or update the existing catalog source to use the pull spec for your updated catalog image.
For more information, see "Adding a catalog source to a cluster" in the "Additional resources" of this section.
- After the catalog source is in a READY state, navigate to the Operators → OperatorHub page and check that the changes you made are reflected in the list of Operators.
4.7.3. SQLite-based catalogs
The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in OpenShift Dedicated and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Dedicated, refer to the Deprecated and removed features section of the OpenShift Dedicated release notes.
4.7.3.1. Creating a SQLite-based index image
You can create an index image based on the SQLite database format by using the opm CLI.
Prerequisites
- You have installed the opm CLI.
- You have podman version 1.9.3+.
- A bundle image is built and pushed to a registry that supports Docker v2-2.
Procedure
Start a new index:
$ opm index add \
    --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \ 1
    --tag <registry>/<namespace>/<index_image_name>:<tag> \ 2
    [--binary-image <registry_base_image>] 3
Push the index image to a registry.
If required, authenticate with your target registry:
$ podman login <registry>
Push the index image:
$ podman push <registry>/<namespace>/<index_image_name>:<tag>
4.7.3.2. Updating a SQLite-based index image
After configuring OperatorHub to use a catalog source that references a custom index image, administrators with the dedicated-admin role can keep the available Operators on their cluster up-to-date by adding bundle images to the index image.
You can update an existing index image using the opm index add command.
Prerequisites
- You have installed the opm CLI.
- You have podman version 1.9.3+.
- An index image is built and pushed to a registry.
- You have an existing catalog source referencing the index image.
Procedure
Update the existing index by adding bundle images:
$ opm index add \
    --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1
    --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2
    --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3
    --pull-tool podman 4
- 1
- The --bundles flag specifies a comma-separated list of additional bundle images to add to the index.
- 2
- The --from-index flag specifies the previously pushed index.
- 3
- The --tag flag specifies the image tag to apply to the updated index image.
- 4
- The --pull-tool flag specifies the tool used to pull container images.
where:
<registry>
- Specifies the hostname of the registry, such as quay.io or mirror.example.com.
<namespace>
- Specifies the namespace of the registry, such as ocs-dev or abc.
<new_bundle_image>
- Specifies the new bundle image to add to the registry, such as ocs-operator.
<digest>
- Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41.
<existing_index_image>
- Specifies the previously pushed image, such as abc-redhat-operator-index.
<existing_tag>
- Specifies a previously pushed image tag, such as 4.
<updated_tag>
- Specifies the image tag to apply to the updated index image, such as 4.1.
Example command
$ opm index add \
    --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \
    --from-index mirror.example.com/abc/abc-redhat-operator-index:4 \
    --tag mirror.example.com/abc/abc-redhat-operator-index:4.1 \
    --pull-tool podman
Push the updated index image:
$ podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>
After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added:
$ oc get packagemanifests -n openshift-marketplace
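For example, to confirm that a specific package from the added bundle is present, you can filter the output; the package name shown here is only an illustration:
$ oc get packagemanifests -n openshift-marketplace | grep example-operator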
4.7.3.3. Filtering a SQLite-based index image
An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune, an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want.
Prerequisites
- You have podman version 1.9.3+.
- You have grpcurl (third-party command-line tool).
- You have installed the opm CLI.
- You have access to a registry that supports Docker v2-2.
Procedure
Authenticate with your target registry:
$ podman login <target_registry>
Determine the list of packages you want to include in your pruned index.
Run the source index image that you want to prune in a container. For example:
$ podman run -p50051:50051 \
    -it registry.redhat.io/redhat/redhat-operator-index:v4
Example output
Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4...
Getting image source signatures
Copying blob ae8a0c23f5b1 done
...
INFO[0000] serving registry database=/database/index.db port=50051
In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:
$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:
Example snippets of packages list
...
{
  "name": "advanced-cluster-management"
}
...
{
  "name": "jaeger-product"
}
...
{
  "name": "quay-operator"
}
...
- In the terminal session where you executed the podman run command, press Ctrl+C to stop the container process.
Run the following command to prune the source index of all but the specified packages:
$ opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4 \ 1
    -p advanced-cluster-management,jaeger-product,quay-operator \ 2
    [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \ 3
    -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4 4
Run the following command to push the new index image to your target registry:
$ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4
where <namespace> is any existing namespace on the registry.
4.7.4. Catalog sources and pod security admission
Pod security admission was introduced in OpenShift Dedicated 4.11 to ensure pod security standards. Catalog sources built using the SQLite-based catalog format and a version of the opm CLI tool released before OpenShift Dedicated 4.11 cannot run under restricted pod security enforcement.
In OpenShift Dedicated 4, namespaces do not have restricted pod security enforcement by default, and the default catalog source security mode is set to legacy.
Default restricted enforcement for all namespaces is planned for inclusion in a future OpenShift Dedicated release. When restricted enforcement occurs, the security context of the pod specification for catalog source pods must match the restricted pod security standard. If your catalog source image requires a different pod security standard, the pod security admissions label for the namespace must be explicitly set.
If you do not want to run your SQLite-based catalog source pods as restricted, you do not need to update your catalog source in OpenShift Dedicated 4.
However, it is recommended that you take action now to ensure your catalog sources can run under restricted pod security enforcement. Otherwise, your catalog sources might not run in future OpenShift Dedicated releases.
As a catalog author, you can enable compatibility with restricted pod security enforcement by completing either of the following actions:
- Migrate your catalog to the file-based catalog format.
- Update your catalog image with a version of the opm CLI tool released with OpenShift Dedicated 4.11 or later.
The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. As of OpenShift Dedicated 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.
If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can configure your catalog to run with elevated permissions.
Additional resources
4.7.4.1. Migrating SQLite database catalogs to the file-based catalog format
You can update your deprecated SQLite database format catalogs to the file-based catalog format.
Prerequisites
- You have a SQLite database catalog source.
- You have access to the cluster as a user with the dedicated-admin role.
- You have the latest version of the opm CLI tool released with OpenShift Dedicated 4 on your workstation.
Procedure
Migrate your SQLite database catalog to a file-based catalog by running the following command:
$ opm migrate <registry_image> <fbc_directory>
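For example, a migration run might look like the following; the registry image and output directory names are hypothetical:
$ opm migrate registry.example.com/example-inc/example-catalog-index:v1 ./example-fbc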
Generate a Dockerfile for your file-based catalog by running the following command:
$ opm generate dockerfile <fbc_directory> \
    --binary-image \
    registry.redhat.io/openshift4/ose-operator-registry:v4
Next steps
- The generated Dockerfile can be built, tagged, and pushed to your registry.
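For example, assuming the Dockerfile was generated next to the <fbc_directory> directory as in the previous step, the build and push might look like the following sketch:
$ podman build . \
    -f <fbc_directory>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>
$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>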
Additional resources
4.7.4.2. Rebuilding SQLite database catalog images
You can rebuild your SQLite database catalog image with the latest version of the opm CLI tool that is released with your version of OpenShift Dedicated.
Prerequisites
- You have a SQLite database catalog source.
- You have access to the cluster as a user with the dedicated-admin role.
- You have the latest version of the opm CLI tool released with OpenShift Dedicated 4 on your workstation.
Procedure
Run the following command to rebuild your catalog with a more recent version of the opm CLI tool:
$ opm index add --binary-image \
    registry.redhat.io/openshift4/ose-operator-registry:v4 \
    --from-index <your_registry_image> \
    --bundles "" -t \
    <your_registry_image>
4.7.4.3. Configuring catalogs to run with elevated permissions
If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can perform the following actions to ensure your catalog source runs when the default pod security enforcement changes to restricted:
- Manually set the catalog security mode to legacy in your catalog source definition. This action ensures your catalog runs with legacy permissions even if the default catalog security mode changes to restricted.
- Label the catalog source namespace for baseline or privileged pod security enforcement.
The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.
Prerequisites
- You have a SQLite database catalog source.
- You have access to the cluster as a user with the dedicated-admin role.
- You have a target namespace that supports running pods with the elevated pod security admission standard of baseline or privileged.
Procedure
Edit the CatalogSource definition by setting the spec.grpcPodConfig.securityContextConfig field to legacy, as shown in the following example:
Example CatalogSource definition
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catsrc
  namespace: my-ns
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: legacy
  image: my-image:latest
Tip: In OpenShift Dedicated 4, the spec.grpcPodConfig.securityContextConfig field is set to legacy by default. In a future release of OpenShift Dedicated, it is planned that the default setting will change to restricted. If your catalog cannot run under restricted enforcement, it is recommended that you manually set this field to legacy.
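Alternatively, if you prefer not to edit the YAML directly, a merge patch can set the same field; this sketch reuses the my-catsrc name and my-ns namespace from the preceding example:
$ oc patch catalogsource my-catsrc -n my-ns --type=merge \
    -p '{"spec":{"grpcPodConfig":{"securityContextConfig":"legacy"}}}'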
Edit your <namespace>.yaml file to add elevated pod security admission standards to your catalog source namespace, as shown in the following example:
Example <namespace>.yaml file
apiVersion: v1
kind: Namespace
metadata:
  ...
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false" 1
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: baseline 2
  name: "<namespace_name>"
- 1
- Turn off pod security label synchronization by adding the security.openshift.io/scc.podSecurityLabelSync=false label to the namespace.
- 2
- Apply the pod security admission pod-security.kubernetes.io/enforce label. Set the label to baseline or privileged. Use the baseline pod security profile unless other workloads in the namespace require a privileged profile.
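After editing the file, apply it to the cluster, for example:
$ oc apply -f <namespace>.yaml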
4.7.5. Adding a catalog source to a cluster
Adding a catalog source to an OpenShift Dedicated cluster enables the discovery and installation of Operators for users. Administrators with the dedicated-admin role can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.
Alternatively, you can use the web console to manage catalog sources. From the Home → Search page, select CatalogSource from the Resources list. You can create, update, delete, disable, and enable individual sources.
Prerequisites
- You built and pushed an index image to a registry.
- You have access to the cluster as a user with the dedicated-admin role.
Procedure
Create a CatalogSource object that references your index image.
Modify the following to your specifications and save it as a catalogSource.yaml file:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace 1
  annotations:
    olm.catalogImageTemplate: 2
      "<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode> 3
  image: <registry>/<namespace>/<index_image_name>:<tag> 4
  displayName: My Operator Catalog
  publisher: <publisher_name> 5
  updateStrategy:
    registryPoll: 6
      interval: 30m
- 1
- If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
- 2
- Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag.
- 3
- Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Dedicated release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
- 4
- Specify your index image. If you specify a tag after the image name, for example :v4, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
- 5
- Specify your name or an organization name publishing the catalog.
- 6
- Catalog sources can automatically check for new versions to keep up to date.
Use the file to create the CatalogSource object:
$ oc apply -f catalogSource.yaml
Verify the following resources are created successfully.
Check the pods:
$ oc get pods -n openshift-marketplace
Example output
NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h
Check the catalog source:
$ oc get catalogsource -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s
Check the package manifest:
$ oc get packagemanifest -n openshift-marketplace
Example output
NAME             CATALOG               AGE
jaeger-product   My Operator Catalog   93s
You can now install the Operators from the OperatorHub page on your OpenShift Dedicated web console.
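As a sketch of how a catalog source is consumed, the following Subscription object would install an Operator from the catalog source created above; the Operator name, channel, and target namespace are placeholders, and the source field matches the my-operator-catalog example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <operator_name>
  namespace: openshift-operators
spec:
  channel: <channel_name>
  name: <operator_name>
  source: my-operator-catalog
  sourceNamespace: openshift-marketplace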
Additional resources
4.7.6. Removing custom catalogs
As an administrator with the dedicated-admin role, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
Procedure
- In the Administrator perspective of the web console, navigate to Home → Search.
- Select a project from the Project: list.
- Select CatalogSource from the Resources list.
- Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource.
4.8. Catalog source pod scheduling
When an Operator Lifecycle Manager (OLM) catalog source of source type grpc defines a spec.image, the Catalog Operator creates a pod that serves the defined image content. By default, this pod defines the following in its specification:
- Only the kubernetes.io/os=linux node selector.
- The default priority class name: system-cluster-critical.
- No tolerations.
As an administrator, you can override these values by modifying fields in the CatalogSource object's optional spec.grpcPodConfig section.
The Marketplace Operator, openshift-marketplace, manages the default OperatorHub custom resource (CR), and this CR manages CatalogSource objects. By default, if you modify fields in the CatalogSource object's spec.grpcPodConfig section, the Marketplace Operator automatically reverts these modifications.
To apply persistent changes to a CatalogSource object, you must first disable a default CatalogSource object.
Additional resources
4.8.1. Disabling default CatalogSource objects at a local level
You can apply persistent changes to a CatalogSource object, such as changes to catalog source pods, at a local level by disabling a default CatalogSource object. Consider disabling a default CatalogSource object in situations where its configuration does not meet your organization's needs. By default, if you modify fields in the spec.grpcPodConfig section of the CatalogSource object, the Marketplace Operator automatically reverts these changes.
The Marketplace Operator, openshift-marketplace, manages the default custom resources (CRs) of the OperatorHub. The OperatorHub CR manages CatalogSource objects.
To apply persistent changes to a CatalogSource object, you must first disable a default CatalogSource object.
Procedure
To disable all the default CatalogSource objects at a local level, enter the following command:
$ oc patch operatorhub cluster -p '{"spec": {"disableAllDefaultSources": true}}' --type=merge
Note: You can also configure the default OperatorHub CR to either disable all CatalogSource objects or disable a specific object.
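For example, to disable only a single default catalog source by name instead of all of them, you can patch the sources list in the OperatorHub CR; this is a sketch, and community-operators is used here only as an illustration:
$ oc patch operatorhub cluster --type=merge \
    -p '{"spec":{"sources":[{"name":"community-operators","disabled":true}]}}'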
Additional resources
4.8.2. Overriding the node selector for catalog source pods
Prerequisites
- A CatalogSource object of source type grpc with spec.image is defined.
- You have access to the cluster as a user with the dedicated-admin role.
Procedure
Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:
grpcPodConfig:
  nodeSelector:
    custom_label: <label>
where <label> is the label for the node selector that you want catalog source pods to use for scheduling.
Additional resources
4.8.3. Overriding the priority class name for catalog source pods
Prerequisites
- A CatalogSource object of source type grpc with spec.image is defined.
- You have access to the cluster as a user with the dedicated-admin role.
Procedure
Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:
grpcPodConfig:
  priorityClassName: <priority_class>
where <priority_class> is one of the following:
- One of the default priority classes provided by Kubernetes: system-cluster-critical or system-node-critical
- An empty set ("") to assign the default priority
- A pre-existing and custom defined priority class
Previously, the only pod scheduling parameter that could be overridden was priorityClassName. This was done by adding the operatorframework.io/priorityclass annotation to the CatalogSource object. For example:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
  annotations:
    operatorframework.io/priorityclass: system-cluster-critical
If a CatalogSource object defines both the annotation and spec.grpcPodConfig.priorityClassName, the annotation takes precedence over the configuration parameter.
Additional resources
4.8.4. Overriding tolerations for catalog source pods
Prerequisites
- A CatalogSource object of source type grpc with spec.image is defined.
- You have access to the cluster as a user with the dedicated-admin role.
Procedure
Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:
grpcPodConfig:
  tolerations:
  - key: "<key_name>"
    operator: "<operator_type>"
    value: "<value>"
    effect: "<effect>"
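For example, a hypothetical toleration that lets catalog source pods schedule onto nodes tainted with catalogs=true:NoSchedule might look like the following:
grpcPodConfig:
  tolerations:
  - key: "catalogs"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"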
4.9. Troubleshooting Operator issues
If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis.
4.9.1. Operator subscription condition types
Subscriptions can report the following condition types:
Condition | Description
---|---
CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy.
InstallPlanMissing | An install plan for a subscription is missing.
InstallPlanPending | An install plan for a subscription is pending installation.
InstallPlanFailed | An install plan for a subscription has failed.
ResolutionFailed | The dependency resolution for a subscription has failed.
Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
Additional resources
4.9.2. Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List Operator subscriptions:
$ oc get subs -n <operator_namespace>
Use the oc describe command to inspect a Subscription resource:
$ oc describe sub <subscription_name> -n <operator_namespace>
In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:
Example output
Name:         cluster-logging
Namespace:    openshift-logging
Labels:       operators.coreos.com/cluster-logging.openshift-logging=
Annotations:  <none>
API Version:  operators.coreos.com/v1alpha1
Kind:         Subscription
# ...
Conditions:
  Last Transition Time:  2019-07-29T13:42:57Z
  Message:               all available catalogsources are healthy
  Reason:                AllCatalogSourcesHealthy
  Status:                False
  Type:                  CatalogSourcesUnhealthy
# ...
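If you prefer a more compact view, you can print only the conditions with a JSONPath query, for example:
$ oc get subscription <subscription_name> -n <operator_namespace> -o jsonpath='{.status.conditions}'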
Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.9.3. Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:
$ oc get catalogsources -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER     AGE
certified-operators   Certified Operators   grpc   Red Hat       55m
community-operators   Community Operators   grpc   Red Hat       55m
example-catalog       Example Catalog       grpc   Example Org   2m25s
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat       55m
redhat-operators      Red Hat Operators     grpc   Red Hat       55m
Use the oc describe command to get more details and status about a catalog source:
$ oc describe catalogsource example-catalog -n openshift-marketplace
Example output
Name:         example-catalog
Namespace:    openshift-marketplace
Labels:       <none>
Annotations:  operatorframework.io/managed-by: marketplace-operator
              target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
API Version:  operators.coreos.com/v1alpha1
Kind:         CatalogSource
# ...
Status:
  Connection State:
    Address:              example-catalog.openshift-marketplace.svc:50051
    Last Connect:         2021-09-09T17:07:35Z
    Last Observed State:  TRANSIENT_FAILURE
  Registry Service:
    Created At:         2021-09-09T17:05:45Z
    Port:               50051
    Protocol:           grpc
    Service Name:       example-catalog
    Service Namespace:  openshift-marketplace
# ...
In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.
List the pods in the namespace where your catalog source was created:
$ oc get pods -n openshift-marketplace
Example output
NAME                                    READY   STATUS             RESTARTS   AGE
certified-operators-cv9nn               1/1     Running            0          36m
community-operators-6v8lp               1/1     Running            0          36m
marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
redhat-marketplace-57p8c                1/1     Running            0          36m
redhat-operators-smxx8                  1/1     Running            0          36m
When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source's index image.
Use the oc describe command to inspect a pod for more detailed information:
$ oc describe pod example-catalog-bwt8z -n openshift-marketplace
Example output
Name:          example-catalog-bwt8z
Namespace:     openshift-marketplace
Priority:      0
Node:          ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
...
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
  Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
  Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
  Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
  Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull
In the preceding example output, the error messages indicate that the catalog source’s index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
Additional resources
- gRPC documentation: States of Connectivity
4.9.4. Querying Operator pod status
You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
List Operators running in the cluster. The output includes Operator version, availability, and up-time information:
$ oc get clusteroperators
List Operator pods running in the Operator’s namespace, plus pod status, restarts, and age:
$ oc get pod -n <operator_namespace>
Output a detailed Operator pod summary:
$ oc describe pod <operator_pod_name> -n <operator_namespace>
4.9.5. Gathering Operator logs
If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have the fully qualified domain names of the control plane machines.
Procedure
List the Operator pods that are running in the Operator’s namespace, plus the pod status, restarts, and age:
$ oc get pods -n <operator_namespace>
Review logs for an Operator pod:
$ oc logs pod/<pod_name> -n <operator_namespace>
If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container:
$ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>
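Alternatively, you can stream logs from all containers in the pod at once by using the --all-containers flag:
$ oc logs pod/<operator_pod_name> --all-containers -n <operator_namespace>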
If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.
List pods on each control plane node:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods
For any Operator pods not showing a Ready status, inspect the pod's status in detail. Replace <operator_pod_id> with the Operator pod's ID listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>
List containers related to an Operator pod:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>
For any Operator container not showing a Ready status, inspect the container's status in detail. Replace <container_id> with a container ID listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>
Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
Note: OpenShift Dedicated 4 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Dedicated API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.