Chapter 3. Administrator tasks


3.1. Adding Operators to a cluster

Cluster administrators can install Operators to an OpenShift Container Platform cluster by subscribing Operators to namespaces with OperatorHub.

3.1.1. Operator installation with OperatorHub

OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.

As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI.

During installation, you must determine the following initial settings for the Operator:

Installation Mode
Choose whether to install the Operator to all namespaces on the cluster (the default) or to a specific namespace.
Update Channel
If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
Approval Strategy

You can choose automatic or manual updates.

If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.

If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
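
For illustration only, a pending update can also be approved from the CLI; a minimal sketch, where the install plan name and namespace are placeholders:

    $ oc get installplan -n <namespace>

    $ oc patch installplan <install_plan_name> -n <namespace> \
        --type merge --patch '{"spec":{"approved":true}}'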

3.1.2. Installing from OperatorHub using the web console

You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.

Procedure

  1. Navigate in the web console to the Operators → OperatorHub page.
  2. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.

    You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.

  3. Select the Operator to display additional information.

    Note

    Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.

  4. Read the information about the Operator and click Install.
  5. On the Install Operator page:

    1. Select one of the following:

      • All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
      • A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
    2. If you selected A specific namespace on the cluster, choose the target namespace from the list.
    3. Select an Update Channel (if more than one is available).
    4. Select Automatic or Manual approval strategy, as described earlier.
  6. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.

    1. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.

      After approving on the Install Plan page, the subscription upgrade status moves to Up to date.

    2. If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
  7. After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.

    Note

    For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

    If it does not:

    1. Check the logs in any pods in the openshift-operators project (or other relevant namespace if the A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.

3.1.3. Installing from OperatorHub using the CLI

Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
  • Install the OpenShift CLI (oc) on your local system.

Procedure

  1. View the list of Operators available to the cluster from OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace

    Example output

    NAME                               CATALOG               AGE
    3scale-operator                    Red Hat Operators     91m
    advanced-cluster-management        Red Hat Operators     91m
    amq7-cert-manager                  Red Hat Operators     91m
    ...
    couchbase-enterprise-certified     Certified Operators   91m
    crunchy-postgres-operator          Certified Operators   91m
    mongodb-enterprise                 Certified Operators   91m
    ...
    etcd                               Community Operators   91m
    jaeger                             Community Operators   91m
    kubefed                            Community Operators   91m
    ...

    Note the catalog for your desired Operator.

  2. Inspect your desired Operator to verify its supported install modes and available channels:

    $ oc describe packagemanifests <operator_name> -n openshift-marketplace
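
    For example, assuming the jaeger Operator is available in the catalog, a sketch that extracts only the default channel and the list of channels with jsonpath:

    $ oc get packagemanifests jaeger -n openshift-marketplace \
        -o jsonpath='{.status.defaultChannel}{"\n"}'

    $ oc get packagemanifests jaeger -n openshift-marketplace \
        -o jsonpath='{range .status.channels[*]}{.name}{"\n"}{end}'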
  3. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

    The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.

    However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.

    Note

    The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

      Example OperatorGroup object

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace>
      spec:
        targetNamespaces:
        - <namespace>

    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
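
      Optionally verify that the Operator group was created; for example:

      $ oc get operatorgroups -n <namespace>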
  4. Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

    Example Subscription object

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: <subscription_name>
      namespace: openshift-operators 1
    spec:
      channel: <channel_name> 2
      name: <operator_name> 3
      source: redhat-operators 4
      sourceNamespace: openshift-marketplace 5

    1
    For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
    2
    Name of the channel to subscribe to.
    3
    Name of the Operator to subscribe to.
    4
    Name of the catalog source that provides the Operator.
    5
    Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
  5. Create the Subscription object:

    $ oc apply -f sub.yaml

    At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
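
    For example, you can confirm both objects from the CLI (adjust the namespace for SingleNamespace install mode usage):

    $ oc get subscription,csv -n openshift-operators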

3.1.4. Installing a specific version of an Operator

You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with Operator installation permissions
  • OpenShift CLI (oc) installed

Procedure

  1. Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.

    For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0:

    Subscription with a specific starting Operator version

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: quay-operator
      namespace: quay
    spec:
      channel: quay-v3.4
      installPlanApproval: Manual 1
      name: quay-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      startingCSV: quay-operator.v3.4.0 2

    1
    Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This strategy prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
    2
    Set a specific version of an Operator CSV.
  2. Create the Subscription object:

    $ oc apply -f sub.yaml
  3. Manually approve the pending install plan to complete the Operator installation.
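
    After approval completes, you can confirm that the pinned version was installed; a sketch:

    $ oc get subscription quay-operator -n quay \
        -o jsonpath='{.status.installedCSV}{"\n"}'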

3.2. Upgrading installed Operators

As a cluster administrator, you can upgrade Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.

3.2.1. Changing the update channel for an Operator

The Subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Operator to start tracking and receiving updates from a newer channel, you can change the update channel in the Subscription.

The names of update channels in a Subscription can differ between Operators, but should follow a common convention within a given Operator. Some naming schemes include:

  • 4.3, 4.4, 4.5
  • stable-4.3, stable-4.4, stable-4.5
  • alpha, beta, stable, latest

Alternatively, the channel names might follow the version numbers of the application provided by the Operator.

Note

Installed Operators cannot change to a channel that is older than the current channel.

If the approval strategy in the subscription is set to Automatic, the upgrade process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending upgrades.
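
Although the following procedure uses the web console, the channel can also be changed from the CLI by patching the Subscription object; a minimal sketch with placeholder names:

    $ oc patch subscription <subscription_name> -n <namespace> \
        --type merge --patch '{"spec":{"channel":"<new_channel_name>"}}'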

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Click the name of the Operator you want to change the update channel for.
  3. Click the Subscription tab.
  4. Click the name of the update channel under Channel.
  5. Click the newer update channel that you want to change to, then click Save.
  6. For Subscriptions with an Automatic approval strategy, the upgrade begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date.

    For Subscriptions with a Manual approval strategy, you can manually approve the upgrade from the Subscription tab.

3.2.2. Manually approving a pending Operator upgrade

If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Operators that have a pending upgrade display a status with Upgrade available. Click the name of the Operator you want to upgrade.
  3. Click the Subscription tab. Any upgrades requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval.
  4. Click 1 requires approval, then click Preview Install Plan.
  5. Review the resources that are listed as available for upgrade. When satisfied, click Approve.
  6. Navigate back to the Operators → Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date.
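
    You can also check the state from the CLI; for example, the subscription status typically reports UpgradePending while an approval is outstanding and AtLatestKnown once the upgrade completes:

    $ oc get subscription <subscription_name> -n <namespace> \
        -o jsonpath='{.status.state}{"\n"}'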

3.3. Deleting Operators from a cluster

The following describes how to delete Operators that were previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.

3.3.1. Deleting Operators from a cluster using the web console

Cluster administrators can delete installed Operators from a selected namespace by using the web console.

Prerequisites

  • Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. From the Operators → Installed Operators page, scroll or type a keyword into the Filter by name field to find the Operator you want. Then, click it.
  2. On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu.

    An Uninstall Operator? dialog box is displayed, reminding you that:

    Removing the Operator will not remove any of its custom resource definitions or managed resources. If your Operator has deployed applications on the cluster or configured off-cluster resources, these will continue to run and need to be cleaned up manually.

    The Operator, any Operator deployments, and pods are removed by this action. Any resources managed by the Operator, including CRDs and CRs, are not removed. The web console enables dashboards and navigation items for some Operators. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.

  3. Select Uninstall. This Operator stops running and no longer receives updates.

3.3.2. Deleting Operators from a cluster using the CLI

Cluster administrators can delete installed Operators from a selected namespace by using the CLI.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • oc command installed on workstation.

Procedure

  1. Check the current version of the subscribed Operator (for example, jaeger) in the currentCSV field:

    $ oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV

    Example output

      currentCSV: jaeger-operator.v1.8.2

  2. Delete the subscription (for example, jaeger):

    $ oc delete subscription jaeger -n openshift-operators

    Example output

    subscription.operators.coreos.com "jaeger" deleted

  3. Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:

    $ oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators

    Example output

    clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted
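
  4. Optional: The previous steps do not remove custom resource definitions (CRDs) or custom resources owned by the Operator. If you are certain that nothing else depends on them, one way to find and remove leftovers, using jaeger as the example:

    $ oc get crds | grep jaeger

    $ oc delete crd <crd_name>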

3.4. Configuring proxy support in Operator Lifecycle Manager

If a global proxy is configured on the OpenShift Container Platform cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate.

3.4.1. Overriding proxy settings of an Operator

If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Cluster administrators can also override these proxy settings by configuring the subscription of an Operator.

Important

Operators must handle setting environment variables for proxy settings in the pods for any managed Operands.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate in the web console to the Operators → OperatorHub page.
  2. Select the Operator and click Install.
  3. On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section:

    • HTTP_PROXY
    • HTTPS_PROXY
    • NO_PROXY

    For example:

    Subscription object with proxy setting overrides

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: etcd-config-test
      namespace: openshift-operators
    spec:
      config:
        env:
        - name: HTTP_PROXY
          value: test_http
        - name: HTTPS_PROXY
          value: test_https
        - name: NO_PROXY
          value: test
      channel: clusterwide-alpha
      installPlanApproval: Automatic
      name: etcd
      source: community-operators
      sourceNamespace: openshift-marketplace
      startingCSV: etcdoperator.v0.9.4-clusterwide

    Note

    These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings.

    OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator.
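
    For example, a minimal sketch of a spec.config section that clears all three values:

    config:
      env:
      - name: HTTP_PROXY
        value: ""
      - name: HTTPS_PROXY
        value: ""
      - name: NO_PROXY
        value: ""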

  4. Click Install to make the Operator available to the selected namespaces.
  5. After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI:

    $ oc get deployment -n openshift-operators \
        etcd-operator -o yaml \
        | grep -i "PROXY" -A 2

    Example output

            - name: HTTP_PROXY
              value: test_http
            - name: HTTPS_PROXY
              value: test_https
            - name: NO_PROXY
              value: test
            image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c
    ...

3.4.2. Injecting a custom CA certificate

When a cluster administrator adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • Custom CA certificate added to the cluster using a config map.
  • Desired Operator installed and running on OLM.

Procedure

  1. Create an empty config map in the namespace where the subscription for your Operator exists and include the following label:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: trusted-ca 1
      labels:
        config.openshift.io/inject-trusted-cabundle: "true" 2
    1
    Name of the config map.
    2
    Requests the Cluster Network Operator to inject the merged bundle.

    After creating this config map, it is immediately populated with the certificate contents of the merged bundle.
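
    For example, you can confirm that the bundle was injected by inspecting the config map data; the backslash escape is required because of the dot in the key name:

    $ oc get configmap trusted-ca -n <namespace> \
        -o jsonpath='{.data.ca-bundle\.crt}' | head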

  2. Update the Subscription object to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: my-operator
    spec:
      package: etcd
      channel: alpha
      config: 1
        selector:
          matchLabels:
            <labels_for_pods> 2
        volumes: 3
        - name: trusted-ca
          configMap:
            name: trusted-ca
            items:
              - key: ca-bundle.crt 4
                path: tls-ca-bundle.pem 5
        volumeMounts: 6
        - name: trusted-ca
          mountPath: /etc/pki/ca-trust/extracted/pem
          readOnly: true
    1
    Add a config section if it does not exist.
    2
    Specify labels to match pods that are owned by the Operator.
    3
    Create a trusted-ca volume.
    4
    ca-bundle.crt is required as the config map key.
    5
    tls-ca-bundle.pem is required as the config map path.
    6
    Create a trusted-ca volume mount.

3.5. Viewing Operator status

Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the health of their Operators.

3.5.1. Operator subscription condition types

Subscriptions can report the following condition types:

Table 3.1. Subscription condition types

Condition                 Description

CatalogSourcesUnhealthy   Some or all of the catalog sources to be used in resolution are unhealthy.

InstallPlanMissing        An install plan for a subscription is missing.

InstallPlanPending        An install plan for a subscription is pending installation.

InstallPlanFailed         An install plan for a subscription has failed.

Note

Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

3.5.2. Viewing Operator subscription status using the CLI

You can view Operator subscription status using the CLI.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List Operator subscriptions:

    $ oc get subs -n <operator_namespace>
  2. Use the oc describe command to inspect a Subscription resource:

    $ oc describe sub <subscription_name> -n <operator_namespace>
  3. In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:

    Example output

    Conditions:
       Last Transition Time:  2019-07-29T13:42:57Z
       Message:               all available catalogsources are healthy
       Reason:                AllCatalogSourcesHealthy
       Status:                False
       Type:                  CatalogSourcesUnhealthy

Note

Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

3.6. Allowing non-cluster administrators to install Operators

Operators can require wide privileges to run, and the required privileges can change between versions. Operator Lifecycle Manager (OLM) runs with cluster-admin privileges. By default, Operator authors can specify any set of permissions in the cluster service version (CSV) and OLM will consequently grant it to the Operator.

Cluster administrators should take measures to ensure that an Operator cannot achieve cluster-scoped privileges and that users cannot escalate privileges using OLM. One method for locking this down is for cluster administrators to audit Operators before they are added to the cluster. Cluster administrators are also provided tools for determining and constraining which actions are allowed during an Operator installation or upgrade using service accounts.

By associating an Operator group with a service account that has a set of privileges granted to it, cluster administrators can set policy on Operators to ensure they operate only within predetermined boundaries using RBAC rules. The Operator is unable to do anything that is not explicitly permitted by those rules.

This self-sufficient, limited scope installation of Operators by non-cluster administrators means that more of the Operator Framework tools can safely be made available to more users, providing a richer experience for building applications with Operators.

3.6.1. Understanding Operator installation policy

Using Operator Lifecycle Manager (OLM), cluster administrators can choose to specify a service account for an Operator group so that all Operators associated with the group are deployed and run against the privileges granted to the service account.

The APIService and CustomResourceDefinition resources are always created by OLM using the cluster-admin role. A service account associated with an Operator group should never be granted privileges to write these resources.

If the specified service account does not have adequate permissions for an Operator that is being installed or upgraded, useful and contextual information is added to the status of the respective resource(s) so that it is easy for the cluster administrator to troubleshoot and resolve the issue.

Any Operator tied to this Operator group is now confined to the permissions granted to the specified service account. If the Operator asks for permissions that are outside the scope of the service account, the install fails with appropriate errors.

3.6.1.1. Installation scenarios

When determining whether an Operator can be installed or upgraded on a cluster, Operator Lifecycle Manager (OLM) considers the following scenarios:

  • A cluster administrator creates a new Operator group and specifies a service account. All Operator(s) associated with this Operator group are installed and run against the privileges granted to the service account.
  • A cluster administrator creates a new Operator group and does not specify any service account. OpenShift Container Platform maintains backward compatibility, so the default behavior remains and Operator installs and upgrades are permitted.
  • For existing Operator groups that do not specify a service account, the default behavior remains and Operator installs and upgrades are permitted.
  • A cluster administrator updates an existing Operator group and specifies a service account. OLM allows the existing Operator to continue to run with its current privileges. When such an existing Operator goes through an upgrade, it is reinstalled and run against the privileges granted to the service account, like any new Operator.
  • A service account specified by an Operator group changes by adding or removing permissions, or the existing service account is swapped with a new one. When existing Operators go through an upgrade, they are reinstalled and run against the privileges granted to the updated service account, like any new Operator.
  • A cluster administrator removes the service account from an Operator group. The default behavior remains and Operator installs and upgrades are permitted.

3.6.1.2. Installation workflow

When an Operator group is tied to a service account and an Operator is installed or upgraded, Operator Lifecycle Manager (OLM) uses the following workflow:

  1. The given Subscription object is picked up by OLM.
  2. OLM fetches the Operator group tied to this subscription.
  3. OLM determines that the Operator group has a service account specified.
  4. OLM creates a client scoped to the service account and uses the scoped client to install the Operator. This ensures that any permission requested by the Operator is always confined to that of the service account in the Operator group.
  5. OLM creates a new service account with the set of permissions specified in the CSV and assigns it to the Operator. The Operator runs as the assigned service account.

3.6.2. Scoping Operator installations

To provide scoping rules to Operator installations and upgrades on Operator Lifecycle Manager (OLM), associate a service account with an Operator group.

Using this example, a cluster administrator can confine a set of Operators to a designated namespace.

Procedure

  1. Create a new namespace:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: scoped
    EOF
  2. Allocate permissions that you want the Operator(s) to be confined to. This involves creating a new service account, relevant role(s), and role binding(s).

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: scoped
      namespace: scoped
    EOF

    The following example grants the service account permissions to do anything in the designated namespace for simplicity. In a production environment, you should create a more fine-grained set of permissions:

    $ cat <<EOF | oc create -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: scoped
      namespace: scoped
    rules:
    - apiGroups: ["*"]
      resources: ["*"]
      verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: scoped-bindings
      namespace: scoped
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: scoped
    subjects:
    - kind: ServiceAccount
      name: scoped
      namespace: scoped
    EOF
  3. Create an OperatorGroup object in the designated namespace. This Operator group targets the designated namespace to ensure that its tenancy is confined to it.

    In addition, Operator groups allow a user to specify a service account. Specify the service account created in the previous step:

    $ cat <<EOF | oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: scoped
      namespace: scoped
    spec:
      serviceAccountName: scoped
      targetNamespaces:
      - scoped
    EOF

    Any Operator installed in the designated namespace is tied to this Operator group and therefore to the service account specified.

  4. Create a Subscription object in the designated namespace to install an Operator:

    $ cat <<EOF | oc create -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: etcd
      namespace: scoped
    spec:
      channel: singlenamespace-alpha
      name: etcd
      source: <catalog_source_name> 1
      sourceNamespace: <catalog_source_namespace> 2
    EOF
    1
    Specify a catalog source that already exists in the designated namespace or one that is in the global catalog namespace.
    2
    Specify a namespace where the catalog source was created.

    Any Operator tied to this Operator group is confined to the permissions granted to the specified service account. If the Operator requests permissions that are outside the scope of the service account, the installation fails with relevant errors.
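
    You can watch the installation resolve or fail in the designated namespace; for example:

    $ oc get installplan,csv -n scoped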

3.6.2.1. Fine-grained permissions

Operator Lifecycle Manager (OLM) uses the service account specified in an Operator group to create or update the following resources related to the Operator being installed:

  • ClusterServiceVersion
  • Subscription
  • Secret
  • ServiceAccount
  • Service
  • ClusterRole and ClusterRoleBinding
  • Role and RoleBinding

In order to confine Operators to a designated namespace, cluster administrators can start by granting the following permissions to the service account:

Note

The following role is a generic example and additional rules might be required based on the specific Operator.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
rules:
- apiGroups: ["operators.coreos.com"]
  resources: ["subscriptions", "clusterserviceversions"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "serviceaccounts"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["apps"] 1
  resources: ["deployments"]
  verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
- apiGroups: [""] 2
  resources: ["pods"]
  verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
1 2
Add permissions to create other resources, such as the deployments and pods shown here.

In addition, if any Operator specifies a pull secret, the following permissions must also be added:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole 1
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "update", "patch"]
1
Required to get the secret from the OLM namespace.

3.6.3. Troubleshooting permission failures

If an Operator installation fails due to lack of permissions, identify the errors using the following procedure.

Procedure

  1. Review the Subscription object. Its status has an object reference installPlanRef that points to the InstallPlan object that attempted to create the necessary [Cluster]Role[Binding] object(s) for the Operator:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: etcd
      namespace: scoped
    status:
      installPlanRef:
        apiVersion: operators.coreos.com/v1alpha1
        kind: InstallPlan
        name: install-4plp8
        namespace: scoped
        resourceVersion: "117359"
        uid: 2c1df80e-afea-11e9-bce3-5254009c9c23
  2. Check the status of the InstallPlan object for any errors:

    apiVersion: operators.coreos.com/v1alpha1
    kind: InstallPlan
    status:
      conditions:
      - lastTransitionTime: "2019-07-26T21:13:10Z"
        lastUpdateTime: "2019-07-26T21:13:10Z"
        message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io
          is forbidden: User "system:serviceaccount:scoped:scoped" cannot create resource
          "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope'
        reason: InstallComponentFailed
        status: "False"
        type: Installed
      phase: Failed

    The error message tells you:

    • The type of resource it failed to create, including the API group of the resource. In this case, it was clusterroles in the rbac.authorization.k8s.io group.
    • The name of the resource.
    • The type of error: is forbidden tells you that the user does not have enough permission to do the operation.
    • The name of the user who attempted to create or update the resource. In this case, it refers to the service account specified in the Operator group.
    • The scope of the operation: cluster scope or not.

      The user can add the missing permission to the service account and then iterate.

      Note

      Operator Lifecycle Manager (OLM) does not currently provide the complete list of errors on the first try.
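
      For example, one way to resolve the example error above is to grant the service account the missing cluster-scoped RBAC permissions; this is a sketch, and the name scoped-rbac-creator is a placeholder:

      $ cat <<EOF | oc create -f -
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: scoped-rbac-creator
      rules:
      - apiGroups: ["rbac.authorization.k8s.io"]
        resources: ["clusterroles", "clusterrolebindings"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: scoped-rbac-creator
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: scoped-rbac-creator
      subjects:
      - kind: ServiceAccount
        name: scoped
        namespace: scoped
      EOF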

3.7. Managing custom catalogs

This guide describes how to work with custom catalogs packaged using either the Package Manifest Format or Bundle Format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

3.7.1. Custom catalogs using Package Manifest Format

3.7.1.1. Understanding Operator catalog images

Operator Lifecycle Manager (OLM) always installs Operators from the latest version of an Operator catalog. For OpenShift Container Platform 4.5, Red Hat-provided Operators are distributed via Quay App Registry catalogs from quay.io.

Table 3.2. Red Hat-provided App Registry catalogs

Catalog               Description

redhat-operators      Public catalog for Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.

certified-operators   Public catalog for products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.

community-operators   Public catalog for software maintained by relevant representatives in the operator-framework/community-operators GitHub repository. No official support.

As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. This behavior can cause problems maintaining reproducible installs over time. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs from quay.io directly.

Using the oc adm catalog build command, cluster administrators can create an Operator catalog image. An Operator catalog image is:

  • a point-in-time export of an App Registry type catalog’s content.
  • the result of converting an App Registry catalog to a container image type catalog.
  • an immutable artifact.

Creating an Operator catalog image provides a simple way to use this content without incurring the aforementioned issues.

3.7.1.2. Building an Operator catalog image

Cluster administrators can build a custom Operator catalog image based on the Package Manifest Format to be used by Operator Lifecycle Manager (OLM). The catalog image can be pushed to a container image registry that supports Docker v2-2. For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.

Important

The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.

For this example, the procedure assumes use of a mirror registry that has access to both your network and the Internet.

Note

Only the Linux version of the oc client can be used for this procedure, because the Windows and macOS versions do not provide the oc adm catalog build command.

Prerequisites

  • Workstation with unrestricted network access
  • oc version 4.3.5+ Linux client
  • podman version 1.4.4+
  • Access to mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
  • If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials:

    $ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" \
        -XPOST https://quay.io/cnr/api/v1/users/login -d '
        {
            "user": {
                "username": "'"<quay_username>"'",
                "password": "'"<quay_password>"'"
            }
        }' | jq -r '.token')

Procedure

  1. On the workstation with unrestricted network access, authenticate with the target mirror registry:

    $ podman login <registry_host_name>

    Also authenticate with registry.redhat.io so that the base image can be pulled during the build:

    $ podman login registry.redhat.io
  2. Build a catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry:

    $ oc adm catalog build \
        --appregistry-org redhat-operators \1
        --from=registry.redhat.io/openshift4/ose-operator-registry:v4.5 \2
        --filter-by-os="linux/amd64" \3
        --to=<registry_host_name>:<port>/olm/redhat-operators:v1 \4
        [-a ${REG_CREDS}] \5
        [--insecure] \6
        [--auth-token "${AUTH_TOKEN}"] 7
    1
    Organization (namespace) to pull from an App Registry instance.
    2
    Set --from to the ose-operator-registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version.
    3
    Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    4
    Name your catalog image and include a tag, for example, v1.
    5
    Optional: If required, specify the location of your registry credentials file.
    6
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    7
    Optional: If other application registry catalogs are used that are not public, specify a Quay authentication token.

    Example output

    INFO[0013] loading Bundles                               dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
    ...
    Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v1

    Sometimes invalid manifests are accidentally introduced into catalogs provided by Red Hat; when this happens, you might see some errors:

    Example output with errors

    ...
    INFO[0014] directory                                     dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 file=4.2 load=package
    W1114 19:42:37.876180   34665 builder.go:141] error building database: error loading package into db: fuse-camel-k-operator.v7.5.0 specifies replacement that couldn't be found
    Uploading ... 244.9kB/s

    These errors are usually non-fatal, and if the Operator package mentioned does not contain an Operator you plan to install or a dependency of one, then they can be ignored.

3.7.1.3. Mirroring an Operator catalog image

Cluster administrators can mirror their catalog’s content into a registry and use a CatalogSource to load the content onto an OpenShift Container Platform cluster. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry.

Prerequisites

  • Workstation with unrestricted network access
  • A custom Operator catalog image pushed to a supported registry
  • oc version 4.3.5+
  • podman version 1.4.4+
  • Access to mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json

Procedure

  1. The oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring. You can choose to either:

    • Allow the default behavior of the command to automatically mirror all of the image content to your mirror registry after generating manifests, or
    • Add the --manifests-only flag to only generate the manifests required for mirroring, but do not actually mirror the image content to a registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you only require a subset of the content. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step.

    On your workstation with unrestricted network access, run the following command:

    $ oc adm catalog mirror \
        <registry_host_name>:<port>/olm/redhat-operators:v1 \1
        <registry_host_name>:<port> \
        [-a ${REG_CREDS}] \2
        [--insecure] \3
        --filter-by-os='.*' \4
        [--manifests-only] 5
    1
    Specify your Operator catalog image.
    2
    Optional: If required, specify the location of your registry credentials file.
    3
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    4
    This flag is currently required due to a known issue with multiple architecture support.
    5
    Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry.
    Warning

    If the --filter-by-os flag remains unset or set to any value other than .*, the command filters out different architectures, which changes the digest of the manifest list, also known as a multi-arch image. The incorrect digest causes deployments of those images and Operators on disconnected clusters to fail. For more information, see BZ#1890951.

    Example output

    using database path mapping: /:/tmp/190214037
    wrote database to /tmp/190214037
    using database at: /tmp/190214037/bundles.db 1
    ...

    1
    Temporary database generated by the command.

    After running the command, a <image_name>-manifests/ directory is created in the current directory, containing the following files:

    • The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry.
    • The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.
  2. If you used the --manifests-only flag in the previous step and want to mirror only a subset of the content:

    1. Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them:

      1. Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps inform how you will later edit your mapping.txt file.

        For example, to retrieve a list of images that are similar to the string clusterlogging.4.3:

        $ echo "select * from related_image \
            where operatorbundle_name like 'clusterlogging.4.3%';" \
            | sqlite3 -line /tmp/190214037/bundles.db 1
        1
        Refer to the previous output of the oc adm catalog mirror command to find the path of the database file.

        Example output

        image = registry.redhat.io/openshift4/ose-logging-kibana5@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61
        operatorbundle_name = clusterlogging.4.3.33-202008111029.p0
        
        image = registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506
        operatorbundle_name = clusterlogging.4.3.33-202008111029.p0
        ...

      2. Use the results from the previous step to edit the mapping.txt file to only include the subset of images you want to mirror.

        For example, you can use the image values from the previous example output to find that the following matching lines exist in your mapping.txt file:

        Matching image mappings in mapping.txt

        registry.redhat.io/openshift4/ose-logging-kibana5@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61=<registry_host_name>:<port>/openshift4-ose-logging-kibana5:a767c8f0
        registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506=<registry_host_name>:<port>/openshift4-ose-oauth-proxy:3754ea2b

        In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above two lines.
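
        For example, one way to produce that subset with grep, assuming these two images are the only ones you want to mirror:

        $ grep -E 'ose-logging-kibana5|ose-oauth-proxy' \
            ./redhat-operators-manifests/mapping.txt > mapping.tmp && \
            mv mapping.tmp ./redhat-operators-manifests/mapping.txt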

    2. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command:

      $ oc image mirror \
          [-a ${REG_CREDS}] \
          --filter-by-os='.*' \
          -f ./redhat-operators-manifests/mapping.txt
      Warning

      If the --filter-by-os flag remains unset or set to any value other than .*, the command filters out different architectures, which changes the digest of the manifest list, also known as a multi-arch image. The incorrect digest causes deployments of those images and Operators on disconnected clusters to fail.

  3. Apply the ImageContentSourcePolicy object:

    $ oc apply -f ./redhat-operators-manifests/imageContentSourcePolicy.yaml
  4. Create a CatalogSource object that references your catalog image.

    1. Modify the following to your specifications and save it as a catalogsource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: <registry_host_name>:<port>/olm/redhat-operators:v1 1
        displayName: My Operator Catalog
        publisher: grpc
      1
      Specify your custom Operator catalog image.
    2. Use the file to create the CatalogSource object:

      $ oc create -f catalogsource.yaml
  5. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the catalog source:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE   PUBLISHER   AGE
      my-operator-catalog   My Operator Catalog   grpc               5s

    3. Check the package manifest:

      $ oc get packagemanifest -n openshift-marketplace

      Example output

      NAME    CATALOG              AGE
      etcd    My Operator Catalog  34s

You can now install the Operators from the OperatorHub page on your restricted network OpenShift Container Platform cluster web console.

3.7.1.4. Updating an Operator catalog image

After a cluster administrator has configured OperatorHub to use custom Operator catalog images, administrators can keep their OpenShift Container Platform cluster up to date with the latest Operators by capturing updates made to App Registry catalogs provided by Red Hat. This is done by building and pushing a new Operator catalog image, then replacing the existing spec.image parameter in the CatalogSource object with the new image digest.

For this example, the procedure assumes a custom redhat-operators catalog image is already configured for use with OperatorHub.

Note

Only the Linux version of the oc client can be used for this procedure, because the Windows and macOS versions do not provide the oc adm catalog build command.

Prerequisites

  • Workstation with unrestricted network access
  • oc version 4.3.5+ Linux client
  • podman version 1.4.4+
  • Access to mirror registry that supports Docker v2-2
  • OperatorHub configured to use custom catalog images
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
  • If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials:

    $ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" \
        -XPOST https://quay.io/cnr/api/v1/users/login -d '
        {
            "user": {
                "username": "'"<quay_username>"'",
                "password": "'"<quay_password>"'"
            }
        }' | jq -r '.token')

Procedure

  1. On the workstation with unrestricted network access, authenticate with the target mirror registry:

    $ podman login <registry_host_name>

    Also authenticate with registry.redhat.io so that the base image can be pulled during the build:

    $ podman login registry.redhat.io
  2. Build a new catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry:

    $ oc adm catalog build \
        --appregistry-org redhat-operators \1
        --from=registry.redhat.io/openshift4/ose-operator-registry:v4.5 \2
        --filter-by-os="linux/amd64" \3
        --to=<registry_host_name>:<port>/olm/redhat-operators:v2 \4
        [-a ${REG_CREDS}] \5
        [--insecure] \6
        [--auth-token "${AUTH_TOKEN}"] 7
    1
    Organization (namespace) to pull from an App Registry instance.
    2
    Set --from to the ose-operator-registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version.
    3
    Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    4
    Name your catalog image and include a tag, for example, v2 because it is the updated catalog.
    5
    Optional: If required, specify the location of your registry credentials file.
    6
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    7
    Optional: If other application registry catalogs are used that are not public, specify a Quay authentication token.

    Example output

    INFO[0013] loading Bundles                               dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
    ...
    Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v2

  3. Mirror the contents of your catalog to your target registry. The following oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring and mirrors the images to your registry:

    $ oc adm catalog mirror \
        <registry_host_name>:<port>/olm/redhat-operators:v2 \1
        <registry_host_name>:<port> \
        [-a ${REG_CREDS}] \2
        [--insecure] \3
        --filter-by-os='.*' 4
    1
    Specify your new Operator catalog image.
    2
    Optional: If required, specify the location of your registry credentials file.
    3
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    4
    This flag is currently required due to a known issue with multiple architecture support. If the --filter-by-os flag remains unset or set to any value other than .*, the command filters out different architectures, which changes the digest of the manifest list, also known as a multi-arch image. The incorrect digest causes deployments of those images and Operators on disconnected clusters to fail. For more information, see BZ#1890951.
  4. Apply the newly generated manifests:

    $ oc apply -f ./redhat-operators-manifests
    Important

    It is possible that you do not need to apply the imageContentSourcePolicy.yaml manifest. Complete a diff of the files to determine if changes are necessary.

  5. Update your CatalogSource object that references your catalog image.

    1. If you have your original catalogsource.yaml file for this CatalogSource object:

      1. Edit your catalogsource.yaml file to reference your new catalog image in the spec.image field:

        apiVersion: operators.coreos.com/v1alpha1
        kind: CatalogSource
        metadata:
          name: my-operator-catalog
          namespace: openshift-marketplace
        spec:
          sourceType: grpc
          image: <registry_host_name>:<port>/olm/redhat-operators:v2 1
          displayName: My Operator Catalog
          publisher: grpc
        1
        Specify your new Operator catalog image.
      2. Use the updated file to replace the CatalogSource object:

        $ oc replace -f catalogsource.yaml
    2. Alternatively, edit the catalog source using the following command and reference your new catalog image in the spec.image parameter:

      $ oc edit catalogsource <catalog_source_name> -n openshift-marketplace

Updated Operators should now be available from the OperatorHub page on your OpenShift Container Platform cluster.

3.7.1.5. Testing an Operator catalog image

You can validate Operator catalog image content by running it as a container and querying its gRPC API. To further test the image, you can then resolve an OLM subscription by referencing the image in a CatalogSource object. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry.

Prerequisites

  • A custom Operator catalog image pushed to a supported registry
  • podman version 1.4.4+
  • oc version 4.3.5+
  • Access to a mirror registry that supports Docker v2-2
  • grpcurl

Procedure

  1. Pull the Operator catalog image:

    $ podman pull <registry_host_name>:<port>/olm/redhat-operators:v1
  2. Run the image:

    $ podman run -p 50051:50051 \
        -it <registry_host_name>:<port>/olm/redhat-operators:v1
  3. Query the running image for available packages using grpcurl:

    $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages

    Example output

    {
      "name": "3scale-operator"
    }
    {
      "name": "amq-broker"
    }
    {
      "name": "amq-online"
    }

  4. Get the latest Operator bundle in a channel:

    $ grpcurl -plaintext -d '{"pkgName":"kiali-ossm","channelName":"stable"}' localhost:50051 api.Registry/GetBundleForChannel

    Example output

    {
      "csvName": "kiali-operator.v1.0.7",
      "packageName": "kiali-ossm",
      "channelName": "stable",
    ...

  5. Get the digest of the image:

    $ podman inspect \
        --format='{{index .RepoDigests 0}}' \
        <registry_host_name>:<port>/olm/redhat-operators:v1

    Example output

    example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3

  6. Assuming an Operator group exists in namespace my-ns that supports your Operator and its dependencies, create a CatalogSource object using the image digest. For example:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: custom-redhat-operators
      namespace: my-ns
    spec:
      sourceType: grpc
      image: example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3
      displayName: Red Hat Operators
  7. Create a subscription that resolves the latest available servicemeshoperator and its dependencies from your catalog image:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: servicemeshoperator
      namespace: my-ns
    spec:
      source: custom-redhat-operators
      sourceNamespace: my-ns
      name: servicemeshoperator
      channel: "1.0"
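
    After OLM processes the subscription, you can verify that it resolved to a cluster service version; a quick check, assuming the objects above were created in my-ns:

    $ oc get subscription servicemeshoperator -n my-ns
    $ oc get csv -n my-ns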

3.7.2. Custom catalogs using Bundle Format

3.7.2.1. opm CLI

The new opm CLI tool is introduced alongside the new Bundle Format. This tool allows you to create and maintain catalogs of Operators from a list of bundles; such a list, called an index, is the equivalent of a "repository" of Operators. The result is a container image, called an index image, which can be stored in a container registry and then installed on a cluster.

An index contains a database of pointers to Operator manifest content that can be queried via an included API that is served when the container image is run. On OpenShift Container Platform, OLM can use the index image as a catalog by referencing it in a CatalogSource, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster.

3.7.2.2. Installing opm

You can install the opm CLI tool on your workstation.

Prerequisites

  • podman version 1.4.4+

Procedure

  1. Set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
  2. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  3. Extract the opm binary from the Operator Registry image and copy it to your local file system:

    $ oc image extract registry.redhat.io/openshift4/ose-operator-registry:v4.5 \
        -a ${REG_CREDS} \1
        --path /usr/bin/opm:. \
        --confirm
    1
    Specify the location of your registry credentials file.
  4. Make the binary executable:

    $ chmod +x ./opm
  5. Place the file anywhere in your PATH, such as /usr/local/bin/:

    $ sudo mv ./opm /usr/local/bin/
  6. Verify that the client was installed correctly:

    $ opm version

    Example output

    Version: version.Version{OpmVersion:"1.12.3", GitCommit:"", BuildDate:"2020-07-01T23:18:58Z", GoOs:"linux", GoArch:"amd64"}

3.7.2.3. Creating an index image

You can create an index image using the opm CLI.

Prerequisites

  • opm version 1.12.3+
  • podman version 1.4.4+
  • A bundle image built and pushed to a registry.

Procedure

  1. Start a new index:

    $ opm index add \
        --bundles quay.io/<namespace>/test-operator:v0.1.0 \1
        --tag quay.io/<namespace>/test-catalog:latest \2
        [--binary-image <registry_base_image>] 3
    1
    Comma-separated list of bundle images to add to the index.
    2
    The image tag that you want the index image to have.
    3
    Optional: An alternative registry base image to use for serving the catalog.
  2. Push the index image to a registry:

    $ podman push quay.io/<namespace>/test-catalog:latest

3.7.2.4. Creating a catalog from an index image

You can create a catalog from an index image and apply it to an OpenShift Container Platform cluster.

Prerequisites

  • An index image built and pushed to a registry.

Procedure

  1. Apply a CatalogSource object to your cluster that references your index image:

    $ cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: test-catalog
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: quay.io/<namespace>/test-catalog:latest 1
      displayName: Test Catalog
      updateStrategy:
        registryPoll: 2
          interval: 30m
    EOF
    1
    Specify your index image.
    2
    Set registryPoll so that the catalog source automatically checks for a new version of the index image at the specified interval, keeping the catalog up to date.
  2. Verify using the OpenShift Container Platform web console or CLI that the catalog loaded successfully and that packages are available. For example, using the CLI:

    1. Check the pods:

      $ oc get pods -n openshift-marketplace
    2. Check the catalog source:

      $ oc get catalogsource -n openshift-marketplace
    3. Check the package manifest:

      $ oc get packagemanifests -n openshift-marketplace
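
    If grpcurl is installed on your workstation, you can also query the catalog pod's gRPC API directly; a sketch, assuming the pod name reported by the oc get pods command above:

      $ oc port-forward -n openshift-marketplace <catalog_pod_name> 50051:50051
      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages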

3.7.2.5. Updating an index image

You can update an existing index image using the opm CLI.

Prerequisites

  • opm version 1.12.3+
  • podman version 1.4.4+
  • An index image built and pushed to a registry.
  • A CatalogSource object created and applied to a cluster.

Procedure

  1. Update the existing index:

    $ opm index add \
        --bundles quay.io/<namespace>/another-operator:v1 \1
        --from-index quay.io/<namespace>/test-catalog:latest \2
        --tag quay.io/<namespace>/test-catalog:latest 3
    1
    The additional bundle image to add to the index.
    2
    The existing index that was previously pushed.
    3
    The image tag that you want the updated index image to have.
  2. Push the updated index image:

    $ podman push quay.io/<namespace>/test-catalog:latest
  3. After Operator Lifecycle Manager (OLM) polls the index image at its regular interval, verify that the new packages are successfully added:

    $ oc get packagemanifests -n openshift-marketplace
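
    For example, to filter for a package provided by the hypothetical another-operator bundle added above:

    $ oc get packagemanifests -n openshift-marketplace | grep another-operator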

3.8. Using Operator Lifecycle Manager on restricted networks

For OpenShift Container Platform clusters that are installed on restricted networks, also known as disconnected clusters, Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted remotely on Quay.io because those remote sources require full Internet connectivity.

However, as a cluster administrator you can still enable your cluster to use OLM in a restricted network if you have a workstation that has full Internet access. The workstation is used to prepare local mirrors of the remote OperatorHub sources, and requires full Internet access to pull the remote content.

This guide describes the following process that is required to enable OLM in restricted networks:

  • Disable the default remote OperatorHub sources for OLM.
  • Use a workstation with full Internet access to create local mirrors of the OperatorHub content.
  • Configure OLM to install and manage Operators from the local sources instead of the default remote sources.

After enabling OLM in a restricted network, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released.

Important

While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a restricted network still depends on the Operator itself. The Operator must:

  • List any related images, or other container images that the Operator might require to perform its functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object, as illustrated in the sketch after this list.
  • Reference all specified images by a digest (SHA) and not by a tag.
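
For reference, a minimal sketch of the relevant CSV fragment; the Operator name, image name, and digest are placeholders:

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: example-operator.v1.0.0 # placeholder Operator name
    spec:
      ...
      relatedImages: # list every image the Operator requires
      - name: example-operand # placeholder name
        image: registry.example.com/example/operand@sha256:<digest> # reference by digest, not tag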

See the following Red Hat Knowledgebase Article for a list of Red Hat Operators that support running in disconnected mode:

https://access.redhat.com/articles/4740011

3.8.1. Understanding Operator catalog images

Operator Lifecycle Manager (OLM) always installs Operators from the latest version of an Operator catalog. For OpenShift Container Platform 4.5, Red Hat-provided Operators are distributed via Quay App Registry catalogs from quay.io.

Table 3.3. Red Hat-provided App Registry catalogs

Catalog               Description
redhat-operators      Public catalog for Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.
certified-operators   Public catalog for products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.
community-operators   Public catalog for software maintained by relevant representatives in the operator-framework/community-operators GitHub repository. No official support.

As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. This behavior can cause problems maintaining reproducible installs over time. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs from quay.io directly.

Using the oc adm catalog build command, cluster administrators can create an Operator catalog image. An Operator catalog image is:

  • a point-in-time export of an App Registry type catalog’s content.
  • the result of converting an App Registry catalog to a container image type catalog.
  • an immutable artifact.

Creating an Operator catalog image provides a simple way to use this content without incurring the aforementioned issues.

3.8.2. Building an Operator catalog image

Cluster administrators can build a custom Operator catalog image based on the Package Manifest Format to be used by Operator Lifecycle Manager (OLM). The catalog image can be pushed to a container image registry that supports Docker v2-2. For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.

Important

The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.

For this example, the procedure assumes use of a mirror registry that has access to both your network and the Internet.

Note

Only the Linux version of the oc client can be used for this procedure, because the Windows and macOS versions do not provide the oc adm catalog build command.

Prerequisites

  • Workstation with unrestricted network access
  • oc version 4.3.5+ Linux client
  • podman version 1.4.4+
  • Access to a mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
  • If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials:

    $ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" \
        -XPOST https://quay.io/cnr/api/v1/users/login -d '
        {
            "user": {
                "username": "'"<quay_username>"'",
                "password": "'"<quay_password>"'"
            }
        }' | jq -r '.token')

Procedure

  1. On the workstation with unrestricted network access, authenticate with the target mirror registry:

    $ podman login <registry_host_name>

    Also authenticate with registry.redhat.io so that the base image can be pulled during the build:

    $ podman login registry.redhat.io
  2. Build a catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry:

    $ oc adm catalog build \
        --appregistry-org redhat-operators \1
        --from=registry.redhat.io/openshift4/ose-operator-registry:v4.5 \2
        --filter-by-os="linux/amd64" \3
        --to=<registry_host_name>:<port>/olm/redhat-operators:v1 \4
        [-a ${REG_CREDS}] \5
        [--insecure] \6
        [--auth-token "${AUTH_TOKEN}"] 7
    1
    Organization (namespace) to pull from an App Registry instance.
    2
    Set --from to the ose-operator-registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version.
    3
    Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    4
    Name your catalog image and include a tag, for example, v1.
    5
    Optional: If required, specify the location of your registry credentials file.
    6
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    7
    Optional: If you are using other application registry catalogs that are not public, specify a Quay authentication token.

    Example output

    INFO[0013] loading Bundles                               dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
    ...
    Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v1

    Sometimes invalid manifests are accidentally introduced into catalogs provided by Red Hat; when this happens, you might see some errors:

    Example output with errors

    ...
    INFO[0014] directory                                     dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 file=4.2 load=package
    W1114 19:42:37.876180   34665 builder.go:141] error building database: error loading package into db: fuse-camel-k-operator.v7.5.0 specifies replacement that couldn't be found
    Uploading ... 244.9kB/s

    These errors are usually non-fatal. If the Operator package mentioned does not contain an Operator that you plan to install, or a dependency of one, the errors can be ignored.

3.8.3. Configuring OperatorHub for restricted networks

Cluster administrators can configure OLM and OperatorHub to use local content in a restricted network environment using a custom Operator catalog image. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry.

Prerequisites

  • Workstation with unrestricted network access
  • A custom Operator catalog image pushed to a supported registry
  • oc version 4.3.5+
  • podman version 1.4.4+
  • Access to a mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json

Procedure

  1. Disable the default OperatorSource objects by adding disableAllDefaultSources: true to the spec:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

    This disables the default sources that are configured during an OpenShift Container Platform installation.
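
    One way to verify that the setting was applied is to inspect the OperatorHub cluster resource:

    $ oc get operatorhub cluster -o yaml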

  2. The oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring. You can choose to either:

    • Allow the default behavior of the command to automatically mirror all of the image content to your mirror registry after generating manifests, or
    • Add the --manifests-only flag to generate only the manifests required for mirroring without actually mirroring the image content to a registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make changes to the mapping list if you require only a subset of the content. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step.

    On your workstation with unrestricted network access, run the following command:

    $ oc adm catalog mirror \
        <registry_host_name>:<port>/olm/redhat-operators:v1 \1
        <registry_host_name>:<port> \
        [-a ${REG_CREDS}] \2
        [--insecure] \3
        --filter-by-os='.*' \4
        [--manifests-only] 5
    1
    Specify your Operator catalog image.
    2
    Optional: If required, specify the location of your registry credentials file.
    3
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    4
    This flag is currently required due to a known issue with multiple architecture support.
    5
    Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry.
    Warning

    If the --filter-by-os flag remains unset or set to any value other than .*, the command filters out different architectures, which changes the digest of the manifest list, also known as a multi-arch image. The incorrect digest causes deployments of those images and Operators on disconnected clusters to fail. For more information, see BZ#1890951.

    Example output

    using database path mapping: /:/tmp/190214037
    wrote database to /tmp/190214037
    using database at: /tmp/190214037/bundles.db 1
    ...

    1
    Temporary database generated by the command.

    After running the command, a <image_name>-manifests/ directory is created in the current directory that contains the following files:

    • The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry. A sketch of a typical generated file follows this list.
    • The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.
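
    For reference, a generated imageContentSourcePolicy.yaml typically resembles the following sketch; the object name and repository entries depend on your catalog content and registry:

    apiVersion: operator.openshift.io/v1alpha1
    kind: ImageContentSourcePolicy
    metadata:
      name: redhat-operators # name derived from your catalog; an assumption here
    spec:
      repositoryDigestMirrors:
      - mirrors:
        - <registry_host_name>:<port>/openshift4-ose-logging-kibana5 # your mirror registry
        source: registry.redhat.io/openshift4/ose-logging-kibana5 # original source repository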
  3. If you used the --manifests-only flag in the previous step and want to mirror only a subset of the content:

    1. Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them:

      1. Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps you decide which entries to keep when you later edit your mapping.txt file.

        For example, to retrieve a list of images that are similar to the string clusterlogging.4.3:

        $ echo "select * from related_image \
            where operatorbundle_name like 'clusterlogging.4.3%';" \
            | sqlite3 -line /tmp/190214037/bundles.db 1
        1
        Refer to the previous output of the oc adm catalog mirror command to find the path of the database file.

        Example output

        image = registry.redhat.io/openshift4/ose-logging-kibana5@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61
        operatorbundle_name = clusterlogging.4.3.33-202008111029.p0
        
        image = registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506
        operatorbundle_name = clusterlogging.4.3.33-202008111029.p0
        ...

      2. Use the results from the previous step to edit the mapping.txt file to only include the subset of images you want to mirror.

        For example, you can use the image values from the previous example output to find that the following matching lines exist in your mapping.txt file:

        Matching image mappings in mapping.txt

        registry.redhat.io/openshift4/ose-logging-kibana5@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61=<registry_host_name>:<port>/openshift4-ose-logging-kibana5:a767c8f0
        registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506=<registry_host_name>:<port>/openshift4-ose-oauth-proxy:3754ea2b

        In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above two lines.

    2. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command:

      $ oc image mirror \
          [-a ${REG_CREDS}] \
          --filter-by-os='.*' \
          -f ./redhat-operators-manifests/mapping.txt
      Warning

      If the --filter-by-os flag remains unset or set to any value other than .*, the command filters out different architectures, which changes the digest of the manifest list, also known as a multi-arch image. The incorrect digest causes deployments of those images and Operators on disconnected clusters to fail.

  4. Apply the ImageContentSourcePolicy object:

    $ oc apply -f ./redhat-operators-manifests/imageContentSourcePolicy.yaml
  5. Create a CatalogSource object that references your catalog image.

    1. Modify the following to your specifications and save it as a catalogsource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: <registry_host_name>:<port>/olm/redhat-operators:v1 1
        displayName: My Operator Catalog
        publisher: grpc
      1
      Specify your custom Operator catalog image.
    2. Use the file to create the CatalogSource object:

      $ oc create -f catalogsource.yaml
  6. Verify that the following resources were created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the catalog source:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE   PUBLISHER   AGE
      my-operator-catalog   My Operator Catalog   grpc               5s

    3. Check the package manifest:

      $ oc get packagemanifests -n openshift-marketplace

      Example output

      NAME    CATALOG              AGE
      etcd    My Operator Catalog  34s

You can now install the Operators from the OperatorHub page in the web console of your restricted network OpenShift Container Platform cluster.

3.8.4. Updating an Operator catalog image

After a cluster administrator has configured OperatorHub to use custom Operator catalog images, administrators can keep their OpenShift Container Platform cluster up to date with the latest Operators by capturing updates made to App Registry catalogs provided by Red Hat. This is done by building and pushing a new Operator catalog image, then replacing the existing spec.image parameter in the CatalogSource object with the new image digest.

For this example, the procedure assumes a custom redhat-operators catalog image is already configured for use with OperatorHub.

Note

Only the Linux version of the oc client can be used for this procedure, because the Windows and macOS versions do not provide the oc adm catalog build command.

Prerequisites

  • Workstation with unrestricted network access
  • oc version 4.3.5+ Linux client
  • podman version 1.4.4+
  • Access to a mirror registry that supports Docker v2-2
  • OperatorHub configured to use custom catalog images
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
  • If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials:

    $ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" \
        -XPOST https://quay.io/cnr/api/v1/users/login -d '
        {
            "user": {
                "username": "'"<quay_username>"'",
                "password": "'"<quay_password>"'"
            }
        }' | jq -r '.token')

Procedure

  1. On the workstation with unrestricted network access, authenticate with the target mirror registry:

    $ podman login <registry_host_name>

    Also authenticate with registry.redhat.io so that the base image can be pulled during the build:

    $ podman login registry.redhat.io
  2. Build a new catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry:

    $ oc adm catalog build \
        --appregistry-org redhat-operators \1
        --from=registry.redhat.io/openshift4/ose-operator-registry:v4.5 \2
        --filter-by-os="linux/amd64" \3
        --to=<registry_host_name>:<port>/olm/redhat-operators:v2 \4
        [-a ${REG_CREDS}] \5
        [--insecure] \6
        [--auth-token "${AUTH_TOKEN}"] 7
    1
    Organization (namespace) to pull from an App Registry instance.
    2
    Set --from to the ose-operator-registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version.
    3
    Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    4
    Name your catalog image and include a tag, for example, v2, because it is the updated catalog.
    5
    Optional: If required, specify the location of your registry credentials file.
    6
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    7
    Optional: If you are using other application registry catalogs that are not public, specify a Quay authentication token.

    Example output

    INFO[0013] loading Bundles                               dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
    ...
    Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v2

  3. Mirror the contents of your catalog to your target registry. The following oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring and mirrors the images to your registry:

    $ oc adm catalog mirror \
        <registry_host_name>:<port>/olm/redhat-operators:v2 \1
        <registry_host_name>:<port> \
        [-a ${REG_CREDS}] \2
        [--insecure] \3
        --filter-by-os='.*' 4
    1
    Specify your new Operator catalog image.
    2
    Optional: If required, specify the location of your registry credentials file.
    3
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    4
    This flag is currently required due to a known issue with multiple architecture support. If the --filter-by-os flag remains unset or set to any value other than .*, the command filters out different architectures, which changes the digest of the manifest list, also known as a multi-arch image. The incorrect digest causes deployments of those images and Operators on disconnected clusters to fail. For more information, see BZ#1890951.
  4. Apply the newly generated manifests:

    $ oc apply -f ./redhat-operators-manifests
    Important

    It is possible that you do not need to apply the imageContentSourcePolicy.yaml manifest. Complete a diff of the files to determine if changes are necessary.
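
    One way to compare is to run the oc diff subcommand against the live object; a sketch, assuming your oc client provides diff:

    $ oc diff -f ./redhat-operators-manifests/imageContentSourcePolicy.yaml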

  5. Update your CatalogSource object that references your catalog image.

    1. If you have your original catalogsource.yaml file for this CatalogSource object:

      1. Edit your catalogsource.yaml file to reference your new catalog image in the spec.image field:

        apiVersion: operators.coreos.com/v1alpha1
        kind: CatalogSource
        metadata:
          name: my-operator-catalog
          namespace: openshift-marketplace
        spec:
          sourceType: grpc
          image: <registry_host_name>:<port>/olm/redhat-operators:v2 1
          displayName: My Operator Catalog
          publisher: grpc
        1
        Specify your new Operator catalog image.
      2. Use the updated file to replace the CatalogSource object:

        $ oc replace -f catalogsource.yaml
    2. Alternatively, edit the catalog source using the following command and reference your new catalog image in the spec.image parameter:

      $ oc edit catalogsource <catalog_source_name> -n openshift-marketplace

Updated Operators should now be available from the OperatorHub page on your OpenShift Container Platform cluster.
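
To confirm from the CLI, you can list the package manifests provided by your catalog source. A quick check; the catalog label selector is an assumption, and a plain oc get packagemanifests -n openshift-marketplace works as well:

    $ oc get packagemanifests -n openshift-marketplace -l catalog=my-operator-catalog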

3.8.5. Testing an Operator catalog image

You can validate Operator catalog image content by running it as a container and querying its gRPC API. To further test the image, you can then resolve an OLM subscription by referencing the image in a CatalogSource object. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry.

Prerequisites

  • A custom Operator catalog image pushed to a supported registry
  • podman version 1.4.4+
  • oc version 4.3.5+
  • Access to a mirror registry that supports Docker v2-2
  • grpcurl

Procedure

  1. Pull the Operator catalog image:

    $ podman pull <registry_host_name>:<port>/olm/redhat-operators:v1
  2. Run the image:

    $ podman run -p 50051:50051 \
        -it <registry_host_name>:<port>/olm/redhat-operators:v1
  3. Query the running image for available packages using grpcurl:

    $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages

    Example output

    {
      "name": "3scale-operator"
    }
    {
      "name": "amq-broker"
    }
    {
      "name": "amq-online"
    }

  4. Get the latest Operator bundle in a channel:

    $ grpcurl -plaintext -d '{"pkgName":"kiali-ossm","channelName":"stable"}' localhost:50051 api.Registry/GetBundleForChannel

    Example output

    {
      "csvName": "kiali-operator.v1.0.7",
      "packageName": "kiali-ossm",
      "channelName": "stable",
    ...

  5. Get the digest of the image:

    $ podman inspect \
        --format='{{index .RepoDigests 0}}' \
        <registry_host_name>:<port>/olm/redhat-operators:v1

    Example output

    example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3

  6. Assuming an Operator group exists in namespace my-ns that supports your Operator and its dependencies, create a CatalogSource object using the image digest. For example:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: custom-redhat-operators
      namespace: my-ns
    spec:
      sourceType: grpc
      image: example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3
      displayName: Red Hat Operators
  7. Create a subscription that resolves the latest available servicemeshoperator and its dependencies from your catalog image:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: servicemeshoperator
      namespace: my-ns
    spec:
      source: custom-redhat-operators
      sourceNamespace: my-ns
      name: servicemeshoperator
      channel: "1.0"
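
    After OLM processes the subscription, you can verify that it resolved to a cluster service version; a quick check, assuming the objects above were created in my-ns:

    $ oc get subscription servicemeshoperator -n my-ns
    $ oc get csv -n my-ns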