Operators


Red Hat OpenShift Service on AWS 4

Red Hat OpenShift Service on AWS Operators.

Red Hat OpenShift Documentation Team

Abstract

How Operators help in packaging, deploying, and managing services on the control plane.

Chapter 1. Operators overview

Operators are among the most important components of Red Hat OpenShift Service on AWS. They are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run.

Operators integrate with Kubernetes APIs and CLI tools such as kubectl and the OpenShift CLI (oc). They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state.

Operators are designed specifically for Kubernetes-native applications to implement and automate common Day 1 operations, such as installation and configuration. Operators can also automate Day 2 operations, such as autoscaling up or down and creating backups. All of these activities are directed by a piece of software running on your cluster.

Operators in Red Hat OpenShift Service on AWS are managed by two different systems, depending on their purpose, although both types follow similar Operator concepts and goals:

Cluster Operators
Managed by the Cluster Version Operator (CVO) and installed by default to perform cluster functions.
Optional add-on Operators
Managed by Operator Lifecycle Manager (OLM) and can be made accessible for users to run in their applications. Also known as OLM-based Operators.

1.1. For developers

As an Operator author, you can perform the following development tasks for OLM-based Operators:

1.2. For administrators

As an administrator with the dedicated-admin role, you can perform the following Operator tasks:

1.3. Next steps

To understand more about Operators, see What are Operators?

Chapter 2. Understanding Operators

2.1. What are Operators?

Conceptually, Operators take human operational knowledge and encode it into software that is more easily shared with consumers.

Operators are pieces of software that ease the operational complexity of running another piece of software. They act like an extension of the software vendor’s engineering team, monitoring a Kubernetes environment (such as Red Hat OpenShift Service on AWS) and using its current state to make decisions in real time. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time.

More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application.

A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. To be able to make the most of Kubernetes, you require a set of cohesive APIs to extend in order to service and manage your apps that run on Kubernetes. Think of Operators as the runtime that manages this type of app on Kubernetes.

2.1.1. Why use Operators?

Operators provide:

  • Repeatability of installation and upgrade.
  • Constant health checks of every system component.
  • Over-the-air (OTA) updates for OpenShift components and ISV content.
  • A place to encapsulate knowledge from field engineers and spread it to all users, not just one or two.
Why deploy on Kubernetes?
Kubernetes (and by extension, Red Hat OpenShift Service on AWS) contains all of the primitives needed to build complex distributed systems – secret handling, load balancing, service discovery, autoscaling – that work across on-premises and cloud providers.
Why manage your app with Kubernetes APIs and kubectl tooling?
These APIs are feature rich, have clients for all platforms and plug into the cluster’s access control/auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, for example MongoDB, looks and acts just like the built-in, native Kubernetes objects (see the sketch after this list).
How do Operators compare with service brokers?
A service broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, versus an Operator that is constantly watching the current state of your cluster. Off-cluster services are a good match for a service broker, although Operators exist for these as well.

2.1.2. Operator Framework

The Operator Framework is a family of tools and capabilities to deliver on the customer experience described above. It is not just about writing code; testing, delivering, and updating Operators is just as important. The Operator Framework components consist of open source tools to tackle these problems:

Operator SDK
The Operator SDK assists Operator authors in bootstrapping, building, testing, and packaging their own Operator based on their expertise without requiring knowledge of Kubernetes API complexities.
Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. It is deployed by default in Red Hat OpenShift Service on AWS 4.
Operator Registry
The Operator Registry stores cluster service versions (CSVs) and custom resource definitions (CRDs) for creation in a cluster and stores Operator metadata about packages and channels. It runs in a Kubernetes or OpenShift cluster to provide this Operator catalog data to OLM.
OperatorHub
OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in Red Hat OpenShift Service on AWS.

These tools are designed to be composable, so you can use any that are useful to you.

2.1.3. Operator maturity model

The level of sophistication of the management logic encapsulated within an Operator can vary. In general, this logic also depends heavily on the type of service that the Operator represents.

However, you can generalize the maturity of an Operator's encapsulated operations for the set of capabilities that most Operators can include. To this end, the following Operator maturity model defines five phases of maturity for generic Day 2 operations of an Operator:

Figure 2.1. Operator maturity model


The above model also shows how these capabilities can best be developed through the Helm, Go, and Ansible capabilities of the Operator SDK.

2.2. Operator Framework packaging format

This guide outlines the packaging format for Operators supported by Operator Lifecycle Manager (OLM) in Red Hat OpenShift Service on AWS.

2.2.1. Bundle format

The bundle format for Operators is a packaging format introduced by the Operator Framework. To improve scalability and to better enable upstream users hosting their own catalogs, the bundle format specification simplifies the distribution of Operator metadata.

An Operator bundle represents a single version of an Operator. On-disk bundle manifests are containerized and shipped as a bundle image, which is a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker and container registries such as Quay.

Operator metadata can include:

  • Information that identifies the Operator, for example its name and version.
  • Additional information that drives the UI, for example its icon and some example custom resources (CRs).
  • Required and provided APIs.
  • Related images.

When loading manifests into the Operator Registry database, the following requirements are validated:

  • The bundle must have at least one channel defined in the annotations.
  • Every bundle has exactly one cluster service version (CSV).
  • If a CSV owns a custom resource definition (CRD), that CRD must exist in the bundle.
2.2.1.1. Manifests

Bundle manifests refer to a set of Kubernetes manifests that define the deployment and RBAC model of the Operator.

A bundle includes one CSV per directory and typically the CRDs that define the owned APIs of the CSV in its /manifests directory.

Example bundle format layout

etcd
├── manifests
│   ├── etcdcluster.crd.yaml
│   ├── etcdoperator.clusterserviceversion.yaml
│   ├── secret.yaml
│   └── configmap.yaml
└── metadata
    ├── annotations.yaml
    └── dependencies.yaml

Additionally supported objects

The following object types can also be optionally included in the /manifests directory of a bundle:

Supported optional object types

  • ClusterRole
  • ClusterRoleBinding
  • ConfigMap
  • ConsoleCLIDownload
  • ConsoleLink
  • ConsoleQuickStart
  • ConsoleYamlSample
  • PodDisruptionBudget
  • PriorityClass
  • PrometheusRule
  • Role
  • RoleBinding
  • Secret
  • Service
  • ServiceAccount
  • ServiceMonitor
  • VerticalPodAutoscaler

When these optional objects are included in a bundle, Operator Lifecycle Manager (OLM) can create them from the bundle and manage their lifecycle along with the CSV:

Lifecycle for optional objects

  • When the CSV is deleted, OLM deletes the optional object.
  • When the CSV is upgraded:

    • If the name of the optional object is the same, OLM updates it in place.
    • If the name of the optional object has changed between versions, OLM deletes and recreates it.
2.2.1.2. Annotations

A bundle also includes an annotations.yaml file in its /metadata directory. This file defines higher level aggregate data that helps describe the format and package information about how the bundle should be added into an index of bundles:

Example annotations.yaml

annotations:
  operators.operatorframework.io.bundle.mediatype.v1: "registry+v1" 1
  operators.operatorframework.io.bundle.manifests.v1: "manifests/" 2
  operators.operatorframework.io.bundle.metadata.v1: "metadata/" 3
  operators.operatorframework.io.bundle.package.v1: "test-operator" 4
  operators.operatorframework.io.bundle.channels.v1: "beta,stable" 5
  operators.operatorframework.io.bundle.channel.default.v1: "stable" 6

1
The media type or format of the Operator bundle. The registry+v1 format means it contains a CSV and its associated Kubernetes objects.
2
The path in the image to the directory that contains the Operator manifests. This label is reserved for future use and currently defaults to manifests/. The value manifests.v1 implies that the bundle contains Operator manifests.
3
The path in the image to the directory that contains metadata files about the bundle. This label is reserved for future use and currently defaults to metadata/. The value metadata.v1 implies that this bundle has Operator metadata.
4
The package name of the bundle.
5
The list of channels the bundle is subscribing to when added into an Operator Registry.
6
The default channel an Operator should be subscribed to when installed from a registry.
Note

In case of a mismatch, the annotations.yaml file is authoritative because the on-cluster Operator Registry that relies on these annotations only has access to this file.

2.2.1.3. Dependencies

The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies.

The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported:

olm.package
This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1.
olm.gvk
With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place.
olm.constraint
This type declares generic constraints on arbitrary Operator properties.

In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs:

Example dependencies.yaml file

dependencies:
  - type: olm.package
    value:
      packageName: prometheus
      version: ">0.27.0"
  - type: olm.gvk
    value:
      group: etcd.database.coreos.com
      kind: EtcdCluster
      version: v1beta2

2.2.1.4. About the opm CLI

The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster.

A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On Red Hat OpenShift Service on AWS, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster.
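
For example, the following sketch shows how the opm CLI might be used to render an existing catalog or bundle image into a local directory, validate the result, and generate a Dockerfile for building a new catalog image. The image reference and directory names are placeholders; see "Managing custom catalogs" for complete, supported workflows.

Example opm commands (sketch)

mkdir -p my-catalog/example-operator

# Render an existing index or bundle image into file-based catalog blobs (placeholder image).
opm render quay.io/example-org/example-operator-index:v1 > my-catalog/example-operator/index.yaml

# Validate the catalog directory; a non-zero exit status indicates invalid blobs.
opm validate my-catalog

# Generate a Dockerfile that packages the directory as a catalog image.
opm generate dockerfile my-catalog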

  • See CLI tools for steps on installing the opm CLI.

2.2.2. File-based catalogs

File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility.

Editing

With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI.

This editability enables the following features and user-defined extensions:

  • Promoting an existing bundle to a new channel
  • Changing the default channel of a package
  • Custom algorithms for adding, updating, and removing upgrade edges
Composability

File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB. A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it.

This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these.

Note

Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found.

Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users.

Extensibility

The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations.

For example, a tool could translate a high-level API, such as (mode=semver), down to the low-level, file-based catalog format for upgrade edges. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet certain criteria.

While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future Red Hat OpenShift Service on AWS releases, the major benefit is that catalog maintainers have this capability as well.

Important

As of Red Hat OpenShift Service on AWS 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for Red Hat OpenShift Service on AWS 4.6 through 4.10 released in the deprecated SQLite database format.

The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.

Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs.

2.2.2.1. Directory structure

File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur.

Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files.

Example .indexignore file

# Ignore everything except non-object .json and .yaml files
**/*
!*.json
!*.yaml
**/objects/*.json
**/objects/*.yaml

Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package’s file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format.

Basic recommended structure

catalog
├── packageA
│   └── index.yaml
├── packageB
│   ├── .indexignore
│   ├── index.yaml
│   └── objects
│       └── packageB.v0.1.0.clusterserviceversion.yaml
└── packageC
    ├── index.json
    └── deprecations.yaml

This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog can also be included in a parent catalog by copying it into the parent catalog’s root directory.

2.2.2.2. Schemas

File-based catalogs use a format, based on the CUE language specification, that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to:

_Meta schema

_Meta: {
  // schema is required and must be a non-empty string
  schema: string & !=""

  // package is optional, but if it's defined, it must be a non-empty string
  package?: string & !=""

  // properties is optional, but if it's defined, it must be a list of 0 or more properties
  properties?: [... #Property]
}

#Property: {
  // type is required
  type: string & !=""

  // value is required, and it must not be null
  value: !=null
}

Note

No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE.

An Operator Lifecycle Manager (OLM) catalog currently uses three schemas (olm.package, olm.channel, and olm.bundle), which correspond to OLM’s existing package and bundle concepts.

Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs.

Note

All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own.

2.2.2.2.1. olm.package schema

The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon.

Example 2.1. olm.package schema

#Package: {
  schema: "olm.package"

  // Package name
  name: string & !=""

  // A description of the package
  description?: string

  // The package's default channel
  defaultChannel: string & !=""

  // An optional icon
  icon?: {
    base64data: string
    mediatype:  string
  }
}
2.2.2.2.2. olm.channel schema

The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade edges for those bundles.

If a bundle entry represents an edge in multiple olm.channel blobs, it can only appear once per channel.

It is valid for an entry’s replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads.

Example 2.2. olm.channel schema

#Channel: {
  schema: "olm.channel"
  package: string & !=""
  name: string & !=""
  entries: [...#ChannelEntry]
}

#ChannelEntry: {
  // name is required. It is the name of an `olm.bundle` that
  // is present in the channel.
  name: string & !=""

  // replaces is optional. It is the name of bundle that is replaced
  // by this entry. It does not have to be present in the entry list.
  replaces?: string & !=""

  // skips is optional. It is a list of bundle names that are skipped by
  // this entry. The skipped bundles do not have to be present in the
  // entry list.
  skips?: [...string & !=""]

  // skipRange is optional. It is the semver range of bundle versions
  // that are skipped by this entry.
  skipRange?: string & !=""
}
Warning

When using the skipRange field, the skipped Operator versions are pruned from the update graph and are no longer installable by users with the spec.startingCSV property of Subscription objects.

You can update an Operator incrementally while keeping previously installed versions available to users for future installation by using both the skipRange and replaces fields. Ensure that the replaces field points to the immediate previous version of the Operator version in question.
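
For example, a sketch of a channel entry that uses both fields; the package, bundle names, and version range are placeholders:

Example olm.channel entry using replaces and skipRange (sketch)

schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v1.2.0
    skipRange: '>=1.1.0 <1.2.0'          # intermediate 1.1.z releases are pruned from the update graph
    replaces: example-operator.v1.1.0    # the immediate previous version remains installable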

2.2.2.2.3. olm.bundle schema

Example 2.3. olm.bundle schema

#Bundle: {
  schema: "olm.bundle"
  package: string & !=""
  name: string & !=""
  image: string & !=""
  properties: [...#Property]
  relatedImages?: [...#RelatedImage]
}

#Property: {
  // type is required
  type: string & !=""

  // value is required, and it must not be null
  value: !=null
}

#RelatedImage: {
  // image is the image reference
  image: string & !=""

  // name is an optional descriptive name for an image that
  // helps identify its purpose in the context of the bundle
  name?: string & !=""
}
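
Taken together, a minimal file-based catalog for a single package might look like the following sketch. The package, channel, bundle, and image names are placeholders, and a real olm.bundle blob typically carries additional properties and related images:

Example minimal index.yaml (sketch)

schema: olm.package
name: example-operator
defaultChannel: stable
---
schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v1.0.1
---
schema: olm.bundle
package: example-operator
name: example-operator.v1.0.1
image: quay.io/example-org/example-operator-bundle:v1.0.1
properties:
  - type: olm.package
    value:
      packageName: example-operator
      version: 1.0.1
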
2.2.2.2.4. olm.deprecations schema

The optional olm.deprecations schema defines deprecation information for packages, bundles, and channels in a catalog. Operator authors can use this schema to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog.

When this schema is defined, the Red Hat OpenShift Service on AWS web console displays warning badges for the affected elements of the Operator, including any custom deprecation messages, on both the pre- and post-installation pages of the OperatorHub.

An olm.deprecations schema entry contains one or more of the following reference types, which indicate the deprecation scope. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object.

Table 2.1. Deprecation reference types

Type | Scope | Status condition
olm.package | Represents the entire package | PackageDeprecated
olm.channel | Represents one channel | ChannelDeprecated
olm.bundle | Represents one bundle version | BundleDeprecated

Each reference type has its own requirements, as detailed in the following example.

Example 2.4. Example olm.deprecations schema with each reference type

schema: olm.deprecations
package: my-operator 1
entries:
  - reference:
      schema: olm.package 2
    message: | 3
      The 'my-operator' package is end of life. Please use the
      'my-operator-new' package for support.
  - reference:
      schema: olm.channel
      name: alpha 4
    message: |
      The 'alpha' channel is no longer supported. Please switch to the
      'stable' channel.
  - reference:
      schema: olm.bundle
      name: my-operator.v1.68.0 5
    message: |
      my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and
      install my-operator.v1.72.0 for support.
1
Each deprecation schema must have a package value, and that package reference must be unique across the catalog. There must not be an associated name field.
2
The olm.package schema must not include a name field, because it is determined by the package field defined earlier in the schema.
3
All message fields, for any reference type, must be a non-zero length and represented as an opaque text blob.
4
The name field for the olm.channel schema is required.
5
The name field for the olm.bundle schema is required.
Note

The deprecation feature does not consider overlapping deprecation, for example package versus channel versus bundle.

Operator authors can save olm.deprecations schema entries as a deprecations.yaml file in the same directory as the package’s index.yaml file:

Example directory structure for a catalog with deprecations

my-catalog
└── my-operator
    ├── index.yaml
    └── deprecations.yaml

2.2.2.3. Properties

Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML.

OLM defines a handful of property types, again using the reserved olm.* prefix.

2.2.2.3.1. olm.package property

The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle’s first-class package field, and the version field must be a valid semantic version.

Example 2.5. olm.package property

#PropertyPackage: {
  type: "olm.package"
  value: {
    packageName: string & !=""
    version: string & !=""
  }
}
2.2.2.3.2. olm.gvk property

The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations.

Example 2.6. olm.gvk property

#PropertyGVK: {
  type: "olm.gvk"
  value: {
    group: string & !=""
    version: string & !=""
    kind: string & !=""
  }
}
2.2.2.3.3. olm.package.required

The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range.

Example 2.7. olm.package.required property

#PropertyPackageRequired: {
  type: "olm.package.required"
  value: {
    packageName: string & !=""
    versionRange: string & !=""
  }
}
2.2.2.3.4. olm.gvk.required

The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations.

Example 2.8. olm.gvk.required property

#PropertyGVKRequired: {
  type: "olm.gvk.required"
  value: {
    group: string & !=""
    version: string & !=""
    kind: string & !=""
  }
}
2.2.2.4. Example catalog

With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog’s root directory.

There are many possible ways to build a file-based catalog; the following steps outline a simple approach:

  1. Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog:

    Example catalog configuration file

    name: community-operators
    repo: quay.io/community-operators/catalog
    tag: latest
    references:
    - name: etcd-operator
      image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
    - name: prometheus-operator
      image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317

  2. Run a script that parses the configuration file and creates a new catalog from its references:

    Example script

    name=$(yq eval '.name' catalog.yaml)
    mkdir "$name"
    yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir
    for l in $(yq e '.name as $catalog | .references[] | .image + "|" + $catalog + "/" + .name + "/index.yaml"' catalog.yaml); do
      image=$(echo $l | cut -d'|' -f1)
      file=$(echo $l | cut -d'|' -f2)
      opm render "$image" > "$file"
    done
    opm generate dockerfile "$name"
    indexImage=$(yq eval '.repo + ":" + .tag' catalog.yaml)
    docker build -t "$indexImage" -f "$name.Dockerfile" .
    docker push "$indexImage"

2.2.2.5. Guidelines

Consider the following guidelines when maintaining file-based catalogs.

2.2.2.5.1. Immutable bundles

The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable.

If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade edge from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog.

However, there are some cases where a change in the catalog metadata is preferred:

  • Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob.
  • New upgrade edges: If you release a new 1.2.z bundle version, for example 1.2.4, but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4.
2.2.2.5.2. Source control

Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps:

  1. Update the source-controlled catalog directory with a new commit.
  2. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version>, so that users can receive updates to a catalog as they become available.
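
A minimal sketch of such an update, assuming a source-controlled catalog directory named my-catalog and a placeholder image repository; the registry, tags, and CI tooling are your own choices:

Example catalog update workflow (sketch)

# Commit the catalog changes to source control.
git add my-catalog
git commit -m "Promote example-operator.v1.0.1 to the stable channel"

# Validate the updated catalog before publishing.
opm validate my-catalog

# Rebuild and push the catalog image with a consistent tag.
opm generate dockerfile my-catalog
docker build -t quay.io/example-org/catalog:latest -f my-catalog.Dockerfile .
docker push quay.io/example-org/catalog:latest
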
2.2.2.6. CLI usage

For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs.

For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools.

2.2.2.7. Automation

Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks:

  • Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package’s image reference.
  • Check that the catalog updates pass the opm validate command.
  • Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed.
  • Automatically merge PRs that pass the previous checks.
  • Automatically rebuild and republish the catalog image.

2.3. Operator Framework glossary of common terms

This topic provides a glossary of common terms related to the Operator Framework, including Operator Lifecycle Manager (OLM) and the Operator SDK.

2.3.1. Bundle

In the bundle format, a bundle is a collection of an Operator CSV, manifests, and metadata. Together, they form a unique version of an Operator that can be installed onto the cluster.

2.3.2. Bundle image

In the bundle format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub.

2.3.3. Catalog source

A catalog source represents a store of metadata that OLM can query to discover and install Operators and their dependencies.

2.3.4. Channel

A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest.

An Operator can have several channels, and a subscription binding to a certain channel would only look for updates in that channel.

2.3.5. Channel head

A channel head refers to the latest known update in a particular channel.

2.3.6. Cluster service version

A cluster service version (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version.

It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.

2.3.7. Dependency

An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer.

OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a catalog that satisfies the required CRD API, and is not related to packages or bundles.

2.3.8. Index image

In the bundle format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool.

2.3.9. Install plan

An install plan is a calculated list of resources to be created to automatically install or upgrade a CSV.

2.3.10. Multitenancy

A tenant in Red Hat OpenShift Service on AWS is a user or group of users that share common access and privileges for a set of deployed workloads, typically represented by a namespace or project. You can use tenants to provide a level of isolation between different groups or teams.

When a cluster is shared by multiple users or groups, it is considered a multitenant cluster.

2.3.11. Operator group

An Operator group configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide.

2.3.12. Package

In the bundle format, a package is a directory that encloses all released history of an Operator with each version. A released version of an Operator is described in a CSV manifest alongside the CRDs.

2.3.13. Registry

A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels.

2.3.14. Subscription

A subscription keeps CSVs up to date by tracking a channel in a package.

2.3.15. Update graph

An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added.

2.4. Operator Lifecycle Manager (OLM)

2.4.1. Operator Lifecycle Manager concepts and resources

This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in Red Hat OpenShift Service on AWS.

2.4.1.1. What is Operator Lifecycle Manager?

Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their Red Hat OpenShift Service on AWS clusters. It is part of the Operator Framework, an open source toolkit designed to manage Operators in an effective, automated, and scalable way.

Figure 2.2. Operator Lifecycle Manager workflow


OLM runs by default in Red Hat OpenShift Service on AWS 4, which aids administrators with the dedicated-admin role in installing, upgrading, and granting access to Operators running on their cluster. The Red Hat OpenShift Service on AWS web console provides management screens for dedicated-admin administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.

For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it.

2.4.1.2. OLM resources

The following custom resource definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM):

Table 2.2. CRDs managed by OLM and Catalog Operators

Resource | Short name | Description
ClusterServiceVersion (CSV) | csv | Application metadata. For example: name, version, icon, required resources.
CatalogSource | catsrc | A repository of CSVs, CRDs, and packages that define an application.
Subscription | sub | Keeps CSVs up to date by tracking a channel in a package.
InstallPlan | ip | Calculated list of resources to be created to automatically install or upgrade a CSV.
OperatorGroup | og | Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide.
OperatorConditions | - | Creates a communication channel between OLM and an Operator it manages. Operators can write to the Status.Conditions array to communicate complex states to OLM.

2.4.1.2.1. Cluster service version

A cluster service version (CSV) represents a specific version of a running Operator on an Red Hat OpenShift Service on AWS cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster.

OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm, deb, or apk bundle.

A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo.

A CSV is also a source of technical information required to run the Operator, such as which custom resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a deployment.
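
As an illustration only, the following heavily trimmed sketch shows where this information typically lives in a CSV manifest. The Operator name, CRD, RBAC rules, and deployment are placeholders, and a complete CSV contains considerably more detail:

Example ClusterServiceVersion (sketch)

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.1
spec:
  displayName: Example Operator
  version: 1.0.1
  customresourcedefinitions:
    owned:                                 # the CRs this Operator manages
      - name: examples.app.example.com
        kind: Example
        version: v1alpha1
  install:
    strategy: deployment                   # install strategy OLM uses to set up the Operator
    spec:
      permissions:                         # namespace-scoped RBAC rules the Operator requires
        - serviceAccountName: example-operator
          rules:
            - apiGroups: [""]
              resources: ["configmaps", "secrets"]
              verbs: ["get", "list", "watch"]
      deployments:
        - name: example-operator
          spec: {}                         # Deployment spec for the Operator (elided in this sketch)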

2.4.1.2.2. Catalog source

A catalog source represents a store of metadata, typically by referencing an index image stored in a container registry. Operator Lifecycle Manager (OLM) queries catalog sources to discover and install Operators and their dependencies. OperatorHub in the Red Hat OpenShift Service on AWS web console also displays the Operators provided by catalog sources.

Tip

Cluster administrators can view the full list of Operators provided by an enabled catalog source on a cluster by using the Administration → Cluster Settings → Configuration → OperatorHub page in the web console.

The spec of a CatalogSource object indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API.

Example 2.9. Example CatalogSource object

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  generation: 1
  name: example-catalog 1
  namespace: openshift-marketplace 2
  annotations:
    olm.catalogImageTemplate: 3
      "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
spec:
  displayName: Example Catalog 4
  image: quay.io/example-org/example-catalog:v1 5
  priority: -400 6
  publisher: Example Org
  sourceType: grpc 7
  grpcPodConfig:
    securityContextConfig: <security_mode> 8
    nodeSelector: 9
      custom_label: <label>
    priorityClassName: system-cluster-critical 10
    tolerations: 11
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
  updateStrategy:
    registryPoll: 12
      interval: 30m0s
status:
  connectionState:
    address: example-catalog.openshift-marketplace.svc:50051
    lastConnect: 2021-08-26T18:14:31Z
    lastObservedState: READY 13
  latestImageRegistryPoll: 2021-08-26T18:46:25Z 14
  registryService: 15
    createdAt: 2021-08-26T16:16:37Z
    port: 50051
    protocol: grpc
    serviceName: example-catalog
    serviceNamespace: openshift-marketplace
1
Name for the CatalogSource object. This value is also used as part of the name for the related pod that is created in the requested namespace.
2
Namespace to create the catalog in. To make the catalog available cluster-wide in all namespaces, set this value to openshift-marketplace. The default Red Hat-provided catalog sources also use the openshift-marketplace namespace. Otherwise, set the value to a specific namespace to make the Operator only available in that namespace.
3
Optional: To avoid cluster upgrades potentially leaving Operator installations in an unsupported state or without a continued update path, you can enable automatically changing your Operator catalog’s index image version as part of cluster upgrades.

Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. The annotation overwrites the spec.image field at run time. See the "Image template for custom catalog sources" section for more details.

4
Display name for the catalog in the web console and CLI.
5
Index image for the catalog. Optionally, can be omitted when using the olm.catalogImageTemplate annotation, which sets the pull spec at run time.
6
Weight for the catalog source. OLM uses the weight for prioritization during dependency resolution. A higher weight indicates the catalog is preferred over lower-weighted catalogs.
7
Source types include the following:
  • grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API.
  • grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases.
  • configmap: OLM parses config map data and runs a pod that can serve the gRPC API over it.
8
Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future Red Hat OpenShift Service on AWS release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
9
Optional: For grpc type catalog sources, overrides the default node selector for the pod serving the content in spec.image, if defined.
10
Optional: For grpc type catalog sources, overrides the default priority class name for the pod serving the content in spec.image, if defined. Kubernetes provides system-cluster-critical and system-node-critical priority classes by default. Setting the field to empty ("") assigns the pod the default priority. Other priority classes can be defined manually.
11
Optional: For grpc type catalog sources, overrides the default tolerations for the pod serving the content in spec.image, if defined.
12
Automatically check for new versions at a given interval to stay up-to-date.
13
Last observed state of the catalog connection. For example:
  • READY: A connection is successfully established.
  • CONNECTING: A connection attempt is in progress.
  • TRANSIENT_FAILURE: A temporary problem has occurred while attempting to establish a connection, such as a timeout. The state will eventually switch back to CONNECTING and try again.

See States of Connectivity in the gRPC documentation for more details.

14
Latest time the container registry storing the catalog image was polled to ensure the image is up-to-date.
15
Status information for the catalog’s Operator Registry service.

Referencing the name of a CatalogSource object in a subscription instructs OLM where to search to find a requested Operator:

Example 2.10. Example Subscription object referencing a catalog source

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable
  name: example-operator
  source: example-catalog
  sourceNamespace: openshift-marketplace
2.4.1.2.2.1. Image template for custom catalog sources

Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example Red Hat OpenShift Service on AWS 4.

During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from Red Hat OpenShift Service on AWS 4.16 to 4.17, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from:

registry.redhat.io/redhat/redhat-operator-index:v4.16

to:

registry.redhat.io/redhat/redhat-operator-index:v4.17

However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image.

Starting in Red Hat OpenShift Service on AWS 4.9, cluster administrators can set the olm.catalogImageTemplate annotation in the CatalogSource object for custom catalogs to an image reference that includes a template. The following Kubernetes version variables are supported for use in the template:

  • kube_major_version
  • kube_minor_version
  • kube_patch_version
Note

You must specify the Kubernetes cluster version and not a Red Hat OpenShift Service on AWS cluster version, as the latter is not currently available for templating.

Provided that you have created and pushed an index image with a tag specifying the updated Kubernetes version, setting this annotation enables the index image versions in custom catalogs to be automatically changed after a cluster upgrade. The annotation value is used to set or update the image reference in the spec.image field of the CatalogSource object. This helps avoid cluster upgrades leaving Operator installations in unsupported states or without a continued update path.

Important

You must ensure that the index image with the updated tag, in whichever registry it is stored in, is accessible by the cluster at the time of the cluster upgrade.

Example 2.11. Example catalog source with an image template

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  generation: 1
  name: example-catalog
  namespace: openshift-marketplace
  annotations:
    olm.catalogImageTemplate:
      "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}"
spec:
  displayName: Example Catalog
  image: quay.io/example-org/example-catalog:v1.30
  priority: -400
  publisher: Example Org
Note

If the spec.image field and the olm.catalogImageTemplate annotation are both set, the spec.image field is overwritten by the resolved value from the annotation. If the annotation does not resolve to a usable pull spec, the catalog source falls back to the set spec.image value.

If the spec.image field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition.

For a Red Hat OpenShift Service on AWS 4 cluster, which uses Kubernetes 1.30, the olm.catalogImageTemplate annotation in the preceding example resolves to the following image reference:

quay.io/example-org/example-catalog:v1.30

For future releases of Red Hat OpenShift Service on AWS, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later Red Hat OpenShift Service on AWS version. With the olm.catalogImageTemplate annotation set before the upgrade, upgrading the cluster to the later Red Hat OpenShift Service on AWS version would then automatically update the catalog’s index image as well.

2.4.1.2.2.2. Catalog health requirements

Operator catalogs on a cluster are interchangeable from the perspective of installation resolution; a Subscription object might reference a specific catalog, but dependencies are resolved using all catalogs on the cluster.

For example, if Catalog A is unhealthy, a subscription referencing Catalog A could resolve a dependency in Catalog B, which the cluster administrator might not have been expecting, because B normally had a lower catalog priority than A.

As a result, OLM requires that all catalogs with a given global namespace (for example, the default openshift-marketplace namespace or a custom global namespace) are healthy. When a catalog is unhealthy, all Operator installation or update operations within its shared global namespace will fail with a CatalogSourcesUnhealthy condition. If these operations were permitted in an unhealthy state, OLM might make resolution and installation decisions that were unexpected to the cluster administrator.

As a cluster administrator, if you observe an unhealthy catalog and want to consider the catalog as invalid and resume Operator installations, see the "Removing custom catalogs" or "Disabling the default OperatorHub catalog sources" sections for information about removing the unhealthy catalog.
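
For example, you can check the last observed connection state of a catalog source directly; the catalog name below matches the earlier example and is a placeholder:

Example catalog health check (sketch)

oc get catalogsource example-catalog -n openshift-marketplace \
  -o jsonpath='{.status.connectionState.lastObservedState}'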

2.4.1.2.3. Subscription

A subscription, defined by a Subscription object, represents an intention to install an Operator. It is the custom resource that relates an Operator to a catalog source.

Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the subscription ensures Operator Lifecycle Manager (OLM) manages and upgrades the Operator to ensure that the latest version is always running in the cluster.

Example Subscription object

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable
  name: example-operator
  source: example-catalog
  sourceNamespace: openshift-marketplace

This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha, beta, or stable, helps determine which Operator stream should be installed from the catalog source.

The names of channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a release frequency (stable, fast).

In addition to being easily visible from the Red Hat OpenShift Service on AWS web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the related subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster.
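
For example, using the Subscription object shown earlier, a sketch of comparing the two fields from the CLI:

Example version check (sketch)

oc get subscription example-operator -n example-namespace \
  -o jsonpath='installed: {.status.installedCSV}{"\n"}current: {.status.currentCSV}{"\n"}'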

2.4.1.2.4. Install plan

An install plan, defined by an InstallPlan object, describes a set of resources that Operator Lifecycle Manager (OLM) creates to install or upgrade to a specific version of an Operator. The version is defined by a cluster service version (CSV).

To install an Operator, a cluster administrator, or a user who has been granted Operator installation permissions, must first create a Subscription object. A subscription represents the intent to subscribe to a stream of available versions of an Operator from a catalog source. The subscription then creates an InstallPlan object to facilitate the installation of the resources for the Operator.

The install plan must then be approved according to one of the following approval strategies:

  • If the subscription’s spec.installPlanApproval field is set to Automatic, the install plan is approved automatically.
  • If the subscription’s spec.installPlanApproval field is set to Manual, the install plan must be manually approved by a cluster administrator or user with proper permissions.

After the install plan is approved, OLM creates the specified resources and installs the Operator in the namespace that is specified by the subscription.
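
For example, when the approval strategy is Manual, a pending install plan can be approved from the CLI by setting its spec.approved field; the install plan name and namespace below are placeholders:

Example manual approval (sketch)

oc patch installplan install-abcde -n operators \
  --type merge --patch '{"spec":{"approved":true}}'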

Example 2.12. Example InstallPlan object

apiVersion: operators.coreos.com/v1alpha1
kind: InstallPlan
metadata:
  name: install-abcde
  namespace: operators
spec:
  approval: Automatic
  approved: true
  clusterServiceVersionNames:
    - my-operator.v1.0.1
  generation: 1
status:
  ...
  catalogSources: []
  conditions:
    - lastTransitionTime: '2021-01-01T20:17:27Z'
      lastUpdateTime: '2021-01-01T20:17:27Z'
      status: 'True'
      type: Installed
  phase: Complete
  plan:
    - resolving: my-operator.v1.0.1
      resource:
        group: operators.coreos.com
        kind: ClusterServiceVersion
        manifest: >-
        ...
        name: my-operator.v1.0.1
        sourceName: redhat-operators
        sourceNamespace: openshift-marketplace
        version: v1alpha1
      status: Created
    - resolving: my-operator.v1.0.1
      resource:
        group: apiextensions.k8s.io
        kind: CustomResourceDefinition
        manifest: >-
        ...
        name: webservers.web.servers.org
        sourceName: redhat-operators
        sourceNamespace: openshift-marketplace
        version: v1beta1
      status: Created
    - resolving: my-operator.v1.0.1
      resource:
        group: ''
        kind: ServiceAccount
        manifest: >-
        ...
        name: my-operator
        sourceName: redhat-operators
        sourceNamespace: openshift-marketplace
        version: v1
      status: Created
    - resolving: my-operator.v1.0.1
      resource:
        group: rbac.authorization.k8s.io
        kind: Role
        manifest: >-
        ...
        name: my-operator.v1.0.1-my-operator-6d7cbc6f57
        sourceName: redhat-operators
        sourceNamespace: openshift-marketplace
        version: v1
      status: Created
    - resolving: my-operator.v1.0.1
      resource:
        group: rbac.authorization.k8s.io
        kind: RoleBinding
        manifest: >-
        ...
        name: my-operator.v1.0.1-my-operator-6d7cbc6f57
        sourceName: redhat-operators
        sourceNamespace: openshift-marketplace
        version: v1
      status: Created
      ...
2.4.1.2.5. Operator groups

An Operator group, defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators.

The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments.
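
For example, a minimal sketch of an Operator group that targets its own namespace; the names are placeholders:

Example OperatorGroup object (sketch)

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: example-operatorgroup
  namespace: example-namespace
spec:
  targetNamespaces:               # namespaces in which member Operators watch for their CRs
    - example-namespace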

Additional resources

2.4.1.2.6. Operator conditions

As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator.

OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource.

Note

By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic.

Additional resources

2.4.2. Operator Lifecycle Manager architecture

This guide outlines the component architecture of Operator Lifecycle Manager (OLM) in Red Hat OpenShift Service on AWS.

2.4.2.1. Component responsibilities

Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator.

Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework:

Table 2.3. CRDs managed by OLM and Catalog Operators
Resource | Short name | Owner | Description

ClusterServiceVersion (CSV)

csv

OLM

Application metadata: name, version, icon, required resources, installation, and so on.

InstallPlan

ip

Catalog

Calculated list of resources to be created to automatically install or upgrade a CSV.

CatalogSource

catsrc

Catalog

A repository of CSVs, CRDs, and packages that define an application.

Subscription

sub

Catalog

Used to keep CSVs up to date by tracking a channel in a package.

OperatorGroup

og

OLM

Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide.

Each of these Operators is also responsible for creating the following resources:

Table 2.4. Resources created by OLM and Catalog Operators
Resource | Owner
Deployments | OLM
ServiceAccounts | OLM
(Cluster)Roles | OLM
(Cluster)RoleBindings | OLM
CustomResourceDefinitions (CRDs) | Catalog
ClusterServiceVersions | Catalog

2.4.2.2. OLM Operator

The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster.

The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application.

The OLM Operator uses the following workflow:

  1. Watch for cluster service versions (CSVs) in a namespace and check that requirements are met.
  2. If requirements are met, run the install strategy for the CSV.

    Note

    A CSV must be an active member of an Operator group for the install strategy to run.

2.4.2.3. Catalog Operator

The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions.

To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user.

The Catalog Operator uses the following workflow:

  1. Connect to each catalog source in the cluster.
  2. Watch for unresolved install plans created by a user, and if found:

    1. Find the CSV matching the name requested and add the CSV as a resolved resource.
    2. For each managed or required CRD, add the CRD as a resolved resource.
    3. For each required CRD, find the CSV that manages it.
  3. Watch for resolved install plans and, if approved by a user or automatically, create all of the discovered resources for them.
  4. Watch for catalog sources and subscriptions and create install plans based on them.
2.4.2.4. Catalog Registry

The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels.

A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version.
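
As a quick way to see this metadata on a cluster, you can list and inspect package manifests with the oc CLI. The following commands are a minimal sketch that assumes the default catalogs in the openshift-marketplace namespace; the package name is a placeholder. The describe output typically includes the channels of the package and the current CSV at the head of each channel.

$ oc get packagemanifests -n openshift-marketplace

$ oc describe packagemanifest <package_name> -n openshift-marketplace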

2.4.3. Operator Lifecycle Manager workflow

This guide outlines the workflow of Operator Lifecycle Manager (OLM) in Red Hat OpenShift Service on AWS.

2.4.3.1. Operator installation and upgrade workflow in OLM

In the Operator Lifecycle Manager (OLM) ecosystem, the following resources are used to resolve Operator installations and upgrades:

  • ClusterServiceVersion (CSV)
  • CatalogSource
  • Subscription

Operator metadata, defined in CSVs, can be stored in a collection called a catalog source. OLM uses catalog sources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators.

Figure 2.3. Catalog source overview

olm catalogsource

Within a catalog source, Operators are organized into packages and streams of updates called channels, which should be a familiar update pattern from Red Hat OpenShift Service on AWS or other software on a continuous release cycle like web browsers.

Figure 2.4. Packages and channels in a Catalog source

olm channels

A user indicates a particular package and channel in a particular catalog source in a subscription, for example an etcd package and its alpha channel. If a subscription is made to a package that has not yet been installed in the namespace, the latest Operator for that package is installed.

Note

OLM deliberately avoids version comparisons, so the "latest" or "newest" Operator available from a given catalog → channel → package path does not necessarily need to be the highest version number. It should be thought of more as the head reference of a channel, similar to a Git repository.

Each CSV has a replaces parameter that indicates which Operator it replaces. This builds a graph of CSVs that can be queried by OLM, and updates can be shared between channels. Channels can be thought of as entry points into the graph of updates:

Figure 2.5. OLM graph of available channel updates

olm replaces

Example channels in a package

packageName: example
channels:
- name: alpha
  currentCSV: example.v0.1.2
- name: beta
  currentCSV: example.v0.1.3
defaultChannel: alpha

For OLM to successfully query for updates, given a catalog source, package, channel, and CSV, a catalog must be able to return, unambiguously and deterministically, a single CSV that replaces the input CSV.

2.4.3.1.1. Example upgrade path

For an example upgrade scenario, consider an installed Operator corresponding to CSV version 0.1.1. OLM queries the catalog source and detects an upgrade in the subscribed channel with new CSV version 0.1.3 that replaces an older but not-installed CSV version 0.1.2, which in turn replaces the older and installed CSV version 0.1.1.

OLM walks back from the channel head to previous versions via the replaces field specified in the CSVs to determine the upgrade path 0.1.3 → 0.1.2 → 0.1.1; the direction of the arrow indicates that the former replaces the latter. OLM upgrades the Operator one version at a time until it reaches the channel head.

For this given scenario, OLM installs Operator version 0.1.2 to replace the existing Operator version 0.1.1. Then, it installs Operator version 0.1.3 to replace the previously installed Operator version 0.1.2. At this point, the installed Operator version 0.1.3 matches the channel head and the upgrade is completed.
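
To observe an upgrade like this on a cluster, one option is to watch the subscription status, which tracks the installed CSV and the CSV currently being resolved or installed. The following command is a sketch, not a required step, and the object names are placeholders.

$ oc get subscription <subscription_name> -n <namespace> \
    -o jsonpath='{.status.installedCSV}{"\n"}{.status.currentCSV}{"\n"}'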

2.4.3.1.2. Skipping upgrades

The basic path for upgrades in OLM is:

  • A catalog source is updated with one or more updates to an Operator.
  • OLM traverses every version of the Operator until reaching the latest version the catalog source contains.

However, sometimes this is not a safe operation to perform. There are cases where a published version of an Operator should never be installed on a cluster if it has not been already, for example because the version introduces a serious vulnerability.

In those cases, OLM must consider two cluster states and provide an update graph that supports both:

  • The "bad" intermediate Operator has been seen by the cluster and installed.
  • The "bad" intermediate Operator has not yet been installed onto the cluster.

By shipping a new catalog and adding a skipped release, OLM can always determine a single unique update regardless of the cluster state and whether it has seen the bad update yet.

Example CSV with skipped release

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: etcdoperator.v0.9.2
  namespace: placeholder
  annotations:
spec:
  displayName: etcd
  description: Etcd Operator
  replaces: etcdoperator.v0.9.0
  skips:
  - etcdoperator.v0.9.1

Consider the following example of Old CatalogSource and New CatalogSource.

Figure 2.6. Skipping updates

olm skipping updates

This graph maintains that:

  • Any Operator found in Old CatalogSource has a single replacement in New CatalogSource.
  • Any Operator found in New CatalogSource has a single replacement in New CatalogSource.
  • If the bad update has not yet been installed, it will never be.
2.4.3.1.3. Replacing multiple Operators

Creating New CatalogSource as described requires publishing CSVs that replace one Operator, but can skip several. This can be accomplished using the skipRange annotation:

olm.skipRange: <semver_range>

where <semver_range> has the version range format supported by the semver library.

When searching catalogs for updates, if the head of a channel has a skipRange annotation and the currently installed Operator has a version field that falls in the range, OLM updates to the latest entry in the channel.

The order of precedence is:

  1. Channel head in the source specified by sourceName on the subscription, if the other criteria for skipping are met.
  2. The next Operator that replaces the current one, in the source specified by sourceName.
  3. Channel head in another source that is visible to the subscription, if the other criteria for skipping are met.
  4. The next Operator that replaces the current one in any source visible to the subscription.

Example CSV with skipRange

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: elasticsearch-operator.v4.1.2
  namespace: <namespace>
  annotations:
    olm.skipRange: '>=4.1.0 <4.1.2'

2.4.3.1.4. Z-stream support

A z-stream, or patch release, must replace all previous z-stream releases for the same minor version. OLM does not consider major, minor, or patch versions; it just needs to build the correct graph in a catalog.

In other words, OLM must be able to take a graph as in Old CatalogSource and, similar to before, generate a graph as in New CatalogSource:

Figure 2.7. Replacing several Operators

olm z stream

This graph maintains that:

  • Any Operator found in Old CatalogSource has a single replacement in New CatalogSource.
  • Any Operator found in New CatalogSource has a single replacement in New CatalogSource.
  • Any z-stream release in Old CatalogSource will update to the latest z-stream release in New CatalogSource.
  • Unavailable releases can be considered "virtual" graph nodes; their content does not need to exist, the registry just needs to respond as if the graph looks like this.

2.4.4. Operator Lifecycle Manager dependency resolution

This guide outlines dependency resolution and custom resource definition (CRD) upgrade lifecycles with Operator Lifecycle Manager (OLM) in Red Hat OpenShift Service on AWS.

2.4.4.1. About dependency resolution

Operator Lifecycle Manager (OLM) manages the dependency resolution and upgrade lifecycle of running Operators. In many ways, the problems OLM faces are similar to other system or language package managers, such as yum and rpm.

However, there is one constraint that similar systems do not generally have that OLM does: because Operators are always running, OLM attempts to ensure that you are never left with a set of Operators that do not work with each other.

As a result, OLM must never create the following scenarios:

  • Install a set of Operators that require APIs that cannot be provided
  • Update an Operator in a way that breaks another that depends upon it

This is made possible with two types of data:

Properties

Typed metadata about the Operator that constitutes the public interface for it in the dependency resolver. Examples include the group/version/kind (GVK) of the APIs provided by the Operator and the semantic version (semver) of the Operator.

Constraints or dependencies

An Operator’s requirements that should be satisfied by other Operators that might or might not have already been installed on the target cluster. These act as queries or filters over all available Operators and constrain the selection during dependency resolution and installation. Examples include requiring a specific API to be available on the cluster or expecting a particular Operator with a particular version to be installed.

OLM converts these properties and constraints into a system of Boolean formulas and passes them to a SAT solver, a program that establishes Boolean satisfiability, which does the work of determining what Operators should be installed.

2.4.4.2. Operator properties

All Operators in a catalog have the following properties:

olm.package
Includes the name of the package and the version of the Operator
olm.gvk
A single property for each provided API from the cluster service version (CSV)

Additional properties can also be directly declared by an Operator author by including a properties.yaml file in the metadata/ directory of the Operator bundle.

Example arbitrary property

properties:
- type: olm.kubeversion
  value:
    version: "1.16.0"

2.4.4.2.1. Arbitrary properties

Operator authors can declare arbitrary properties in a properties.yaml file in the metadata/ directory of the Operator bundle. These properties are translated into a map data structure that is used as an input to the Operator Lifecycle Manager (OLM) resolver at runtime.

These properties are opaque to the resolver: it does not interpret them, but it can evaluate generic constraints against them to determine whether the constraints can be satisfied given the properties list.

Example arbitrary properties

properties:
  - property:
      type: color
      value: red
  - property:
      type: shape
      value: square
  - property:
      type: olm.gvk
      value:
        group: olm.coreos.io
        version: v1alpha1
        kind: myresource

This structure can be used to construct a Common Expression Language (CEL) expression for generic constraints.
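
For illustration, the following sketch shows how the color and shape properties above might be referenced from a cel constraint, assuming that string-valued properties can be compared directly in the expression; the failure message is arbitrary.

Example cel constraint over arbitrary properties (illustrative)

type: olm.constraint
value:
  failureMessage: 'require a red, square bundle'
  cel:
    rule: 'properties.exists(p, p.type == "color" && p.value == "red") && properties.exists(p, p.type == "shape" && p.value == "square")'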

2.4.4.3. Operator dependencies

The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ directory of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies.

The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported:

olm.package
This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1.
olm.gvk
With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place.
olm.constraint
This type declares generic constraints on arbitrary Operator properties.

In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs:

Example dependencies.yaml file

dependencies:
  - type: olm.package
    value:
      packageName: prometheus
      version: ">0.27.0"
  - type: olm.gvk
    value:
      group: etcd.database.coreos.com
      kind: EtcdCluster
      version: v1beta2

2.4.4.4. Generic constraints

An olm.constraint property declares a dependency constraint of a particular type, differentiating non-constraint and constraint properties. Its value field is an object containing a failureMessage field that holds a string representation of the constraint message. This message is surfaced as an informative comment to users if the constraint is not satisfiable at runtime.

The following keys denote the available constraint types:

gvk
Type whose value and interpretation is identical to the olm.gvk type
package
Type whose value and interpretation is identical to the olm.package type
cel
A Common Expression Language (CEL) expression evaluated at runtime by the Operator Lifecycle Manager (OLM) resolver over arbitrary bundle properties and cluster information
all, any, not
Conjunction, disjunction, and negation constraints, respectively, containing one or more concrete constraints, such as gvk or a nested compound constraint
2.4.4.4.1. Common Expression Language (CEL) constraints

The cel constraint type supports Common Expression Language (CEL) as the expression language. The cel struct has a rule field which contains the CEL expression string that is evaluated against Operator properties at runtime to determine if the Operator satisfies the constraint.

Example cel constraint

type: olm.constraint
value:
  failureMessage: 'require to have "certified"'
  cel:
    rule: 'properties.exists(p, p.type == "certified")'

The CEL syntax supports a wide range of logical operators, such as AND and OR. As a result, a single CEL expression can have multiple rules for multiple conditions that are linked together by these logical operators. These rules are evaluated against a dataset of multiple different properties from a bundle or any given source, and the output is resolved to a single bundle or Operator that satisfies all of those rules within a single constraint.

Example cel constraint with multiple rules

type: olm.constraint
value:
  failureMessage: 'require to have "certified" and "stable" properties'
  cel:
    rule: 'properties.exists(p, p.type == "certified") && properties.exists(p, p.type == "stable")'

2.4.4.4.2. Compound constraints (all, any, not)

Compound constraint types are evaluated following their logical definitions.

The following is an example of a conjunctive constraint (all) of two packages and one GVK. That is, they must all be satisfied by installed bundles:

Example all constraint

schema: olm.bundle
name: red.v1.0.0
properties:
- type: olm.constraint
  value:
    failureMessage: All are required for Red because...
    all:
      constraints:
      - failureMessage: Package blue is needed for...
        package:
          name: blue
          versionRange: '>=1.0.0'
      - failureMessage: GVK Green/v1 is needed for...
        gvk:
          group: greens.example.com
          version: v1
          kind: Green

The following is an example of a disjunctive constraint (any) of three versions of the same GVK. That is, at least one must be satisfied by installed bundles:

Example any constraint

schema: olm.bundle
name: red.v1.0.0
properties:
- type: olm.constraint
  value:
    failureMessage: Any are required for Red because...
    any:
      constraints:
      - gvk:
          group: blues.example.com
          version: v1beta1
          kind: Blue
      - gvk:
          group: blues.example.com
          version: v1beta2
          kind: Blue
      - gvk:
          group: blues.example.com
          version: v1
          kind: Blue

The following is an example of a negation constraint (not) of one version of a GVK. That is, this GVK cannot be provided by any bundle in the result set:

Example not constraint

schema: olm.bundle
name: red.v1.0.0
properties:
- type: olm.constraint
  value:
    all:
      constraints:
      - failureMessage: Package blue is needed for...
        package:
          name: blue
          versionRange: '>=1.0.0'
      - failureMessage: Cannot be required for Red because...
        not:
          constraints:
          - gvk:
              group: greens.example.com
              version: v1alpha1
              kind: greens

The negation semantics might appear unclear in the not constraint context. To clarify, the negation is really instructing the resolver to remove any possible solution that includes a particular GVK, package at a version, or satisfies some child compound constraint from the result set.

As a corollary, the not compound constraint should only be used within all or any constraints, because negating without first selecting a possible set of dependencies does not make sense.

2.4.4.4.3. Nested compound constraints

A nested compound constraint, one that contains at least one child compound constraint along with zero or more simple constraints, is evaluated from the bottom up following the procedures for each previously described constraint type.

The following is an example of a disjunction of conjunctions, where one, the other, or both can satisfy the constraint:

Example nested compound constraint

schema: olm.bundle
name: red.v1.0.0
properties:
- type: olm.constraint
  value:
    failureMessage: Required for Red because...
    any:
      constraints:
      - all:
          constraints:
          - package:
              name: blue
              versionRange: '>=1.0.0'
          - gvk:
              group: blues.example.com
              version: v1
              kind: Blue
      - all:
          constraints:
          - package:
              name: blue
              versionRange: '<1.0.0'
          - gvk:
              group: blues.example.com
              version: v1beta1
              kind: Blue

Note

The maximum raw size of an olm.constraint type is 64KB to limit resource exhaustion attacks.

2.4.4.5. Dependency preferences

There can be many options that equally satisfy a dependency of an Operator. The dependency resolver in Operator Lifecycle Manager (OLM) determines which option best fits the requirements of the requested Operator. As an Operator author or user, it can be important to understand how these choices are made so that dependency resolution is clear.

2.4.4.5.1. Catalog priority

On a Red Hat OpenShift Service on AWS cluster, OLM reads catalog sources to know which Operators are available for installation.

Example CatalogSource object

apiVersion: "operators.coreos.com/v1alpha1"
kind: "CatalogSource"
metadata:
  name: "my-operators"
  namespace: "operators"
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode> 1
  image: example.com/my/operator-index:v1
  displayName: "My Operators"
  priority: 100

1
Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future Red Hat OpenShift Service on AWS release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.

A CatalogSource object has a priority field, which is used by the resolver to know how to prefer options for a dependency.

There are two rules that govern catalog preference:

  • Options in higher-priority catalogs are preferred to options in lower-priority catalogs.
  • Options in the same catalog as the dependent are preferred to any other catalogs.
2.4.4.5.2. Channel ordering

An Operator package in a catalog is a collection of update channels that a user can subscribe to in a Red Hat OpenShift Service on AWS cluster. Channels can be used to provide a particular stream of updates for a minor release (1.2, 1.3) or a release frequency (stable, fast).

It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version 1.2 of an Operator might exist in both the stable and fast channels.

Each package has a default channel, which is always preferred to non-default channels. If no option in the default channel can satisfy a dependency, options are considered from the remaining channels in lexicographic order of the channel name.

2.4.4.5.3. Order within a channel

There are almost always multiple options to satisfy a dependency within a single channel. For example, Operators in one package and channel provide the same set of APIs.

When a user creates a subscription, they indicate which channel to receive updates from. This immediately reduces the search to just that one channel. But within the channel, it is likely that many Operators satisfy a dependency.

Within a channel, newer Operators that are higher up in the update graph are preferred. If the head of a channel satisfies a dependency, it will be tried first.

2.4.4.5.4. Other constraints

In addition to the constraints supplied by package dependencies, OLM includes additional constraints to represent the desired user state and enforce resolution invariants.

2.4.4.5.4.1. Subscription constraint

A subscription constraint filters the set of Operators that can satisfy a subscription. Subscriptions are user-supplied constraints for the dependency resolver. They declare the intent to either install a new Operator if it is not already on the cluster, or to keep an existing Operator updated.

2.4.4.5.4.2. Package constraint

Within a namespace, no two Operators may come from the same package.

2.4.4.5.5. Additional resources
2.4.4.6. CRD upgrades

OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:

  • All existing serving versions in the current CRD are present in the new CRD.
  • All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
2.4.4.7. Dependency best practices

When specifying dependencies, there are best practices you should consider.

Depend on APIs or a specific version range of Operators
Operators can add or remove APIs at any time; always specify an olm.gvk dependency on any APIs your Operator requires. The exception to this is if you are specifying olm.package constraints instead.
Set a minimum version

The Kubernetes documentation on API changes describes what changes are allowed for Kubernetes-style Operators. These versioning conventions allow an Operator to update an API without bumping the API version, as long as the API is backwards-compatible.

For Operator dependencies, this means that knowing the API version of a dependency might not be enough to ensure the dependent Operator works as intended.

For example:

  • TestOperator v1.0.0 provides v1alpha1 API version of the MyObject resource.
  • TestOperator v1.0.1 adds a new field spec.newfield to MyObject, but still at v1alpha1.

Your Operator might require the ability to write spec.newfield into the MyObject resource. An olm.gvk constraint alone is not enough for OLM to determine that you need TestOperator v1.0.1 and not TestOperator v1.0.0.

Whenever possible, if a specific Operator that provides an API is known ahead of time, specify an additional olm.package constraint to set a minimum.

Omit a maximum version or allow a very wide range

Because Operators provide cluster-scoped resources such as API services and CRDs, an Operator that specifies a small window for a dependency might unnecessarily constrain updates for other consumers of that dependency.

Whenever possible, do not set a maximum version. Alternatively, set a very wide semantic range to prevent conflicts with other Operators. For example, >1.0.0 <2.0.0.

Unlike with conventional package managers, Operator authors explicitly encode that updates are safe through channels in OLM. If an update is available for an existing subscription, it is assumed that the Operator author is indicating that it can update from the previous version. Setting a maximum version for a dependency overrides the update stream of the author by unnecessarily truncating it at a particular upper bound.

Note

Cluster administrators cannot override dependencies set by an Operator author.

However, maximum versions can and should be set if there are known incompatibilities that must be avoided. Specific versions can be omitted with the version range syntax, for example > 1.0.0 !1.2.1.
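
Putting these recommendations together, a dependencies.yaml entry might set a minimum, omit a maximum, and exclude only a known-bad release. The package name and versions below are illustrative, and the range syntax follows the examples in this section.

Example dependency with a minimum version and an excluded version (illustrative)

dependencies:
  - type: olm.package
    value:
      packageName: prometheus
      version: ">1.0.0 !1.2.1"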

Additional resources

2.4.4.8. Dependency caveats

When specifying dependencies, there are caveats you should consider.

No compound constraints (AND)

There is currently no method for specifying an AND relationship between constraints. In other words, there is no way to specify that one Operator depends on another Operator that both provides a given API and has version >1.1.0.

This means that when specifying a dependency such as:

dependencies:
- type: olm.package
  value:
    packageName: etcd
    version: ">3.1.0"
- type: olm.gvk
  value:
    group: etcd.database.coreos.com
    kind: EtcdCluster
    version: v1beta2

It would be possible for OLM to satisfy this with two Operators: one that provides EtcdCluster and one that has version >3.1.0. Whether that happens, or whether an Operator is selected that satisfies both constraints, depends on the ordering in which potential options are visited. Dependency preferences and ordering options are well-defined and can be reasoned about, but to exercise caution, Operators should stick to one mechanism or the other.

Cross-namespace compatibility
OLM performs dependency resolution at the namespace scope. It is possible to get into an update deadlock if updating an Operator in one namespace would be an issue for an Operator in another namespace, and vice-versa.
2.4.4.9. Example dependency resolution scenarios

In the following examples, a provider is an Operator which "owns" a CRD or API service.

Example: Deprecating dependent APIs

A and B are APIs (CRDs):

  • The provider of A depends on B.
  • The provider of B has a subscription.
  • The provider of B updates to provide C but deprecates B.

This results in:

  • B no longer has a provider.
  • A no longer works.

This is a case OLM prevents with its upgrade strategy.

Example: Version deadlock

A and B are APIs:

  • The provider of A requires B.
  • The provider of B requires A.
  • The provider of A updates to (provide A2, require B2) and deprecates A.
  • The provider of B updates to (provide B2, require A2) and deprecates B.

If OLM attempts to update A without simultaneously updating B, or vice-versa, it is unable to progress to new versions of the Operators, even though a new compatible set can be found.

This is another case OLM prevents with its upgrade strategy.

2.4.5. Operator groups

This guide outlines the use of Operator groups with Operator Lifecycle Manager (OLM) in Red Hat OpenShift Service on AWS.

2.4.5.1. About Operator groups

An Operator group, defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators.

The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments.

2.4.5.2. Operator group membership

An Operator is considered a member of an Operator group if the following conditions are true:

  • The CSV of the Operator exists in the same namespace as the Operator group.
  • The install modes in the CSV of the Operator support the set of namespaces targeted by the Operator group.

An install mode in a CSV consists of an InstallModeType field and a boolean Supported field. The spec of a CSV can contain a set of install modes of four distinct InstallModeTypes:

Table 2.5. Install modes and supported Operator groups
InstallModeType | Description

OwnNamespace

The Operator can be a member of an Operator group that selects its own namespace.

SingleNamespace

The Operator can be a member of an Operator group that selects one namespace.

MultiNamespace

The Operator can be a member of an Operator group that selects more than one namespace.

AllNamespaces

The Operator can be a member of an Operator group that selects all namespaces (target namespace set is the empty string "").

Note

If the spec of a CSV omits an entry of InstallModeType, then that type is considered unsupported unless support can be inferred by an existing entry that implicitly supports it.
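
The following sketch shows how the install modes from the preceding table might appear in the spec of a CSV; the Operator name and the particular supported values are illustrative only.

Example install modes in a CSV spec (illustrative)

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v1.0.1
  namespace: operators
spec:
  installModes:
  - type: OwnNamespace
    supported: true
  - type: SingleNamespace
    supported: true
  - type: MultiNamespace
    supported: false
  - type: AllNamespaces
    supported: true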

2.4.5.3. Target namespace selection

You can explicitly name the target namespace for an Operator group using the spec.targetNamespaces parameter:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  targetNamespaces:
  - my-namespace

You can alternatively specify a namespace using a label selector with the spec.selector parameter:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  selector:
    cool.io/prod: "true"
Important

Listing multiple namespaces via spec.targetNamespaces or use of a label selector via spec.selector is not recommended, as the support for more than one target namespace in an Operator group will likely be removed in a future release.

If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global Operator group, which selects all namespaces:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace

The resolved set of selected namespaces is shown in the status.namespaces parameter of an Operator group. The status.namespaces parameter of a global Operator group contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces.

2.4.5.4. Operator group CSV annotations

Member CSVs of an Operator group have the following annotations:

Annotation | Description

olm.operatorGroup=<group_name>

Contains the name of the Operator group.

olm.operatorNamespace=<group_namespace>

Contains the namespace of the Operator group.

olm.targetNamespaces=<target_namespaces>

Contains a comma-delimited string that lists the target namespace selection of the Operator group.

Note

All annotations except olm.targetNamespaces are included with copied CSVs. Omitting the olm.targetNamespaces annotation on copied CSVs prevents the duplication of target namespaces between tenants.
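
As an illustration, the annotations on a member CSV might look like the following; the group, namespace, and target namespace values are placeholders.

Example annotations on a member CSV (illustrative)

metadata:
  annotations:
    olm.operatorGroup: my-group
    olm.operatorNamespace: my-namespace
    olm.targetNamespaces: my-namespace,other-namespace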

2.4.5.5. Provided APIs annotation

A group/version/kind (GVK) is a unique identifier for a Kubernetes API. Information about which GVKs are provided by an Operator group is shown in an olm.providedAPIs annotation. The value of the annotation is a string consisting of <kind>.<version>.<group> delimited with commas. The GVKs of CRDs and API services provided by all active member CSVs of an Operator group are included.

Review the following example of an OperatorGroup object with a single active member CSV that provides the PackageManifest resource:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  annotations:
    olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com
  name: olm-operators
  namespace: local
  ...
spec:
  selector: {}
  serviceAccount:
    metadata:
      creationTimestamp: null
  targetNamespaces:
  - local
status:
  lastUpdated: 2019-02-19T16:18:28Z
  namespaces:
  - local
2.4.5.6. Role-based access control

When an Operator group is created, three cluster roles are generated. Each contains a single aggregation rule with a cluster role selector set to match a label, as shown below:

Cluster role | Label to match

olm.og.<operatorgroup_name>-admin-<hash_value>

olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name>

olm.og.<operatorgroup_name>-edit-<hash_value>

olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name>

olm.og.<operatorgroup_name>-view-<hash_value>

olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name>

The following RBAC resources are generated when a CSV becomes an active member of an Operator group, as long as the CSV is watching all namespaces with the AllNamespaces install mode and is not in a failed state with reason InterOperatorGroupOwnerConflict:

  • Cluster roles for each API resource from a CRD
  • Cluster roles for each API resource from an API service
  • Additional roles and role bindings
Table 2.6. Cluster roles generated for each API resource from a CRD
Cluster role | Settings

<kind>.<group>-<version>-admin

Verbs on <kind>:

  • *

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-admin: true
  • olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name>

<kind>.<group>-<version>-edit

Verbs on <kind>:

  • create
  • update
  • patch
  • delete

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-edit: true
  • olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name>

<kind>.<group>-<version>-view

Verbs on <kind>:

  • get
  • list
  • watch

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-view: true
  • olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name>

<kind>.<group>-<version>-view-crdview

Verbs on apiextensions.k8s.io customresourcedefinitions <crd-name>:

  • get

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-view: true
  • olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name>
Table 2.7. Cluster roles generated for each API resource from an API service
Cluster role | Settings

<kind>.<group>-<version>-admin

Verbs on <kind>:

  • *

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-admin: true
  • olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name>

<kind>.<group>-<version>-edit

Verbs on <kind>:

  • create
  • update
  • patch
  • delete

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-edit: true
  • olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name>

<kind>.<group>-<version>-view

Verbs on <kind>:

  • get
  • list
  • watch

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-view: true
  • olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name>

Additional roles and role bindings

  • If the CSV defines exactly one target namespace that contains *, then a cluster role and corresponding cluster role binding are generated for each permission defined in the permissions field of the CSV. All resources generated are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels.
  • If the CSV does not define exactly one target namespace that contains *, then all roles and role bindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace.
2.4.5.7. Copied CSVs

OLM creates copies of all active member CSVs of an Operator group in each of the target namespaces of that Operator group. The purpose of a copied CSV is to tell users of a target namespace that a specific Operator is configured to watch resources created there.

Copied CSVs have a status reason Copied and are updated to match the status of their source CSV. The olm.targetNamespaces annotation is stripped from copied CSVs before they are created on the cluster. Omitting the target namespace selection avoids the duplication of target namespaces between tenants.

Copied CSVs are deleted when their source CSV no longer exists or the Operator group that their source CSV belongs to no longer targets the namespace of the copied CSV.

Note

By default, the disableCopiedCSVs field is disabled. When the disableCopiedCSVs field is enabled, OLM deletes the existing copied CSVs on the cluster. When the field is disabled again, OLM recreates the copied CSVs.

  • Disable the disableCopiedCSVs field:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OLMConfig
    metadata:
      name: cluster
    spec:
      features:
        disableCopiedCSVs: false
    EOF
  • Enable the disableCopiedCSVs field:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OLMConfig
    metadata:
      name: cluster
    spec:
      features:
        disableCopiedCSVs: true
    EOF
2.4.5.8. Static Operator groups

An Operator group is static if its spec.staticProvidedAPIs field is set to true. As a result, OLM does not modify the olm.providedAPIs annotation of an Operator group, which means that it can be set in advance. This is useful when a user wants to use an Operator group to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources.

Below is an example of an Operator group that protects Prometheus resources in all namespaces with the something.cool.io/cluster-monitoring: "true" annotation:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-monitoring
  namespace: cluster-monitoring
  annotations:
    olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com
spec:
  staticProvidedAPIs: true
  selector:
    matchLabels:
      something.cool.io/cluster-monitoring: "true"
2.4.5.9. Operator group intersection

Two Operator groups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set.

A potential issue is that Operator groups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces.

Note

When checking intersection rules, an Operator group namespace is always included as part of its selected target namespaces.

Rules for intersection

Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set:

  • If true and the CSV’s provided APIs are a subset of the Operator group’s:

    • Continue transitioning.
  • If true and the CSV’s provided APIs are not a subset of the Operator group’s:

    • If the Operator group is static:

      • Clean up any deployments that belong to the CSV.
      • Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs.
    • If the Operator group is not static:

      • Replace the Operator group’s olm.providedAPIs annotation with the union of itself and the CSV’s provided APIs.
  • If false and the CSV’s provided APIs are not a subset of the Operator group’s:

    • Clean up any deployments that belong to the CSV.
    • Transition the CSV to a failed state with status reason InterOperatorGroupOwnerConflict.
  • If false and the CSV’s provided APIs are a subset of the Operator group’s:

    • If the Operator group is static:

      • Clean up any deployments that belong to the CSV.
      • Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs.
    • If the Operator group is not static:

      • Replace the Operator group’s olm.providedAPIs annotation with the difference between itself and the CSV’s provided APIs.
Note

Failure states caused by Operator groups are non-terminal.

The following actions are performed each time an Operator group synchronizes:

  • The set of provided APIs from active member CSVs is calculated from the cluster. Note that copied CSVs are ignored.
  • The cluster set is compared to olm.providedAPIs, and if olm.providedAPIs contains any extra APIs, then those APIs are pruned.
  • All CSVs that provide the same APIs across all namespaces are requeued. This notifies conflicting CSVs in intersecting groups that their conflict has possibly been resolved, either through resizing or through deletion of the conflicting CSV.
2.4.5.10. Limitations for multitenant Operator management

Red Hat OpenShift Service on AWS provides limited support for simultaneously installing different versions of an Operator on the same cluster. Operator Lifecycle Manager (OLM) installs Operators multiple times in different namespaces. One constraint of this is that the Operator’s API versions must be the same.

Operators are control plane extensions because they use CustomResourceDefinition (CRD) objects, which are global resources in Kubernetes. Different major versions of an Operator often have incompatible CRDs, which makes it impossible to install those versions simultaneously in different namespaces on a cluster.

All tenants, or namespaces, share the same control plane of a cluster. Therefore, tenants in a multitenant cluster also share global CRDs, which limits the scenarios in which different instances of the same Operator can be used in parallel on the same cluster.

The supported scenarios include the following:

  • Operators of different versions that ship the exact same CRD definition (in case of versioned CRDs, the exact same set of versions)
  • Operators of different versions that do not ship a CRD, and instead have their CRD available in a separate bundle on the OperatorHub

All other scenarios are not supported, because the integrity of the cluster data cannot be guaranteed if there are multiple competing or overlapping CRDs from different Operator versions to be reconciled on the same cluster.

Additional resources

2.4.5.11. Troubleshooting Operator groups
Membership
  • An install plan’s namespace must contain only one Operator group. When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios:

    • No Operator groups exist in the install plan’s namespace.
    • Multiple Operator groups exist in the install plan’s namespace.
    • An incorrect or non-existent service account name is specified in the Operator group.

    If an install plan encounters an invalid Operator group, the CSV is not generated and the InstallPlan resource continues to install with a relevant message. For example, the following message is provided if more than one Operator group exists in the same namespace:

    attenuated service account query failed - more than one operator group(s) are managing this namespace count=2

    where count= specifies the number of Operator groups in the namespace.

  • If the install modes of a CSV do not support the target namespace selection of the Operator group in its namespace, the CSV transitions to a failure state with the reason UnsupportedOperatorGroup. CSVs in a failed state for this reason transition to pending after either the target namespace selection of the Operator group changes to a supported configuration, or the install modes of the CSV are modified to support the target namespace selection.
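
When troubleshooting the scenarios above, one approach is to list the Operator groups in the namespace and inspect the status of the affected CSV for its failure reason. The following commands are a sketch; the namespace and CSV names are placeholders.

$ oc get operatorgroups -n <namespace>

$ oc describe csv <csv_name> -n <namespace>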

2.4.6. Multitenancy and Operator colocation

This guide outlines multitenancy and Operator colocation in Operator Lifecycle Manager (OLM).

2.4.6.1. Colocation of Operators in a namespace

Operator Lifecycle Manager (OLM) treats OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated.

This default behavior manifests in two ways:

  • InstallPlan resources of pending updates include ClusterServiceVersion (CSV) resources of all other Operators that are in the same namespace.
  • All Operators in the same namespace share the same update policy. For example, if one Operator is set to manual updates, all other Operators' update policies are also set to manual.

These scenarios can lead to the following issues:

  • It becomes hard to reason about install plans for Operator updates, because there are many more resources defined in them than just the updated Operator.
  • It becomes impossible to have some Operators in a namespace update automatically while others are updated manually, which is a common desire for cluster administrators.

These issues usually surface because, when installing Operators with the Red Hat OpenShift Service on AWS web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace.

As an administrator with the dedicated-admin role, you can bypass this default behavior manually by using the following workflow:

  1. Create a project for the installation of the Operator.
  2. Create a custom global Operator group, which is an Operator group that watches all namespaces. Associating this Operator group with the namespace you just created makes the installation namespace a global namespace, which makes Operators installed there available in all namespaces.
  3. Install the desired Operator in the installation namespace.

If the Operator has dependencies, the dependencies are automatically installed in the pre-created namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans. For a detailed procedure, see "Installing global Operators in custom namespaces".
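
A minimal sketch of this workflow, assuming a placeholder project named global-operators and a placeholder Operator package, might look like the following. An Operator group that omits both spec.targetNamespaces and spec.selector is treated as global. For the supported procedure, follow "Installing global Operators in custom namespaces".

$ oc new-project global-operators

$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-operators
  namespace: global-operators
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: global-operators
spec:
  channel: stable
  name: my-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF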

2.4.7. Operator conditions

This guide outlines how Operator Lifecycle Manager (OLM) uses Operator conditions.

2.4.7.1. About Operator conditions

As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator.

OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource.

Note

By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic.

2.4.7.2. Supported conditions

Operator Lifecycle Manager (OLM) supports the following Operator conditions.

2.4.7.2.1. Upgradeable condition

The Upgradeable Operator condition prevents an existing cluster service version (CSV) from being replaced by a newer version of the CSV. This condition is useful when:

  • An Operator is about to start a critical process and should not be upgraded until the process is completed.
  • An Operator is performing a migration of custom resources (CRs) that must be completed before the Operator is ready to be upgraded.
Important

Setting the Upgradeable Operator condition to the False value does not avoid pod disruption. If you must ensure your pods are not disrupted, see "Using pod disruption budgets to specify the number of pods that must be up" and "Graceful termination" in the "Additional resources" section.

Example Upgradeable Operator condition

apiVersion: operators.coreos.com/v1
kind: OperatorCondition
metadata:
  name: my-operator
  namespace: operators
spec:
  conditions:
  - type: Upgradeable 1
    status: "False" 2
    reason: "migration"
    message: "The Operator is performing a migration."
    lastTransitionTime: "2020-08-24T23:15:55Z"

1
Name of the condition.
2
A False value indicates the Operator is not ready to be upgraded. OLM prevents a CSV that replaces the existing CSV of the Operator from leaving the Pending phase. A False value does not block cluster upgrades.
2.4.7.3. Additional resources

2.4.8. Operator Lifecycle Manager metrics

2.4.8.1. Exposed metrics

Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based Red Hat OpenShift Service on AWS cluster monitoring stack.

Table 2.8. Metrics exposed by OLM
Name | Description

catalog_source_count

Number of catalog sources.

catalogsource_ready

State of a catalog source. The value 1 indicates that the catalog source is in a READY state. The value of 0 indicates that the catalog source is not in a READY state.

csv_abnormal

When reconciling a cluster service version (CSV), present whenever a CSV version is in any state other than Succeeded, for example when it is not installed. Includes the name, namespace, phase, reason, and version labels. A Prometheus alert is created when this metric is present.

csv_count

Number of CSVs successfully registered.

csv_succeeded

When reconciling a CSV, represents whether a CSV version is in a Succeeded state (value 1) or not (value 0). Includes the name, namespace, and version labels.

csv_upgrade_count

Monotonic count of CSV upgrades.

install_plan_count

Number of install plans.

installplan_warnings_total

Monotonic count of warnings generated by resources, such as deprecated resources, included in an install plan.

olm_resolution_duration_seconds

The duration of a dependency resolution attempt.

subscription_count

Number of subscriptions.

subscription_sync_total

Monotonic count of subscription syncs. Includes the channel, installed CSV, and subscription name labels.

2.4.9. Webhook management in Operator Lifecycle Manager

Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.

See Defining cluster service versions (CSVs) for details on how an Operator developer can define webhooks for their Operator, as well as considerations when running on OLM.

2.4.9.1. Additional resources

2.5. Understanding OperatorHub

2.5.1. About OperatorHub

OperatorHub is the web console interface in Red Hat OpenShift Service on AWS that cluster administrators use to discover and install Operators. With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using Operator Lifecycle Manager (OLM).

Cluster administrators can choose from catalogs grouped into the following categories:

Category | Description

Red Hat Operators

Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.

Certified Operators

Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.

Red Hat Marketplace

Certified software that can be purchased from Red Hat Marketplace.

Community Operators

Optionally visible software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support.

Custom Operators

Operators you add to the cluster yourself. If you have not added any custom Operators, the Custom category does not appear in the web console on your OperatorHub.

Operators on OperatorHub are packaged to run on OLM. This includes a YAML file called a cluster service version (CSV) containing all of the CRDs, RBAC rules, deployments, and container images required to install and securely run the Operator. It also contains user-visible information like a description of its features and supported Kubernetes versions.

The Operator SDK can be used to assist developers packaging their Operators for use on OLM and OperatorHub. If you have a commercial application that you want to make accessible to your customers, get it included using the certification workflow provided on the Red Hat Partner Connect portal at connect.redhat.com.

2.5.2. OperatorHub architecture

The OperatorHub UI component is driven by the Marketplace Operator by default on Red Hat OpenShift Service on AWS in the openshift-marketplace namespace.

2.5.2.1. OperatorHub custom resource

The Marketplace Operator manages an OperatorHub custom resource (CR) named cluster that manages the default CatalogSource objects provided with OperatorHub.
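
For illustration, the OperatorHub CR can toggle the default catalog sources. The following sketch shows the general shape of the resource; on a managed cluster, confirm that modifying it is permitted before applying changes:

Example OperatorHub CR (sketch)

apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: false
  sources:
  - name: community-operators # disable a single default catalog source
    disabled: true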

2.5.3. Additional resources

2.6. Red Hat-provided Operator catalogs

Red Hat provides several Operator catalogs that are included with Red Hat OpenShift Service on AWS by default.

Important

As of Red Hat OpenShift Service on AWS 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for Red Hat OpenShift Service on AWS 4.6 through 4.10 released in the deprecated SQLite database format.

The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.

Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs, and Operator Framework packaging format.

2.6.1. About Operator catalogs

An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog.

An index image, based on the Operator bundle format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster.

As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on a Red Hat OpenShift Service on AWS cluster in a restricted network environment, it is unable to access the catalogs directly from the internet to pull the latest content.

As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues.
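
For example, a custom index image is made available to the cluster by a small CatalogSource object. The registry path, names, and polling interval below are placeholders for illustration:

Example CatalogSource object referencing a custom index image (sketch)

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-custom-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/my-index:v1.0 # custom index image
  displayName: My Custom Catalog
  publisher: Example Inc.
  updateStrategy:
    registryPoll:
      interval: 30m # poll the registry for updated index image content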

Important

Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of Red Hat OpenShift Service on AWS that uses the Kubernetes version that removed the API.

If your cluster is using custom catalogs, see Controlling Operator compatibility with Red Hat OpenShift Service on AWS versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.

Note

Support for the legacy package manifest format for Operators, including custom catalogs that were using the legacy format, is removed in Red Hat OpenShift Service on AWS 4.8 and later.

When creating custom catalog images, previous versions of Red Hat OpenShift Service on AWS 4 required using the oc adm catalog build command, which was deprecated for several releases and is now removed. With the availability of Red Hat-provided index images starting in Red Hat OpenShift Service on AWS 4.6, catalog builders must use the opm index command to manage index images.

2.6.2. About Red Hat-provided Operator catalogs

The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces.

The following Operator catalogs are distributed by Red Hat:

CatalogIndex imageDescription

redhat-operators

registry.redhat.io/redhat/redhat-operator-index:v4

Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.

certified-operators

registry.redhat.io/redhat/certified-operator-index:v4

Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.

redhat-marketplace

registry.redhat.io/redhat/redhat-marketplace-index:v4

Certified software that can be purchased from Red Hat Marketplace.

community-operators

registry.redhat.io/redhat/community-operator-index:v4

Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support.

During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of each catalog. For example, during an upgrade from Red Hat OpenShift Service on AWS 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from:

registry.redhat.io/redhat/redhat-operator-index:v4.8

to:

registry.redhat.io/redhat/redhat-operator-index:v4.9
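
You can confirm which index image a default catalog source currently references by checking its spec.image field, for example:

$ oc get catalogsource redhat-operators -n openshift-marketplace \
    -o jsonpath='{.spec.image}{"\n"}'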

2.7. Operators in multitenant clusters

The default behavior for Operator Lifecycle Manager (OLM) aims to provide simplicity during Operator installation. However, this behavior can lack flexibility, especially in multitenant clusters. In order for multiple tenants on a Red Hat OpenShift Service on AWS cluster to use an Operator, the default behavior of OLM requires that administrators install the Operator in All namespaces mode, which can be considered to violate the principle of least privilege.

Consider the following scenarios to determine which Operator installation workflow works best for your environment and requirements.

2.7.1. Default Operator install modes and behavior

When installing Operators with the web console as an administrator, you typically have two choices for the install mode, depending on the Operator’s capabilities:

Single namespace
Installs the Operator in the chosen single namespace, and makes all permissions that the Operator requests available in that namespace.
All namespaces
Installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. Makes all permissions that the Operator requests available in all namespaces. In some cases, an Operator author can define metadata to give the user a second option for that Operator’s suggested namespace.

This choice also means that users in the affected namespaces get access to the Operator's APIs and can work with the custom resources (CRs) they own, depending on their role in the namespace:

  • The namespace-admin and namespace-edit roles can read/write to the Operator APIs, meaning they can use them.
  • The namespace-view role can read CR objects of that Operator.

For Single namespace mode, because the Operator itself installs in the chosen namespace, its pod and service account are also located there. For All namespaces mode, the Operator’s privileges are all automatically elevated to cluster roles, meaning the Operator has those permissions in all namespaces.

2.7.2. Recommended solution for multitenant clusters

While a Multinamespace install mode does exist, it is supported by very few Operators. As a middle ground solution between the standard All namespaces and Single namespace install modes, you can install multiple instances of the same Operator, one for each tenant, by using the following workflow:

  1. Create a namespace for the tenant Operator that is separate from the tenant’s namespace. You can do this by creating a project.
  2. Create an Operator group for the tenant Operator scoped only to the tenant’s namespace.
  3. Install the Operator in the tenant Operator namespace.

As a result, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator’s pod nor its service account are visible or usable by the tenant.

This solution provides better tenant separation and adheres more closely to the principle of least privilege, at the cost of increased resource usage and the additional orchestration needed to ensure the constraints are met. For a detailed procedure, see "Preparing for multiple instances of an Operator for multitenant clusters".

Limitations and considerations

This solution only works when the following constraints are met:

  • All instances of the same Operator must be the same version.
  • The Operator cannot have dependencies on other Operators.
  • The Operator cannot ship a CRD conversion webhook.
Important

You cannot use different versions of the same Operator on the same cluster. Eventually, the installation of another instance of the Operator would be blocked when it meets the following conditions:

  • The instance is not the newest version of the Operator.
  • The instance ships an older revision of the CRDs that lack information or versions that newer revisions have that are already in use on the cluster.

2.7.3. Operator colocation and Operator groups

Operator Lifecycle Manager (OLM) treats OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated.

For more information on Operator colocation and using Operator groups effectively, see Operator Lifecycle Manager (OLM) → Multitenancy and Operator colocation.

2.8. CRDs

2.8.1. Managing resources from custom resource definitions

This guide describes how developers can manage custom resources (CRs) that come from custom resource definitions (CRDs).

2.8.1.1. Custom resource definitions

In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects.

A custom resource definition (CRD) object defines a new, unique object type, called a kind, in the cluster and lets the Kubernetes API server handle its entire lifecycle.

Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects.

Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic.

2.8.1.2. Creating custom resources from a file

After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification.

Procedure

  1. Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab. The Kind comes from the spec.names.kind field of the CRD object:

    Example YAML file for a CR

    apiVersion: "stable.example.com/v1" 1
    kind: CronTab 2
    metadata:
      name: my-new-cron-object 3
      finalizers: 4
      - finalizer.stable.example.com
    spec: 5
      cronSpec: "* * * * /5"
      image: my-awesome-cron-image

    1
    Specify the group name and API version (name/version) from the CRD.
    2
    Specify the type in the CRD.
    3
    Specify a name for the object.
    4
    Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted.
    5
    Specify conditions specific to the type of object.
  2. After you create the file, create the object:

    $ oc create -f <file_name>.yaml
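
For reference, a CRD that could back this example resembles the standard Kubernetes CronTab sample shown below. This is a sketch for illustration, not an object that exists on the cluster by default:

Example CRD for the CronTab kind (sketch)

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
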
2.8.1.3. Inspecting custom resources

You can inspect custom resource (CR) objects that exist in your cluster using the CLI.

Prerequisites

  • A CR object exists in a namespace to which you have access.

Procedure

  1. To get information on a specific kind of a CR, run:

    $ oc get <kind>

    For example:

    $ oc get crontab

    Example output

    NAME                 KIND
    my-new-cron-object   CronTab.v1.stable.example.com

    Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example:

    $ oc get crontabs
    $ oc get crontab
    $ oc get ct
  2. You can also view the raw YAML data for a CR:

    $ oc get <kind> -o yaml

    For example:

    $ oc get ct -o yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: stable.example.com/v1
      kind: CronTab
      metadata:
        clusterName: ""
        creationTimestamp: 2017-05-31T12:56:35Z
        deletionGracePeriodSeconds: null
        deletionTimestamp: null
        name: my-new-cron-object
        namespace: default
        resourceVersion: "285"
        selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object
        uid: 9423255b-4600-11e7-af6a-28d2447dc82b
      spec:
        cronSpec: '* * * * /5' 1
        image: my-awesome-cron-image 2

    1 2
    Custom data from the YAML that you used to create the object displays.
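
If you are unsure which singular, plural, and short names a CRD accepts, you can read them from the CRD itself. The CRD name below assumes the CronTab example used in this section:

$ oc get crd crontabs.stable.example.com -o jsonpath='{.spec.names}{"\n"}'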

Chapter 3. User tasks

3.1. Creating applications from installed Operators

This guide walks developers through an example of creating applications from an installed Operator using the Red Hat OpenShift Service on AWS web console.

3.1.1. Creating an etcd cluster using an Operator

This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM).

Prerequisites

  • Access to a Red Hat OpenShift Service on AWS cluster.
  • The etcd Operator already installed cluster-wide by an administrator.

Procedure

  1. Create a new project in the Red Hat OpenShift Service on AWS web console for this procedure. This example uses a project called my-etcd.
  2. Navigate to the Operators → Installed Operators page. The Operators that have been installed on the cluster by a dedicated-admin and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator.

    Tip

    You can get this list from the CLI using:

    $ oc get csv
  3. On the Installed Operators page, click the etcd Operator to view more details and available actions.

    As shown under Provided APIs, this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet, but contain logic specific to managing etcd.

  4. Create a new etcd cluster:

    1. In the etcd Cluster API box, click Create instance.
    2. The next page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster.
  5. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator.

    Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project.

  6. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. To grant additional users this ability, project administrators can add the role by using the following command:

    $ oc policy add-role-to-user edit <user> -n <target_project>

You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, dedicated-admins or developers with proper access can now easily use the database with their applications.
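
If you prefer the CLI to the web console form in step 4, the minimal starting template corresponds to a small EtcdCluster object similar to the following sketch. The API group, version, and fields depend on the installed etcd Operator, so treat these values as assumptions to verify against the Provided APIs details:

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example
  namespace: my-etcd
spec:
  size: 3 # number of etcd members
  version: "3.2.13" # etcd version managed by the Operator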

Chapter 4. Administrator tasks

4.1. Adding Operators to a cluster

Using Operator Lifecycle Manager (OLM), administrators with the dedicated-admin role can install OLM-based Operators to a Red Hat OpenShift Service on AWS cluster.

Note

For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.

4.1.1. About Operator installation with OperatorHub

OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.

As a dedicated-admin, you can install an Operator from OperatorHub by using the Red Hat OpenShift Service on AWS web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.

During installation, you must determine the following initial settings for the Operator:

Installation Mode
Choose All namespaces on the cluster (default) to install the Operator in all namespaces, or choose individual namespaces, if available, to install the Operator only in selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.
Update Channel
If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
Approval Strategy

You can choose automatic or manual updates.

If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.

If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a dedicated-admin, you must then manually approve that update request to have the Operator updated to the new version.

Additional resources

4.1.2. Installing from OperatorHub by using the web console

You can install and subscribe to an Operator from OperatorHub by using the Red Hat OpenShift Service on AWS web console.

Prerequisites

  • Access to a Red Hat OpenShift Service on AWS cluster using an account with the dedicated-admin role.

Procedure

  1. Navigate in the web console to the Operators → OperatorHub page.
  2. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.

    You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.

  3. Select the Operator to display additional information.

    Note

    Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.

  4. Read the information about the Operator and click Install.
  5. On the Install Operator page, configure your Operator installation:

    1. If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install.

      Note

      The version selection defaults to the latest version for the channel selected. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, Manual approval is required when not installing the latest version for the selected channel.

      Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy, and all Operators in the namespace are updated together. If you want to update Operators independently, install them into separate namespaces.

    2. Confirm the installation mode for the Operator:

      • All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
      • A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
    3. For clusters on cloud providers with token authentication enabled:

      • If the cluster uses AWS Security Token Service (STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role’s ARN, follow the procedure described in Preparing AWS account.
      • If the cluster uses Microsoft Entra Workload ID (Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate fields.
      • If the cluster uses Google Cloud Platform Workload Identity (GCP Workload Identity / Federated Identity Mode in the web console), add the project number, pool ID, provider ID, and service account email in the appropriate fields.
    4. For Update approval, select either the Automatic or Manual approval strategy.

      Important

      If the web console shows that the cluster uses AWS STS, Microsoft Entra Workload ID, or GCP Workload Identity, you must set Update approval to Manual.

      Subscriptions with automatic approvals for updates are not recommended because there might be permission changes to make before updating. Subscriptions with manual approvals for updates ensure that administrators have the opportunity to verify the permissions of the later version, take any necessary steps, and then update.

  6. Click Install to make the Operator available to the selected namespaces on this Red Hat OpenShift Service on AWS cluster:

    1. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.

      After approving on the Install Plan page, the subscription upgrade status moves to Up to date.

    2. If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.

Verification

  • After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should resolve to Succeeded in the relevant namespace.

    Note

    For the All namespaces…​ installation mode, the status resolves to Succeeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

    If it does not:

    • Check the logs of any pods that are reporting issues on the Workloads → Pods page in the openshift-operators project (or in the relevant namespace if the A specific namespace… installation mode was selected) to troubleshoot further.
  • When the Operator is installed, the metadata indicates which channel and version are installed.

    Note

    The Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context.
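
You can perform the same verification from the CLI. For the All namespaces… installation mode, the CSV shows Succeeded in the openshift-operators namespace and Copied in other namespaces, for example:

$ oc get csv -n openshift-operators
$ oc get csv -n <another_namespace> # the same CSV appears here with a Copied status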

4.1.3. Installing from OperatorHub by using the CLI

Instead of using the Red Hat OpenShift Service on AWS web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.

For SingleNamespace install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

Tip

In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as the creation of OperatorGroup and Subscription objects, when choosing SingleNamespace mode.

Prerequisites

  • Access to a Red Hat OpenShift Service on AWS cluster using an account with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. View the list of Operators available to the cluster from OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace

    Example 4.1. Example output

    NAME                               CATALOG               AGE
    3scale-operator                    Red Hat Operators     91m
    advanced-cluster-management        Red Hat Operators     91m
    amq7-cert-manager                  Red Hat Operators     91m
    # ...
    couchbase-enterprise-certified     Certified Operators   91m
    crunchy-postgres-operator          Certified Operators   91m
    mongodb-enterprise                 Certified Operators   91m
    # ...
    etcd                               Community Operators   91m
    jaeger                             Community Operators   91m
    kubefed                            Community Operators   91m
    # ...

    Note the catalog for your desired Operator.

  2. Inspect your desired Operator to verify its supported install modes and available channels:

    $ oc describe packagemanifests <operator_name> -n openshift-marketplace

    Example 4.2. Example output

    # ...
    Kind:         PackageManifest
    # ...
          Install Modes: 1
            Supported:  true
            Type:       OwnNamespace
            Supported:  true
            Type:       SingleNamespace
            Supported:  false
            Type:       MultiNamespace
            Supported:  true
            Type:       AllNamespaces
    # ...
        Entries:
          Name:       example-operator.v3.7.11
          Version:    3.7.11
          Name:       example-operator.v3.7.10
          Version:    3.7.10
        Name:         stable-3.7 2
    # ...
       Entries:
          Name:         example-operator.v3.8.5
          Version:      3.8.5
          Name:         example-operator.v3.8.4
          Version:      3.8.4
        Name:           stable-3.8 3
      Default Channel:  stable-3.8 4
    1
    Indicates which install modes are supported.
    2 3
    Example channel names.
    4
    The channel selected by default if one is not specified.
    Tip

    You can print an Operator’s version and channel information in YAML format by running the following command:

    $ oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
    • If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:

      $ oc get packagemanifest \
         --selector=catalog=<catalogsource_name> \
         --field-selector metadata.name=<operator_name> \
         -n <catalog_namespace> -o yaml
      Important

      If you do not specify the Operator’s catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met:

      • Multiple catalogs are installed in the same namespace.
      • The catalogs contain the same Operators or Operators with the same name.
  3. If the Operator you intend to install supports the AllNamespaces install mode, and you choose to use this mode, skip this step, because the openshift-operators namespace already has an appropriate Operator group in place by default, called global-operators.

    If the Operator you intend to install supports the SingleNamespace install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create one by following these steps:

    Important

    You can only have one Operator group per namespace. For more information, see "Operator groups".

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml, for SingleNamespace install mode:

      Example OperatorGroup object for SingleNamespace install mode

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace> 1
      spec:
        targetNamespaces:
        - <namespace> 2

      1 2
      For SingleNamespace install mode, use the same <namespace> value for both the metadata.namespace and spec.targetNamespaces fields.
    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
  4. Create a Subscription object to subscribe a namespace to an Operator:

    1. Create a YAML file for the Subscription object, for example subscription.yaml:

      Note

      If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version and set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For details, see the following "Example Subscription object with a specific starting Operator version".

      Example 4.3. Example Subscription object

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: <subscription_name>
        namespace: <namespace_per_install_mode> 1
      spec:
        channel: <channel_name> 2
        name: <operator_name> 3
        source: <catalog_name> 4
        sourceNamespace: <catalog_source_namespace> 5
        config:
          env: 6
          - name: ARGS
            value: "-v=10"
          envFrom: 7
          - secretRef:
              name: license-secret
          volumes: 8
          - name: <volume_name>
            configMap:
              name: <configmap_name>
          volumeMounts: 9
          - mountPath: <directory_name>
            name: <volume_name>
          tolerations: 10
          - operator: "Exists"
          resources: 11
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          nodeSelector: 12
            foo: bar
      1
      For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace.
      2
      Name of the channel to subscribe to.
      3
      Name of the Operator to subscribe to.
      4
      Name of the catalog source that provides the Operator.
      5
      Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
      6
      The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
      7
      The envFrom parameter defines a list of sources to populate environment variables in the container.
      8
      The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
      9
      The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
      10
      The tolerations parameter defines a list of tolerations for the pod created by OLM.
      11
      The resources parameter defines resource constraints for all the containers in the pod created by OLM.
      12
      The nodeSelector parameter defines a NodeSelector for the pod created by OLM.

      Example 4.4. Example Subscription object with a specific starting Operator version

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: example-operator
        namespace: example-operator
      spec:
        channel: stable-3.7
        installPlanApproval: Manual 1
        name: example-operator
        source: custom-operators
        sourceNamespace: openshift-marketplace
        startingCSV: example-operator.v3.7.10 2
      1
      Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This setting prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
      2
      Set a specific version of an Operator CSV.
    2. For clusters on cloud providers with token authentication enabled, such as Amazon Web Services (AWS) Security Token Service (STS), Microsoft Entra Workload ID, or Google Cloud Platform Workload Identity, configure your Subscription object by following these steps:

      1. Ensure the Subscription object is set to manual update approvals:

        Example 4.5. Example Subscription object with manual update approvals

        kind: Subscription
        # ...
        spec:
          installPlanApproval: Manual 1
        1
        Subscriptions with automatic approvals for updates are not recommended because there might be permission changes to make before updating. Subscriptions with manual approvals for updates ensure that administrators have the opportunity to verify the permissions of the later version, take any necessary steps, and then update.
      2. Include the relevant cloud provider-specific fields in the Subscription object’s config section:

        • If the cluster is in AWS STS mode, include the following fields:

          Example 4.6. Example Subscription object with AWS STS variables

          kind: Subscription
          # ...
          spec:
            config:
              env:
              - name: ROLEARN
                value: "<role_arn>" 1
          1
          Include the role ARN details.
        • If the cluster is in Workload ID mode, include the following fields:

          Example 4.7. Example Subscription object with Workload ID variables

          kind: Subscription
          # ...
          spec:
           config:
             env:
             - name: CLIENTID
               value: "<client_id>" 1
             - name: TENANTID
               value: "<tenant_id>" 2
             - name: SUBSCRIPTIONID
               value: "<subscription_id>" 3
          1
          Include the client ID.
          2
          Include the tenant ID.
          3
          Include the subscription ID.
        • If the cluster is in GCP Workload Identity mode, include the following fields:

          Example 4.8. Example Subscription object with GCP Workload Identity variables

          kind: Subscription
          # ...
          spec:
           config:
             env:
             - name: AUDIENCE
               value: "<audience_url>" 1
             - name: SERVICE_ACCOUNT_EMAIL
               value: "<service_account_email>" 2

          where:

          <audience_url>

          Created in GCP by the administrator when they set up GCP Workload Identity, the AUDIENCE value must be a preformatted URL in the following format:

          //iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>
          <service_account_email>

          The SERVICE_ACCOUNT_EMAIL value is a GCP service account email that is impersonated during Operator operation, for example:

          <service_account_name>@<project_id>.iam.gserviceaccount.com
    3. Create the Subscription object by running the following command:

      $ oc apply -f subscription.yaml
  5. If you set the installPlanApproval field to Manual, manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update".

At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.

Verification

  1. Check the status of the Subscription object for your installed Operator by running the following command:

    $ oc describe subscription <subscription_name> -n <namespace>
  2. If you created an Operator group for SingleNamespace install mode, check the status of the OperatorGroup object by running the following command:

    $ oc describe operatorgroup <operatorgroup_name> -n <namespace>
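
To confirm that the installation completed, you can also list the CSV phase in the target namespace, for example:

$ oc get csv -n <namespace> \
    -o custom-columns=NAME:.metadata.name,PHASE:.status.phase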

4.1.4. Preparing for multiple instances of an Operator for multitenant clusters

As an administrator with the dedicated-admin role, you can add multiple instances of an Operator for use in multitenant clusters. This is an alternative solution to either using the standard All namespaces install mode, which can be considered to violate the principle of least privilege, or the Multinamespace mode, which is not widely adopted. For more information, see "Operators in multitenant clusters".

In the following procedure, the tenant is a user or group of users that share common access and privileges for a set of deployed workloads. The tenant Operator is the instance of an Operator that is intended for use by only that tenant.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • All instances of the Operator you want to install must be the same version across a given cluster.

    Important

    For more information on this and other limitations, see "Operators in multitenant clusters".

Procedure

  1. Before installing the Operator, create a namespace for the tenant Operator that is separate from the tenant’s namespace. You can do this by creating a project. For example, if the tenant’s namespace is team1, you might create a team1-operator project:

    $ oc new-project team1-operator
  2. Create an Operator group for the tenant Operator scoped to the tenant’s namespace, with only that one namespace entry in the spec.targetNamespaces list:

    1. Define an OperatorGroup resource and save the YAML file, for example, team1-operatorgroup.yaml:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: team1-operatorgroup
        namespace: team1-operator
      spec:
        targetNamespaces:
        - team1 1
      1
      Define only the tenant’s namespace in the spec.targetNamespaces list.
    2. Create the Operator group by running the following command:

      $ oc create -f team1-operatorgroup.yaml

Next steps

  • Install the Operator in the tenant Operator namespace. This task is more easily performed by using the OperatorHub in the web console instead of the CLI; for a detailed procedure, see Installing from OperatorHub using the web console.

    Note

    After completing the Operator installation, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator’s pod nor its service account are visible or usable by the tenant.
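
If you install from the CLI instead, the Subscription object is created in the tenant Operator namespace. The following minimal sketch continues the team1 example, with placeholder channel, Operator, and catalog names:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: team1-operator # tenant Operator namespace, not the tenant namespace
spec:
  channel: <channel_name>
  name: <operator_name>
  source: <catalog_name>
  sourceNamespace: openshift-marketplace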

Additional resources

4.1.5. Installing global Operators in custom namespaces

When installing Operators with the Red Hat OpenShift Service on AWS web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. This can cause issues related to shared install plans and update policies between all Operators in the namespace. For more details on these limitations, see "Multitenancy and Operator colocation".

As an administrator with the dedicated-admin role, you can bypass this default behavior manually by creating a custom global namespace and using that namespace to install your individual or scoped set of Operators and their dependencies.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.

Procedure

  1. Before installing the Operator, create a namespace for the installation of your desired Operator. You can do this by creating a project. The namespace for this project will become the custom global namespace:

    $ oc new-project global-operators
  2. Create a custom global Operator group, which is an Operator group that watches all namespaces:

    1. Define an OperatorGroup resource and save the YAML file, for example, global-operatorgroup.yaml. Omit both the spec.selector and spec.targetNamespaces fields to make it a global Operator group, which selects all namespaces:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: global-operatorgroup
        namespace: global-operators
      Note

      The status.namespaces of a created global Operator group contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces.

    2. Create the Operator group by running the following command:

      $ oc create -f global-operatorgroup.yaml
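
You can confirm that the Operator group is global by checking that its status.namespaces field contains the empty string, as noted above:

$ oc get operatorgroup global-operatorgroup -n global-operators \
    -o jsonpath='{.status.namespaces}{"\n"}'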

Next steps

  • Install the desired Operator in your custom global namespace. Because the web console does not populate the Installed Namespace menu during Operator installation with custom global namespaces, this task can only be performed with the OpenShift CLI (oc). For a detailed procedure, see Installing from OperatorHub using the CLI.

    Note

    When you initiate the Operator installation, if the Operator has dependencies, the dependencies are also automatically installed in the custom global namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.

4.1.6. Pod placement of Operator workloads

By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.

Controlling pod placement of Operator and Operand workloads has the following prerequisites:

  1. Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a compute machine set or editing the node directly. You will use this label in a later step as the node selector on your project.
  2. If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a compute machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain.
  3. Create a project that is configured with a default node selector and, if you added a taint, a matching toleration.
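
The following commands are a minimal sketch of preparing such a project, reusing the node-role.kubernetes.io/app label and the myoperator:NoSchedule taint from the previous steps. The scheduler.alpha.kubernetes.io/defaultTolerations annotation relies on the corresponding admission plugin, so treat it as an assumption to verify for your cluster:

$ oc new-project <project_name>
$ oc annotate namespace <project_name> \
    openshift.io/node-selector="node-role.kubernetes.io/app=" \
    scheduler.alpha.kubernetes.io/defaultTolerations='[{"operator": "Exists", "effect": "NoSchedule", "key": "myoperator"}]'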

At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:

For Operator pods
Administrators can create a Subscription object in the project as described in the following section. As a result, the Operator pods are placed on the specified nodes.
For Operand pods
Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.

4.1.7. Controlling where an Operator is installed

By default, when you install an Operator, Red Hat OpenShift Service on AWS installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.

The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:

  • If you want Operators that work together scheduled on the same host or on hosts located on the same rack
  • If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues

You can control where an Operator pod is installed by adding node affinity, pod affinity, or pod anti-affinity constraints to the Operator’s Subscription object. Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. Pod affinity enables you to ensure that related pods are scheduled to the same node. Pod anti-affinity allows you to prevent a pod from being scheduled on a node.

The following examples show how to use node affinity or pod anti-affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster:

Node affinity example that places the Operator pod on a specific node

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      nodeAffinity: 1
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-163-94.us-west-2.compute.internal
#...

1
A node affinity that requires the Operator’s pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal.

Node affinity example that places the Operator pod on a node with a specific platform

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      nodeAffinity: 1
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
              - arm64
            - key: kubernetes.io/os
              operator: In
              values:
              - linux
#...

1
A node affinity that requires the Operator’s pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels.

Pod affinity example that places the Operator pod on one or more specific nodes

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      podAffinity: 1
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - test
          topologyKey: kubernetes.io/hostname
#...

1
A pod affinity that places the Operator’s pod on a node that has pods with the app=test label.

Pod anti-affinity example that prevents the Operator pod from being scheduled on one or more specific nodes

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      podAntiAffinity: 1
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: cpu
              operator: In
              values:
              - high
          topologyKey: kubernetes.io/hostname
#...

1
A pod anti-affinity that prevents the Operator’s pod from being scheduled on a node that has pods with the cpu=high label.

Procedure

To control the placement of an Operator pod, complete the following steps:

  1. Install the Operator as usual.
  2. If needed, ensure that your nodes are labeled to properly respond to the affinity.
  3. Edit the Operator Subscription object to add an affinity:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-custom-metrics-autoscaler-operator
      namespace: openshift-keda
    spec:
      name: my-package
      source: my-operators
      sourceNamespace: operator-registries
      config:
        affinity: 1
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  - ip-10-0-185-229.ec2.internal
    #...
    1
    Add a nodeAffinity, podAffinity, or podAntiAffinity. See the Additional resources section that follows for information about creating the affinity.

Verification

  • To ensure that the pod is deployed on the specific node, run the following command:

    $ oc get pods -o wide

    Example output

    NAME                                                  READY   STATUS    RESTARTS   AGE   IP            NODE                           NOMINATED NODE   READINESS GATES
    custom-metrics-autoscaler-operator-5dcc45d656-bhshg   1/1     Running   0          50s   10.131.0.20   ip-10-0-185-229.ec2.internal   <none>           <none>

4.2. Updating installed Operators

As an administrator with the dedicated-admin role, you can update Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your Red Hat OpenShift Service on AWS cluster.

Note

For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.

4.2.1. Preparing for an Operator update

The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel.

The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a release frequency (stable, fast).

Note

You cannot change installed Operators to a channel that is older than the current channel.

Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators:

You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of Red Hat OpenShift Service on AWS. Cluster Version Operator-based Operators are not included.

4.2.2. Changing the update channel for an Operator

You can change the update channel for an Operator by using the Red Hat OpenShift Service on AWS web console.

Tip

If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the web console, navigate to Operators → Installed Operators.
  2. Click the name of the Operator you want to change the update channel for.
  3. Click the Subscription tab.
  4. Click the name of the update channel under Update channel.
  5. Click the newer update channel that you want to change to, then click Save.
  6. For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.

    For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab.
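
If you prefer the CLI, you can make the same change by patching the channel in the Operator's Subscription object. The names below are placeholders:

$ oc patch subscription <subscription_name> -n <namespace> \
    --type merge --patch '{"spec":{"channel":"<new_channel_name>"}}'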

4.2.3. Manually approving a pending Operator update

If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the Red Hat OpenShift Service on AWS web console, navigate to Operators → Installed Operators.
  2. Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
  3. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status. For example, it might display 1 requires approval.
  4. Click 1 requires approval, then click Preview Install Plan.
  5. Review the resources that are listed as available for update. When satisfied, click Approve.
  6. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
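
You can also review and approve a pending install plan from the CLI; a minimal sketch with placeholder names:

$ oc get installplan -n <namespace>
$ oc patch installplan <install_plan_name> -n <namespace> \
    --type merge --patch '{"spec":{"approved":true}}'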

4.3. Deleting Operators from a cluster

The following describes how to delete, or uninstall, Operators that were previously installed using Operator Lifecycle Manager (OLM) on your Red Hat OpenShift Service on AWS cluster.

Important

You must successfully and completely uninstall an Operator before attempting to reinstall it. Failing to fully uninstall the Operator can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages to be observed when you try to reinstall the Operator.

4.3.1. Deleting Operators from a cluster using the web console

Cluster administrators can delete installed Operators from a selected namespace by using the web console.

Prerequisites

  • You have access to a Red Hat OpenShift Service on AWS cluster web console using an account with dedicated-admin permissions.

Procedure

  1. Navigate to the Operators → Installed Operators page.
  2. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
  3. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.

    An Uninstall Operator? dialog box is displayed.

  4. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.

    Note

    This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
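
If you decide to remove the Operator's CRDs as well, remember that deleting a CRD also deletes every CR of that kind across the cluster. A minimal sketch with placeholder names:

$ oc get crds | grep <operator_short_name> # identify CRDs owned by the Operator
$ oc delete crd <crd_name> # deletes all CRs of this kind as well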

4.3.2. Deleting Operators from a cluster using the CLI

Cluster administrators can delete installed Operators from a selected namespace by using the CLI.

Prerequisites

  • You have access to a Red Hat OpenShift Service on AWS cluster using an account with dedicated-admin permissions.
  • The OpenShift CLI (oc) is installed on your workstation.

Procedure

  1. Check the currentCSV field to identify the current version of the subscribed Operator (for example, serverless-operator):

    $ oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV

    Example output

      currentCSV: serverless-operator.v1.28.0

  2. Delete the subscription (for example, serverless-operator):

    $ oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless

    Example output

    subscription.operators.coreos.com "serverless-operator" deleted

  3. Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:

    $ oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless

    Example output

    clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted

4.3.3. Refreshing failing subscriptions

In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors:

Example output

ImagePullBackOff for
Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"

Example output

rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host

As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.

You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.

Prerequisites

  • You have a failing subscription that is unable to pull an inaccessible bundle image.
  • You have confirmed that the correct bundle image is accessible.

Procedure

  1. Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed:

    $ oc get sub,csv -n <namespace>

    Example output

    NAME                                                       PACKAGE                  SOURCE             CHANNEL
    subscription.operators.coreos.com/elasticsearch-operator   elasticsearch-operator   redhat-operators   5.0
    
    NAME                                                                         DISPLAY                            VERSION    REPLACES   PHASE
    clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   OpenShift Elasticsearch Operator   5.0.0-65              Succeeded

  2. Delete the subscription:

    $ oc delete subscription <subscription_name> -n <namespace>
  3. Delete the cluster service version:

    $ oc delete csv <csv_name> -n <namespace>
  4. Get the names of any failing jobs and related config maps in the openshift-marketplace namespace:

    $ oc get job,configmap -n openshift-marketplace

    Example output

    NAME                                                                        COMPLETIONS   DURATION   AGE
    job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s
    
    NAME                                                                        DATA   AGE
    configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s

  5. Delete the job:

    $ oc delete job <job_name> -n openshift-marketplace

    This ensures pods that try to pull the inaccessible image are not recreated.

  6. Delete the config map:

    $ oc delete configmap <configmap_name> -n openshift-marketplace
  7. Reinstall the Operator using OperatorHub in the web console.

Verification

  • Check that the Operator has been reinstalled successfully:

    $ oc get sub,csv,installplan -n <namespace>

4.4. Configuring proxy support in Operator Lifecycle Manager

If a global proxy is configured on the Red Hat OpenShift Service on AWS cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate.

Additional resources

  • Developing Operators that support proxy settings for Go, Ansible, and Helm

4.4.1. Overriding proxy settings of an Operator

If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Administrators with the dedicated-admin role can also override these proxy settings by configuring the subscription of an Operator.

Important

Operators must handle setting the proxy environment variables in the pods of any managed Operands.

Prerequisites

  • Access to a Red Hat OpenShift Service on AWS cluster as a user with the dedicated-admin role.

Procedure

  1. Navigate in the web console to the Operators → OperatorHub page.
  2. Select the Operator and click Install.
  3. On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section:

    • HTTP_PROXY
    • HTTPS_PROXY
    • NO_PROXY

    For example:

    Subscription object with proxy setting overrides

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: etcd-config-test
      namespace: openshift-operators
    spec:
      config:
        env:
        - name: HTTP_PROXY
          value: test_http
        - name: HTTPS_PROXY
          value: test_https
        - name: NO_PROXY
          value: test
      channel: clusterwide-alpha
      installPlanApproval: Automatic
      name: etcd
      source: community-operators
      sourceNamespace: openshift-marketplace
      startingCSV: etcdoperator.v0.9.4-clusterwide

    Note

    These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings.

    OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator.

  4. Click Install to make the Operator available to the selected namespaces.
  5. After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI:

    $ oc get deployment -n openshift-operators \
        etcd-operator -o yaml \
        | grep -i "PROXY" -A 2

    Example output

            - name: HTTP_PROXY
              value: test_http
            - name: HTTPS_PROXY
              value: test_https
            - name: NO_PROXY
              value: test
            image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c
    ...
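
Tip

As an alternative to setting the overrides during installation in the web console, you can patch an existing Subscription object from the CLI. The following is a sketch that reuses the example subscription name from this procedure; adjust the environment variables to your values:

    $ oc patch subscription etcd-config-test -n openshift-operators \
        --type merge \
        --patch '{"spec":{"config":{"env":[{"name":"NO_PROXY","value":"test"}]}}}'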

4.4.2. Injecting a custom CA certificate

When an administrator with the dedicated-admin role adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy.

Prerequisites

  • Access to a Red Hat OpenShift Service on AWS cluster as a user with the dedicated-admin role.
  • Custom CA certificate added to the cluster using a config map.
  • Desired Operator installed and running on OLM.

Procedure

  1. Create an empty config map in the namespace where the subscription for your Operator exists and include the following label:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: trusted-ca 1
      labels:
        config.openshift.io/inject-trusted-cabundle: "true" 2
    1
    Name of the config map.
    2
    Requests the Cluster Network Operator to inject the merged bundle.

    After you create this config map, it is immediately populated with the certificate contents of the merged bundle.

  2. Update the Subscription object to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: my-operator
    spec:
      package: etcd
      channel: alpha
      config: 1
        selector:
          matchLabels:
            <labels_for_pods> 2
        volumes: 3
        - name: trusted-ca
          configMap:
            name: trusted-ca
            items:
              - key: ca-bundle.crt 4
                path: tls-ca-bundle.pem 5
        volumeMounts: 6
        - name: trusted-ca
          mountPath: /etc/pki/ca-trust/extracted/pem
          readOnly: true
    1
    Add a config section if it does not exist.
    2
    Specify labels to match pods that are owned by the Operator.
    3
    Create a trusted-ca volume.
    4
    ca-bundle.crt is required as the config map key.
    5
    tls-ca-bundle.pem is required as the config map path.
    6
    Create a trusted-ca volume mount.
    Note

    Deployments of an Operator can fail to validate the certificate authority and display an x509: certificate signed by unknown authority error. This error can occur even after injecting a custom CA when using the subscription of an Operator. In this case, you can set the mountPath for trusted-ca as /etc/ssl/certs by using the subscription of an Operator.
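
    The following is a hedged sketch of that alternative volume mount in the spec.config section of the subscription, reusing the trusted-ca volume from the previous step:

      volumeMounts:
      - name: trusted-ca
        mountPath: /etc/ssl/certs
        readOnly: true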

4.5. Viewing Operator status

Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about installed Operators and for debugging problems with them. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the health of their Operators.

4.5.1. Operator subscription condition types

Subscriptions can report the following condition types:

Table 4.1. Subscription condition types
ConditionDescription

CatalogSourcesUnhealthy

Some or all of the catalog sources to be used in resolution are unhealthy.

InstallPlanMissing

An install plan for a subscription is missing.

InstallPlanPending

An install plan for a subscription is pending installation.

InstallPlanFailed

An install plan for a subscription has failed.

ResolutionFailed

The dependency resolution for a subscription has failed.

Note

Default Red Hat OpenShift Service on AWS cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

4.5.2. Viewing Operator subscription status by using the CLI

You can view Operator subscription status by using the CLI.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List Operator subscriptions:

    $ oc get subs -n <operator_namespace>
  2. Use the oc describe command to inspect a Subscription resource:

    $ oc describe sub <subscription_name> -n <operator_namespace>
  3. In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:

    Example output

    Name:         cluster-logging
    Namespace:    openshift-logging
    Labels:       operators.coreos.com/cluster-logging.openshift-logging=
    Annotations:  <none>
    API Version:  operators.coreos.com/v1alpha1
    Kind:         Subscription
    # ...
    Conditions:
       Last Transition Time:  2019-07-29T13:42:57Z
       Message:               all available catalogsources are healthy
       Reason:                AllCatalogSourcesHealthy
       Status:                False
       Type:                  CatalogSourcesUnhealthy
    # ...
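
    If you want to script this check, you can query a single condition with a JSONPath expression. This is a sketch; the subscription name and namespace are the same placeholders used above:

    $ oc get sub <subscription_name> -n <operator_namespace> \
        -o jsonpath='{.status.conditions[?(@.type=="CatalogSourcesUnhealthy")].status}'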

Note

Default Red Hat OpenShift Service on AWS cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

4.5.3. Viewing Operator catalog source status by using the CLI

You can view the status of an Operator catalog source by using the CLI.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:

    $ oc get catalogsources -n openshift-marketplace

    Example output

    NAME                  DISPLAY               TYPE   PUBLISHER   AGE
    certified-operators   Certified Operators   grpc   Red Hat     55m
    community-operators   Community Operators   grpc   Red Hat     55m
    example-catalog       Example Catalog       grpc   Example Org 2m25s
    redhat-marketplace    Red Hat Marketplace   grpc   Red Hat     55m
    redhat-operators      Red Hat Operators     grpc   Red Hat     55m

  2. Use the oc describe command to get more details and status about a catalog source:

    $ oc describe catalogsource example-catalog -n openshift-marketplace

    Example output

    Name:         example-catalog
    Namespace:    openshift-marketplace
    Labels:       <none>
    Annotations:  operatorframework.io/managed-by: marketplace-operator
                  target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
    API Version:  operators.coreos.com/v1alpha1
    Kind:         CatalogSource
    # ...
    Status:
      Connection State:
        Address:              example-catalog.openshift-marketplace.svc:50051
        Last Connect:         2021-09-09T17:07:35Z
        Last Observed State:  TRANSIENT_FAILURE
      Registry Service:
        Created At:         2021-09-09T17:05:45Z
        Port:               50051
        Protocol:           grpc
        Service Name:       example-catalog
        Service Namespace:  openshift-marketplace
    # ...

    In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.
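
    To retrieve only the last observed state, for example in a script, you can use a JSONPath query. This is a sketch; the field path reflects the connection state fields shown in the describe output above:

    $ oc get catalogsource example-catalog -n openshift-marketplace \
        -o jsonpath='{.status.connectionState.lastObservedState}'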

  3. List the pods in the namespace where your catalog source was created:

    $ oc get pods -n openshift-marketplace

    Example output

    NAME                                    READY   STATUS             RESTARTS   AGE
    certified-operators-cv9nn               1/1     Running            0          36m
    community-operators-6v8lp               1/1     Running            0          36m
    marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
    example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
    redhat-marketplace-57p8c                1/1     Running            0          36m
    redhat-operators-smxx8                  1/1     Running            0          36m

    When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source’s index image.

  4. Use the oc describe command to inspect a pod for more detailed information:

    $ oc describe pod example-catalog-bwt8z -n openshift-marketplace

    Example output

    Name:         example-catalog-bwt8z
    Namespace:    openshift-marketplace
    Priority:     0
    Node:         ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
    ...
    Events:
      Type     Reason          Age                From               Message
      ----     ------          ----               ----               -------
      Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
      Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
      Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
      Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
      Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
      Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
      Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull

    In the preceding example output, the error messages indicate that the catalog source’s index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.

4.6. Managing Operator conditions

As an administrator with the dedicated-admin role, you can manage Operator conditions by using Operator Lifecycle Manager (OLM).

4.6.1. Overriding Operator conditions

As an administrator with the dedicated-admin role, you might want to ignore a supported Operator condition reported by an Operator. When present, Operator conditions in the Spec.Overrides array override the conditions in the Spec.Conditions array, allowing dedicated-admin administrators to deal with situations where an Operator is incorrectly reporting a state to Operator Lifecycle Manager (OLM).

Note

By default, the Spec.Overrides array is not present in an OperatorCondition object until it is added by an administrator with the dedicated-admin role. The Spec.Conditions array is also not present until it is either added by a user or created as a result of custom Operator logic.

For example, consider a known version of an Operator that always communicates that it is not upgradeable. In this instance, you might want to upgrade the Operator despite the Operator communicating that it is not upgradeable. This could be accomplished by overriding the Operator condition by adding the condition type and status to the Spec.Overrides array in the OperatorCondition object.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • An Operator with an OperatorCondition object, installed using OLM.

Procedure

  1. Edit the OperatorCondition object for the Operator:

    $ oc edit operatorcondition <name>
  2. Add a Spec.Overrides array to the object:

    Example Operator condition override

    apiVersion: operators.coreos.com/v2
    kind: OperatorCondition
    metadata:
      name: my-operator
      namespace: operators
    spec:
      overrides:
      - type: Upgradeable 1
        status: "True"
        reason: "upgradeIsSafe"
        message: "This is a known issue with the Operator where it always reports that it cannot be upgraded."
      conditions:
      - type: Upgradeable
        status: "False"
        reason: "migration"
        message: "The operator is performing a migration."
        lastTransitionTime: "2020-08-24T23:15:55Z"

    1
    Allows the dedicated-admin user to change the upgrade readiness to True.
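
Verification

  • Confirm that the override is present on the OperatorCondition object. This check is a sketch that reuses the example names above:

    $ oc get operatorcondition my-operator -n operators -o yaml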

4.6.2. Updating your Operator to use Operator conditions

Operator Lifecycle Manager (OLM) automatically creates an OperatorCondition resource for each ClusterServiceVersion resource that it reconciles. All service accounts in the CSV are granted the RBAC to interact with the OperatorCondition owned by the Operator.

An Operator author can develop their Operator to use the operator-lib library such that, after the Operator has been deployed by OLM, it can set its own conditions. For more resources about setting Operator conditions as an Operator author, see the Enabling Operator conditions page.

4.6.2.1. Setting defaults

In an effort to remain backwards compatible, OLM treats the absence of an OperatorCondition resource as opting out of the condition. Therefore, an Operator that opts in to using Operator conditions should set default conditions before the ready probe for the pod is set to true. This provides the Operator with a grace period to update the condition to the correct state.

4.6.3. Additional resources

4.7. Managing custom catalogs

Administrators with the dedicated-admin role and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in Red Hat OpenShift Service on AWS.

Important

Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of Red Hat OpenShift Service on AWS that uses the Kubernetes version that removed the API.

If your cluster is using custom catalogs, see Controlling Operator compatibility with Red Hat OpenShift Service on AWS versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.

4.7.1. Prerequisites

4.7.2. File-based catalogs

File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain-text-based (JSON or YAML), declarative configuration evolution of the earlier SQLite database format, and it is fully backwards compatible.

Important

As of Red Hat OpenShift Service on AWS 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. The default Red Hat-provided Operator catalogs for Red Hat OpenShift Service on AWS 4.6 through 4.10 were released in the deprecated SQLite database format.

The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.

Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Operator Framework packaging format.

4.7.2.1. Creating a file-based catalog image

You can use the opm CLI to create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format.

Prerequisites

  • You have installed the opm CLI.
  • You have podman version 1.9.3+.
  • A bundle image is built and pushed to a registry that supports Docker v2-2.

Procedure

  1. Initialize the catalog:

    1. Create a directory for the catalog by running the following command:

      $ mkdir <catalog_dir>
    2. Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command:

      $ opm generate dockerfile <catalog_dir> \
          -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4 1
      1
      Specify the official Red Hat base image by using the -i flag; otherwise, the Dockerfile uses the default upstream image.

      The Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:

      Example directory structure

      . 1
      ├── <catalog_dir> 2
      └── <catalog_dir>.Dockerfile 3

      1
      Parent directory
      2
      Catalog directory
      3
      Dockerfile generated by the opm generate dockerfile command
    3. Populate the catalog with the package definition for your Operator by running the opm init command:

      $ opm init <operator_name> \ 1
          --default-channel=preview \ 2
          --description=./README.md \ 3
          --icon=./operator-icon.svg \ 4
          --output yaml \ 5
          > <catalog_dir>/index.yaml 6
      1
      Operator, or package, name
      2
      Channel that subscriptions default to if unspecified
      3
      Path to the Operator’s README.md or other documentation
      4
      Path to the Operator’s icon
      5
      Output format: JSON or YAML
      6
      Path for creating the catalog configuration file

      This command generates an olm.package declarative config blob in the specified catalog configuration file.
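
      A representative olm.package blob might look like the following; the exact values depend on the flags that you pass, and the icon and description fields shown here are illustrative:

      ---
      defaultChannel: preview
      description: |
        <contents_of_readme>
      icon:
        base64data: <base64_string>
        mediatype: image/svg+xml
      name: <operator_name>
      schema: olm.package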

  2. Add a bundle to the catalog by running the opm render command:

    $ opm render <registry>/<namespace>/<bundle_image_name>:<tag> \ 1
        --output=yaml \
        >> <catalog_dir>/index.yaml 2
    1
    Pull spec for the bundle image
    2
    Path to the catalog configuration file
    Note

    Channels must contain at least one bundle.

  3. Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <catalog_dir>/index.yaml file:

    Example channel entry

    ---
    schema: olm.channel
    package: <operator_name>
    name: preview
    entries:
      - name: <operator_name>.v0.1.0 1

    1
    Ensure that you include the period (.) after <operator_name> but before the v in the version. Otherwise, the entry fails to pass the opm validate command.
  4. Validate the file-based catalog:

    1. Run the opm validate command against the catalog directory:

      $ opm validate <catalog_dir>
    2. Check that the error code is 0:

      $ echo $?

      Example output

      0

  5. Build the catalog image by running the podman build command:

    $ podman build . \
        -f <catalog_dir>.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>
  6. Push the catalog image to a registry:

    1. If required, authenticate with your target registry by running the podman login command:

      $ podman login <registry>
    2. Push the catalog image by running the podman push command:

      $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>

4.7.2.2. Updating or filtering a file-based catalog image

You can use the opm CLI to update or filter a catalog image that uses the file-based catalog format. By extracting the contents of an existing catalog image, you can modify the catalog as needed, for example:

  • Adding packages
  • Removing packages
  • Updating existing package entries
  • Detailing deprecation messages per package, channel, and bundle

You can then rebuild the image as an updated version of the catalog.

Prerequisites

  • You have the following on your workstation:

    • The opm CLI.
    • podman version 1.9.3+.
    • A file-based catalog image.
    • A catalog directory structure recently initialized on your workstation related to this catalog.

      If you do not have an initialized catalog directory, create the directory and generate the Dockerfile. For more information, see the "Initialize the catalog" step from the "Creating a file-based catalog image" procedure.

Procedure

  1. Extract the contents of the catalog image in YAML format to an index.yaml file in your catalog directory:

    $ opm render <registry>/<namespace>/<catalog_image_name>:<tag> \
        -o yaml > <catalog_dir>/index.yaml
    Note

    Alternatively, you can use the -o json flag to output in JSON format.

  2. Modify the contents of the resulting index.yaml file to your specifications:

    Important

    After a bundle has been published in a catalog, assume that one of your users has installed it. Ensure that all previously published bundles in a catalog have an update path to the current or newer channel head to avoid stranding users that have that version installed.

    • To add an Operator, follow the steps for creating package, bundle, and channel entries in the "Creating a file-based catalog image" procedure.
    • To remove an Operator, delete the set of olm.package, olm.channel, and olm.bundle blobs that relate to the package. The following example shows a set that must be deleted to remove the example-operator package from the catalog:

      Example 4.9. Example removed entries

      ---
      defaultChannel: release-2.7
      icon:
        base64data: <base64_string>
        mediatype: image/svg+xml
      name: example-operator
      schema: olm.package
      ---
      entries:
      - name: example-operator.v2.7.0
        skipRange: '>=2.6.0 <2.7.0'
      - name: example-operator.v2.7.1
        replaces: example-operator.v2.7.0
        skipRange: '>=2.6.0 <2.7.1'
      - name: example-operator.v2.7.2
        replaces: example-operator.v2.7.1
        skipRange: '>=2.6.0 <2.7.2'
      - name: example-operator.v2.7.3
        replaces: example-operator.v2.7.2
        skipRange: '>=2.6.0 <2.7.3'
      - name: example-operator.v2.7.4
        replaces: example-operator.v2.7.3
        skipRange: '>=2.6.0 <2.7.4'
      name: release-2.7
      package: example-operator
      schema: olm.channel
      ---
      image: example.com/example-inc/example-operator-bundle@sha256:<digest>
      name: example-operator.v2.7.0
      package: example-operator
      properties:
      - type: olm.gvk
        value:
          group: example-group.example.io
          kind: MyObject
          version: v1alpha1
      - type: olm.gvk
        value:
          group: example-group.example.io
          kind: MyOtherObject
          version: v1beta1
      - type: olm.package
        value:
          packageName: example-operator
          version: 2.7.0
      - type: olm.bundle.object
        value:
          data: <base64_string>
      - type: olm.bundle.object
        value:
          data: <base64_string>
      relatedImages:
      - image: example.com/example-inc/example-related-image@sha256:<digest>
        name: example-related-image
      schema: olm.bundle
      ---
    • To add or update deprecation messages for an Operator, ensure there is a deprecations.yaml file in the same directory as the package’s index.yaml file. For information on the deprecations.yaml file format, see "olm.deprecations schema".
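
      The following is a hedged sketch of a deprecations.yaml file for the example-operator package shown above; confirm the exact field names against the "olm.deprecations schema" documentation:

      ---
      schema: olm.deprecations
      package: example-operator
      entries:
        - reference:
            schema: olm.channel
            name: release-2.7
          message: |
            The 'release-2.7' channel is deprecated. Update to a newer channel.
        - reference:
            schema: olm.bundle
            name: example-operator.v2.7.0
          message: |
            example-operator.v2.7.0 is deprecated.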
  3. Save your changes.
  4. Validate the catalog:

    $ opm validate <catalog_dir>
  5. Rebuild the catalog:

    $ podman build . \
        -f <catalog_dir>.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>
  6. Push the updated catalog image to a registry:

    $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>

Verification

  1. In the web console, navigate to the OperatorHub configuration resource in the Administration → Cluster Settings → Configuration page.
  2. Add the catalog source or update the existing catalog source to use the pull spec for your updated catalog image.

    For more information, see "Adding a catalog source to a cluster" in the "Additional resources" of this section.

  3. After the catalog source is in a READY state, navigate to the Operators → OperatorHub page and check that the changes you made are reflected in the list of Operators.

4.7.3. SQLite-based catalogs

Important

The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in Red Hat OpenShift Service on AWS and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within Red Hat OpenShift Service on AWS, refer to the Deprecated and removed features section of the Red Hat OpenShift Service on AWS release notes.

4.7.3.1. Creating a SQLite-based index image

You can create an index image based on the SQLite database format by using the opm CLI.

Prerequisites

  • You have installed the opm CLI.
  • You have podman version 1.9.3+.
  • A bundle image is built and pushed to a registry that supports Docker v2-2.

Procedure

  1. Start a new index:

    $ opm index add \
        --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \1
        --tag <registry>/<namespace>/<index_image_name>:<tag> \2
        [--binary-image <registry_base_image>] 3
    1
    Comma-separated list of bundle images to add to the index.
    2
    The image tag that you want the index image to have.
    3
    Optional: An alternative registry base image to use for serving the catalog.
  2. Push the index image to a registry.

    1. If required, authenticate with your target registry:

      $ podman login <registry>
    2. Push the index image:

      $ podman push <registry>/<namespace>/<index_image_name>:<tag>
4.7.3.2. Updating a SQLite-based index image

After configuring OperatorHub to use a catalog source that references a custom index image, administrators with the dedicated-admin role can keep the available Operators on their cluster up-to-date by adding bundle images to the index image.

You can update an existing index image using the opm index add command.

Prerequisites

  • You have installed the opm CLI.
  • You have podman version 1.9.3+.
  • An index image is built and pushed to a registry.
  • You have an existing catalog source referencing the index image.

Procedure

  1. Update the existing index by adding bundle images:

    $ opm index add \
        --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \1
        --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \2
        --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \3
        --pull-tool podman 4
    1
    The --bundles flag specifies a comma-separated list of additional bundle images to add to the index.
    2
    The --from-index flag specifies the previously pushed index.
    3
    The --tag flag specifies the image tag to apply to the updated index image.
    4
    The --pull-tool flag specifies the tool used to pull container images.

    where:

    <registry>
    Specifies the hostname of the registry, such as quay.io or mirror.example.com.
    <namespace>
    Specifies the namespace of the registry, such as ocs-dev or abc.
    <new_bundle_image>
    Specifies the new bundle image to add to the registry, such as ocs-operator.
    <digest>
    Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41.
    <existing_index_image>
    Specifies the previously pushed image, such as abc-redhat-operator-index.
    <existing_tag>
    Specifies a previously pushed image tag, such as 4.
    <updated_tag>
    Specifies the image tag to apply to the updated index image, such as 4.1.

    Example command

    $ opm index add \
        --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \
        --from-index mirror.example.com/abc/abc-redhat-operator-index:4 \
        --tag mirror.example.com/abc/abc-redhat-operator-index:4.1 \
        --pull-tool podman

  2. Push the updated index image:

    $ podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>
  3. After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added:

    $ oc get packagemanifests -n openshift-marketplace
4.7.3.3. Filtering a SQLite-based index image

An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune, an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want.

Prerequisites

  • You have podman version 1.9.3+.
  • You have grpcurl (third-party command-line tool).
  • You have installed the opm CLI.
  • You have access to a registry that supports Docker v2-2.

Procedure

  1. Authenticate with your target registry:

    $ podman login <target_registry>
  2. Determine the list of packages you want to include in your pruned index.

    1. Run the source index image that you want to prune in a container. For example:

      $ podman run -p50051:50051 \
          -it registry.redhat.io/redhat/redhat-operator-index:v4

      Example output

      Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4...
      Getting image source signatures
      Copying blob ae8a0c23f5b1 done
      ...
      INFO[0000] serving registry                              database=/database/index.db port=50051

    2. In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
    3. Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

      Example snippets of packages list

      ...
      {
        "name": "advanced-cluster-management"
      }
      ...
      {
        "name": "jaeger-product"
      }
      ...
      {
        "name": "quay-operator"
      }
      ...

    4. In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process.
  3. Run the following command to prune the source index of all but the specified packages:

    $ opm index prune \
        -f registry.redhat.io/redhat/redhat-operator-index:v4 \1
        -p advanced-cluster-management,jaeger-product,quay-operator \2
        [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \3
        -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4 4
    1
    Index to prune.
    2
    Comma-separated list of packages to keep.
    3
    Required only for IBM Power® and IBM Z® images: Operator Registry base image with the tag that matches the target Red Hat OpenShift Service on AWS cluster major and minor version.
    4
    Custom tag for new index image being built.
  4. Run the following command to push the new index image to your target registry:

    $ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4

    where <namespace> is any existing namespace on the registry.

4.7.4. Catalog sources and pod security admission

Pod security admission was introduced in Red Hat OpenShift Service on AWS 4.11 to ensure pod security standards. Catalog sources built using the SQLite-based catalog format and a version of the opm CLI tool released before Red Hat OpenShift Service on AWS 4.11 cannot run under restricted pod security enforcement.

In Red Hat OpenShift Service on AWS 4, namespaces do not have restricted pod security enforcement by default and the default catalog source security mode is set to legacy.

Default restricted enforcement for all namespaces is planned for inclusion in a future Red Hat OpenShift Service on AWS release. When restricted enforcement occurs, the security context of the pod specification for catalog source pods must match the restricted pod security standard. If your catalog source image requires a different pod security standard, the pod security admission label for the namespace must be explicitly set.

Note

If you do not want to run your SQLite-based catalog source pods as restricted, you do not need to update your catalog source in Red Hat OpenShift Service on AWS 4.

However, it is recommended that you take action now to ensure your catalog sources run under restricted pod security enforcement. If you do not take action to ensure your catalog sources run under restricted pod security enforcement, your catalog sources might not run in future Red Hat OpenShift Service on AWS releases.

As a catalog author, you can enable compatibility with restricted pod security enforcement by completing either of the following actions:

  • Migrate your catalog to the file-based catalog format.
  • Update your catalog image with a version of the opm CLI tool released with Red Hat OpenShift Service on AWS 4.11 or later.
Note

The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. As of Red Hat OpenShift Service on AWS 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.

If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can configure your catalog to run with elevated permissions.

4.7.4.1. Migrating SQLite database catalogs to the file-based catalog format

You can update your deprecated SQLite database format catalogs to the file-based catalog format.

Prerequisites

  • You have a SQLite database catalog source.
  • You have access to the cluster as a user with the dedicated-admin role.
  • You have the latest version of the opm CLI tool released with Red Hat OpenShift Service on AWS 4 on your workstation.

Procedure

  1. Migrate your SQLite database catalog to a file-based catalog by running the following command:

    $ opm migrate <registry_image> <fbc_directory>
  2. Generate a Dockerfile for your file-based catalog by running the following command:

    $ opm generate dockerfile <fbc_directory> \
      --binary-image \
      registry.redhat.io/openshift4/ose-operator-registry:v4

Next steps

  • The generated Dockerfile can be built, tagged, and pushed to your registry.
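
    For example, using the same podman workflow shown in "Creating a file-based catalog image"; the registry, namespace, and image name are placeholders:

    $ podman build . \
        -f <fbc_directory>.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>

    $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>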
4.7.4.2. Rebuilding SQLite database catalog images

You can rebuild your SQLite database catalog image with the latest version of the opm CLI tool that is released with your version of Red Hat OpenShift Service on AWS.

Prerequisites

  • You have a SQLite database catalog source.
  • You have access to the cluster as a user with the dedicated-admin role.
  • You have the latest version of the opm CLI tool released with Red Hat OpenShift Service on AWS 4 on your workstation.

Procedure

  • Run the following command to rebuild your catalog with a more recent version of the opm CLI tool:

    $ opm index add --binary-image \
      registry.redhat.io/openshift4/ose-operator-registry:v4 \
      --from-index <your_registry_image> \
      --bundles "" -t <your_registry_image>
4.7.4.3. Configuring catalogs to run with elevated permissions

If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can perform the following actions to ensure your catalog source runs when the default pod security enforcement changes to restricted:

  • Manually set the catalog security mode to legacy in your catalog source definition. This action ensures your catalog runs with legacy permissions even if the default catalog security mode changes to restricted.
  • Label the catalog source namespace for baseline or privileged pod security enforcement.
Note

The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.

Prerequisites

  • You have a SQLite database catalog source.
  • You have access to the cluster as a user with the dedicated-admin role.
  • You have a target namespace that supports running pods with the elevated pod security admission standard of baseline or privileged.

Procedure

  1. Edit the CatalogSource definition by setting the spec.grpcPodConfig.securityContextConfig label to legacy, as shown in the following example:

    Example CatalogSource definition

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: my-catsrc
      namespace: my-ns
    spec:
      sourceType: grpc
      grpcPodConfig:
        securityContextConfig: legacy
      image: my-image:latest

    Tip

    In Red Hat OpenShift Service on AWS 4, the spec.grpcPodConfig.securityContextConfig field is set to legacy by default. In a future release of Red Hat OpenShift Service on AWS, it is planned that the default setting will change to restricted. If your catalog cannot run under restricted enforcement, it is recommended that you manually set this field to legacy.

  2. Edit your <namespace>.yaml file to add elevated pod security admission standards to your catalog source namespace, as shown in the following example:

    Example <namespace>.yaml file

    apiVersion: v1
    kind: Namespace
    metadata:
    ...
      labels:
        security.openshift.io/scc.podSecurityLabelSync: "false" 1
        openshift.io/cluster-monitoring: "true"
        pod-security.kubernetes.io/enforce: baseline 2
      name: "<namespace_name>"

    1
    Turn off pod security label synchronization by adding the security.openshift.io/scc.podSecurityLabelSync=false label to the namespace.
    2
    Apply the pod security admission pod-security.kubernetes.io/enforce label. Set the label to baseline or privileged. Use the baseline pod security profile unless other workloads in the namespace require a privileged profile.

4.7.5. Adding a catalog source to a cluster

Adding a catalog source to an Red Hat OpenShift Service on AWS cluster enables the discovery and installation of Operators for users. Administrators with the dedicated-admin role can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.

Tip

Alternatively, you can use the web console to manage catalog sources. From the Home → Search page, select a project, click the Resources drop-down and search for CatalogSource. You can create, update, delete, disable, and enable individual sources.

Prerequisites

  • You built and pushed an index image to a registry.
  • You have access to the cluster as a user with the dedicated-admin role.

Procedure

  1. Create a CatalogSource object that references your index image.

    1. Modify the following to your specifications and save it as a catalogSource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace 1
        annotations:
          olm.catalogImageTemplate: 2
            "<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
      spec:
        sourceType: grpc
        grpcPodConfig:
          securityContextConfig: <security_mode> 3
        image: <registry>/<namespace>/<index_image_name>:<tag> 4
        displayName: My Operator Catalog
        publisher: <publisher_name> 5
        updateStrategy:
          registryPoll: 6
            interval: 30m
      1
      If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
      2
      Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag.
      3
      Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future Red Hat OpenShift Service on AWS release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
      4
      Specify your index image. If you specify a tag after the image name, for example :v4, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
      5
      Specify your name or an organization name publishing the catalog.
      6
      Catalog sources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc apply -f catalogSource.yaml
  2. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the catalog source:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE PUBLISHER  AGE
      my-operator-catalog   My Operator Catalog   grpc            5s

    3. Check the package manifest:

      $ oc get packagemanifest -n openshift-marketplace

      Example output

      NAME                          CATALOG               AGE
      jaeger-product                My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your Red Hat OpenShift Service on AWS web console.

4.7.6. Removing custom catalogs

As an administrator with the dedicated-admin role, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.

Procedure

  1. In the Administrator perspective of the web console, navigate to Home → Search.
  2. Select a project from the Project: list.
  3. Select CatalogSource from the Resources list.
  4. Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource.

4.8. Catalog source pod scheduling

When an Operator Lifecycle Manager (OLM) catalog source of source type grpc defines a spec.image, the Catalog Operator creates a pod that serves the defined image content. By default, this pod defines the following in its specification:

  • Only the kubernetes.io/os=linux node selector.
  • The default priority class name: system-cluster-critical.
  • No tolerations.

As an administrator, you can override these values by modifying fields in the CatalogSource object’s optional spec.grpcPodConfig section.

Important

The Marketplace Operator, openshift-marketplace, manages the default OperatorHub custom resource (CR). This CR manages CatalogSource objects. By default, if you modify fields in the spec.grpcPodConfig section of a CatalogSource object, the Marketplace Operator automatically reverts these changes.

To apply persistent changes to a CatalogSource object, you must first disable a default CatalogSource object.

4.8.1. Disabling default CatalogSource objects at a local level

You can apply persistent changes to a CatalogSource object, such as catalog source pods, at a local level by disabling a default CatalogSource object. Consider doing so in situations where the default CatalogSource object's configuration does not meet your organization's needs. By default, if you modify fields in the spec.grpcPodConfig section of the CatalogSource object, the Marketplace Operator automatically reverts these changes.

The Marketplace Operator, openshift-marketplace, manages the default custom resources (CRs) of the OperatorHub. The OperatorHub manages CatalogSource objects.

To apply persistent changes to a CatalogSource object, you must first disable a default CatalogSource object.

Procedure

  • To disable all the default CatalogSource objects at a local level, enter the following command:

    $ oc patch operatorhub cluster -p '{"spec": {"disableAllDefaultSources": true}}' --type=merge
    Note

    You can also configure the default OperatorHub CR to either disable all CatalogSource objects or disable a specific object.
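
    For example, a sketch of the OperatorHub CR that disables a single default catalog source instead of all of them; the source name shown is illustrative:

      apiVersion: config.openshift.io/v1
      kind: OperatorHub
      metadata:
        name: cluster
      spec:
        sources:
        - name: community-operators
          disabled: true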

4.8.2. Overriding the node selector for catalog source pods

Prerequisites

  • A CatalogSource object of source type grpc with spec.image is defined.
  • You have access to the cluster as a user with the dedicated-admin role.

Procedure

  • Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:

      grpcPodConfig:
        nodeSelector:
          custom_label: <label>

    where <label> is the label for the node selector that you want catalog source pods to use for scheduling.

4.8.3. Overriding the priority class name for catalog source pods

Prerequisites

  • A CatalogSource object of source type grpc with spec.image is defined.
  • You have access to the cluster as a user with the dedicated-admin role.

Procedure

  • Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:

      grpcPodConfig:
        priorityClassName: <priority_class>

    where <priority_class> is one of the following:

    • One of the default priority classes provided by Kubernetes: system-cluster-critical or system-node-critical
    • An empty set ("") to assign the default priority
    • A pre-existing and custom defined priority class
Note

Previously, the only pod scheduling parameter that could be overridden was priorityClassName. This was done by adding the operatorframework.io/priorityclass annotation to the CatalogSource object. For example:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
  annotations:
    operatorframework.io/priorityclass: system-cluster-critical

If a CatalogSource object defines both the annotation and spec.grpcPodConfig.priorityClassName, the annotation takes precedence over the configuration parameter.

4.8.4. Overriding tolerations for catalog source pods

Prerequisites

  • A CatalogSource object of source type grpc with spec.image is defined.
  • You have access to the cluster as a user with the dedicated-admin role.

Procedure

  • Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:

      grpcPodConfig:
        tolerations:
          - key: "<key_name>"
            operator: "<operator_type>"
            value: "<value>"
            effect: "<effect>"
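
    For example, a hedged sketch that lets catalog source pods tolerate a taint on dedicated infrastructure nodes; the taint key shown is illustrative and must match a taint that exists in your cluster:

      grpcPodConfig:
        tolerations:
          - key: "node-role.kubernetes.io/infra"
            operator: "Exists"
            effect: "NoSchedule"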

4.9. Troubleshooting Operator issues

If you experience Operator issues, verify Operator subscription status, check Operator pod health across the cluster, and gather Operator logs for diagnosis.
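
For example, to gather logs from an Operator deployment, you can run a command like the following; the deployment and namespace names are placeholders:

    $ oc logs deployment/<operator_deployment_name> -n <operator_namespace>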

4.9.1. Operator subscription condition types

Subscriptions can report the following condition types:

Table 4.2. Subscription condition types
ConditionDescription

CatalogSourcesUnhealthy

Some or all of the catalog sources to be used in resolution are unhealthy.

InstallPlanMissing

An install plan for a subscription is missing.

InstallPlanPending

An install plan for a subscription is pending installation.

InstallPlanFailed

An install plan for a subscription has failed.

ResolutionFailed

The dependency resolution for a subscription has failed.

Note

Default Red Hat OpenShift Service on AWS cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

4.9.2. Viewing Operator subscription status by using the CLI

You can view Operator subscription status by using the CLI.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List Operator subscriptions:

    $ oc get subs -n <operator_namespace>
  2. Use the oc describe command to inspect a Subscription resource:

    $ oc describe sub <subscription_name> -n <operator_namespace>
  3. In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:

    Example output

    Name:         cluster-logging
    Namespace:    openshift-logging
    Labels:       operators.coreos.com/cluster-logging.openshift-logging=
    Annotations:  <none>
    API Version:  operators.coreos.com/v1alpha1
    Kind:         Subscription
    # ...
    Conditions:
       Last Transition Time:  2019-07-29T13:42:57Z
       Message:               all available catalogsources are healthy
       Reason:                AllCatalogSourcesHealthy
       Status:                False
       Type:                  CatalogSourcesUnhealthy
    # ...

Note

Default Red Hat OpenShift Service on AWS cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

4.9.3. Viewing Operator catalog source status by using the CLI

You can view the status of an Operator catalog source by using the CLI.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:

    $ oc get catalogsources -n openshift-marketplace

    Example output

    NAME                  DISPLAY               TYPE   PUBLISHER   AGE
    certified-operators   Certified Operators   grpc   Red Hat     55m
    community-operators   Community Operators   grpc   Red Hat     55m
    example-catalog       Example Catalog       grpc   Example Org 2m25s
    redhat-marketplace    Red Hat Marketplace   grpc   Red Hat     55m
    redhat-operators      Red Hat Operators     grpc   Red Hat     55m

  2. Use the oc describe command to get more details and status about a catalog source:

    $ oc describe catalogsource example-catalog -n openshift-marketplace

    Example output

    Name:         example-catalog
    Namespace:    openshift-marketplace
    Labels:       <none>
    Annotations:  operatorframework.io/managed-by: marketplace-operator
                  target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
    API Version:  operators.coreos.com/v1alpha1
    Kind:         CatalogSource
    # ...
    Status:
      Connection State:
        Address:              example-catalog.openshift-marketplace.svc:50051
        Last Connect:         2021-09-09T17:07:35Z
        Last Observed State:  TRANSIENT_FAILURE
      Registry Service:
        Created At:         2021-09-09T17:05:45Z
        Port:               50051
        Protocol:           grpc
        Service Name:       example-catalog
        Service Namespace:  openshift-marketplace
    # ...

    In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.

  3. List the pods in the namespace where your catalog source was created:

    $ oc get pods -n openshift-marketplace

    Example output

    NAME                                    READY   STATUS             RESTARTS   AGE
    certified-operators-cv9nn               1/1     Running            0          36m
    community-operators-6v8lp               1/1     Running            0          36m
    marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
    example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
    redhat-marketplace-57p8c                1/1     Running            0          36m
    redhat-operators-smxx8                  1/1     Running            0          36m

    When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source’s index image.

  4. Use the oc describe command to inspect a pod for more detailed information:

    $ oc describe pod example-catalog-bwt8z -n openshift-marketplace

    Example output

    Name:         example-catalog-bwt8z
    Namespace:    openshift-marketplace
    Priority:     0
    Node:         ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
    ...
    Events:
      Type     Reason          Age                From               Message
      ----     ------          ----               ----               -------
      Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
      Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
      Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
      Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
      Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
      Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
      Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull

    In the preceding example output, the error messages indicate that the catalog source’s index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
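
    After you resolve the underlying issue, for example by providing the required registry credentials, you can poll the catalog source connection state until it recovers. The following is a minimal sketch that uses a JSONPath query against the example catalog source; a healthy catalog source typically reports READY:

    $ oc get catalogsource example-catalog -n openshift-marketplace \
        -o jsonpath='{.status.connectionState.lastObservedState}{"\n"}'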

Additional resources

4.9.4. Querying Operator pod status

You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List Operators running in the cluster. The output includes Operator version, availability, and up-time information:

    $ oc get clusteroperators
  2. List Operator pods running in the Operator’s namespace, plus pod status, restarts, and age:

    $ oc get pod -n <operator_namespace>
  3. Output a detailed Operator pod summary:

    $ oc describe pod <operator_pod_name> -n <operator_namespace>
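
As an optional check, you can narrow the pod list from the previous steps to pods that are not in the Running phase, which is often the quickest way to spot a problem. The following sketch uses a standard field selector; note that pods stuck in a crash loop can still report the Running phase, so also review the RESTARTS column:

$ oc get pods -n <operator_namespace> --field-selector=status.phase!=Running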

4.9.5. Gathering Operator logs

If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).
  • You have the fully qualified domain names of the control plane machines.

Procedure

  1. List the Operator pods that are running in the Operator’s namespace, plus the pod status, restarts, and age:

    $ oc get pods -n <operator_namespace>
  2. Review logs for an Operator pod:

    $ oc logs pod/<pod_name> -n <operator_namespace>

    If an Operator pod has multiple containers, the preceding command produces an error that includes the name of each container. Query the logs from an individual container:

    $ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>
  3. If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.

    1. List pods on each control plane node:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods
    2. For any Operator pods not showing a Ready status, inspect the pod’s status in detail. Replace <operator_pod_id> with the Operator pod’s ID listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>
    3. List containers related to an Operator pod:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>
    4. For any Operator container not showing a Ready status, inspect the container’s status in detail. Replace <container_id> with a container ID listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>
    5. Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
      Note

      Red Hat OpenShift Service on AWS 4 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the Red Hat OpenShift Service on AWS API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
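
In addition to the oc logs commands in the preceding procedure, if an Operator container has restarted, you can retrieve the logs from its previous run by adding the --previous flag. The placeholder names below are the same as in the earlier steps:

$ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace> --previous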

Chapter 5. Developing Operators

5.1. About the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.

Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.

The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

Why use the Operator SDK?

The Operator SDK simplifies this process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring.

The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features:

  • High-level APIs and abstractions to write the operational logic more intuitively
  • Tools for scaffolding and code generation to quickly bootstrap a new project
  • Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster
  • Extensions to cover common Operator use cases
  • Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed

Operator authors with dedicated-admin access to Red Hat OpenShift Service on AWS can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.

Note

Red Hat OpenShift Service on AWS 4 supports Operator SDK 1.36.1.

5.1.1. What are Operators?

For an overview about basic Operator concepts and terminology, see Understanding Operators.

5.1.2. Development workflow

The Operator SDK provides the following workflow to develop a new Operator:

  1. Create an Operator project by using the Operator SDK command-line interface (CLI).
  2. Define new resource APIs by adding custom resource definitions (CRDs).
  3. Specify resources to watch by using the Operator SDK API.
  4. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
  5. Use the Operator SDK CLI to build and generate the Operator deployment manifests.

Figure 5.1. Operator SDK workflow


At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
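
For orientation, the following sketch maps these workflow steps to typical commands. The domain, repository, group, kind, and image names are placeholders borrowed from the Memcached tutorial later in this chapter:

$ operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator
$ operator-sdk create api --group=cache --version=v1 --kind=Memcached
# Edit api/v1/memcached_types.go and controllers/memcached_controller.go, then regenerate code and manifests:
$ make generate manifests
$ make docker-build docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>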

5.1.3. Additional resources

5.2. Installing the Operator SDK CLI

The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

Operator authors with dedicated-admin access to Red Hat OpenShift Service on AWS can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.

Note

Red Hat OpenShift Service on AWS 4 supports Operator SDK 1.36.1.

5.2.1. Installing the Operator SDK CLI on Linux

You can install the Operator SDK CLI tool on Linux.

Prerequisites

  • Go v1.19+
  • docker v17.03+, podman v1.9.3+, or buildah v1.7+

Procedure

  1. Navigate to the OpenShift mirror site.
  2. From the latest 4 directory, download the latest version of the tarball for Linux.
  3. Unpack the archive:

    $ tar xvf operator-sdk-v1.36.1-ocp-linux-x86_64.tar.gz
  4. Make the file executable:

    $ chmod +x operator-sdk
  5. Move the extracted operator-sdk binary to a directory that is on your PATH:

    $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk

    Tip

    To check your PATH:

    $ echo $PATH

Verification

  • After you install the Operator SDK CLI, verify that it is available:

    $ operator-sdk version

    Example output

    operator-sdk version: "v1.36.1-ocp", ...

5.2.2. Installing the Operator SDK CLI on macOS

You can install the Operator SDK CLI tool on macOS.

Prerequisites

  • Go v1.19+
  • docker v17.03+, podman v1.9.3+, or buildah v1.7+

Procedure

  1. For the amd64 architecture, navigate to the OpenShift mirror site.
  2. From the latest 4 directory, download the latest version of the tarball for macOS.
  3. Unpack the Operator SDK archive for the amd64 architecture by running the following command:

    $ tar xvf operator-sdk-v1.36.1-ocp-darwin-x86_64.tar.gz
  4. Make the file executable by running the following command:

    $ chmod +x operator-sdk
  5. Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command:

    $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk

    Tip

    Check your PATH by running the following command:

    $ echo $PATH

Verification

  • After you install the Operator SDK CLI, verify that it is available by running the following command:

    $ operator-sdk version

    Example output

    operator-sdk version: "v1.36.1-ocp", ...

5.3. Go-based Operators

5.3.1. Operator SDK tutorial for Go-based Operators

Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

This process is accomplished using two centerpieces of the Operator Framework:

Operator SDK
The operator-sdk CLI tool and controller-runtime library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
Note

This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators in the OpenShift Container Platform documentation.

5.3.1.1. Prerequisites
  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4+ installed
  • Go 1.21+
  • Logged in to a Red Hat OpenShift Service on AWS cluster with oc, using an account that has dedicated-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.3.1.2. Creating a project

Use the Operator SDK CLI to create a project called memcached-operator.

Procedure

  1. Create a directory for the project:

    $ mkdir -p $HOME/projects/memcached-operator
  2. Change to the directory:

    $ cd $HOME/projects/memcached-operator
  3. Activate support for Go modules:

    $ export GO111MODULE=on
  4. Run the operator-sdk init command to initialize the project:

    $ operator-sdk init \
        --domain=example.com \
        --repo=github.com/example-inc/memcached-operator
    Note

    The operator-sdk init command uses the Go plugin by default.

    The operator-sdk init command generates a go.mod file to be used with Go modules. The --repo flag is required when creating a project outside of $GOPATH/src/, because generated files require a valid module path.

5.3.1.2.1. PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands and help output run from the project root read this file and are aware that the project type is Go. For example:

domain: example.com
layout:
- go.kubebuilder.io/v3
projectName: memcached-operator
repo: github.com/example-inc/memcached-operator
version: "3"
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
5.3.1.2.2. About the Manager

The main program for the Operator is the main.go file, which initializes and runs the Manager. The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks.

The Manager can restrict the namespace that all controllers watch for resources:

mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})

By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty:

mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})

You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces:

var namespaces []string 1
mgr, err := ctrl.NewManager(cfg, manager.Options{ 2
   NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})
1
List of namespaces.
2
Creates a Cmd struct to provide shared dependencies and start components.
5.3.1.2.3. About multi-group APIs

Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command:

$ operator-sdk edit --multigroup=true

This command updates the PROJECT file, which should look like the following example:

domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...

For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly.
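
For example, after you enable multi-group support, creating an API in a second, hypothetical backup group scaffolds its types under apis/backup/v1/ and its controller under controllers/backup/:

$ operator-sdk create api \
    --group=backup \
    --version=v1 \
    --kind=MemcachedBackup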

Additional resource

5.3.1.3. Creating an API and controller

Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.

Procedure

  1. Run the following command to create an API with the group cache, version v1, and kind Memcached:

    $ operator-sdk create api \
        --group=cache \
        --version=v1 \
        --kind=Memcached
  2. When prompted, enter y for creating both the resource and controller:

    Create Resource [y/n]
    y
    Create Controller [y/n]
    y

    Example output

    Writing scaffold for you to edit...
    api/v1/memcached_types.go
    controllers/memcached_controller.go
    ...

This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go.

5.3.1.3.1. Defining the API

Define the API for the Memcached custom resource (CR).

Procedure

  1. Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status:

    // MemcachedSpec defines the desired state of Memcached
    type MemcachedSpec struct {
    	// +kubebuilder:validation:Minimum=0
    	// Size is the size of the memcached deployment
    	Size int32 `json:"size"`
    }
    
    // MemcachedStatus defines the observed state of Memcached
    type MemcachedStatus struct {
    	// Nodes are the names of the memcached pods
    	Nodes []string `json:"nodes"`
    }
  2. Update the generated code for the resource type:

    $ make generate
    Tip

    After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

    The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.

5.3.1.3.2. Generating CRD manifests

After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests.

Procedure

  • Run the following command to generate and update CRD manifests:

    $ make manifests

    This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.

5.3.1.3.2.1. About OpenAPI validation

OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated.

Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix.
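
For example, the following sketch bounds the size field of the Memcached spec from this tutorial. The Minimum marker is taken from the earlier type definition; the Maximum value of 5 is an illustrative assumption, not part of the tutorial API:

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=5
	// Size is the size of the memcached deployment
	// (the Maximum marker above is an illustrative assumption)
	Size int32 `json:"size"`
}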

Additional resources

5.3.1.4. Implementing the controller

After creating a new API and controller, you can implement the controller logic.

Procedure

  • For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation:

    Example 5.1. Example memcached_controller.go

    /*
    Copyright 2020.
    
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
    */
    
    package controllers
    
    import (
            appsv1 "k8s.io/api/apps/v1"
            corev1 "k8s.io/api/core/v1"
            "k8s.io/apimachinery/pkg/api/errors"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/apimachinery/pkg/types"
            "reflect"
    
            "context"
    
            "github.com/go-logr/logr"
            "k8s.io/apimachinery/pkg/runtime"
            ctrl "sigs.k8s.io/controller-runtime"
            "sigs.k8s.io/controller-runtime/pkg/client"
            ctrllog "sigs.k8s.io/controller-runtime/pkg/log"
    
            cachev1 "github.com/example-inc/memcached-operator/api/v1"
    )
    
    // MemcachedReconciler reconciles a Memcached object
    type MemcachedReconciler struct {
            client.Client
            Log    logr.Logger
            Scheme *runtime.Scheme
    }
    
    // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
    // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
    // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
    // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
    // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
    
    // Reconcile is part of the main kubernetes reconciliation loop which aims to
    // move the current state of the cluster closer to the desired state.
    // TODO(user): Modify the Reconcile function to compare the state specified by
    // the Memcached object against the actual cluster state, and then
    // perform operations to make the cluster state reflect the state specified by
    // the user.
    //
    // For more details, check Reconcile and its Result here:
    // - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.7.0/pkg/reconcile
    func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
            //log := r.Log.WithValues("memcached", req.NamespacedName)
            log := ctrllog.FromContext(ctx)
            // Fetch the Memcached instance
            memcached := &cachev1.Memcached{}
            err := r.Get(ctx, req.NamespacedName, memcached)
            if err != nil {
                    if errors.IsNotFound(err) {
                            // Request object not found, could have been deleted after reconcile request.
                            // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
                            // Return and don't requeue
                            log.Info("Memcached resource not found. Ignoring since object must be deleted")
                            return ctrl.Result{}, nil
                    }
                    // Error reading the object - requeue the request.
                    log.Error(err, "Failed to get Memcached")
                    return ctrl.Result{}, err
            }
    
            // Check if the deployment already exists, if not create a new one
            found := &appsv1.Deployment{}
            err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
            if err != nil && errors.IsNotFound(err) {
                    // Define a new deployment
                    dep := r.deploymentForMemcached(memcached)
                    log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
                    err = r.Create(ctx, dep)
                    if err != nil {
                            log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
                            return ctrl.Result{}, err
                    }
                    // Deployment created successfully - return and requeue
                    return ctrl.Result{Requeue: true}, nil
            } else if err != nil {
                    log.Error(err, "Failed to get Deployment")
                    return ctrl.Result{}, err
            }
    
            // Ensure the deployment size is the same as the spec
            size := memcached.Spec.Size
            if *found.Spec.Replicas != size {
                    found.Spec.Replicas = &size
                    err = r.Update(ctx, found)
                    if err != nil {
                            log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
                            return ctrl.Result{}, err
                    }
                    // Spec updated - return and requeue
                    return ctrl.Result{Requeue: true}, nil
            }
    
            // Update the Memcached status with the pod names
            // List the pods for this memcached's deployment
            podList := &corev1.PodList{}
            listOpts := []client.ListOption{
                    client.InNamespace(memcached.Namespace),
                    client.MatchingLabels(labelsForMemcached(memcached.Name)),
            }
            if err = r.List(ctx, podList, listOpts...); err != nil {
                    log.Error(err, "Failed to list pods", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name)
                    return ctrl.Result{}, err
            }
            podNames := getPodNames(podList.Items)
    
            // Update status.Nodes if needed
            if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
                    memcached.Status.Nodes = podNames
                    err := r.Status().Update(ctx, memcached)
                    if err != nil {
                            log.Error(err, "Failed to update Memcached status")
                            return ctrl.Result{}, err
                    }
            }
    
            return ctrl.Result{}, nil
    }
    
    // deploymentForMemcached returns a memcached Deployment object
    func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment {
            ls := labelsForMemcached(m.Name)
            replicas := m.Spec.Size
    
            dep := &appsv1.Deployment{
                    ObjectMeta: metav1.ObjectMeta{
                            Name:      m.Name,
                            Namespace: m.Namespace,
                    },
                    Spec: appsv1.DeploymentSpec{
                            Replicas: &replicas,
                            Selector: &metav1.LabelSelector{
                                    MatchLabels: ls,
                            },
                            Template: corev1.PodTemplateSpec{
                                    ObjectMeta: metav1.ObjectMeta{
                                            Labels: ls,
                                    },
                                    Spec: corev1.PodSpec{
                                            Containers: []corev1.Container{{
                                                    Image:   "memcached:1.4.36-alpine",
                                                    Name:    "memcached",
                                                    Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
                                                    Ports: []corev1.ContainerPort{{
                                                            ContainerPort: 11211,
                                                            Name:          "memcached",
                                                    }},
                                            }},
                                    },
                            },
                    },
            }
            // Set Memcached instance as the owner and controller
            ctrl.SetControllerReference(m, dep, r.Scheme)
            return dep
    }
    
    // labelsForMemcached returns the labels for selecting the resources
    // belonging to the given memcached CR name.
    func labelsForMemcached(name string) map[string]string {
            return map[string]string{"app": "memcached", "memcached_cr": name}
    }
    
    // getPodNames returns the pod names of the array of pods passed in
    func getPodNames(pods []corev1.Pod) []string {
            var podNames []string
            for _, pod := range pods {
                    podNames = append(podNames, pod.Name)
            }
            return podNames
    }
    
    // SetupWithManager sets up the controller with the Manager.
    func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
            return ctrl.NewControllerManagedBy(mgr).
                    For(&cachev1.Memcached{}).
                    Owns(&appsv1.Deployment{}).
                    Complete(r)
    }

    The example controller runs the following reconciliation logic for each Memcached custom resource (CR):

    • Create a Memcached deployment if it does not exist.
    • Ensure that the deployment size is the same as specified by the Memcached CR spec.
    • Update the Memcached CR status with the names of the memcached pods.

The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.

5.3.1.4.1. Resources watched by the controller

The SetupWithManager() function in controllers/memcached_controller.go specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller.

import (
	...
	appsv1 "k8s.io/api/apps/v1"
	...
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		Complete(r)
}

NewControllerManagedBy() provides a controller builder that allows various controller configurations.

For(&cachev1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Add, Update, or Delete event for a Memcached type, the reconcile loop is sent a reconcile Request argument, which consists of a namespace and name key, for that Memcached object.

Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add, Update, or Delete event, the event handler maps each event to a reconcile request for the owner of the deployment. In this case, the owner is the Memcached object for which the deployment was created.

5.3.1.4.2. Controller configurations

You can initialize a controller by using many other useful configurations. For example:

  • Set the maximum number of concurrent reconciles for the controller by using the MaxConcurrentReconciles option, which defaults to 1:

    func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
        return ctrl.NewControllerManagedBy(mgr).
            For(&cachev1.Memcached{}).
            Owns(&appsv1.Deployment{}).
            WithOptions(controller.Options{
                MaxConcurrentReconciles: 2,
            }).
            Complete(r)
    }
  • Filter watch events by using predicates; a short sketch that uses a generation-change predicate follows below.
  • Choose the type of EventHandler to change how a watch event translates to reconcile requests for the reconcile loop. For Operator relationships that are more complex than primary and secondary resources, you can use the EnqueueRequestsFromMapFunc handler to transform a watch event into an arbitrary set of reconcile requests.

For more details on these and other configurations, see the upstream Builder and Controller GoDocs.
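
For example, the following is a minimal sketch of predicate-based filtering that ignores update events that do not change an object's generation, such as status-only updates. It assumes the standard controller-runtime predicate package:

import (
	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/predicate"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		// Skip reconciles for events that do not change metadata.generation.
		WithEventFilter(predicate.GenerationChangedPredicate{}).
		Complete(r)
}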

5.3.1.4.3. Reconcile loop

Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a namespace and name key used to find the primary resource object, Memcached, from the cache:

import (
	ctrl "sigs.k8s.io/controller-runtime"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
	...
)

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
  // Lookup the Memcached instance for this reconcile request
  memcached := &cachev1.Memcached{}
  err := r.Get(ctx, req.NamespacedName, memcached)
  ...
}

Based on the return values, result, and error, the request might be requeued and the reconcile loop might be triggered again:

// Reconcile successful - don't requeue
return ctrl.Result{}, nil
// Reconcile failed due to error - requeue
return ctrl.Result{}, err
// Requeue for any reason other than an error
return ctrl.Result{Requeue: true}, nil

You can set the Result.RequeueAfter to requeue the request after a grace period as well:

import "time"

// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
Note

You can return Result with RequeueAfter set to periodically reconcile a CR.

For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.

5.3.1.4.4. Permissions and RBAC manifests

The controller requires certain RBAC permissions to interact with the resources it manages. These are specified using RBAC markers, such as the following:

// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
  ...
}

The ClusterRole object manifest at config/rbac/role.yaml is generated from the previous markers by using the controller-gen utility whenever the make manifests command is run.
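
For illustration, the following is an abridged sketch of the kind of rule that the first marker produces in config/rbac/role.yaml; the exact metadata and rule ordering in your generated file can differ:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
- apiGroups:
  - cache.example.com
  resources:
  - memcacheds
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
# ... additional rules generated from the remaining markers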

5.3.1.5. Enabling proxy support

Operator authors can develop Operators that support network proxies. Administrators with the dedicated-admin role configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:

  • HTTP_PROXY
  • HTTPS_PROXY
  • NO_PROXY
Note

This tutorial uses HTTP_PROXY as an example environment variable.

Prerequisites

  • A cluster with cluster-wide egress proxy enabled.

Procedure

  1. Edit the controllers/memcached_controller.go file to include the following:

    1. Import the proxy package from the operator-lib library:

      import (
        ...
         "github.com/operator-framework/operator-lib/proxy"
      )
    2. Add the proxy.ReadProxyVarsFromEnv helper function to the reconcile loop and append the results to the Operand environments:

      for i, container := range dep.Spec.Template.Spec.Containers {
      		dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...)
      }
      ...
  2. Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

    containers:
     - args:
       - --leader-elect
       - --leader-election-id=ansible-proxy-demo
       image: controller:latest
       name: manager
       env:
         - name: "HTTP_PROXY"
           value: "http_proxy_test"
5.3.1.6. Running the Operator

To build and run your Operator, use the Operator SDK CLI to bundle your Operator, and then use Operator Lifecycle Manager (OLM) to deploy on the cluster.

Note

If you wish to deploy your Operator on an OpenShift Container Platform cluster instead of a Red Hat OpenShift Service on AWS cluster, two additional deployment options are available:

  • Run locally outside the cluster as a Go program.
  • Run as a deployment on the cluster.
Note

Before running your Go-based Operator as a bundle that uses OLM, ensure that your project has been updated to use supported images.

Additional resources

5.3.1.6.1. Bundling an Operator and deploying with Operator Lifecycle Manager
5.3.1.6.1.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • OpenShift CLI (oc) v4+ installed
  • Operator project initialized by using the Operator SDK
  • If your Operator is Go-based, your project must be updated to use supported images for running on Red Hat OpenShift Service on AWS

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg option for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.3.1.6.1.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on Red Hat OpenShift Service on AWS and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example Red Hat OpenShift Service on AWS 4)
  • Logged in to the cluster with oc using an account with dedicated-admin permissions
  • If your Operator is Go-based, your project must be updated to use supported images for running on Red Hat OpenShift Service on AWS

Procedure

  • Enter the following command to run the Operator on the cluster:

    $ operator-sdk run bundle \1
        -n <namespace> \2
        <registry>/<user>/<bundle_image_name>:<tag> 3
    1
    The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
    2
    Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
    3
    If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
    Important

    As of Red Hat OpenShift Service on AWS 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

    This command performs the following actions:

    • Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
    • Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
    • Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
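
After the command finishes, you can verify that the Operator installed successfully by checking the phase of its ClusterServiceVersion (CSV) in the same namespace; a phase of Succeeded indicates a healthy installation:

$ oc get csv -n <namespace>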
5.3.1.7. Creating a custom resource

After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.

Prerequisites

  • Example Memcached Operator, which provides the Memcached CR, installed on a cluster

Procedure

  1. Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

    $ oc project memcached-operator-system
  2. Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    ...
    spec:
    ...
      size: 3
  3. Create the CR:

    $ oc apply -f config/samples/cache_v1_memcached.yaml
  4. Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

    $ oc get deployments

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    memcached-operator-controller-manager   1/1     1            1           8m
    memcached-sample                        3/3     3            3           1m

  5. Check the pods and CR status to confirm the status is updated with the Memcached pod names.

    1. Check the pods:

      $ oc get pods

      Example output

      NAME                                  READY     STATUS    RESTARTS   AGE
      memcached-sample-6fd7c98d8-7dqdr      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-g5k7v      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-m7vn7      1/1       Running   0          1m

    2. Check the CR status:

      $ oc get memcached/memcached-sample -o yaml

      Example output

      apiVersion: cache.example.com/v1
      kind: Memcached
      metadata:
      ...
        name: memcached-sample
      ...
      spec:
        size: 3
      status:
        nodes:
        - memcached-sample-6fd7c98d8-7dqdr
        - memcached-sample-6fd7c98d8-g5k7v
        - memcached-sample-6fd7c98d8-m7vn7

  6. Update the deployment size.

    1. Change the spec.size field in the Memcached CR from 3 to 5 by patching the CR directly, instead of editing the config/samples/cache_v1_memcached.yaml file:

      $ oc patch memcached memcached-sample \
          -p '{"spec":{"size": 5}}' \
          --type=merge
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployments

      Example output

      NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator-controller-manager   1/1     1            1           10m
      memcached-sample                        5/5     5            5           3m

  7. Delete the CR by running the following command:

    $ oc delete -f config/samples/cache_v1_memcached.yaml
  8. Clean up the resources that have been created as part of this tutorial.

    • If you used the make deploy command to test the Operator, run the following command:

      $ make undeploy
    • If you used the operator-sdk run bundle command to test the Operator, run the following command:

      $ operator-sdk cleanup <project_name>
5.3.1.8. Additional resources

5.3.2. Project layout for Go-based Operators

The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.3.2.1. Go-based project layout

Go-based Operator projects, which are the default type, are generated by using the operator-sdk init command and contain the following files and directories:

File or directory    Purpose

main.go

Main program of the Operator. This instantiates a new manager that registers all custom resource definitions (CRDs) in the apis/ directory and starts all controllers in the controllers/ directory.

apis/

Directory tree that defines the APIs of the CRDs. You must edit the apis/<version>/<kind>_types.go files to define the API for each resource type and import these packages in your controllers to watch for these resource types.

controllers/

Controller implementations. Edit the controllers/<kind>_controller.go files to define the reconcile logic of the controller for handling a resource type of the specified kind.

config/

Kubernetes manifests used to deploy your controller on a cluster, including CRDs, RBAC, and certificates.

Makefile

Targets used to build and deploy your controller.

Dockerfile

Instructions used by a container engine to build your Operator.

manifests/

Kubernetes manifests for registering CRDs, setting up RBAC, and deploying the Operator as a deployment.

5.3.3. Updating Go-based Operator projects for newer Operator SDK versions

Red Hat OpenShift Service on AWS 4 supports Operator SDK 1.36.1. If you already have the 1.31.0 CLI installed on your workstation, you can update the CLI to 1.36.1 by installing the latest version.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.36.1, update steps are required for the associated breaking changes introduced since 1.31.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.31.0.

5.3.3.1. Updating Go-based Operator projects for Operator SDK 1.36.1

The following procedure updates an existing Go-based Operator project for compatibility with 1.36.1.

Prerequisites

  • Operator SDK 1.36.1 installed
  • An Operator project created or maintained with Operator SDK 1.31.0

Procedure

  1. Edit your Operator project’s Makefile to update the Operator SDK version to v1.36.1-ocp, as shown in the following example:

    Example Makefile

    # Set the Operator SDK version to use. By default, what is installed on the system is used.
    # This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
    OPERATOR_SDK_VERSION ?= v1.36.1-ocp

  2. Update the kube-rbac-proxy container to use the Red Hat Enterprise Linux (RHEL) 9-based image:

    1. Find the entry for the kube-rbac-proxy container in the following files:

      • config/default/manager_auth_proxy_patch.yaml
      • bundle/manifests/<operator_name>.clusterserviceversion.yaml for your Operator project, for example memcached-operator.clusterserviceversion.yaml from the tutorials
    2. Update the image name in the pull spec from ose-kube-rbac-proxy to ose-kube-rbac-proxy-rhel9, and update the tag to v4:

      Example ose-kube-rbac-proxy-rhel9 pull spec with v4 image tag

      # ...
            containers:
            - name: kube-rbac-proxy
              image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4
      # ...

  3. The go/v4 plugin is now stable and is the default version used when scaffolding a Go-based Operator. The transition from Golang v2 and v3 plugins to the new Golang v4 plugin introduces significant changes. This migration is designed to enhance your project’s functionality and compatibility, reflecting the evolving landscape of Golang development.

    For more information on the reasoning behind these changes, see go/v3 vs go/v4 in the Kubebuilder documentation.

    For a comprehensive understanding of the migration process to the v4 plugin format and detailed migration steps, see Migration from go/v3 to go/v4 by updating the files manually in the Kubebuilder documentation.

  4. The kustomize/v2 plugin is now stable and is the default version used in the plugin chain when using go/v4, ansible/v1, helm/v1, and hybrid/v1-alpha plugins. For more information on this default scaffold, see Kustomize v2 in the Kubebuilder documentation.
  5. If your Operator project uses a multi-platform, or multi-architecture, build, replace the existing docker-buildx target with the following definition in your project Makefile:

    Example Makefile

    docker-buildx:
    ## Build and push the Docker image for the manager for multi-platform support
    	- docker buildx create --name project-v3-builder
    	docker buildx use project-v3-builder
    	- docker buildx build --push --platform=$(PLATFORMS) --tag ${IMG} -f Dockerfile .
    	- docker buildx rm project-v3-builder

  6. You must upgrade the Kubernetes versions in your Operator project to use 1.29. The following changes must be made in your project structure, Makefile, and go.mod files.

    Important

    The go/v3 plugin is being deprecated by Kubebuilder; therefore, Operator SDK is also migrating to go/v4 in a future release.

    1. Update your go.mod file to upgrade your dependencies:

      k8s.io/api v0.29.2
      k8s.io/apimachinery v0.29.2
      k8s.io/client-go v0.29.2
      sigs.k8s.io/controller-runtime v0.17.3
    2. Download the upgraded dependencies by running the following command:

      $ go mod tidy
    3. You can now generate a file that contains all the resources built with Kustomize, which are necessary to install this project without its dependencies. Update your Makefile by making the following changes:

      + .PHONY: build-installer
      +   build-installer: manifests generate kustomize ## Generate a consolidated YAML with CRDs and deployment.
      +   	mkdir -p dist
      +   	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
      +   	$(KUSTOMIZE) build config/default > dist/install.yaml
    4. Update the ENVTEST_K8S_VERSION variable in your Makefile by making the following changes:

      - ENVTEST_K8S_VERSION = 1.28.3
      + ENVTEST_K8S_VERSION = 1.29.0
    5. Remove the following section from your Makefile:

      - GOLANGCI_LINT = $(shell pwd)/bin/golangci-lint
      - GOLANGCI_LINT_VERSION ?= v1.54.2
      - golangci-lint:
      - 	@[ -f $(GOLANGCI_LINT) ] || { \
      - 	set -e ;\
      - 	curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell dirname $(GOLANGCI_LINT)) $(GOLANGCI_LINT_VERSION) ;\
      - 	}
    6. Update your Makefile by making the following changes:

      Example 5.2. Makefile changes

      - ## Tool Binaries
      - KUBECTL ?= kubectl
      - KUSTOMIZE ?= $(LOCALBIN)/kustomize
      - CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen
      - ENVTEST ?= $(LOCALBIN)/setup-envtest
      -
      - ## Tool Versions
      - KUSTOMIZE_VERSION ?= v5.2.1
      - CONTROLLER_TOOLS_VERSION ?= v0.13.0
      -
      - .PHONY: kustomize
      - kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary. If wrong version is installed, it will be removed before downloading.
      - $(KUSTOMIZE): $(LOCALBIN)
      -   @if test -x $(LOCALBIN)/kustomize && ! $(LOCALBIN)/kustomize version | grep -q $(KUSTOMIZE_VERSION); then \
      -   echo "$(LOCALBIN)/kustomize version is not expected $(KUSTOMIZE_VERSION). Removing it before installing."; \
      -   rm -rf $(LOCALBIN)/kustomize; \
      -   fi
      -   test -s $(LOCALBIN)/kustomize || GOBIN=$(LOCALBIN) GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v5@$(KUSTOMIZE_VERSION)
      -
      - .PHONY: controller-gen
      - controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary. If wrong version is installed, it will be overwritten.
      - $(CONTROLLER_GEN): $(LOCALBIN)
      -   test -s $(LOCALBIN)/controller-gen && $(LOCALBIN)/controller-gen --version | grep -q $(CONTROLLER_TOOLS_VERSION) || \
      -   GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION)
      -
      - .PHONY: envtest
      - envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
      - $(ENVTEST): $(LOCALBIN)
      -   test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
      + ## Tool Binaries
      + KUBECTL ?= kubectl
      + KUSTOMIZE ?= $(LOCALBIN)/kustomize-$(KUSTOMIZE_VERSION)
      + CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen-$(CONTROLLER_TOOLS_VERSION)
      + ENVTEST ?= $(LOCALBIN)/setup-envtest-$(ENVTEST_VERSION)
      + GOLANGCI_LINT = $(LOCALBIN)/golangci-lint-$(GOLANGCI_LINT_VERSION)
      +
      + ## Tool Versions
      + KUSTOMIZE_VERSION ?= v5.3.0
      + CONTROLLER_TOOLS_VERSION ?= v0.14.0
      + ENVTEST_VERSION ?= release-0.17
      + GOLANGCI_LINT_VERSION ?= v1.57.2
      +
      + .PHONY: kustomize
      + kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
      + $(KUSTOMIZE): $(LOCALBIN)
      + 	$(call go-install-tool,$(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v5,$(KUSTOMIZE_VERSION))
      +
      + .PHONY: controller-gen
      + controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
      + $(CONTROLLER_GEN): $(LOCALBIN)
      + 	$(call go-install-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen,$(CONTROLLER_TOOLS_VERSION))
      +
      + .PHONY: envtest
      + envtest: $(ENVTEST) ## Download setup-envtest locally if necessary.
      + $(ENVTEST): $(LOCALBIN)
      + 	$(call go-install-tool,$(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest,$(ENVTEST_VERSION))
      +
      + .PHONY: golangci-lint
      + golangci-lint: $(GOLANGCI_LINT) ## Download golangci-lint locally if necessary.
      + $(GOLANGCI_LINT): $(LOCALBIN)
      + 	$(call go-install-tool,$(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,${GOLANGCI_LINT_VERSION})
      +
      + # go-install-tool will 'go install' any package with custom target and name of binary, if it doesn't exist
      + # $1 - target path with name of binary (ideally with version)
      + # $2 - package url which can be installed
      + # $3 - specific version of package
      + define go-install-tool
      + @[ -f $(1) ] || { \
      + set -e; \
      + package=$(2)@$(3) ;\
      + echo "Downloading $${package}" ;\
      + GOBIN=$(LOCALBIN) go install $${package} ;\
      + mv "$$(echo "$(1)" | sed "s/-$(3)$$//")" $(1) ;\
      + }
      + endef
5.3.3.2. Additional resources

5.4. Ansible-based Operators

5.4.1. Operator SDK tutorial for Ansible-based Operators

Operator developers can take advantage of Ansible support in the Operator SDK to build an example Ansible-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This tutorial walks through the following process:

  • Create a Memcached deployment
  • Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
  • Update the Memcached CR status using the status writer with the names of the memcached pods
Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

This process is accomplished by using two centerpieces of the Operator Framework:

Operator SDK
The operator-sdk CLI tool and controller-runtime library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
Note

This tutorial goes into greater detail than Getting started with Operator SDK for Ansible-based Operators in the OpenShift Container Platform documentation.

5.4.1.1. Prerequisites
5.4.1.2. Creating a project

Use the Operator SDK CLI to create a project called memcached-operator.

Procedure

  1. Create a directory for the project:

    $ mkdir -p $HOME/projects/memcached-operator
  2. Change to the directory:

    $ cd $HOME/projects/memcached-operator
  3. Run the operator-sdk init command with the ansible plugin to initialize the project:

    $ operator-sdk init \
        --plugins=ansible \
        --domain=example.com
5.4.1.2.1. PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Ansible. For example:

domain: example.com
layout:
- ansible.sdk.operatorframework.io/v1
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
projectName: memcached-operator
version: "3"
5.4.1.3. Creating an API

Use the Operator SDK CLI to create a Memcached API.

Procedure

  • Run the following command to create an API with group cache, version v1, and kind Memcached:

    $ operator-sdk create api \
        --group cache \
        --version v1 \
        --kind Memcached \
        --generate-role 1
    1
    Generates an Ansible role for the API.

After creating the API, your Operator project updates with the following structure:

Memcached CRD
Includes a sample Memcached resource
Manager

Program that reconciles the state of the cluster to the desired state by using:

  • A reconciler, either an Ansible role or playbook
  • A watches.yaml file, which connects the Memcached resource to the memcached Ansible role
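
For reference, the generated watches.yaml file maps the Memcached custom resource to the memcached Ansible role. A minimal sketch of the entry, assuming the default scaffolding for the group and version used in this tutorial, looks similar to the following:

Example watches.yaml entry

---
# Maps the cache.example.com/v1 Memcached kind to the scaffolded memcached role
- group: cache.example.com
  version: v1
  kind: Memcached
  role: memcached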
5.4.1.4. Modifying the manager

Update your Operator project to provide the reconcile logic, in the form of an Ansible role, which runs every time a Memcached resource is created, updated, or deleted.

Procedure

  1. Update the roles/memcached/tasks/main.yml file with the following structure:

    ---
    - name: start memcached
      k8s:
        definition:
          kind: Deployment
          apiVersion: apps/v1
          metadata:
            name: '{{ ansible_operator_meta.name }}-memcached'
            namespace: '{{ ansible_operator_meta.namespace }}'
          spec:
            replicas: "{{size}}"
            selector:
              matchLabels:
                app: memcached
            template:
              metadata:
                labels:
                  app: memcached
              spec:
                containers:
                - name: memcached
                  command:
                  - memcached
                  - -m=64
                  - -o
                  - modern
                  - -v
                  image: "docker.io/memcached:1.4.36-alpine"
                  ports:
                    - containerPort: 11211

    This memcached role ensures a memcached deployment exists and sets the deployment size.

  2. Set default values for variables used in your Ansible role by editing the roles/memcached/defaults/main.yml file:

    ---
    # defaults file for Memcached
    size: 1
  3. Update the Memcached sample resource in the config/samples/cache_v1_memcached.yaml file with the following structure:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      labels:
        app.kubernetes.io/name: memcached
        app.kubernetes.io/instance: memcached-sample
        app.kubernetes.io/part-of: memcached-operator
        app.kubernetes.io/managed-by: kustomize
        app.kubernetes.io/created-by: memcached-operator
      name: memcached-sample
    spec:
      size: 3

    The key-value pairs in the custom resource (CR) spec are passed to Ansible as extra variables.

Note

The names of all variables in the spec field are converted to snake case, meaning lowercase with an underscore, by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.

You can disable this case conversion by setting the snakeCaseParameters option to false in your watches.yaml file. It is recommended that you perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.
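
If you need to keep the original parameter names, the following minimal sketch shows where the snakeCaseParameters option is set in a watches.yaml entry, using the group, version, kind, and role from this tutorial:

Example watches.yaml entry with case conversion disabled

- group: cache.example.com
  version: v1
  kind: Memcached
  role: memcached
  # Pass spec keys to Ansible verbatim instead of converting them to snake case
  snakeCaseParameters: false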

5.4.1.5. Enabling proxy support

Operator authors can develop Operators that support network proxies. Administrators with the dedicated-admin role configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:

  • HTTP_PROXY
  • HTTPS_PROXY
  • NO_PROXY
Note

This tutorial uses HTTP_PROXY as an example environment variable.

Prerequisites

  • A cluster with cluster-wide egress proxy enabled.

Procedure

  1. Add the environment variables to the deployment by updating the roles/memcached/tasks/main.yml file with the following:

    ...
    env:
       - name: HTTP_PROXY
         value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
       - name: http_proxy
         value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
    ...
  2. Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

    containers:
     - args:
       - --leader-elect
       - --leader-election-id=ansible-proxy-demo
       image: controller:latest
       name: manager
       env:
         - name: "HTTP_PROXY"
           value: "http_proxy_test"
5.4.1.6. Running the Operator

To build and run your Operator, use the Operator SDK CLI to bundle your Operator, and then use Operator Lifecycle Manager (OLM) to deploy it on the cluster.

Note

If you wish to deploy your Operator on an OpenShift Container Platform cluster instead of a Red Hat OpenShift Service on AWS cluster, two additional deployment options are available:

  • Run locally outside the cluster as a Go program.
  • Run as a deployment on the cluster.

Additional resources

5.4.1.6.1. Bundling an Operator and deploying with Operator Lifecycle Manager
5.4.1.6.1.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • OpenShift CLI (oc) v4+ installed
  • Operator project initialized by using the Operator SDK

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg option will need to be used for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.4.1.6.1.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on Red Hat OpenShift Service on AWS and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example Red Hat OpenShift Service on AWS 4)
  • Logged in to the cluster with oc using an account with dedicated-admin permissions

Procedure

  • Enter the following command to run the Operator on the cluster:

    $ operator-sdk run bundle \1
        -n <namespace> \2
        <registry>/<user>/<bundle_image_name>:<tag> 3
    1
    The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
    2
    Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
    3
    If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
    Important

    As of Red Hat OpenShift Service on AWS 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

    This command performs the following actions:

    • Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
    • Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
    • Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.4.1.7. Creating a custom resource

After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.

Prerequisites

  • Example Memcached Operator, which provides the Memcached CR, installed on a cluster

Procedure

  1. Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

    $ oc project memcached-operator-system
  2. Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    ...
    spec:
    ...
      size: 3
  3. Create the CR:

    $ oc apply -f config/samples/cache_v1_memcached.yaml
  4. Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

    $ oc get deployments

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    memcached-operator-controller-manager   1/1     1            1           8m
    memcached-sample                        3/3     3            3           1m

  5. Check the pods and CR status to confirm the status is updated with the Memcached pod names.

    1. Check the pods:

      $ oc get pods

      Example output

      NAME                                  READY     STATUS    RESTARTS   AGE
      memcached-sample-6fd7c98d8-7dqdr      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-g5k7v      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-m7vn7      1/1       Running   0          1m

    2. Check the CR status:

      $ oc get memcached/memcached-sample -o yaml

      Example output

      apiVersion: cache.example.com/v1
      kind: Memcached
      metadata:
      ...
        name: memcached-sample
      ...
      spec:
        size: 3
      status:
        nodes:
        - memcached-sample-6fd7c98d8-7dqdr
        - memcached-sample-6fd7c98d8-g5k7v
        - memcached-sample-6fd7c98d8-m7vn7

  6. Update the deployment size.

    1. Change the spec.size field in the Memcached CR from 3 to 5 by patching the memcached-sample resource:

      $ oc patch memcached memcached-sample \
          -p '{"spec":{"size": 5}}' \
          --type=merge
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployments

      Example output

      NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator-controller-manager   1/1     1            1           10m
      memcached-sample                        5/5     5            5           3m

  7. Delete the CR by running the following command:

    $ oc delete -f config/samples/cache_v1_memcached.yaml
  8. Clean up the resources that have been created as part of this tutorial.

    • If you used the make deploy command to test the Operator, run the following command:

      $ make undeploy
    • If you used the operator-sdk run bundle command to test the Operator, run the following command:

      $ operator-sdk cleanup <project_name>
5.4.1.8. Additional resources

5.4.2. Project layout for Ansible-based Operators

The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.4.2.1. Ansible-based project layout

Ansible-based Operator projects generated using the operator-sdk init --plugins ansible command contain the following directories and files:

File or directory | Purpose

Dockerfile

Dockerfile for building the container image for the Operator.

Makefile

Targets for building, publishing, and deploying the container image that wraps the Operator binary, and targets for installing and uninstalling the custom resource definition (CRD).

PROJECT

YAML file containing metadata information for the Operator.

config/crd

Base CRD files and the kustomization.yaml file settings.

config/default

Collects all Operator manifests for deployment. Used by the make deploy command.

config/manager

Controller manager deployment.

config/prometheus

ServiceMonitor resource for monitoring the Operator.

config/rbac

Role and role binding for leader election and authentication proxy.

config/samples

Sample resources created for the CRDs.

config/testing

Sample configurations for testing.

playbooks/

A subdirectory for the playbooks to run.

roles/

Subdirectory for the roles tree to run.

watches.yaml

Group/version/kind (GVK) of the resources to watch, and the Ansible invocation method. New entries are added by using the create api command.

requirements.yml

YAML file containing the Ansible collections and role dependencies to install during a build.

molecule/

Molecule scenarios for end-to-end testing of your role and Operator.

5.4.3. Updating projects for newer Operator SDK versions

Red Hat OpenShift Service on AWS 4 supports Operator SDK 1.36.1. If you already have the 1.31.0 CLI installed on your workstation, you can update the CLI to 1.36.1 by installing the latest version.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.36.1, update steps are required for the associated breaking changes introduced since 1.31.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.31.0.

5.4.3.1. Updating Ansible-based Operator projects for Operator SDK 1.36.1

The following procedure updates an existing Ansible-based Operator project for compatibility with 1.36.1.

Prerequisites

  • Operator SDK 1.36.1 installed
  • An Operator project created or maintained with Operator SDK 1.31.0

Procedure

  1. Edit your Operator project’s Makefile to update the Operator SDK version to v1.36.1-ocp, as shown in the following example:

    Example Makefile

    # Set the Operator SDK version to use. By default, what is installed on the system is used.
    # This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
    OPERATOR_SDK_VERSION ?= v1.36.1-ocp

  2. Update the kube-rbac-proxy container to use the Red Hat Enterprise Linux (RHEL) 9-based image:

    1. Find the entry for the kube-rbac-proxy container in the following files:

      • config/default/manager_auth_proxy_patch.yaml
      • bundle/manifests/<operator_name>.clusterserviceversion.yaml for your Operator project, for example memcached-operator.clusterserviceversion.yaml from the tutorials
    2. Update the image name in the pull spec from ose-kube-rbac-proxy to ose-kube-rbac-proxy-rhel9, and update the tag to v4:

      Example ose-kube-rbac-proxy-rhel9 pull spec with v4 image tag

      # ...
            containers:
            - name: kube-rbac-proxy
              image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4
      # ...

  3. Edit your Operator’s Dockerfile to update the ose-ansible-rhel9-operator image tag to 4, as shown in the following example:

    Example Dockerfile

    FROM registry.redhat.io/openshift4/ose-ansible-rhel9-operator:v4

  4. The kustomize/v2 plugin is now stable and is the default version used in the plugin chain when using go/v4, ansible/v1, helm/v1, and hybrid/v1-alpha plugins. For more information on this default scaffold, see Kustomize v2 in the Kubebuilder documentation.
  5. If your Operator project uses a multi-platform, or multi-architecture, build, replace the existing docker-buildx target with the following definition in your project Makefile:

    Example Makefile

    docker-buildx:
    ## Build and push the Docker image for the manager for multi-platform support
    	- docker buildx create --name project-v3-builder
    	docker buildx use project-v3-builder
    	- docker buildx build --push --platform=$(PLATFORMS) --tag ${IMG} -f Dockerfile .
    	- docker buildx rm project-v3-builder

  6. You must upgrade the Kubernetes versions in your Operator project to use 1.29. The following changes must be made in your project structure, Makefile, and go.mod files.

    Important

    The go/v3 plugin is being deprecated by Kubebuilder; therefore, Operator SDK is also migrating to go/v4 in a future release.

    1. Update your go.mod file to upgrade your dependencies:

      k8s.io/api v0.29.2
      k8s.io/apimachinery v0.29.2
      k8s.io/client-go v0.29.2
      sigs.k8s.io/controller-runtime v0.17.3
    2. Download the upgraded dependencies by running the following command:

      $ go mod tidy
5.4.3.2. Additional resources

5.4.4. Ansible support in Operator SDK

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.4.4.1. Custom resource files

Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.

The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:

Table 5.1. Custom resource fields
Field | Description

apiVersion

Version of the CR to be created.

kind

Kind of the CR to be created.

metadata

Kubernetes-specific metadata to be created.

spec (optional)

Key-value list of variables which are passed to Ansible. This field is empty by default.

status

Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information in the CR status.

annotations

Kubernetes-specific annotations to be appended to the CR.

The following list of CR annotations modify the behavior of the Operator:

Table 5.2. Ansible-based Operator annotations
Annotation | Description

ansible.operator-sdk/reconcile-period

Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used which applies the default suffix of s, giving the value in seconds.

Example Ansible-based Operator annotation

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
annotations:
  ansible.operator-sdk/reconcile-period: "30s"
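
As described in Table 5.1, the status subresource of a CR is managed by the operator_sdk.util.k8s_status Ansible module by default. The following minimal sketch shows how a task might add a custom key to the CR status; the Test1 kind matches the previous example, and the exampleKey status key is hypothetical:

Example task that updates the CR status

- name: Update the Test1 CR status with a custom key
  operator_sdk.util.k8s_status:
    api_version: test1.example.com/v1alpha1
    kind: Test1
    name: "{{ ansible_operator_meta.name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
    status:
      exampleKey: "value set from Ansible"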

5.4.4.2. watches.yaml file

A group/version/kind (GVK) is a unique identifier for a Kubernetes API. The watches.yaml file contains a list of mappings from custom resources (CRs), each identified by its GVK, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location at /opt/ansible/watches.yaml.

Table 5.3. watches.yaml file mappings
Field | Description

group

Group of CR to watch.

version

Version of CR to watch.

kind

Kind of CR to watch.

role (default)

Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value would be /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field.

playbook

Path to the Ansible playbook added to the container. This playbook is expected to be a way to call roles. This field is mutually exclusive with the role field.

reconcilePeriod (optional)

The reconciliation interval, how often the role or playbook is run, for a given CR.

manageStatus (optional)

When set to true (default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller.

Example watches.yaml file

- version: v1alpha1 1
  group: test1.example.com
  kind: Test1
  role: /opt/ansible/roles/Test1

- version: v1alpha1 2
  group: test2.example.com
  kind: Test2
  playbook: /opt/ansible/playbook.yml

- version: v1alpha1 3
  group: test3.example.com
  kind: Test3
  playbook: /opt/ansible/test3.yml
  reconcilePeriod: 0
  manageStatus: false

1
Simple example mapping Test1 to the test1 role.
2
Simple example mapping Test2 to a playbook.
3
More complex example for the Test3 kind. Disables re-queuing and managing the CR status in the playbook.
5.4.4.2.1. Advanced options

Advanced features can be enabled by adding them to your watches.yaml file per GVK. They can go below the group, version, kind and playbook or role fields.

Some features can be overridden per resource using an annotation on that CR. The options that can be overridden have the annotation specified below.

Table 5.4. Advanced watches.yaml file options
Feature | YAML key | Description | Annotation for override | Default value

Reconcile period | reconcilePeriod | Time between reconcile runs for a particular CR. | ansible.operator-sdk/reconcile-period | 1m

Manage status | manageStatus | Allows the Operator to manage the conditions section of each CR status section. | None | true

Watch dependent resources | watchDependentResources | Allows the Operator to dynamically watch resources that are created by Ansible. | None | true

Watch cluster-scoped resources | watchClusterScopedResources | Allows the Operator to watch cluster-scoped resources that are created by Ansible. | None | false

Max runner artifacts | maxRunnerArtifacts | Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. | ansible.operator-sdk/max-runner-artifacts | 20

Example watches.yaml file with advanced options

- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False

5.4.4.3. Extra variables sent to Ansible

Extra variables can be sent to Ansible, which are then managed by the Operator. The spec section of the custom resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook command.

The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.

For the following CR example:

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world 2"
  newParameter: "newParam"

The structure passed to Ansible as extra variables is:

{ "meta": {
        "name": "<cr_name>",
        "namespace": "<cr_namespace>",
  },
  "message": "Hello world 2",
  "new_parameter": "newParam",
  "_app_example_com_database": {
     <full_crd>
   },
}

The message and newParameter fields are set in the top level as extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:

---
- debug:
    msg: "name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}"
5.4.4.4. Ansible Runner directory

Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.

Additional resources

5.4.5. Kubernetes Collection for Ansible

To manage the lifecycle of your application on Kubernetes using Ansible, you can use the Kubernetes Collection for Ansible. This collection of Ansible modules allows a developer to either leverage their existing Kubernetes resource files written in YAML or express the lifecycle management in native Ansible.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
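
For example, you can take a deployment definition and template the fields that vary between environments. The following minimal sketch assumes hypothetical app_name, app_image, and size variables:

Example templated task

---
- name: Create a deployment templated with Ansible variables
  community.kubernetes.k8s:
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ app_name }}"
      spec:
        replicas: "{{ size }}"
        selector:
          matchLabels:
            app: "{{ app_name }}"
        template:
          metadata:
            labels:
              app: "{{ app_name }}"
          spec:
            containers:
            - name: "{{ app_name }}"
              image: "{{ app_image }}"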

This section goes into detail on usage of the Kubernetes Collection. To get started, install the collection on your local workstation and test it using a playbook before moving on to using it within an Operator.

5.4.5.1. Installing the Kubernetes Collection for Ansible

You can install the Kubernetes Collection for Ansible on your local workstation.

Procedure

  1. Install Ansible 2.15+:

    $ sudo dnf install ansible
  2. Install the Python Kubernetes client package:

    $ pip install kubernetes
  3. Install the Kubernetes Collection using one of the following methods:

    • You can install the collection directly from Ansible Galaxy:

      $ ansible-galaxy collection install community.kubernetes
    • If you have already initialized your Operator, you might have a requirements.yml file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the community.kubernetes collection as well as the operator_sdk.util collection, which provides modules and plugins for Operator-specific functions.

      To install the dependent modules from the requirements.yml file:

      $ ansible-galaxy collection install -r requirements.yml
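
The requirements.yml file from the previous step lists the required collections by name. A minimal sketch, limited to the collections described above, looks similar to the following; scaffolded projects typically also pin specific versions:

Example requirements.yml file

---
collections:
  # Kubernetes modules used by the generated role
  - name: community.kubernetes
  # Operator-specific modules and plugins, such as k8s_status
  - name: operator_sdk.util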
5.4.5.2. Testing the Kubernetes Collection locally

Operator developers can run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.

Prerequisites

  • Initialize an Ansible-based Operator project and create an API that has a generated Ansible role by using the Operator SDK
  • Install the Kubernetes Collection for Ansible

Procedure

  1. In your Ansible-based Operator project directory, modify the roles/<kind>/tasks/main.yml file with the Ansible logic that you want. The roles/<kind>/ directory is created when you use the --generate-role flag while creating an API. The <kind> replaceable matches the kind that you specified for the API.

    The following example creates and deletes a config map based on the value of a variable named state:

    ---
    - name: set ConfigMap example-config to {{ state }}
      community.kubernetes.k8s:
        api_version: v1
        kind: ConfigMap
        name: example-config
        namespace: <operator_namespace> 1
        state: "{{ state }}"
      ignore_errors: true 2
    1
    Specify the namespace where you want the config map created.
    2
    Setting ignore_errors: true ensures that deleting a nonexistent config map does not fail.
  2. Modify the roles/<kind>/defaults/main.yml file to set state to present by default:

    ---
    state: present
  3. Create an Ansible playbook by creating a playbook.yml file in the top-level of your project directory, and include your <kind> role:

    ---
    - hosts: localhost
      roles:
        - <kind>
  4. Run the playbook:

    $ ansible-playbook playbook.yml

    Example output

    [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ********************************************************************************
    
    TASK [Gathering Facts] ********************************************************************************
    ok: [localhost]
    
    TASK [memcached : set ConfigMap example-config to present] ********************************************************************************
    changed: [localhost]
    
    PLAY RECAP ********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  5. Verify that the config map was created:

    $ oc get configmaps

    Example output

    NAME               DATA   AGE
    example-config     0      2m1s

  6. Rerun the playbook setting state to absent:

    $ ansible-playbook playbook.yml --extra-vars state=absent

    Example output

    [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ********************************************************************************
    
    TASK [Gathering Facts] ********************************************************************************
    ok: [localhost]
    
    TASK [memcached : set ConfigMap example-config to absent] ********************************************************************************
    changed: [localhost]
    
    PLAY RECAP ********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  7. Verify that the config map was deleted:

    $ oc get configmaps
5.4.5.3. Next steps

5.4.6. Using Ansible inside an Operator

After you are familiar with using the Kubernetes Collection for Ansible locally, you can trigger the same Ansible logic inside of an Operator when a custom resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the watches.yaml file.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.4.6.1. Custom resource files

Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.

The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:

Table 5.5. Custom resource fields
Field | Description

apiVersion

Version of the CR to be created.

kind

Kind of the CR to be created.

metadata

Kubernetes-specific metadata to be created.

spec (optional)

Key-value list of variables which are passed to Ansible. This field is empty by default.

status

Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information in the CR status.

annotations

Kubernetes-specific annotations to be appended to the CR.

The following list of CR annotations modify the behavior of the Operator:

Table 5.6. Ansible-based Operator annotations
Annotation | Description

ansible.operator-sdk/reconcile-period

Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used which applies the default suffix of s, giving the value in seconds.

Example Ansible-based Operator annotation

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
annotations:
  ansible.operator-sdk/reconcile-period: "30s"

5.4.6.2. Testing an Ansible-based Operator locally

You can test the logic inside of an Ansible-based Operator running locally by using the make run command from the top-level directory of your Operator project. The make run Makefile target runs the ansible-operator binary locally, which reads from the watches.yaml file and uses your ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s modules do.

Note

You can customize the roles path by setting the environment variable ANSIBLE_ROLES_PATH or by using the ansible-roles-path flag. If the role is not found in the ANSIBLE_ROLES_PATH value, the Operator looks for it in {{current directory}}/roles.

Prerequisites

Procedure

  1. Install your custom resource definition (CRD) and proper role-based access control (RBAC) definitions for your custom resource (CR):

    $ make install

    Example output

    /usr/bin/kustomize build config/crd | kubectl apply -f -
    customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

  2. Run the make run command:

    $ make run

    Example output

    /home/user/memcached-operator/bin/ansible-operator run
    {"level":"info","ts":1612739145.2871568,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.10.1","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
    ...
    {"level":"info","ts":1612739148.347306,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
    {"level":"info","ts":1612739148.3488882,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2}
    {"level":"info","ts":1612739148.3490262,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
    {"level":"info","ts":1612739148.3490646,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
    {"level":"info","ts":1612739148.350217,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
    {"level":"info","ts":1612739148.3506632,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
    {"level":"info","ts":1612739148.350784,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
    {"level":"info","ts":1612739148.5511978,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
    {"level":"info","ts":1612739148.5512562,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}

    With the Operator now watching your CR for events, the creation of a CR will trigger your Ansible role to run.

    Note

    Consider an example config/samples/<gvk>.yaml CR manifest:

    apiVersion: <group>.example.com/v1alpha1
    kind: <kind>
    metadata:
      name: "<kind>-sample"

    Because the spec field is not set, Ansible is invoked with no extra variables. Passing extra variables from a CR to Ansible is covered in another section. It is important to set reasonable defaults for the Operator.

  3. Create an instance of your CR with the default variable state set to present:

    $ oc apply -f config/samples/<gvk>.yaml
  4. Check that the example-config config map was created:

    $ oc get configmaps

    Example output

    NAME               DATA   AGE
    example-config     0      3s

  5. Modify your config/samples/<gvk>.yaml file to set the state field to absent. For example:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    spec:
      state: absent
  6. Apply the changes:

    $ oc apply -f config/samples/<gvk>.yaml
  7. Confirm that the config map is deleted:

    $ oc get configmap
5.4.6.3. Testing an Ansible-based Operator on the cluster

After you have tested your custom Ansible logic locally inside of an Operator, you can test the Operator inside of a pod on an Red Hat OpenShift Service on AWS cluster, which is preferred for production use.

You can run your Operator project as a deployment on your cluster.

Procedure

  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg option will need to be used for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

    By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

  3. Run the following command to verify that the Operator is running:

    $ oc get deployment -n <project_name>-system

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    <project_name>-controller-manager       1/1     1            1           8m

5.4.6.4. Ansible logs

Ansible-based Operators provide logs about the Ansible run, which can be useful for debugging your Ansible tasks. The logs can also contain detailed information about the internals of the Operator and its interactions with Kubernetes.

5.4.6.4.1. Viewing Ansible logs

Prerequisites

  • Ansible-based Operator running as a deployment on a cluster

Procedure

  • To view logs from an Ansible-based Operator, run the following command:

    $ oc logs deployment/<project_name>-controller-manager \
        -c manager \1
        -n <namespace> 2
    1
    View logs from the manager container.
    2
    If you used the make deploy command to run the Operator as a deployment, use the <project_name>-system namespace.

    Example output

    {"level":"info","ts":1612732105.0579333,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.10.1","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
    {"level":"info","ts":1612732105.0587437,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
    I0207 21:08:26.110949       7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s
    {"level":"info","ts":1612732107.768025,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
    {"level":"info","ts":1612732107.768796,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2}
    {"level":"info","ts":1612732107.7688773,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
    {"level":"info","ts":1612732107.7688901,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
    {"level":"info","ts":1612732107.770032,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
    I0207 21:08:27.770185       7 leaderelection.go:243] attempting to acquire leader lease  memcached-operator-system/memcached-operator...
    {"level":"info","ts":1612732107.770202,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
    I0207 21:08:27.784854       7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator
    {"level":"info","ts":1612732107.7850506,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
    {"level":"info","ts":1612732107.8853772,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
    {"level":"info","ts":1612732107.8854098,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}

5.4.6.4.2. Enabling full Ansible results in logs

You can set the ANSIBLE_DEBUG_LOGS environment variable to True to include the full Ansible result in the logs, which can be helpful when debugging.

Procedure

  • Edit the config/manager/manager.yaml and config/default/manager_auth_proxy_patch.yaml files to include the following configuration:

          containers:
          - name: manager
            env:
            - name: ANSIBLE_DEBUG_LOGS
              value: "True"
5.4.6.4.3. Enabling verbose debugging in logs

While developing an Ansible-based Operator, it can be helpful to enable additional debugging in logs.

Procedure

  • Add the ansible.sdk.operatorframework.io/verbosity annotation to your custom resource to enable the verbosity level that you want. For example:

    apiVersion: "cache.example.com/v1alpha1"
    kind: "Memcached"
    metadata:
      name: "example-memcached"
      annotations:
        "ansible.sdk.operatorframework.io/verbosity": "4"
    spec:
      size: 4
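
    You can then apply the updated custom resource to your cluster. For example, assuming the default sample file name generated by the project scaffolding:

    $ oc apply -f config/samples/cache_v1alpha1_memcached.yaml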

5.4.7. Custom resource status management

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.4.7.1. About custom resource status in Ansible-based Operators

Ansible-based Operators automatically update custom resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:

status:
  conditions:
  - ansibleResult:
      changed: 3
      completion: 2018-12-03T13:45:57.13329
      failures: 1
      ok: 6
      skipped: 0
    lastTransitionTime: 2018-12-03T13:45:57Z
    message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno
      113] No route to host>'
    reason: Failed
    status: "True"
    type: Failure
  - lastTransitionTime: 2018-12-03T13:46:13Z
    message: Running reconciliation
    reason: Running
    status: "True"
    type: Running

Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.

By default, Ansible-based Operators include the generic Ansible run output, as shown above. If you would prefer that your application did not update the status with Ansible output, you can track the status manually from your application.

5.4.7.2. Tracking custom resource status manually

You can use the operator_sdk.util collection to modify your Ansible-based Operator to track custom resource (CR) status manually from your application.

Prerequisites

  • Ansible-based Operator project created by using the Operator SDK

Procedure

  1. Update the watches.yaml file with a manageStatus field set to false:

    - version: v1
      group: api.example.com
      kind: <kind>
      role: <role>
      manageStatus: false
  2. Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key test and value data, operator_sdk.util can be used as shown:

    - operator_sdk.util.k8s_status:
        api_version: app.example.com/v1
        kind: <kind>
        name: "{{ ansible_operator_meta.name }}"
        namespace: "{{ ansible_operator_meta.namespace }}"
        status:
          test: data
  3. You can declare collections in the meta/main.yml file for the role, which is included for scaffolded Ansible-based Operators:

    collections:
      - operator_sdk.util
  4. After declaring collections in the role meta, you can invoke the k8s_status module directly:

    k8s_status:
      ...
      status:
        key1: value1
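
    For example, a complete task that uses the short module name might look like the following sketch, where the status key and value are illustrative:

    - name: Update the custom resource status with a custom key
      k8s_status:
        api_version: app.example.com/v1
        kind: <kind>
        name: "{{ ansible_operator_meta.name }}"
        namespace: "{{ ansible_operator_meta.namespace }}"
        status:
          key1: value1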

5.5. Helm-based Operators

5.5.1. Operator SDK tutorial for Helm-based Operators

Operator developers can take advantage of Helm support in the Operator SDK to build an example Helm-based Operator for Nginx and manage its lifecycle. This tutorial walks through the following process:

  • Create an Nginx deployment
  • Ensure that the deployment size is the same as specified by the Nginx custom resource (CR) spec
  • Update the Nginx CR status using the status writer with the names of the nginx pods
Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

This process is accomplished using two centerpieces of the Operator Framework:

Operator SDK
The operator-sdk CLI tool and controller-runtime library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
Note

This tutorial goes into greater detail than Getting started with Operator SDK for Helm-based Operators in the OpenShift Container Platform documentation.

5.5.1.1. Prerequisites
  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4+ installed
  • Logged in to a Red Hat OpenShift Service on AWS cluster with oc, using an account that has dedicated-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.1.2. Creating a project

Use the Operator SDK CLI to create a project called nginx-operator.

Procedure

  1. Create a directory for the project:

    $ mkdir -p $HOME/projects/nginx-operator
  2. Change to the directory:

    $ cd $HOME/projects/nginx-operator
  3. Run the operator-sdk init command with the helm plugin to initialize the project:

    $ operator-sdk init \
        --plugins=helm \
        --domain=example.com \
        --group=demo \
        --version=v1 \
        --kind=Nginx
    Note

    By default, the helm plugin initializes a project using a boilerplate Helm chart. You can use additional flags, such as the --helm-chart flag, to initialize a project using an existing Helm chart.

    The init command creates the nginx-operator project specifically for watching a resource with API version example.com/v1 and kind Nginx.

  4. For Helm-based projects, the init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator.
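
    For example, the generated role typically contains rules similar to the following excerpt. The exact resources and verbs depend on what the chart's default manifest deploys:

    # config/rbac/role.yaml (excerpt)
    - apiGroups:
      - ""
      resources:
      - services
      verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch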
5.5.1.2.1. Existing Helm charts

Instead of creating your project with a boilerplate Helm chart, you can alternatively use an existing chart, either from your local file system or a remote chart repository, by using the following flags:

  • --helm-chart
  • --helm-chart-repo
  • --helm-chart-version

If the --helm-chart flag is specified, the --group, --version, and --kind flags become optional. If left unset, the following default values are used:

Flag         Value

--domain     my.domain
--group      charts
--version    v1
--kind       Deduced from the specified chart

If the --helm-chart flag specifies a local chart archive, for example example-chart-1.2.0.tgz, or directory, the chart is validated and unpacked or copied into the project. Otherwise, the Operator SDK attempts to fetch the chart from a remote repository.

If a custom repository URL is not specified by the --helm-chart-repo flag, the following chart reference formats are supported:

Format                      Description

<repo_name>/<chart_name>    Fetch the Helm chart named <chart_name> from the Helm chart repository named <repo_name>, as specified in the $HELM_HOME/repositories/repositories.yaml file. Use the helm repo add command to configure this file.

<url>                       Fetch the Helm chart archive at the specified URL.

If a custom repository URL is specified by --helm-chart-repo, the following chart reference format is supported:

Format          Description

<chart_name>    Fetch the Helm chart named <chart_name> in the Helm chart repository specified by the --helm-chart-repo URL value.

If the --helm-chart-version flag is unset, the Operator SDK fetches the latest available version of the Helm chart. Otherwise, it fetches the specified version. The optional --helm-chart-version flag is not used when the chart specified with the --helm-chart flag refers to a specific version, for example when it is a local path or a URL.
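
For example, the following commands initialize a project from a chart in a remote repository. The repository name, URL, and chart name are illustrative:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

$ operator-sdk init \
    --plugins=helm \
    --helm-chart=bitnami/nginx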

For more details and examples, run:

$ operator-sdk init --plugins helm --help
5.5.1.2.2. PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Helm. For example:

domain: example.com
layout:
- helm.sdk.operatorframework.io/v1
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
projectName: nginx-operator
resources:
- api:
    crdVersion: v1
    namespaced: true
  domain: example.com
  group: demo
  kind: Nginx
  version: v1
version: "3"
5.5.1.3. Understanding the Operator logic

For this example, the nginx-operator project executes the following reconciliation logic for each Nginx custom resource (CR):

  • Create an Nginx deployment if it does not exist.
  • Create an Nginx service if it does not exist.
  • Create an Nginx ingress if it is enabled and does not exist.
  • Ensure that the deployment, service, and optional ingress match the desired configuration as specified by the Nginx CR, for example the replica count, image, and service type.

By default, the nginx-operator project watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:

# Use the 'create api' subcommand to add watches to this file.
- group: demo
  version: v1
  kind: Nginx
  chart: helm-charts/nginx
# +kubebuilder:scaffold:watch
5.5.1.3.1. Sample Helm chart

When a Helm Operator project is created, the Operator SDK creates a sample Helm chart that contains a set of templates for a simple Nginx release.

For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.

If you are not already familiar with Helm charts, review the Helm developer documentation.
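
The generated chart follows the standard Helm layout. A typical listing looks similar to the following; exact files can vary by Operator SDK and Helm version:

helm-charts/nginx/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── NOTES.txt
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── ingress.yaml
    └── service.yaml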

5.5.1.3.2. Modifying the custom resource spec

Helm uses a concept called values to provide customizations to the defaults of a Helm chart, which are defined in the values.yaml file.

You can override these defaults by setting the desired values in the custom resource (CR) spec. You can use the number of replicas as an example.

Procedure

  1. The helm-charts/nginx/values.yaml file has a value called replicaCount set to 1 by default. To have two Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

    Edit the config/samples/demo_v1_nginx.yaml file to set replicaCount: 2:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    ...
    spec:
    ...
      replicaCount: 2
  2. Similarly, the default service port is set to 80. To use 8080, edit the config/samples/demo_v1_nginx.yaml file to set the service port override under spec.service.port:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    spec:
      replicaCount: 2
      service:
        port: 8080

The Helm Operator applies the entire spec as if it were the contents of a values file, just like the helm install -f ./overrides.yaml command.
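
For example, the CR spec shown in the previous procedure corresponds to a values file like the following sketch:

# overrides.yaml
replicaCount: 2
service:
  port: 8080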

5.5.1.4. Enabling proxy support

Operator authors can develop Operators that support network proxies. Administrators with the dedicated-admin role configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:

  • HTTP_PROXY
  • HTTPS_PROXY
  • NO_PROXY
Note

This tutorial uses HTTP_PROXY as an example environment variable.

Prerequisites

  • A cluster with cluster-wide egress proxy enabled.

Procedure

  1. Edit the watches.yaml file to include overrides based on an environment variable by adding the overrideValues field:

    ...
    - group: demo.example.com
      version: v1alpha1
      kind: Nginx
      chart: helm-charts/nginx
      overrideValues:
        proxy.http: $HTTP_PROXY
    ...
  2. Add the proxy.http value in the helm-charts/nginx/values.yaml file:

    ...
    proxy:
      http: ""
      https: ""
      no_proxy: ""
  3. To make sure the chart template supports using the variables, edit the chart template in the helm-charts/nginx/templates/deployment.yaml file to contain the following:

    containers:
      - name: {{ .Chart.Name }}
        securityContext:
          {{- toYaml .Values.securityContext | nindent 12 }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
          - name: http_proxy
            value: "{{ .Values.proxy.http }}"
  4. Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

    containers:
     - args:
       - --leader-elect
       - --leader-election-id=ansible-proxy-demo
       image: controller:latest
       name: manager
       env:
         - name: "HTTP_PROXY"
           value: "http_proxy_test"
5.5.1.5. Running the Operator

To build and run your Operator, use the Operator SDK CLI to bundle your Operator, and then use Operator Lifecycle Manager (OLM) to deploy on the cluster.

Note

If you wish to deploy your Operator on an OpenShift Container Platform cluster instead of a Red Hat OpenShift Service on AWS cluster, two additional deployment options are available:

  • Run locally outside the cluster as a Go program.
  • Run as a deployment on the cluster.

Additional resources

5.5.1.5.1. Bundling an Operator and deploying with Operator Lifecycle Manager
5.5.1.5.1.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • OpenShift CLI (oc) v4+ installed
  • Operator project initialized by using the Operator SDK

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by the --platform flag. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.5.1.5.1.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on Red Hat OpenShift Service on AWS and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example Red Hat OpenShift Service on AWS 4)
  • Logged in to the cluster with oc using an account with dedicated-admin permissions

Procedure

  • Enter the following command to run the Operator on the cluster:

    $ operator-sdk run bundle \1
        -n <namespace> \2
        <registry>/<user>/<bundle_image_name>:<tag> 3
    1
    The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
    2
    Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
    3
    If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
    Important

    As of Red Hat OpenShift Service on AWS 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

    This command performs the following actions:

    • Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
    • Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
    • Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
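
    For example, for the nginx-operator project in this tutorial, the command might look like the following, where the registry path and tag are illustrative:

    $ operator-sdk run bundle \
        quay.io/<user>/nginx-operator-bundle:v0.0.1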
5.5.1.6. Creating a custom resource

After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.

Prerequisites

  • Example Nginx Operator, which provides the Nginx CR, installed on a cluster

Procedure

  1. Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

    $ oc project nginx-operator-system
  2. Edit the sample Nginx CR manifest at config/samples/demo_v1_nginx.yaml to contain the following specification:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    ...
    spec:
    ...
      replicaCount: 3
  3. The Nginx service account requires privileged access to run in Red Hat OpenShift Service on AWS. Add the following security context constraint (SCC) to the service account for the nginx-sample pod:

    $ oc adm policy add-scc-to-user \
        anyuid system:serviceaccount:nginx-operator-system:nginx-sample
  4. Create the CR:

    $ oc apply -f config/samples/demo_v1_nginx.yaml
  5. Ensure that the Nginx Operator creates the deployment for the sample CR with the correct size:

    $ oc get deployments

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-operator-controller-manager       1/1     1            1           8m
    nginx-sample                            3/3     3            3           1m

  6. Check the pods and CR status to confirm the status is updated with the Nginx pod names.

    1. Check the pods:

      $ oc get pods

      Example output

      NAME                                  READY     STATUS    RESTARTS   AGE
      nginx-sample-6fd7c98d8-7dqdr          1/1       Running   0          1m
      nginx-sample-6fd7c98d8-g5k7v          1/1       Running   0          1m
      nginx-sample-6fd7c98d8-m7vn7          1/1       Running   0          1m

    2. Check the CR status:

      $ oc get nginx/nginx-sample -o yaml

      Example output

      apiVersion: demo.example.com/v1
      kind: Nginx
      metadata:
      ...
        name: nginx-sample
      ...
      spec:
        replicaCount: 3
      status:
        nodes:
        - nginx-sample-6fd7c98d8-7dqdr
        - nginx-sample-6fd7c98d8-g5k7v
        - nginx-sample-6fd7c98d8-m7vn7

  7. Update the deployment size.

    1. Increase the spec.replicaCount field in the Nginx CR from 3 to 5 by running the following command:

      $ oc patch nginx nginx-sample \
          -p '{"spec":{"replicaCount": 5}}' \
          --type=merge
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployments

      Example output

      NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-operator-controller-manager       1/1     1            1           10m
      nginx-sample                            5/5     5            5           3m

  8. Delete the CR by running the following command:

    $ oc delete -f config/samples/demo_v1_nginx.yaml
  9. Clean up the resources that have been created as part of this tutorial.

    • If you used the make deploy command to test the Operator, run the following command:

      $ make undeploy
    • If you used the operator-sdk run bundle command to test the Operator, run the following command:

      $ operator-sdk cleanup <project_name>
5.5.1.7. Additional resources

5.5.2. Project layout for Helm-based Operators

The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.5.2.1. Helm-based project layout

Helm-based Operator projects generated using the operator-sdk init --plugins helm command contain the following directories and files:

File/folders      Purpose

config/           Kustomize manifests for deploying the Operator on a Kubernetes cluster.
helm-charts/      Helm chart initialized with the operator-sdk create api command.
Dockerfile        Used to build the Operator image with the make docker-build command.
watches.yaml      Group/version/kind (GVK) and Helm chart location.
Makefile          Targets used to manage the project.
PROJECT           YAML file containing metadata information for the Operator.

5.5.3. Updating Helm-based projects for newer Operator SDK versions

Red Hat OpenShift Service on AWS 4 supports Operator SDK 1.36.1. If you already have the 1.31.0 CLI installed on your workstation, you can update the CLI to 1.36.1 by installing the latest version.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

However, to ensure that your existing Operator projects maintain compatibility with Operator SDK 1.36.1, you must complete the update steps for the breaking changes introduced since version 1.31.0. Perform these steps manually in any of your Operator projects that were previously created or maintained with version 1.31.0.

5.5.3.1. Updating Helm-based Operator projects for Operator SDK 1.36.1

The following procedure updates an existing Helm-based Operator project for compatibility with 1.36.1.

Prerequisites

  • Operator SDK 1.36.1 installed
  • An Operator project created or maintained with Operator SDK 1.31.0

Procedure

  1. Edit your Operator project’s Makefile to update the Operator SDK version to v1.36.1-ocp, as shown in the following example:

    Example Makefile

    # Set the Operator SDK version to use. By default, what is installed on the system is used.
    # This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
    OPERATOR_SDK_VERSION ?= v1.36.1-ocp

  2. Update the kube-rbac-proxy container to use the Red Hat Enterprise Linux (RHEL) 9-based image:

    1. Find the entry for the kube-rbac-proxy container in the following files:

      • config/default/manager_auth_proxy_patch.yaml
      • bundle/manifests/<operator_name>.clusterserviceversion.yaml for your Operator project, for example memcached-operator.clusterserviceversion.yaml from the tutorials
    2. Update the image name in the pull spec from ose-kube-rbac-proxy to ose-kube-rbac-proxy-rhel9, and update the tag to v4:

      Example ose-kube-rbac-proxy-rhel9 pull spec with v4 image tag

      # ...
            containers:
            - name: kube-rbac-proxy
              image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4
      # ...

  3. Edit your Operator’s Dockerfile to update the ose-helm-rhel9-operator image tag to 4, as shown in the following example:

    Example Dockerfile

    FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v4

  4. The kustomize/v2 plugin is now stable and is the default version used in the plugin chain when using go/v4, ansible/v1, helm/v1, and hybrid/v1-alpha plugins. For more information on this default scaffold, see Kustomize v2 in the Kubebuilder documentation.
  5. If your Operator project uses a multi-platform, or multi-architecture, build, replace the existing docker-buildx target with the following definition in your project Makefile:

    Example Makefile

    docker-buildx: ## Build and push the Docker image for the manager for multi-platform support
    	- docker buildx create --name project-v3-builder
    	docker buildx use project-v3-builder
    	- docker buildx build --push --platform=$(PLATFORMS) --tag ${IMG} -f Dockerfile .
    	- docker buildx rm project-v3-builder

  6. You must upgrade the Kubernetes versions in your Operator project to use 1.29. The following changes must be made in your project structure, Makefile, and go.mod files.

    Important

    The go/v3 plugin is being deprecated by Kubebuilder; therefore, the Operator SDK is also migrating to go/v4 in a future release.

    1. Update your go.mod file to upgrade your dependencies:

      k8s.io/api v0.29.2
      k8s.io/apimachinery v0.29.2
      k8s.io/client-go v0.29.2
      sigs.k8s.io/controller-runtime v0.17.3
    2. Download the upgraded dependencies by running the following command:

      $ go mod tidy
  7. Update the Kustomize version in your Makefile by making the following changes:

    - curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.2.1/kustomize_v5.2.1_$(OS)_$(ARCH).tar.gz | \
    + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_$(OS)_$(ARCH).tar.gz | \
5.5.3.2. Additional resources

5.5.4. Helm support in Operator SDK

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS 4 to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.5.4.1. Helm charts

One of the Operator SDK options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This might sound limiting, but it can be sufficient for a surprising number of use cases, as shown by the proliferation of Helm charts built by the Kubernetes community.

The main function of an Operator is to read from a custom object that represents your application instance and to ensure that its desired state matches what is running. In the case of a Helm-based Operator, the spec field of the object is a list of configuration options that are typically described in the Helm values.yaml file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml), you can express them within a custom resource (CR), which, as a native Kubernetes object, gains the benefits of RBAC enforcement and an audit trail.

For example, consider a simple CR called Tomcat:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2

The replicaCount value, 2 in this case, is propagated into the template of the chart where the following is used:

{{ .Values.replicaCount }}
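
In the chart, the value is typically consumed by the deployment template, as in the following sketch:

# templates/deployment.yaml (excerpt)
spec:
  replicas: {{ .Values.replicaCount }}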

After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:

$ oc get Tomcats --all-namespaces

There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a custom resource definition (CRD). Because the CR obeys RBAC, you can more easily prevent production changes.

5.6. Defining cluster service versions (CSVs)

A cluster service version (CSV), defined by a ClusterServiceVersion object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (C