Operators
Working with Operators in OpenShift Container Platform
Abstract
Chapter 1. Operators overview
Operators are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run.
Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state.
While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose:
- Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions.
- Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications.
With Operators, you can create applications to monitor the running services in the cluster. Operators are designed specifically for your applications. Operators implement and automate common Day 1 operations, such as installation and configuration, as well as Day 2 operations, such as autoscaling up and down and creating backups. All of these activities are handled by a piece of software running inside your cluster.
1.1. For developers
As a developer, you can perform the following Operator tasks:
- Install Operator SDK CLI.
- Create Go-based Operators, Ansible-based Operators, Java-based Operators, and Helm-based Operators.
- Use Operator SDK to build, test, and deploy an Operator.
- Install and subscribe an Operator to your namespace.
- Create an application from an installed Operator through the web console.
1.2. For administrators
As a cluster administrator, you can perform the following Operator tasks:
- Manage custom catalogs.
- Allow non-cluster administrators to install Operators.
- Install an Operator from OperatorHub.
- View Operator status.
- Manage Operator conditions.
- Upgrade installed Operators.
- Delete installed Operators.
- Configure proxy support.
- Use Operator Lifecycle Manager on restricted networks.
For details about the cluster Operators that Red Hat provides, see the Cluster Operators reference.
1.3. Next steps
To understand more about Operators, see What are Operators?
Chapter 2. Understanding Operators
2.1. What are Operators?
Conceptually, Operators take human operational knowledge and encode it into software that is more easily shared with consumers.
Operators are pieces of software that ease the operational complexity of running another piece of software. They act like an extension of the software vendor’s engineering team, monitoring a Kubernetes environment (such as OpenShift Container Platform) and using its current state to make decisions in real time. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time.
More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application.
A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. To be able to make the most of Kubernetes, you require a set of cohesive APIs to extend in order to service and manage your apps that run on Kubernetes. Think of Operators as the runtime that manages this type of app on Kubernetes.
2.1.1. Why use Operators?
Operators provide:
- Repeatability of installation and upgrade.
- Constant health checks of every system component.
- Over-the-air (OTA) updates for OpenShift components and ISV content.
- A place to encapsulate knowledge from field engineers and spread it to all users, not just one or two.
- Why deploy on Kubernetes?
- Kubernetes (and by extension, OpenShift Container Platform) contains all of the primitives needed to build complex distributed systems – secret handling, load balancing, service discovery, autoscaling – that work across on-premises and cloud providers.
- Why manage your app with Kubernetes APIs and kubectl tooling?
- These APIs are feature rich, have clients for all platforms, and plug into the cluster’s access control and auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, for example MongoDB, looks and acts just like the built-in, native Kubernetes objects.
- How do Operators compare with service brokers?
- A service broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long-running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, whereas an Operator constantly watches the current state of your cluster. Off-cluster services are a good match for a service broker, although Operators exist for these as well.
2.1.2. Operator Framework
The Operator Framework is a family of tools and capabilities to deliver on the customer experience described above. It is not just about writing code; testing, delivering, and updating Operators is just as important. The Operator Framework components consist of open source tools to tackle these problems:
- Operator SDK
- The Operator SDK assists Operator authors in bootstrapping, building, testing, and packaging their own Operator based on their expertise without requiring knowledge of Kubernetes API complexities.
- Operator Lifecycle Manager
- Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. It is deployed by default in OpenShift Container Platform 4.14.
- Operator Registry
- The Operator Registry stores cluster service versions (CSVs) and custom resource definitions (CRDs) for creation in a cluster and stores Operator metadata about packages and channels. It runs in a Kubernetes or OpenShift cluster to provide this Operator catalog data to OLM.
- OperatorHub
- OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift Container Platform.
These tools are designed to be composable, so you can use any that are useful to you.
2.1.3. Operator maturity model
The level of sophistication of the management logic encapsulated within an Operator can vary. This logic is also in general highly dependent on the type of the service represented by the Operator.
However, you can generalize the maturity of an Operator's encapsulated operations for a certain set of capabilities that most Operators can include. To this end, the following Operator maturity model defines five phases of maturity for generic Day 2 operations of an Operator:
Figure 2.1. Operator maturity model
The above model also shows how these capabilities can best be developed through the Helm, Go, and Ansible capabilities of the Operator SDK.
2.2. Operator Framework packaging format
This guide outlines the packaging format for Operators supported by Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
2.2.1. Bundle format
The bundle format for Operators is a packaging format introduced by the Operator Framework. To improve scalability and to better enable upstream users hosting their own catalogs, the bundle format specification simplifies the distribution of Operator metadata.
An Operator bundle represents a single version of an Operator. On-disk bundle manifests are containerized and shipped as a bundle image, which is a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker and container registries such as Quay.
Operator metadata can include:
- Information that identifies the Operator, for example its name and version.
- Additional information that drives the UI, for example its icon and some example custom resources (CRs).
- Required and provided APIs.
- Related images.
When loading manifests into the Operator Registry database, the following requirements are validated:
- The bundle must have at least one channel defined in the annotations.
- Every bundle has exactly one cluster service version (CSV).
- If a CSV owns a custom resource definition (CRD), that CRD must exist in the bundle.
2.2.1.1. Manifests
Bundle manifests refer to a set of Kubernetes manifests that define the deployment and RBAC model of the Operator.
A bundle includes one CSV per directory and typically the CRDs that define the owned APIs of the CSV in its /manifests directory.
Example bundle format layout
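For illustration, a bundle directory for a hypothetical example-operator might be laid out as follows, with the CSV and CRDs under the /manifests directory and the bundle metadata under the /metadata directory; the file names are placeholders:

example-operator/
├── manifests/
│   ├── example-operator.clusterserviceversion.yaml
│   ├── exampleapps.example.com.crd.yaml
│   ├── example-operator-role.yaml
│   └── example-operator-rolebinding.yaml
└── metadata/
    ├── annotations.yaml
    └── dependencies.yaml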
2.2.1.1.1. Additionally supported objects
The following object types can also be optionally included in the /manifests directory of a bundle:
Supported optional object types
- ClusterRole
- ClusterRoleBinding
- ConfigMap
- ConsoleCLIDownload
- ConsoleLink
- ConsoleQuickStart
- ConsoleYamlSample
- PodDisruptionBudget
- PriorityClass
- PrometheusRule
- Role
- RoleBinding
- Secret
- Service
- ServiceAccount
- ServiceMonitor
- VerticalPodAutoscaler
When these optional objects are included in a bundle, Operator Lifecycle Manager (OLM) can create them from the bundle and manage their lifecycle along with the CSV:
Lifecycle for optional objects
- When the CSV is deleted, OLM deletes the optional object.
- When the CSV is upgraded:
  - If the name of the optional object is the same, OLM updates it in place.
  - If the name of the optional object has changed between versions, OLM deletes and recreates it.
2.2.1.2. Annotations
A bundle also includes an annotations.yaml file in its /metadata directory. This file defines higher level aggregate data that helps describe the format and package information about how the bundle should be added into an index of bundles:
Example annotations.yaml
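For illustration, the following sketch uses the standard operators.operatorframework.io.bundle.* annotation keys with a hypothetical package name and channels; the numbered comments correspond to the callouts that follow:

annotations:
  operators.operatorframework.io.bundle.mediatype.v1: "registry+v1"    # (1)
  operators.operatorframework.io.bundle.manifests.v1: "manifests/"     # (2)
  operators.operatorframework.io.bundle.metadata.v1: "metadata/"       # (3)
  operators.operatorframework.io.bundle.package.v1: "example-operator" # (4)
  operators.operatorframework.io.bundle.channels.v1: "beta,stable"     # (5)
  operators.operatorframework.io.bundle.channel.default.v1: "stable"   # (6)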
1. The media type or format of the Operator bundle. The registry+v1 format means it contains a CSV and its associated Kubernetes objects.
2. The path in the image to the directory that contains the Operator manifests. This label is reserved for future use and currently defaults to manifests/. The value manifests.v1 implies that the bundle contains Operator manifests.
3. The path in the image to the directory that contains metadata files about the bundle. This label is reserved for future use and currently defaults to metadata/. The value metadata.v1 implies that this bundle has Operator metadata.
4. The package name of the bundle.
5. The list of channels the bundle is subscribing to when added into an Operator Registry.
6. The default channel an Operator should be subscribed to when installed from a registry.
In case of a mismatch, the annotations.yaml file is authoritative because the on-cluster Operator Registry that relies on these annotations only has access to this file.
2.2.1.3. Dependencies
The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies.
The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported:
- olm.package
- This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1.
- olm.gvk
- With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place.
- olm.constraint
- This type declares generic constraints on arbitrary Operator properties.
In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs:
Example dependencies.yaml file
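The following sketch shows one way such a file could look, declaring an olm.package dependency on a Prometheus Operator package and an olm.gvk dependency on the EtcdCluster API; the version values are illustrative:

dependencies:
  - type: olm.package
    value:
      packageName: prometheus
      version: ">0.27.0"
  - type: olm.gvk
    value:
      group: etcd.database.coreos.com
      kind: EtcdCluster
      version: v1beta2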
2.2.1.4. About the opm CLI
The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster.
A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster.
- See CLI tools for steps on installing the opm CLI.
2.2.2. File-based catalogs
File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility.
- Editing
With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI.
This editability enables the following features and user-defined extensions:
- Promoting an existing bundle to a new channel
- Changing the default channel of a package
- Custom algorithms for adding, updating, and removing upgrade edges
- Composability
File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB. A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it.
This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these.
Note: Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found.
Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users.
- Extensibility
The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations.
For example, a tool could translate a high-level API, such as (mode=semver), down to the low-level, file-based catalog format for upgrade edges. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet certain criteria.
While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well.
As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format.
The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs and Mirroring images for a disconnected installation using the oc-mirror plugin.
2.2.2.1. Directory structure
File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur.
Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files.
Example .indexignore file
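As a sketch, an .indexignore file might exclude editor artifacts and documentation files that are not catalog blobs; the patterns shown are only examples:

# ignore OS and editor artifacts
.DS_Store
*.swp
# ignore files that are not catalog metadata
**/README.md
**/LICENSE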
Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package’s file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format.
Basic recommended structure
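As a sketch of the recommendation, each package gets its own subdirectory containing its catalog blobs; the package names are hypothetical, and JSON and YAML files can be mixed freely:

catalog/
├── packageA/
│   └── index.yaml
├── packageB/
│   ├── .indexignore
│   ├── index.yaml
│   └── objects/
│       └── packageB.v0.1.0.clusterserviceversion.yaml
└── packageC/
    └── index.json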
This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog could also be included in a parent catalog by copying it into the parent catalog’s root directory.
2.2.2.2. Schemas
File-based catalogs use a format, based on the CUE language specification, that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to:
_Meta schema
No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE.
An Operator Lifecycle Manager (OLM) catalog currently uses three schemas (olm.package, olm.channel, and olm.bundle), which correspond to OLM’s existing package and bundle concepts.
Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs.
All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own.
2.2.2.2.1. olm.package schema
The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon.
Example 2.1. olm.package schema
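Rather than the formal schema listing, the following is a minimal YAML blob that conforms to the olm.package schema; the package name, description, and icon values are hypothetical:

schema: olm.package
name: example-operator
defaultChannel: stable
description: |-
  A hypothetical Operator used here only for illustration.
icon:
  base64data: <base64_encoded_icon>
  mediatype: image/svg+xml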
2.2.2.2.2. olm.channel schema
The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade edges for those bundles.
A bundle can be included as an entry in multiple olm.channel blobs, but it can have only one entry per channel.
It is valid for an entry’s replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads.
Example 2.2. olm.channel schema
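Rather than the formal schema listing, the following is a minimal olm.channel blob with hypothetical bundle names; it combines the replaces and skipRange fields in the way recommended by the guidance that follows:

schema: olm.channel
package: example-operator
name: stable
entries:
  - name: example-operator.v1.0.0
  - name: example-operator.v1.1.0
    replaces: example-operator.v1.0.0
    skipRange: ">=1.0.0 <1.1.0"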
When using the skipRange field, the skipped Operator versions are pruned from the update graph and are therefore no longer installable by users with the spec.startingCSV property of Subscription objects.
If you want to have direct (one version increment) updates to an Operator version from multiple previous versions, and also keep those previous versions available to users for installation, always use the skipRange field along with the replaces field. Ensure that the replaces field points to the immediate previous version of the Operator version in question.
2.2.2.2.3. olm.bundle schema
Example 2.3. olm.bundle schema
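Rather than the formal schema listing, the following is a minimal olm.bundle blob with hypothetical names and image references, showing the package, image, properties, and related images fields:

schema: olm.bundle
package: example-operator
name: example-operator.v1.1.0
image: quay.io/example-org/example-operator-bundle:v1.1.0
properties:
  - type: olm.package
    value:
      packageName: example-operator
      version: 1.1.0
  - type: olm.gvk
    value:
      group: example.com
      kind: ExampleApp
      version: v1alpha1
relatedImages:
  - name: operator
    image: quay.io/example-org/example-operator:v1.1.0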
2.2.2.3. Properties
Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML.
OLM defines a handful of property types, again using the reserved olm.* prefix.
2.2.2.3.1. olm.package property
The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle’s first-class package field, and the version field must be a valid semantic version.
Example 2.4. olm.package property
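A sketch with a hypothetical package name and version:

properties:
  - type: olm.package
    value:
      packageName: example-operator
      version: 1.1.0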
2.2.2.3.2. olm.gvk property
The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations.
Example 2.5. olm.gvk property
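A sketch with a hypothetical API group and kind:

properties:
  - type: olm.gvk
    value:
      group: example.com
      kind: ExampleApp
      version: v1alpha1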
2.2.2.3.3. olm.package.required
The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range.
Example 2.6. olm.package.required property
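A sketch declaring a required package with an illustrative version range:

properties:
  - type: olm.package.required
    value:
      packageName: prometheus
      versionRange: ">=0.27.0"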
2.2.2.3.4. olm.gvk.required
The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations.
Example 2.7. olm.gvk.required property
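A sketch declaring a required GVK:

properties:
  - type: olm.gvk.required
    value:
      group: etcd.database.coreos.com
      kind: EtcdCluster
      version: v1beta2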
2.2.2.4. Example catalog
With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog’s root directory.
There are many possible ways to build a file-based catalog; the following steps outline a simple approach:
Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog:
Example catalog configuration file
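Because this configuration file is defined by the catalog maintainer rather than by OLM, its layout is a matter of convention; the following sketch assumes a simple format with a catalog name, a target repository, and a list of per-Operator catalog image references (all names and digests are placeholders):

name: example-composite-catalog
repo: quay.io/example-org/example-composite-catalog
tag: latest
references:
  - name: example-operator-a
    image: quay.io/example-org/example-operator-a-index@sha256:<digest>
  - name: example-operator-b
    image: quay.io/example-org/example-operator-b-index@sha256:<digest>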
Run a script that parses the configuration file and creates a new catalog from its references:
Example script
2.2.2.5. Guidelines
Consider the following guidelines when maintaining file-based catalogs.
2.2.2.5.1. Immutable bundles
The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable.
If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade edge from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog.
However, there are some cases where a change in the catalog metadata is preferred:
- Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob.
- New upgrade edges: If you release a new 1.2.z bundle version, for example 1.2.4, but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4.
2.2.2.5.2. Source control
Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps:
- Update the source-controlled catalog directory with a new commit.
- Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version>, so that users can receive updates to a catalog as they become available.
2.2.2.6. CLI usage
For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs.
For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools.
2.2.2.7. Automation
Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks:
- Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package’s image reference.
- Check that the catalog updates pass the opm validate command.
- Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed.
- Automatically merge PRs that pass the previous checks.
- Automatically rebuild and republish the catalog image.
2.2.3. RukPak (Technology Preview)
RukPak is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OpenShift Container Platform 4.12 introduces the platform Operator type as a Technology Preview feature. The platform Operator mechanism relies on the RukPak component, also introduced in OpenShift Container Platform 4.12, and its resources to manage content.
OpenShift Container Platform 4.14 introduces Operator Lifecycle Manager (OLM) 1.0 as a Technology Preview feature, which also relies on the RukPak component.
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs.
Common terminology
- Bundle
- A collection of Kubernetes manifests that define content to be deployed to a cluster
- Bundle image
- A container image that contains a bundle within its filesystem
- Bundle Git repository
- A Git repository that contains a bundle within a directory
- Provisioner
- Controllers that install and manage content on a Kubernetes cluster
- Bundle deployment
- Generates deployed instances of a bundle
2.2.3.1. Bundle
A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for a pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content.
Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type.
Example Bundle object configured to work with the plain provisioner
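A sketch of such a bundle, assuming the upstream core.rukpak.io/v1alpha1 API and an image-based source; the bundle name and image reference are hypothetical:

apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: my-bundle
spec:
  source:
    type: image
    image:
      ref: quay.io/example-org/my-bundle@sha256:<digest>
  provisionerClassName: core-rukpak-io-plain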
Bundles are considered immutable after they are created.
2.2.3.1.1. Bundle immutability
After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object.
Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status, are updated during the bundle’s lifecycle; it is only the spec field that is considered immutable.
Applying a Bundle object and then attempting to update its spec should fail. For example, consider a bundle created with the following configuration.
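The following sketch assumes a Git-based source that pins a tag of a hypothetical repository; applying a manifest like this produces the output shown below:

apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.2
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain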
Example output
bundle.core.rukpak.io/combo-tag-ref created
Then, patching the bundle to point to a newer tag returns an error:
$ oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}'
Example output
Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable
The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating it in-place.
2.2.3.1.1.1. Further immutability considerations
While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario:
- A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object.
- The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit.
- A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod.
If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content.
This is similar to pod behavior, where one of the pod’s container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it.
To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle.
2.2.3.1.2. Plain bundle spec
A plain bundle in RukPak is a collection of static, arbitrary, Kubernetes YAML manifests in a given directory.
The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0, combines the type of bundle (plain) with the current schema version (v0).
The plain+v0 bundle format is at schema version v0, which means it is an experimental format that is subject to change.
For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application.
Example plain+v0 bundle file tree
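As a sketch, a minimal plain+v0 bundle contains only a flat manifests/ directory of deployable resources; the file names are illustrative:

manifests/
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml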
The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories.
Do not include any content in the manifests/ directory of a plain bundle that is not a static manifest. Otherwise, a failure occurs when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command results in an error. Multi-object YAML or JSON files are also valid.
2.2.3.1.3. Registry bundle spec
A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format.
2.2.3.2. BundleDeployment
A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and limit access, by using RBAC, to the BundleDeployment API to only those who require those permissions.
The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle.
Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept.
The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle are defined by the provisioner that is configured to watch that bundle deployment.
Example BundleDeployment object configured to work with the plain provisioner
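A sketch assuming the upstream core.rukpak.io/v1alpha1 API, where the bundle deployment embeds a bundle template for the plain provisioner; the names and image reference are hypothetical:

apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-bundle-deployment
spec:
  provisionerClassName: core-rukpak-io-plain
  template:
    metadata:
      labels:
        app: my-bundle
    spec:
      provisionerClassName: core-rukpak-io-plain
      source:
        type: image
        image:
          ref: quay.io/example-org/my-bundle@sha256:<digest>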
2.2.3.3. About provisioners
RukPak consists of a series of controllers, known as provisioners, that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment. These components work together to bring content onto the cluster and install it, generating resources within the cluster.
Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles.
Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster.
A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources.
2.3. Operator Framework glossary of common terms
This topic provides a glossary of common terms related to the Operator Framework, including Operator Lifecycle Manager (OLM) and the Operator SDK.
2.3.1. Common Operator Framework terms
2.3.1.1. Bundle
In the bundle format, a bundle is a collection of an Operator CSV, manifests, and metadata. Together, they form a unique version of an Operator that can be installed onto the cluster.
2.3.1.2. Bundle image
In the bundle format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub.
2.3.1.3. Catalog source
A catalog source represents a store of metadata that OLM can query to discover and install Operators and their dependencies.
2.3.1.4. Channel
A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest.
An Operator can have several channels, and a subscription binding to a certain channel would only look for updates in that channel.
2.3.1.5. Channel head
A channel head refers to the latest known update in a particular channel.
2.3.1.6. Cluster service version
A cluster service version (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version.
It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.
2.3.1.7. Dependency
An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer.
OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a catalog that satisfies the required CRD API, and is not related to packages or bundles.
2.3.1.8. Index image
In the bundle format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool.
2.3.1.9. Install plan
An install plan is a calculated list of resources to be created to automatically install or upgrade a CSV.
2.3.1.10. Multitenancy
A tenant in OpenShift Container Platform is a user or group of users that share common access and privileges for a set of deployed workloads, typically represented by a namespace or project. You can use tenants to provide a level of isolation between different groups or teams.
When a cluster is shared by multiple users or groups, it is considered a multitenant cluster.
2.3.1.11. Operator group
An Operator group configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide.
2.3.1.12. Package
In the bundle format, a package is a directory that encloses all released history of an Operator with each version. A released version of an Operator is described in a CSV manifest alongside the CRDs.
2.3.1.13. Registry
A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels.
2.3.1.14. Subscription
A subscription keeps CSVs up to date by tracking a channel in a package.
2.3.1.15. Update graph
An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added.
2.4. Operator Lifecycle Manager (OLM)
2.4.1. Operator Lifecycle Manager concepts and resources
This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
2.4.1.1. What is Operator Lifecycle Manager?
Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework, an open source toolkit designed to manage Operators in an effective, automated, and scalable way.
Figure 2.2. Operator Lifecycle Manager workflow
OLM runs by default in OpenShift Container Platform 4.14, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.
For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it.
2.4.1.2. OLM resources
The following custom resource definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM):
| Resource | Short name | Description |
|---|---|---|
| ClusterServiceVersion (CSV) | csv | Application metadata. For example: name, version, icon, required resources. |
| CatalogSource | catsrc | A repository of CSVs, CRDs, and packages that define an application. |
| Subscription | sub | Keeps CSVs up to date by tracking a channel in a package. |
| InstallPlan | ip | Calculated list of resources to be created to automatically install or upgrade a CSV. |
| OperatorGroup | og | Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. |
| OperatorConditions | - | Creates a communication channel between OLM and an Operator it manages. Operators can write to the Status.Conditions array to communicate complex states to OLM. |
2.4.1.2.1. Cluster service version
A cluster service version (CSV) represents a specific version of a running Operator on an OpenShift Container Platform cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster.
OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm, deb, or apk bundle.
A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo.
A CSV is also a source of technical information required to run the Operator, such as which custom resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a deployment.
2.4.1.2.2. Catalog source
A catalog source represents a store of metadata, typically by referencing an index image stored in a container registry. Operator Lifecycle Manager (OLM) queries catalog sources to discover and install Operators and their dependencies. OperatorHub in the OpenShift Container Platform web console also displays the Operators provided by catalog sources.
Cluster administrators can view the full list of Operators provided by an enabled catalog source on a cluster by using the Administration → Cluster Settings → Configuration → OperatorHub page in the web console.
The spec of a CatalogSource object indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API.
Example 2.8. Example CatalogSource object
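The following is an abbreviated sketch of a CatalogSource object using hypothetical names and placeholder values; the numbered comments correspond to the callouts below, and the status block shows fields reported by OLM rather than fields you set:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog                    # (1)
  namespace: openshift-marketplace         # (2)
  annotations:
    olm.catalogImageTemplate:              # (3)
      "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
spec:
  displayName: Example Catalog             # (4)
  image: quay.io/example-org/example-catalog:v1   # (5)
  priority: -400                           # (6)
  publisher: Example Org
  sourceType: grpc                         # (7)
  grpcPodConfig:
    securityContextConfig: <security_mode> # (8)
    nodeSelector:                          # (9)
      custom_label: <label>
    priorityClassName: system-cluster-critical   # (10)
    tolerations:                           # (11)
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
  updateStrategy:
    registryPoll:                          # (12)
      interval: 30m0s
status:
  connectionState:
    lastObservedState: READY               # (13)
  latestImageRegistryPoll: "2023-08-31T19:20:46Z"   # (14)
  registryService:                         # (15)
    createdAt: "2023-08-31T19:20:46Z"
    port: 50051
    protocol: grpc
    serviceName: example-catalog
    serviceNamespace: openshift-marketplace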
1. Name for the CatalogSource object. This value is also used as part of the name for the related pod that is created in the requested namespace.
2. Namespace to create the catalog in. To make the catalog available cluster-wide in all namespaces, set this value to openshift-marketplace. The default Red Hat-provided catalog sources also use the openshift-marketplace namespace. Otherwise, set the value to a specific namespace to make the Operator only available in that namespace.
3. Optional: To avoid cluster upgrades potentially leaving Operator installations in an unsupported state or without a continued update path, you can enable automatically changing your Operator catalog’s index image version as part of cluster upgrades. Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. The annotation overwrites the spec.image field at run time. See the "Image template for custom catalog sources" section for more details.
4. Display name for the catalog in the web console and CLI.
5. Index image for the catalog. Optionally, can be omitted when using the olm.catalogImageTemplate annotation, which sets the pull spec at run time.
6. Weight for the catalog source. OLM uses the weight for prioritization during dependency resolution. A higher weight indicates the catalog is preferred over lower-weighted catalogs.
7. Source types include the following:
   - grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API.
   - grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases.
   - configmap: OLM parses config map data and runs a pod that can serve the gRPC API over it.
8. Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
9. Optional: For grpc type catalog sources, overrides the default node selector for the pod serving the content in spec.image, if defined.
10. Optional: For grpc type catalog sources, overrides the default priority class name for the pod serving the content in spec.image, if defined. Kubernetes provides system-cluster-critical and system-node-critical priority classes by default. Setting the field to empty ("") assigns the pod the default priority. Other priority classes can be defined manually.
11. Optional: For grpc type catalog sources, overrides the default tolerations for the pod serving the content in spec.image, if defined.
12. Automatically check for new versions at a given interval to stay up-to-date.
13. Last observed state of the catalog connection. For example:
    - READY: A connection is successfully established.
    - CONNECTING: A connection is attempting to establish.
    - TRANSIENT_FAILURE: A temporary problem has occurred while attempting to establish a connection, such as a timeout. The state will eventually switch back to CONNECTING and try again.
    See States of Connectivity in the gRPC documentation for more details.
14. Latest time the container registry storing the catalog image was polled to ensure the image is up-to-date.
15. Status information for the catalog’s Operator Registry service.
Referencing the name of a CatalogSource object in a subscription instructs OLM where to search to find a requested Operator:
Example 2.9. Example Subscription object referencing a catalog source
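A sketch of a subscription that references the example-catalog catalog source from the preceding example; the Operator and namespace names are hypothetical:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable
  name: example-operator
  source: example-catalog
  sourceNamespace: openshift-marketplace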
2.4.1.2.2.1. Image template for custom catalog sources
Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example OpenShift Container Platform 4.14.
During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.13 to 4.14, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from:
registry.redhat.io/redhat/redhat-operator-index:v4.13
to:
registry.redhat.io/redhat/redhat-operator-index:v4.14
However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image.
Starting in OpenShift Container Platform 4.9, cluster administrators can add the olm.catalogImageTemplate annotation to the CatalogSource object for custom catalogs and set it to an image reference that includes a template. The following Kubernetes version variables are supported for use in the template:
- kube_major_version
- kube_minor_version
- kube_patch_version
You must specify the Kubernetes cluster version and not an OpenShift Container Platform cluster version, as the latter is not currently available for templating.
Provided that you have created and pushed an index image with a tag specifying the updated Kubernetes version, setting this annotation enables the index image versions in custom catalogs to be automatically changed after a cluster upgrade. The annotation value is used to set or update the image reference in the spec.image field of the CatalogSource object. This helps avoid cluster upgrades leaving Operator installations in unsupported states or without a continued update path.
You must ensure that the index image with the updated tag, in whichever registry it is stored in, is accessible by the cluster at the time of the cluster upgrade.
Example 2.10. Example catalog source with an image template
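A sketch of a custom catalog source whose annotation templates the image tag on the Kubernetes major and minor version; the catalog names match the hypothetical values used in the surrounding text:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
  annotations:
    olm.catalogImageTemplate:
      "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}"
spec:
  sourceType: grpc
  image: quay.io/example-org/example-catalog:v1.26
  displayName: Example Catalog
  publisher: Example Org
  updateStrategy:
    registryPoll:
      interval: 30m0s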
If the spec.image field and the olm.catalogImageTemplate annotation are both set, the spec.image field is overwritten by the resolved value from the annotation. If the annotation does not resolve to a usable pull spec, the catalog source falls back to the set spec.image value.
If the spec.image field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition.
For an OpenShift Container Platform 4.14 cluster, which uses Kubernetes 1.27, the olm.catalogImageTemplate annotation in the preceding example resolves to the following image reference:
quay.io/example-org/example-catalog:v1.27
For future releases of OpenShift Container Platform, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later OpenShift Container Platform version. With the olm.catalogImageTemplate annotation set before the upgrade, upgrading the cluster to the later OpenShift Container Platform version would then automatically update the catalog’s index image as well.
2.4.1.2.2.2. Catalog health requirements
Operator catalogs on a cluster are interchangeable from the perspective of installation resolution; a Subscription object might reference a specific catalog, but dependencies are resolved using all catalogs on the cluster.
For example, if Catalog A is unhealthy, a subscription referencing Catalog A could resolve a dependency in Catalog B, which the cluster administrator might not have been expecting, because B normally had a lower catalog priority than A.
As a result, OLM requires that all catalogs with a given global namespace (for example, the default openshift-marketplace namespace or a custom global namespace) are healthy. When a catalog is unhealthy, all Operator installation or update operations within its shared global namespace will fail with a CatalogSourcesUnhealthy condition. If these operations were permitted in an unhealthy state, OLM might make resolution and installation decisions that were unexpected to the cluster administrator.
As a cluster administrator, if you observe an unhealthy catalog and want to consider the catalog as invalid and resume Operator installations, see the "Removing custom catalogs" or "Disabling the default OperatorHub catalog sources" sections for information about removing the unhealthy catalog.
2.4.1.2.3. Subscription
A subscription, defined by a Subscription object, represents an intention to install an Operator. It is the custom resource that relates an Operator to a catalog source.
Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the subscription ensures Operator Lifecycle Manager (OLM) manages and upgrades the Operator to ensure that the latest version is always running in the cluster.
Example Subscription object
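A sketch with hypothetical names, subscribing to the alpha channel of an Operator package from a catalog in the openshift-marketplace namespace:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: alpha
  name: example-operator
  source: example-catalog
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic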
This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha, beta, or stable, helps determine which Operator stream should be installed from the catalog source.
The names of channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a release frequency (stable, fast).
In addition to being easily visible from the OpenShift Container Platform web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the related subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster.
2.4.1.2.4. Install plan
An install plan, defined by an InstallPlan object, describes a set of resources that Operator Lifecycle Manager (OLM) creates to install or upgrade to a specific version of an Operator. The version is defined by a cluster service version (CSV).
To install an Operator, a cluster administrator, or a user who has been granted Operator installation permissions, must first create a Subscription object. A subscription represents the intent to subscribe to a stream of available versions of an Operator from a catalog source. The subscription then creates an InstallPlan object to facilitate the installation of the resources for the Operator.
The install plan must then be approved according to one of the following approval strategies:
- If the subscription’s spec.installPlanApproval field is set to Automatic, the install plan is approved automatically.
- If the subscription’s spec.installPlanApproval field is set to Manual, the install plan must be manually approved by a cluster administrator or user with proper permissions.
After the install plan is approved, OLM creates the specified resources and installs the Operator in the namespace that is specified by the subscription.
Example 2.11. Example InstallPlan object
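A minimal sketch of an InstallPlan object follows; the resource name, namespace, and CSV name are placeholders:

apiVersion: operators.coreos.com/v1alpha1
kind: InstallPlan
metadata:
  name: install-abcde
  namespace: example-namespace
spec:
  approval: Manual
  approved: false
  clusterServiceVersionNames:
  - example-operator.v0.1.1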
2.4.1.2.5. Operator groups Copy linkLink copied to clipboard!
An Operator group, defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators.
The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments.
Additional resources
2.4.1.2.6. Operator conditions Copy linkLink copied to clipboard!
As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator.
OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource.
By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic.
2.4.2. Operator Lifecycle Manager architecture Copy linkLink copied to clipboard!
This guide outlines the component architecture of Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
2.4.2.1. Component responsibilities Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator.
Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework:
| Resource | Short name | Owner | Description |
|---|---|---|---|
| ClusterServiceVersion (CSV) | csv | OLM | Application metadata: name, version, icon, required resources, installation, and so on. |
| InstallPlan | ip | Catalog | Calculated list of resources to be created to automatically install or upgrade a CSV. |
| CatalogSource | catsrc | Catalog | A repository of CSVs, CRDs, and packages that define an application. |
| Subscription | sub | Catalog | Used to keep CSVs up to date by tracking a channel in a package. |
| OperatorGroup | og | OLM | Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. |
Each of these Operators is also responsible for creating the following resources:
| Resource | Owner |
|---|---|
| Deployments | OLM |
| Service accounts | OLM |
| (Cluster)Roles | OLM |
| (Cluster)RoleBindings | OLM |
| CustomResourceDefinitions (CRDs) | Catalog |
| ClusterServiceVersions | Catalog |
2.4.2.2. OLM Operator Copy linkLink copied to clipboard!
The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster.
The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application.
The OLM Operator uses the following workflow:
- Watch for cluster service versions (CSVs) in a namespace and check that requirements are met.
If requirements are met, run the install strategy for the CSV.
Note: A CSV must be an active member of an Operator group for the install strategy to run.
2.4.2.3. Catalog Operator Copy linkLink copied to clipboard!
The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions.
To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user.
The Catalog Operator uses the following workflow:
- Connect to each catalog source in the cluster.
Watch for unresolved install plans created by a user, and if found:
- Find the CSV matching the name requested and add the CSV as a resolved resource.
- For each managed or required CRD, add the CRD as a resolved resource.
- For each required CRD, find the CSV that manages it.
- Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically.
- Watch for catalog sources and subscriptions and create install plans based on them.
2.4.2.4. Catalog Registry Copy linkLink copied to clipboard!
The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels.
A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version.
2.4.3. Operator Lifecycle Manager workflow Copy linkLink copied to clipboard!
This guide outlines the workflow of Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
2.4.3.1. Operator installation and upgrade workflow in OLM Copy linkLink copied to clipboard!
In the Operator Lifecycle Manager (OLM) ecosystem, the following resources are used to resolve Operator installations and upgrades:
- ClusterServiceVersion (CSV)
- CatalogSource
- Subscription
Operator metadata, defined in CSVs, can be stored in a collection called a catalog source. OLM uses catalog sources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators.
Figure 2.3. Catalog source overview
Within a catalog source, Operators are organized into packages and streams of updates called channels, which should be a familiar update pattern from OpenShift Container Platform or other software on a continuous release cycle like web browsers.
Figure 2.4. Packages and channels in a Catalog source
A user indicates a particular package and channel in a particular catalog source in a subscription, for example an etcd package and its alpha channel. If a subscription is made to a package that has not yet been installed in the namespace, the latest Operator for that package is installed.
OLM deliberately avoids version comparisons, so the "latest" or "newest" Operator available from a given catalog → channel → package path does not necessarily need to be the highest version number. It should be thought of more as the head reference of a channel, similar to a Git repository.
Each CSV has a replaces parameter that indicates which Operator it replaces. This builds a graph of CSVs that can be queried by OLM, and updates can be shared between channels. Channels can be thought of as entry points into the graph of updates:
Figure 2.5. OLM graph of available channel updates
Example channels in a package
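As a sketch, a package with two channels might be described as follows; the package name, channel names, and CSV versions are placeholders:

packageName: example
channels:
- name: alpha
  currentCSV: example.v0.1.2
- name: stable
  currentCSV: example.v0.1.1
defaultChannel: alpha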
For OLM to successfully query for updates, given a catalog source, package, channel, and CSV, a catalog must be able to return, unambiguously and deterministically, a single CSV that replaces the input CSV.
2.4.3.1.1. Example upgrade path Copy linkLink copied to clipboard!
For an example upgrade scenario, consider an installed Operator corresponding to CSV version 0.1.1. OLM queries the catalog source and detects an upgrade in the subscribed channel with new CSV version 0.1.3 that replaces an older but not-installed CSV version 0.1.2, which in turn replaces the older and installed CSV version 0.1.1.
OLM walks back from the channel head to previous versions via the replaces field specified in the CSVs to determine the upgrade path 0.1.3 → 0.1.2 → 0.1.1; the direction of the arrow indicates that the former replaces the latter. OLM upgrades the Operator one version at a time until it reaches the channel head.
For this given scenario, OLM installs Operator version 0.1.2 to replace the existing Operator version 0.1.1. Then, it installs Operator version 0.1.3 to replace the previously installed Operator version 0.1.2. At this point, the installed Operator version 0.1.3 matches the channel head and the upgrade is completed.
2.4.3.1.2. Skipping upgrades Copy linkLink copied to clipboard!
The basic path for upgrades in OLM is:
- A catalog source is updated with one or more updates to an Operator.
- OLM traverses every version of the Operator until reaching the latest version the catalog source contains.
However, sometimes this is not a safe operation to perform. There are cases where a published version of an Operator should never be installed on a cluster if it has not already been installed, for example because the version introduces a serious vulnerability.
In those cases, OLM must consider two cluster states and provide an update graph that supports both:
- The "bad" intermediate Operator has been seen by the cluster and installed.
- The "bad" intermediate Operator has not yet been installed onto the cluster.
By shipping a new catalog and adding a skipped release, OLM is guaranteed to always derive a single unique update, regardless of the cluster state and whether it has seen the bad update yet.
Example CSV with skipped release
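A sketch of the relevant CSV fields follows; the Operator name and version numbers are placeholders, showing a channel head that replaces one release while skipping a "bad" intermediate release:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.3
spec:
  replaces: example-operator.v1.0.1
  skips:
  - example-operator.v1.0.2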
Consider the following example of Old CatalogSource and New CatalogSource.
Figure 2.6. Skipping updates
This graph maintains that:
- Any Operator found in Old CatalogSource has a single replacement in New CatalogSource.
- Any Operator found in New CatalogSource has a single replacement in New CatalogSource.
- If the bad update has not yet been installed, it will never be.
2.4.3.1.3. Replacing multiple Operators Copy linkLink copied to clipboard!
Creating New CatalogSource as described requires publishing CSVs that replace one Operator, but can skip several. This can be accomplished using the skipRange annotation:
olm.skipRange: <semver_range>
where <semver_range> has the version range format supported by the semver library.
When searching catalogs for updates, if the head of a channel has a skipRange annotation and the currently installed Operator has a version field that falls in the range, OLM updates to the latest entry in the channel.
The order of precedence is:
- Channel head in the source specified by sourceName on the subscription, if the other criteria for skipping are met.
- The next Operator that replaces the current one, in the source specified by sourceName.
- Channel head in another source that is visible to the subscription, if the other criteria for skipping are met.
- The next Operator that replaces the current one in any source visible to the subscription.
Example CSV with skipRange
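A sketch of a CSV that uses the annotation follows; the Operator name, versions, and range are placeholders:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.3
  annotations:
    olm.skipRange: '>=1.0.0 <1.0.3'
spec:
  replaces: example-operator.v0.9.0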
2.4.3.1.4. Z-stream support Copy linkLink copied to clipboard!
A z-stream, or patch release, must replace all previous z-stream releases for the same minor version. OLM does not consider major, minor, or patch versions; it just needs to build the correct graph in a catalog.
In other words, OLM must be able to take a graph as in Old CatalogSource and, similar to before, generate a graph as in New CatalogSource:
Figure 2.7. Replacing several Operators
This graph maintains that:
- Any Operator found in Old CatalogSource has a single replacement in New CatalogSource.
- Any Operator found in New CatalogSource has a single replacement in New CatalogSource.
- Any z-stream release in Old CatalogSource will update to the latest z-stream release in New CatalogSource.
- Unavailable releases can be considered "virtual" graph nodes; their content does not need to exist, the registry just needs to respond as if the graph looks like this.
2.4.4. Operator Lifecycle Manager dependency resolution Copy linkLink copied to clipboard!
This guide outlines dependency resolution and custom resource definition (CRD) upgrade lifecycles with Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
2.4.4.1. About dependency resolution Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) manages the dependency resolution and upgrade lifecycle of running Operators. In many ways, the problems OLM faces are similar to other system or language package managers, such as yum and rpm.
However, there is one constraint that similar systems do not generally have that OLM does: because Operators are always running, OLM attempts to ensure that you are never left with a set of Operators that do not work with each other.
As a result, OLM must never create the following scenarios:
- Install a set of Operators that require APIs that cannot be provided
- Update an Operator in a way that breaks another that depends upon it
This is made possible with two types of data:
| Properties | Typed metadata about the Operator that constitutes the public interface for it in the dependency resolver. Examples include the group/version/kind (GVK) of the APIs provided by the Operator and the semantic version (semver) of the Operator. |
| Constraints or dependencies | An Operator’s requirements that should be satisfied by other Operators that might or might not have already been installed on the target cluster. These act as queries or filters over all available Operators and constrain the selection during dependency resolution and installation. Examples include requiring a specific API to be available on the cluster or expecting a particular Operator with a particular version to be installed. |
OLM converts these properties and constraints into a system of Boolean formulas and passes them to a SAT solver, a program that establishes Boolean satisfiability, which does the work of determining what Operators should be installed.
2.4.4.2. Operator properties Copy linkLink copied to clipboard!
All Operators in a catalog have the following properties:
olm.package - Includes the name of the package and the version of the Operator
olm.gvk - A single property for each provided API from the cluster service version (CSV)
Additional properties can also be directly declared by an Operator author by including a properties.yaml file in the metadata/ directory of the Operator bundle.
Example arbitrary property
properties:
- type: olm.kubeversion
value:
version: "1.16.0"
2.4.4.2.1. Arbitrary properties Copy linkLink copied to clipboard!
Operator authors can declare arbitrary properties in a properties.yaml file in the metadata/ directory of the Operator bundle. These properties are translated into a map data structure that is used as an input to the Operator Lifecycle Manager (OLM) resolver at runtime.
These properties are opaque to the resolver; it does not interpret them, but it can evaluate generic constraints against them to determine whether the constraints can be satisfied given the properties list.
Example arbitrary properties
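As an illustrative sketch, a bundle might declare properties such as the following; the color and shape types are arbitrary examples rather than OLM-defined types, and the GVK values are placeholders:

properties:
- type: color
  value: red
- type: shape
  value: square
- type: olm.gvk
  value:
    group: example.com
    version: v1alpha1
    kind: MyResource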
This structure can be used to construct a Common Expression Language (CEL) expression for generic constraints.
Additional resources
2.4.4.3. Operator dependencies Copy linkLink copied to clipboard!
The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies.
The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported:
olm.package - This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1.
olm.gvk - With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place.
olm.constraint - This type declares generic constraints on arbitrary Operator properties.
In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs:
Example dependencies.yaml file
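A sketch of such a file follows; the Prometheus version range and the etcd GVK values are illustrative:

dependencies:
- type: olm.package
  value:
    packageName: prometheus
    version: ">0.27.0"
- type: olm.gvk
  value:
    group: etcd.database.coreos.com
    kind: EtcdCluster
    version: v1beta2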
2.4.4.4. Generic constraints Copy linkLink copied to clipboard!
An olm.constraint property declares a dependency constraint of a particular type, differentiating non-constraint and constraint properties. Its value field is an object containing a failureMessage field holding a string-representation of the constraint message. This message is surfaced as an informative comment to users if the constraint is not satisfiable at runtime.
The following keys denote the available constraint types:
gvk - Type whose value and interpretation is identical to the olm.gvk type.
package - Type whose value and interpretation is identical to the olm.package type.
cel - A Common Expression Language (CEL) expression evaluated at runtime by the Operator Lifecycle Manager (OLM) resolver over arbitrary bundle properties and cluster information.
all, any, not - Conjunction, disjunction, and negation constraints, respectively, containing one or more concrete constraints, such as gvk, or a nested compound constraint.
2.4.4.4.1. Common Expression Language (CEL) constraints Copy linkLink copied to clipboard!
The cel constraint type supports Common Expression Language (CEL) as the expression language. The cel struct has a rule field which contains the CEL expression string that is evaluated against Operator properties at runtime to determine if the Operator satisfies the constraint.
Example cel constraint
type: olm.constraint
value:
failureMessage: 'require to have "certified"'
cel:
rule: 'properties.exists(p, p.type == "certified")'
The CEL syntax supports a wide range of logical operators, such as AND and OR. As a result, a single CEL expression can have multiple rules for multiple conditions that are linked together by these logical operators. These rules are evaluated against a dataset of multiple different properties from a bundle or any given source, and the output is solved into a single bundle or Operator that satisfies all of those rules within a single constraint.
Example cel constraint with multiple rules
type: olm.constraint
value:
failureMessage: 'require to have "certified" and "stable" properties'
cel:
rule: 'properties.exists(p, p.type == "certified") && properties.exists(p, p.type == "stable")'
2.4.4.4.2. Compound constraints (all, any, not) Copy linkLink copied to clipboard!
Compound constraint types are evaluated following their logical definitions.
The following is an example of a conjunctive constraint (all) of two packages and one GVK. That is, they must all be satisfied by installed bundles:
Example all constraint
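A sketch follows, assuming the compound value holds a constraints list and that the package value follows the olm.package form (packageName and version); the package names, versions, and GVK are placeholders:

type: olm.constraint
value:
  failureMessage: All are required for the bundle because...
  all:
    constraints:
    - failureMessage: Package blue is needed for...
      package:
        packageName: blue
        version: '>=1.0.0'
    - failureMessage: Package green is needed for...
      package:
        packageName: green
        version: '>=1.0.0'
    - failureMessage: GVK Yellow/v1 is needed for...
      gvk:
        group: yellows.example.com
        version: v1
        kind: Yellow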
The following is an example of a disjunctive constraint (any) of three versions of the same GVK. That is, at least one must be satisfied by installed bundles:
Example any constraint
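A sketch follows, under the same assumptions as the previous example; the GVK group, versions, and kind are placeholders:

type: olm.constraint
value:
  failureMessage: Any provided version of the Blue API satisfies this constraint
  any:
    constraints:
    - gvk:
        group: blues.example.com
        version: v1beta1
        kind: Blue
    - gvk:
        group: blues.example.com
        version: v1beta2
        kind: Blue
    - gvk:
        group: blues.example.com
        version: v1
        kind: Blue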
The following is an example of a negation constraint (not) of one version of a GVK. That is, this GVK cannot be provided by any bundle in the result set:
Example not constraint
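A sketch follows; because negation only makes sense after a candidate set has been selected, as noted below, the not constraint is shown nested inside an all constraint, and the package and GVK values are placeholders:

type: olm.constraint
value:
  all:
    constraints:
    - failureMessage: Package blue is needed for...
      package:
        packageName: blue
        version: '>=1.0.0'
    - failureMessage: The v1alpha1 Green API must not be provided
      not:
        constraints:
        - gvk:
            group: greens.example.com
            version: v1alpha1
            kind: Green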
The negation semantics might appear unclear in the not constraint context. To clarify, the negation is really instructing the resolver to remove any possible solution that includes a particular GVK, package at a version, or satisfies some child compound constraint from the result set.
As a corollary, the not compound constraint should only be used within all or any constraints, because negating without first selecting a possible set of dependencies does not make sense.
2.4.4.4.3. Nested compound constraints Copy linkLink copied to clipboard!
A nested compound constraint, one that contains at least one child compound constraint along with zero or more simple constraints, is evaluated from the bottom up following the procedures for each previously described constraint type.
The following is an example of a disjunction of conjunctions, where one, the other, or both can satisfy the constraint:
Example nested compound constraint
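A sketch follows, assuming the same constraints-list layout as the previous examples; the package names, versions, and GVKs are placeholders:

type: olm.constraint
value:
  failureMessage: Required for the bundle because...
  any:
    constraints:
    - all:
        constraints:
        - package:
            packageName: blue
            version: '>=1.0.0'
        - gvk:
            group: blues.example.com
            version: v1
            kind: Blue
    - all:
        constraints:
        - package:
            packageName: blue
            version: '<1.0.0'
        - gvk:
            group: blues.example.com
            version: v1beta1
            kind: Blue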
The maximum raw size of an olm.constraint type is 64KB to limit resource exhaustion attacks.
2.4.4.5. Dependency preferences Copy linkLink copied to clipboard!
There can be many options that equally satisfy a dependency of an Operator. The dependency resolver in Operator Lifecycle Manager (OLM) determines which option best fits the requirements of the requested Operator. As an Operator author or user, it can be important to understand how these choices are made so that dependency resolution is clear.
2.4.4.5.1. Catalog priority Copy linkLink copied to clipboard!
On an OpenShift Container Platform cluster, OLM reads catalog sources to know which Operators are available for installation.
Example CatalogSource object
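A sketch of a CatalogSource manifest follows; the catalog name, image reference, display name, and publisher are placeholders, and the securityContextConfig field is the one described in callout 1 below:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example-org/example-catalog:latest
  displayName: Example Catalog
  publisher: Example Org
  priority: -400
  grpcPodConfig:
    securityContextConfig: legacy
  updateStrategy:
    registryPoll:
      interval: 30m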
1. Specify the value legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
A CatalogSource object has a priority field, which is used by the resolver to know how to prefer options for a dependency.
There are two rules that govern catalog preference:
- Options in higher-priority catalogs are preferred to options in lower-priority catalogs.
- Options in the same catalog as the dependent are preferred to any other catalogs.
2.4.4.5.2. Channel ordering Copy linkLink copied to clipboard!
An Operator package in a catalog is a collection of update channels that a user can subscribe to in an OpenShift Container Platform cluster. Channels can be used to provide a particular stream of updates for a minor release (1.2, 1.3) or a release frequency (stable, fast).
It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version 1.2 of an Operator might exist in both the stable and fast channels.
Each package has a default channel, which is always preferred to non-default channels. If no option in the default channel can satisfy a dependency, options are considered from the remaining channels in lexicographic order of the channel name.
2.4.4.5.3. Order within a channel Copy linkLink copied to clipboard!
There are almost always multiple options to satisfy a dependency within a single channel. For example, Operators in one package and channel provide the same set of APIs.
When a user creates a subscription, they indicate which channel to receive updates from. This immediately reduces the search to just that one channel. But within the channel, it is likely that many Operators satisfy a dependency.
Within a channel, newer Operators that are higher up in the update graph are preferred. If the head of a channel satisfies a dependency, it will be tried first.
2.4.4.5.4. Other constraints Copy linkLink copied to clipboard!
In addition to the constraints supplied by package dependencies, OLM includes additional constraints to represent the desired user state and enforce resolution invariants.
2.4.4.5.4.1. Subscription constraint Copy linkLink copied to clipboard!
A subscription constraint filters the set of Operators that can satisfy a subscription. Subscriptions are user-supplied constraints for the dependency resolver. They declare the intent to either install a new Operator if it is not already on the cluster, or to keep an existing Operator updated.
2.4.4.5.4.2. Package constraint Copy linkLink copied to clipboard!
Within a namespace, no two Operators may come from the same package.
2.4.4.6. CRD upgrades Copy linkLink copied to clipboard!
OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:
- All existing serving versions in the current CRD are present in the new CRD.
- All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
2.4.4.7. Dependency best practices Copy linkLink copied to clipboard!
When specifying dependencies, there are best practices you should consider.
- Depend on APIs or a specific version range of Operators
- Operators can add or remove APIs at any time; always specify an olm.gvk dependency on any APIs your Operator requires. The exception to this is if you are specifying olm.package constraints instead.
- Set a minimum version
The Kubernetes documentation on API changes describes what changes are allowed for Kubernetes-style Operators. These versioning conventions allow an Operator to update an API without bumping the API version, as long as the API is backwards-compatible.
For Operator dependencies, this means that knowing the API version of a dependency might not be enough to ensure the dependent Operator works as intended.
For example:
- TestOperator v1.0.0 provides v1alpha1 API version of the MyObject resource.
- TestOperator v1.0.1 adds a new field spec.newfield to MyObject, but still at v1alpha1.

Your Operator might require the ability to write spec.newfield into the MyObject resource. An olm.gvk constraint alone is not enough for OLM to determine that you need TestOperator v1.0.1 and not TestOperator v1.0.0.

Whenever possible, if a specific Operator that provides an API is known ahead of time, specify an additional olm.package constraint to set a minimum.
- Omit a maximum version or allow a very wide range
Because Operators provide cluster-scoped resources such as API services and CRDs, an Operator that specifies a small window for a dependency might unnecessarily constrain updates for other consumers of that dependency.
Whenever possible, do not set a maximum version. Alternatively, set a very wide semantic range to prevent conflicts with other Operators, for example >1.0.0 <2.0.0.

Unlike with conventional package managers, Operator authors explicitly encode that updates are safe through channels in OLM. If an update is available for an existing subscription, it is assumed that the Operator author is indicating that it can update from the previous version. Setting a maximum version for a dependency overrides the update stream of the author by unnecessarily truncating it at a particular upper bound.

Note: Cluster administrators cannot override dependencies set by an Operator author.

However, maximum versions can and should be set if there are known incompatibilities that must be avoided. Specific versions can be omitted with the version range syntax, for example > 1.0.0 !1.2.1.
2.4.4.8. Dependency caveats Copy linkLink copied to clipboard!
When specifying dependencies, there are caveats you should consider.
- No compound constraints (AND)
There is currently no method for specifying an AND relationship between constraints. In other words, there is no way to specify that one Operator depends on another Operator that both provides a given API and has version >1.1.0.

This means that when specifying a dependency such as the etcd example sketched after this list, it would be possible for OLM to satisfy it with two Operators: one that provides EtcdCluster and one that has version >3.1.0. Whether that happens, or whether an Operator is selected that satisfies both constraints, depends on the order in which potential options are visited. Dependency preferences and ordering options are well-defined and can be reasoned about, but to exercise caution, Operators should stick to one mechanism or the other.
- Cross-namespace compatibility
- OLM performs dependency resolution at the namespace scope. It is possible to get into an update deadlock if updating an Operator in one namespace would be an issue for an Operator in another namespace, and vice-versa.
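The etcd dependency referenced in the caveat above might be sketched as follows, combining an olm.package constraint on the etcd package with an olm.gvk constraint on the EtcdCluster API; the GVK group and version shown are assumptions:

dependencies:
- type: olm.package
  value:
    packageName: etcd
    version: ">3.1.0"
- type: olm.gvk
  value:
    group: etcd.database.coreos.com
    kind: EtcdCluster
    version: v1beta2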
2.4.4.9. Example dependency resolution scenarios Copy linkLink copied to clipboard!
In the following examples, a provider is an Operator which "owns" a CRD or API service.
2.4.4.9.1. Example: Deprecating dependent APIs Copy linkLink copied to clipboard!
A and B are APIs (CRDs):
- The provider of A depends on B.
- The provider of B has a subscription.
- The provider of B updates to provide C but deprecates B.
This results in:
- B no longer has a provider.
- A no longer works.
This is a case OLM prevents with its upgrade strategy.
2.4.4.9.2. Example: Version deadlock Copy linkLink copied to clipboard!
A and B are APIs:
- The provider of A requires B.
- The provider of B requires A.
- The provider of A updates to (provide A2, require B2) and deprecate A.
- The provider of B updates to (provide B2, require A2) and deprecate B.
If OLM attempts to update A without simultaneously updating B, or vice-versa, it is unable to progress to new versions of the Operators, even though a new compatible set can be found.
This is another case OLM prevents with its upgrade strategy.
2.4.5. Operator groups Copy linkLink copied to clipboard!
This guide outlines the use of Operator groups with Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
2.4.5.1. About Operator groups Copy linkLink copied to clipboard!
An Operator group, defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators.
The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments.
2.4.5.2. Operator group membership Copy linkLink copied to clipboard!
An Operator is considered a member of an Operator group if the following conditions are true:
- The CSV of the Operator exists in the same namespace as the Operator group.
- The install modes in the CSV of the Operator support the set of namespaces targeted by the Operator group.
An install mode in a CSV consists of an InstallModeType field and a boolean Supported field. The spec of a CSV can contain a set of install modes of four distinct InstallModeTypes:
| InstallModeType | Description |
|---|---|
| OwnNamespace | The Operator can be a member of an Operator group that selects its own namespace. |
| SingleNamespace | The Operator can be a member of an Operator group that selects one namespace. |
| MultiNamespace | The Operator can be a member of an Operator group that selects more than one namespace. |
| AllNamespaces | The Operator can be a member of an Operator group that selects all namespaces (target namespace set is the empty string ""). |
If the spec of a CSV omits an entry of InstallModeType, then that type is considered unsupported unless support can be inferred by an existing entry that implicitly supports it.
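As a sketch, the install modes stanza of a CSV spec might look like the following; which types are set to supported: true varies by Operator:

spec:
  installModes:
  - type: OwnNamespace
    supported: true
  - type: SingleNamespace
    supported: true
  - type: MultiNamespace
    supported: false
  - type: AllNamespaces
    supported: true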
2.4.5.3. Target namespace selection Copy linkLink copied to clipboard!
You can explicitly name the target namespace for an Operator group using the spec.targetNamespaces parameter:
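For example, the following sketch targets only the namespace of the Operator group itself; the my-group and my-namespace names are placeholders:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  targetNamespaces:
  - my-namespace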
Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
-
<operatorgroup_name>-admin -
<operatorgroup_name>-edit -
<operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
You can alternatively specify a namespace using a label selector with the spec.selector parameter:
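For example, the following sketch selects target namespaces by label, assuming the selector takes a standard Kubernetes label selector; the label key and value are placeholders:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      cool.io/prod: "true"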
Listing multiple namespaces via spec.targetNamespaces or using a label selector via spec.selector is not recommended, because support for more than one target namespace in an Operator group will likely be removed in a future release.
If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global Operator group, which selects all namespaces:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: my-group
namespace: my-namespace
The resolved set of selected namespaces is shown in the status.namespaces parameter of an Operator group. The status.namespaces parameter of a global Operator group contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces.
2.4.5.4. Operator group CSV annotations Copy linkLink copied to clipboard!
Member CSVs of an Operator group have the following annotations:
| Annotation | Description |
|---|---|
| olm.operatorGroup | Contains the name of the Operator group. |
| olm.operatorNamespace | Contains the namespace of the Operator group. |
| olm.targetNamespaces | Contains a comma-delimited string that lists the target namespace selection of the Operator group. |
All annotations except olm.targetNamespaces are included with copied CSVs. Omitting the olm.targetNamespaces annotation on copied CSVs prevents the duplication of target namespaces between tenants.
2.4.5.5. Provided APIs annotation Copy linkLink copied to clipboard!
A group/version/kind (GVK) is a unique identifier for a Kubernetes API. Information about which GVKs are provided by an Operator group is shown in an olm.providedAPIs annotation. The value of the annotation is a string consisting of <kind>.<version>.<group> delimited with commas. The GVKs of CRDs and API services provided by all active member CSVs of an Operator group are included.
Review the following example of an OperatorGroup object with a single active member CSV that provides the PackageManifest resource:
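A sketch of such an object follows; the namespace is a placeholder, and the annotation value, written in the <kind>.<version>.<group> form described above, is an assumption:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: olm-operators
  namespace: local
  annotations:
    olm.providedAPIs: PackageManifest.v1.packages.operators.coreos.com
spec:
  targetNamespaces:
  - local
status:
  namespaces:
  - local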
2.4.5.6. Role-based access control Copy linkLink copied to clipboard!
When an Operator group is created, three cluster roles are generated. Each contains a single aggregation rule with a cluster role selector set to match a label, as shown below:
| Cluster role | Label to match |
|---|---|
| <operatorgroup_name>-admin | olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> |
| <operatorgroup_name>-edit | olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> |
| <operatorgroup_name>-view | olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> |
Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
-
<operatorgroup_name>-admin -
<operatorgroup_name>-edit -
<operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
The following RBAC resources are generated when a CSV becomes an active member of an Operator group, as long as the CSV is watching all namespaces with the AllNamespaces install mode and is not in a failed state with reason InterOperatorGroupOwnerConflict:
- Cluster roles for each API resource from a CRD
- Cluster roles for each API resource from an API service
- Additional roles and role bindings
| Cluster role | Settings |
|---|---|
|
|
Verbs on
Aggregation labels:
|
|
|
Verbs on
Aggregation labels:
|
|
|
Verbs on
Aggregation labels:
|
|
|
Verbs on
Aggregation labels:
|
| Cluster role | Settings |
|---|---|
|
|
Verbs on
Aggregation labels:
|
|
|
Verbs on
Aggregation labels:
|
|
|
Verbs on
Aggregation labels:
|
Additional roles and role bindings
- If the CSV defines exactly one target namespace that contains *, then a cluster role and corresponding cluster role binding are generated for each permission defined in the permissions field of the CSV. All resources generated are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels.
- If the CSV does not define exactly one target namespace that contains *, then all roles and role bindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace.
2.4.5.7. Copied CSVs Copy linkLink copied to clipboard!
OLM creates copies of all active member CSVs of an Operator group in each of the target namespaces of that Operator group. The purpose of a copied CSV is to tell users of a target namespace that a specific Operator is configured to watch resources created there.
Copied CSVs have a status reason Copied and are updated to match the status of their source CSV. The olm.targetNamespaces annotation is stripped from copied CSVs before they are created on the cluster. Omitting the target namespace selection avoids the duplication of target namespaces between tenants.
Copied CSVs are deleted when their source CSV no longer exists or the Operator group that their source CSV belongs to no longer targets the namespace of the copied CSV.
By default, the disableCopiedCSVs field is disabled. If you enable the disableCopiedCSVs field, OLM deletes the existing copied CSVs on the cluster. If you disable the disableCopiedCSVs field again, OLM re-creates the copied CSVs.
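Copied CSVs are toggled through the cluster-scoped OLMConfig resource. The following sketch assumes the resource is named cluster; setting disableCopiedCSVs to true disables copied CSVs, and setting it back to false re-enables them:

apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true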
2.4.5.8. Static Operator groups Copy linkLink copied to clipboard!
An Operator group is static if its spec.staticProvidedAPIs field is set to true. As a result, OLM does not modify the olm.providedAPIs annotation of an Operator group, which means that it can be set in advance. This is useful when a user wants to use an Operator group to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources.
Below is an example of an Operator group that protects Prometheus resources in all namespaces with the something.cool.io/cluster-monitoring: "true" annotation:
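A sketch of such an Operator group follows; the name, namespace, and the list of Prometheus GVKs in the annotation are assumptions:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-monitoring
  namespace: cluster-monitoring
  annotations:
    olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com
spec:
  staticProvidedAPIs: true
  selector:
    matchLabels:
      something.cool.io/cluster-monitoring: "true"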
Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
-
<operatorgroup_name>-admin -
<operatorgroup_name>-edit -
<operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
2.4.5.9. Operator group intersection Copy linkLink copied to clipboard!
Two Operator groups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set.
A potential issue is that Operator groups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces.
When checking intersection rules, an Operator group namespace is always included as part of its selected target namespaces.
2.4.5.9.1. Rules for intersection Copy linkLink copied to clipboard!
Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set:
- If true and the CSV’s provided APIs are a subset of the Operator group’s:
  - Continue transitioning.
- If true and the CSV’s provided APIs are not a subset of the Operator group’s:
  - If the Operator group is static:
    - Clean up any deployments that belong to the CSV.
    - Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs.
  - If the Operator group is not static:
    - Replace the Operator group’s olm.providedAPIs annotation with the union of itself and the CSV’s provided APIs.
- If false and the CSV’s provided APIs are not a subset of the Operator group’s:
  - Clean up any deployments that belong to the CSV.
  - Transition the CSV to a failed state with status reason InterOperatorGroupOwnerConflict.
- If false and the CSV’s provided APIs are a subset of the Operator group’s:
  - If the Operator group is static:
    - Clean up any deployments that belong to the CSV.
    - Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs.
  - If the Operator group is not static:
    - Replace the Operator group’s olm.providedAPIs annotation with the difference between itself and the CSV’s provided APIs.
Failure states caused by Operator groups are non-terminal.
The following actions are performed each time an Operator group synchronizes:
- The set of provided APIs from active member CSVs is calculated from the cluster. Note that copied CSVs are ignored.
- The cluster set is compared to olm.providedAPIs, and if olm.providedAPIs contains any extra APIs, then those APIs are pruned.
- All CSVs that provide the same APIs across all namespaces are requeued. This notifies conflicting CSVs in intersecting groups that their conflict has possibly been resolved, either through resizing or through deletion of the conflicting CSV.
2.4.5.10. Limitations for multitenant Operator management Copy linkLink copied to clipboard!
OpenShift Container Platform provides limited support for simultaneously installing different versions of an Operator on the same cluster. Operator Lifecycle Manager (OLM) installs Operators multiple times in different namespaces. One constraint of this is that the Operator’s API versions must be the same.
Operators are control plane extensions due to their usage of CustomResourceDefinition objects (CRDs), which are global resources in Kubernetes. Different major versions of an Operator often have incompatible CRDs, which makes it impossible to install those versions simultaneously in different namespaces on a cluster.
All tenants, or namespaces, share the same control plane of a cluster. Therefore, tenants in a multitenant cluster also share global CRDs, which limits the scenarios in which different instances of the same Operator can be used in parallel on the same cluster.
The supported scenarios include the following:
- Operators of different versions that ship the exact same CRD definition (in case of versioned CRDs, the exact same set of versions)
- Operators of different versions that do not ship a CRD, and instead have their CRD available in a separate bundle on the OperatorHub
All other scenarios are not supported, because the integrity of the cluster data cannot be guaranteed if there are multiple competing or overlapping CRDs from different Operator versions to be reconciled on the same cluster.
2.4.5.11. Troubleshooting Operator groups Copy linkLink copied to clipboard!
2.4.5.11.1. Membership Copy linkLink copied to clipboard!
An install plan’s namespace must contain only one Operator group. When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios:
- No Operator groups exist in the install plan’s namespace.
- Multiple Operator groups exist in the install plan’s namespace.
- An incorrect or non-existent service account name is specified in the Operator group.
- If an install plan encounters an invalid Operator group, the CSV is not generated and the InstallPlan resource continues to install with a relevant message. For example, the following message is provided if more than one Operator group exists in the same namespace:

  attenuated service account query failed - more than one operator group(s) are managing this namespace count=2

  where count= specifies the number of Operator groups in the namespace.
- If the install modes of a CSV do not support the target namespace selection of the Operator group in its namespace, the CSV transitions to a failure state with the reason UnsupportedOperatorGroup. CSVs in a failed state for this reason transition to pending after either the target namespace selection of the Operator group changes to a supported configuration, or the install modes of the CSV are modified to support the target namespace selection.
2.4.6. Multitenancy and Operator colocation Copy linkLink copied to clipboard!
This guide outlines multitenancy and Operator colocation in Operator Lifecycle Manager (OLM).
2.4.6.1. Colocation of Operators in a namespace Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated.
This default behavior manifests in two ways:
- InstallPlan resources of pending updates include ClusterServiceVersion (CSV) resources of all other Operators that are in the same namespace.
- All Operators in the same namespace share the same update policy. For example, if one Operator is set to manual updates, all other Operators' update policies are also set to manual.
These scenarios can lead to the following issues:
- It becomes hard to reason about install plans for Operator updates, because there are many more resources defined in them than just the updated Operator.
- It becomes impossible to have some Operators in a namespace update automatically while others are updated manually, which is a common desire for cluster administrators.
These issues usually surface because, when installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace.
As a cluster administrator, you can bypass this default behavior manually by using the following workflow:
- Create a namespace for the installation of the Operator.
- Create a custom global Operator group, which is an Operator group that watches all namespaces. Associating this Operator group with the namespace you just created makes the installation namespace a global namespace, which makes Operators installed there available in all namespaces.
- Install the desired Operator in the installation namespace.
If the Operator has dependencies, the dependencies are automatically installed in the pre-created namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans. For a detailed procedure, see "Installing global Operators in custom namespaces".
2.4.7. Operator conditions Copy linkLink copied to clipboard!
This guide outlines how Operator Lifecycle Manager (OLM) uses Operator conditions.
2.4.7.1. About Operator conditions Copy linkLink copied to clipboard!
As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator.
OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource.
By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic.
2.4.7.2. Supported conditions Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) supports the following Operator conditions.
2.4.7.2.1. Upgradeable condition Copy linkLink copied to clipboard!
The Upgradeable Operator condition prevents an existing cluster service version (CSV) from being replaced by a newer version of the CSV. This condition is useful when:
- An Operator is about to start a critical process and should not be upgraded until the process is completed.
- An Operator is performing a migration of custom resources (CRs) that must be completed before the Operator is ready to be upgraded.
Setting the Upgradeable Operator condition to the False value does not prevent pod disruption. If you must ensure that your pods are not disrupted, see "Using pod disruption budgets to specify the number of pods that must be up" and "Graceful termination" in the "Additional resources" section.
Example Upgradeable Operator condition
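A sketch of an OperatorCondition that sets the condition follows, assuming the operators.coreos.com/v2 API and placeholder metadata and timestamps:

apiVersion: operators.coreos.com/v2
kind: OperatorCondition
metadata:
  name: my-operator
  namespace: operators
spec:
  conditions:
  - type: Upgradeable
    status: "False"
    reason: "migration"
    message: "The Operator is performing a migration."
    lastTransitionTime: "2020-08-24T23:15:55Z"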
2.4.8. Operator Lifecycle Manager metrics Copy linkLink copied to clipboard!
2.4.8.1. Exposed metrics Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based OpenShift Container Platform cluster monitoring stack.
| Name | Description |
|---|---|
| catalog_source_count | Number of catalog sources. |
| catalogsource_ready | State of a catalog source. The value 1 indicates that the catalog source is in a READY state; the value 0 indicates that it is not. |
| csv_abnormal | When reconciling a cluster service version (CSV), present whenever a CSV version is in any state other than Succeeded, for example when it is not installed. |
| csv_count | Number of CSVs successfully registered. |
| csv_succeeded | When reconciling a CSV, represents whether a CSV version is in a Succeeded state (value 1) or not (value 0). |
| csv_upgrade_count | Monotonic count of CSV upgrades. |
| install_plan_count | Number of install plans. |
| installplan_warnings_total | Monotonic count of warnings generated by resources, such as deprecated resources, included in an install plan. |
| olm_resolution_duration_seconds | The duration of a dependency resolution attempt. |
| subscription_count | Number of subscriptions. |
| subscription_sync_total | Monotonic count of subscription syncs. Includes the channel, installed CSV, and subscription name labels. |
2.4.9. Webhook management in Operator Lifecycle Manager Copy linkLink copied to clipboard!
Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.
See Defining cluster service versions (CSVs) for details on how an Operator developer can define webhooks for their Operator, as well as considerations when running on OLM.
2.5. Understanding OperatorHub Copy linkLink copied to clipboard!
2.5.1. About OperatorHub Copy linkLink copied to clipboard!
OperatorHub is the web console interface in OpenShift Container Platform that cluster administrators use to discover and install Operators. With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using Operator Lifecycle Manager (OLM).
Cluster administrators can choose from catalogs grouped into the following categories:
| Category | Description |
|---|---|
| Red Hat Operators | Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. |
| Certified Operators | Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. |
| Red Hat Marketplace | Certified software that can be purchased from Red Hat Marketplace. |
| Community Operators | Optionally-visible software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. |
| Custom Operators | Operators you add to the cluster yourself. If you have not added any custom Operators, the Custom category does not appear in the web console on your OperatorHub. |
Operators on OperatorHub are packaged to run on OLM. This includes a YAML file called a cluster service version (CSV) containing all of the CRDs, RBAC rules, deployments, and container images required to install and securely run the Operator. It also contains user-visible information like a description of its features and supported Kubernetes versions.
The Operator SDK can be used to assist developers packaging their Operators for use on OLM and OperatorHub. If you have a commercial application that you want to make accessible to your customers, get it included using the certification workflow provided on the Red Hat Partner Connect portal at connect.redhat.com.
2.5.2. OperatorHub architecture Copy linkLink copied to clipboard!
The OperatorHub UI component is driven by the Marketplace Operator by default on OpenShift Container Platform in the openshift-marketplace namespace.
2.5.2.1. OperatorHub custom resource Copy linkLink copied to clipboard!
The Marketplace Operator manages an OperatorHub custom resource (CR) named cluster that manages the default CatalogSource objects provided with OperatorHub. You can modify this resource to enable or disable the default catalogs, which is useful when configuring OpenShift Container Platform in restricted network environments.
Example OperatorHub custom resource
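A sketch of the OperatorHub custom resource follows; the source entry shown is illustrative, and disableAllDefaultSources is an override that disables all default catalogs unless they are individually re-enabled under sources:

apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
  sources:
  - name: community-operators
    disabled: false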
2.6. Red Hat-provided Operator catalogs Copy linkLink copied to clipboard!
Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default.
As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format.
The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs, Operator Framework packaging format, and Mirroring images for a disconnected installation using the oc-mirror plugin.
2.6.1. About Operator catalogs Copy linkLink copied to clipboard!
An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog.
An index image, based on the Operator bundle format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster.
As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs directly from the internet to pull the latest content.
As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues.
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API.
If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.
Support for the legacy package manifest format for Operators, including custom catalogs that were using the legacy format, is removed in OpenShift Container Platform 4.8 and later.
When creating custom catalog images, previous versions of OpenShift Container Platform 4 required using the oc adm catalog build command, which was deprecated for several releases and is now removed. With the availability of Red Hat-provided index images starting in OpenShift Container Platform 4.6, catalog builders must use the opm index command to manage index images.
2.6.2. About Red Hat-provided Operator catalogs Copy linkLink copied to clipboard!
The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces.
The following Operator catalogs are distributed by Red Hat:
| Catalog | Index image | Description |
|---|---|---|
| redhat-operators | registry.redhat.io/redhat/redhat-operator-index:v4.14 | Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. |
| certified-operators | registry.redhat.io/redhat/certified-operator-index:v4.14 | Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. |
| redhat-marketplace | registry.redhat.io/redhat/redhat-marketplace-index:v4.14 | Certified software that can be purchased from Red Hat Marketplace. |
| community-operators | registry.redhat.io/redhat/community-operator-index:v4.14 | Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. |
During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources is updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from:
registry.redhat.io/redhat/redhat-operator-index:v4.8
to:
registry.redhat.io/redhat/redhat-operator-index:v4.9
2.7. Operators in multitenant clusters Copy linkLink copied to clipboard!
The default behavior for Operator Lifecycle Manager (OLM) aims to provide simplicity during Operator installation. However, this behavior can lack flexibility, especially in multitenant clusters. In order for multiple tenants on an OpenShift Container Platform cluster to use an Operator, the default behavior of OLM requires that administrators install the Operator in All namespaces mode, which can be considered to violate the principle of least privilege.
Consider the following scenarios to determine which Operator installation workflow works best for your environment and requirements.
2.7.1. Default Operator install modes and behavior Copy linkLink copied to clipboard!
When installing Operators with the web console as an administrator, you typically have two choices for the install mode, depending on the Operator’s capabilities:
- Single namespace
- Installs the Operator in the chosen single namespace, and makes all permissions that the Operator requests available in that namespace.
- All namespaces
- Installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. Makes all permissions that the Operator requests available in all namespaces. In some cases, an Operator author can define metadata to give the user a second option for that Operator’s suggested namespace.
This choice also means that users in the affected namespaces get access to the Operator’s APIs, which can leverage the custom resources (CRs) they own, depending on their role in the namespace:
- The namespace-admin and namespace-edit roles can read and write to the Operator APIs, meaning they can use them.
- The namespace-view role can read CR objects of that Operator.
For Single namespace mode, because the Operator itself installs in the chosen namespace, its pod and service account are also located there. For All namespaces mode, the Operator’s privileges are all automatically elevated to cluster roles, meaning the Operator has those permissions in all namespaces.
2.7.2. Recommended solution for multitenant clusters Copy linkLink copied to clipboard!
While a Multinamespace install mode does exist, it is supported by very few Operators. As a middle ground solution between the standard All namespaces and Single namespace install modes, you can install multiple instances of the same Operator, one for each tenant, by using the following workflow:
- Create a namespace for the tenant Operator that is separate from the tenant’s namespace.
- Create an Operator group for the tenant Operator scoped only to the tenant’s namespace.
- Install the Operator in the tenant Operator namespace.
As a result, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator’s pod nor its service account are visible or usable by the tenant.
This solution provides better tenant separation and least privilege, at the cost of additional resource usage and the orchestration needed to ensure the constraints are met. For a detailed procedure, see "Preparing for multiple instances of an Operator for multitenant clusters".
Limitations and considerations
This solution only works when the following constraints are met:
- All instances of the same Operator must be the same version.
- The Operator cannot have dependencies on other Operators.
- The Operator cannot ship a CRD conversion webhook.
You cannot use different versions of the same Operator on the same cluster. Eventually, the installation of another instance of the Operator is blocked when it meets the following conditions:
- The instance is not the newest version of the Operator.
- The instance ships an older revision of the CRDs that lack information or versions that newer revisions have that are already in use on the cluster.
As an administrator, use caution when allowing non-cluster administrators to install Operators self-sufficiently, as explained in "Allowing non-cluster administrators to install Operators". These tenants should only have access to a curated catalog of Operators that are known to not have dependencies. These tenants must also be forced to use the same version line of an Operator, to ensure the CRDs do not change. This requires the use of namespace-scoped catalogs and likely disabling the global default catalogs.
2.7.3. Operator colocation and Operator groups Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) treats OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated.
For more information on Operator colocation and using Operator groups effectively, see Operator Lifecycle Manager (OLM) → Multitenancy and Operator colocation.
2.8. CRDs Copy linkLink copied to clipboard!
2.8.1. Extending the Kubernetes API with custom resource definitions Copy linkLink copied to clipboard!
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so that custom objects managed by the Operator look and act just like the built-in, native Kubernetes objects. This guide describes how cluster administrators can extend their OpenShift Container Platform cluster by creating and managing CRDs.
2.8.1.1. Custom resource definitions Copy linkLink copied to clipboard!
In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects.
A custom resource definition (CRD) object defines a new, unique object type, called a kind, in the cluster and lets the Kubernetes API server handle its entire lifecycle.
Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects.
When a cluster administrator adds a new CRD to the cluster, the Kubernetes API server reacts by creating a new RESTful resource path that can be accessed by the entire cluster or a single project (namespace) and begins serving the specified CR.
Cluster administrators that want to grant access to the CRD to other users can use cluster role aggregation to grant access to users with the admin, edit, or view default cluster roles. Cluster role aggregation allows the insertion of custom policy rules into these cluster roles. This behavior integrates the new resource into the RBAC policy of the cluster as if it was a built-in resource.
Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users.
While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it.
2.8.1.2. Creating a custom resource definition Copy linkLink copied to clipboard!
To create custom resource (CR) objects, cluster administrators must first create a custom resource definition (CRD).
Prerequisites
- Access to an OpenShift Container Platform cluster with cluster-admin user privileges.
Procedure
To create a CRD:
Create a YAML file that contains the following field types:
Example YAML file for a CRD
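The example file is sketched here, consistent with the callouts that follow and with the crontabs.stable.example.com endpoint generated later in this procedure; the numbered comments mark the fields that the callouts describe, and the openAPIV3Schema section is an illustrative assumption:

apiVersion: apiextensions.k8s.io/v1 # 1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com # 2
spec:
  group: stable.example.com # 3
  versions:
  - name: v1 # 4
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
  scope: Namespaced # 5
  names:
    plural: crontabs # 6
    singular: crontab # 7
    kind: CronTab # 8
    shortNames:
    - ct # 9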
1. Use the apiextensions.k8s.io/v1 API.
2. Specify a name for the definition. This must be in the <plural-name>.<group> format using the values from the group and plural fields.
3. Specify a group name for the API. An API group is a collection of objects that are logically related. For example, all batch objects like Job or ScheduledJob could be in the batch API group (such as batch.api.example.com). A good practice is to use a fully-qualified-domain name (FQDN) of your organization.
4. Specify a version name to be used in the URL. Each API group can exist in multiple versions, for example v1alpha, v1beta, v1.
5. Specify whether the custom objects are available to a project (Namespaced) or all projects in the cluster (Cluster).
6. Specify the plural name to use in the URL. The plural field is the same as a resource in an API URL.
7. Specify a singular name to use as an alias on the CLI and for display.
8. Specify the kind of objects that can be created. The type can be in CamelCase.
9. Specify a shorter string to match your resource on the CLI.
Note: By default, a CRD is cluster-scoped and available to all projects.
Create the CRD object:
$ oc create -f <file_name>.yaml
A new RESTful API endpoint is created at:
/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/...
For example, using the example file, the following endpoint is created:
/apis/stable.example.com/v1/namespaces/*/crontabs/...
You can now use this endpoint URL to create and manage CRs. The object kind is based on the spec.kind field of the CRD object you created.
2.8.1.3. Creating cluster roles for custom resource definitions Copy linkLink copied to clipboard!
Cluster administrators can grant permissions to existing cluster-scoped custom resource definitions (CRDs). If you use the admin, edit, and view default cluster roles, you can take advantage of cluster role aggregation for their rules.
You must explicitly assign permissions to each of these roles. The roles with more permissions do not inherit rules from roles with fewer permissions. If you assign a rule to a role, you must also assign that verb to roles that have more permissions. For example, if you grant the get crontabs permission to the view role, you must also grant it to the edit and admin roles. The admin or edit role is usually assigned to the user that created a project through the project template.
Prerequisites
- Create a CRD.
Procedure
Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. An OpenShift Container Platform controller adds the rules that you specify to the default cluster roles.
Example YAML file for a cluster role definition
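A sketch of such a file follows, reusing the crontabs example from the previous section; the aggregation labels and verbs shown are illustrative, and the numbered comments correspond to the callouts below:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1 # 1
metadata:
  name: aggregate-cron-tabs-admin-edit # 2
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true" # 3
    rbac.authorization.k8s.io/aggregate-to-edit: "true" # 4
rules:
- apiGroups: ["stable.example.com"] # 5
  resources: ["crontabs"] # 6
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "deletecollection"] # 7
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aggregate-cron-tabs-view # 8
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true" # 9
    rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true" # 10
rules:
- apiGroups: ["stable.example.com"] # 11
  resources: ["crontabs"] # 12
  verbs: ["get", "list", "watch"] # 13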
1. Use the rbac.authorization.k8s.io/v1 API.
2, 8. Specify a name for the definition.
3. Specify this label to grant permissions to the admin default role.
4. Specify this label to grant permissions to the edit default role.
5, 11. Specify the group name of the CRD.
6, 12. Specify the plural name of the CRD that these rules apply to.
7, 13. Specify the verbs that represent the permissions that are granted to the role. For example, apply read and write permissions to the admin and edit roles and only read permission to the view role.
9. Specify this label to grant permissions to the view default role.
10. Specify this label to grant permissions to the cluster-reader default role.
Create the cluster role:
$ oc create -f <file_name>.yaml
2.8.1.4. Creating custom resources from a file Copy linkLink copied to clipboard!
After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification.
Prerequisites
- CRD added to the cluster by a cluster administrator.
Procedure
Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab. The Kind comes from the spec.kind field of the CRD object:
Example YAML file for a CR
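A sketch of the CR file follows, using the my-new-cron-object name that appears in the oc get output later in this guide; the cronSpec and image values are placeholders:

apiVersion: "stable.example.com/v1" # 1
kind: CronTab # 2
metadata:
  name: my-new-cron-object # 3
  finalizers: # 4
  - finalizer.stable.example.com
spec: # 5
  cronSpec: "* * * * /5"
  image: my-awesome-cron-image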
1. Specify the group name and API version (name/version) from the CRD.
2. Specify the type in the CRD.
3. Specify a name for the object.
4. Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted.
5. Specify conditions specific to the type of object.
After you create the file, create the object:
$ oc create -f <file_name>.yaml
2.8.1.5. Inspecting custom resources Copy linkLink copied to clipboard!
You can inspect custom resource (CR) objects that exist in your cluster using the CLI.
Prerequisites
- A CR object exists in a namespace to which you have access.
Procedure
To get information on a specific kind of a CR, run:
$ oc get <kind>
For example:
$ oc get crontab
Example output
NAME                            KIND
my-new-cron-object              CronTab.v1.stable.example.com
Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example:
$ oc get crontabs
$ oc get crontab
$ oc get ct
You can also view the raw YAML data for a CR:
$ oc get <kind> -o yaml
For example:
$ oc get ct -o yaml
Example output
2.8.2. Managing resources from custom resource definitions Copy linkLink copied to clipboard!
This guide describes how developers can manage custom resources (CRs) that come from custom resource definitions (CRDs).
2.8.2.1. Custom resource definitions Copy linkLink copied to clipboard!
In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects.
A custom resource definition (CRD) object defines a new, unique object type, called a kind, in the cluster and lets the Kubernetes API server handle its entire lifecycle.
Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects.
Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users.
While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it.
2.8.2.2. Creating custom resources from a file Copy linkLink copied to clipboard!
After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification.
Prerequisites
- CRD added to the cluster by a cluster administrator.
Procedure
Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab. The Kind comes from the spec.kind field of the CRD object:
Example YAML file for a CR
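A sketch of the CR file follows, identical to the example in the previous guide; the cronSpec and image values are placeholders:

apiVersion: "stable.example.com/v1" # 1
kind: CronTab # 2
metadata:
  name: my-new-cron-object # 3
  finalizers: # 4
  - finalizer.stable.example.com
spec: # 5
  cronSpec: "* * * * /5"
  image: my-awesome-cron-image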
1. Specify the group name and API version (name/version) from the CRD.
2. Specify the type in the CRD.
3. Specify a name for the object.
4. Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted.
5. Specify conditions specific to the type of object.
After you create the file, create the object:
$ oc create -f <file_name>.yaml
2.8.2.3. Inspecting custom resources Copy linkLink copied to clipboard!
You can inspect custom resource (CR) objects that exist in your cluster using the CLI.
Prerequisites
- A CR object exists in a namespace to which you have access.
Procedure
To get information on a specific kind of a CR, run:
$ oc get <kind>
For example:
$ oc get crontab
Example output
NAME                            KIND
my-new-cron-object              CronTab.v1.stable.example.com
Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example:
$ oc get crontabs
$ oc get crontab
$ oc get ct
You can also view the raw YAML data for a CR:
$ oc get <kind> -o yaml
For example:
$ oc get ct -o yaml
Example output
Chapter 3. User tasks Copy linkLink copied to clipboard!
3.1. Creating applications from installed Operators Copy linkLink copied to clipboard!
This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console.
3.1.1. Creating an etcd cluster using an Operator Copy linkLink copied to clipboard!
This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM).
Prerequisites
- Access to an OpenShift Container Platform 4.14 cluster.
- The etcd Operator already installed cluster-wide by an administrator.
Procedure
- Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd.
- Navigate to the Operators → Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator.
Tip: You can get this list from the CLI using:
$ oc get csv
- On the Installed Operators page, click the etcd Operator to view more details and available actions.
As shown under Provided APIs, this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet, but contain logic specific to managing etcd.
Create a new etcd cluster:
- In the etcd Cluster API box, click Create instance.
- The next page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster.
Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator.
Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project.
All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command:
$ oc policy add-role-to-user edit <user> -n <target_project>
You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications.
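For reference, the minimal starting template that the console presents for the etcd Cluster API is an EtcdCluster object. A sketch, assuming the etcd Operator’s etcd.database.coreos.com/v1beta2 API and using example values for the size and version fields, might look like this:

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example
  namespace: my-etcd
spec:
  size: 3          # example number of etcd members
  version: 3.2.13  # example etcd version to deploy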
3.2. Installing Operators in your namespace Copy linkLink copied to clipboard!
If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner.
3.2.1. Prerequisites Copy linkLink copied to clipboard!
- A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details.
3.2.2. About Operator installation with OperatorHub Copy linkLink copied to clipboard!
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI.
During installation, you must determine the following initial settings for the Operator:
- Installation Mode
- Choose a specific namespace in which to install the Operator.
- Update Channel
- If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
- Approval Strategy
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
3.2.3. Installing from OperatorHub using the web console Copy linkLink copied to clipboard!
You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.
You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
Note: Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
- Read the information about the Operator and click Install.
On the Install Operator page:
- Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
If the cluster is in AWS STS mode, enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field.
To create the role’s ARN, follow the procedure described in Preparing AWS account.
- If more than one update channel is available, select an Update channel.
Select Automatic or Manual approval strategy, as described earlier.
Important: If the web console shows that the cluster is in "STS mode", you must set Update approval to Manual.
Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
- If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.
Note: For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.
If it does not:
- Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
3.2.4. Installing from OperatorHub by using the CLI Copy linkLink copied to clipboard!
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- You have installed the OpenShift CLI (oc).
Procedure
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
Example output
Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.
Note:
- The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.
- You can only have one Operator group per namespace. For more information, see "Operator groups".
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
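A sketch of an Operator group for SingleNamespace mode follows; the Operator group and namespace names are placeholders:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>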
Warning: Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
- <operatorgroup_name>-admin
- <operatorgroup_name>-edit
- <operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:
Example Subscription object
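A sketch of a Subscription that matches the callouts below follows; the names in angle brackets are placeholders, and the values shown for the optional spec.config fields are illustrative:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators # 1
spec:
  channel: <channel_name> # 2
  name: <operator_name> # 3
  source: redhat-operators # 4
  sourceNamespace: openshift-marketplace # 5
  config:
    env: # 6
    - name: ARGS
      value: "-v=10"
    envFrom: # 7
    - secretRef:
        name: license-secret
    volumes: # 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: # 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: # 10
    - operator: "Exists"
    resources: # 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: # 12
      foo: bar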
1. For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
2. Name of the channel to subscribe to.
3. Name of the Operator to subscribe to.
4. Name of the catalog source that provides the Operator.
5. Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6. The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
7. The envFrom parameter defines a list of sources to populate environment variables in the container.
8. The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
9. The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10. The tolerations parameter defines a list of tolerations for the pod created by OLM.
11. The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12. The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
If the cluster is in STS mode, include the following fields in the Subscription object:
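A sketch of those fields follows, assuming the role ARN is passed to the Operator as a ROLEARN environment variable under spec.config (the exact variable name an Operator expects can vary):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: <namespace>
spec:
  installPlanApproval: Manual # 1
  config:
    env:
    - name: ROLEARN # assumed variable name; check your Operator's documentation
      value: "<role_arn>" # 2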
1. Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
2. Include the role ARN details.
Create the Subscription object:
$ oc apply -f sub.yaml
At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
3.2.5. Installing a specific version of an Operator Copy linkLink copied to clipboard!
You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- You have installed the OpenShift CLI (oc).
Procedure
Look up the available versions and channels of the Operator you want to install by running the following command:
Command syntax
$ oc describe packagemanifests <operator_name> -n <catalog_namespace>
For example, the following command prints the available channels and versions of the Red Hat Quay Operator from OperatorHub:
Example command
$ oc describe packagemanifests quay-operator -n openshift-marketplace
Example 3.1. Example output
Tip: You can print an Operator’s version and channel information in the YAML format by running the following command:
$ oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:
$ oc get packagemanifest \
    --selector=catalog=<catalogsource_name> \
    --field-selector metadata.name=<operator_name> \
    -n <catalog_namespace> -o yaml
Important: If you do not specify the Operator’s catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met:
- Multiple catalogs are installed in the same namespace.
- The catalogs contain the same Operators or Operators with the same name.
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required role-based access control (RBAC) access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one:
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
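As in the previous procedure, a sketch of a SingleNamespace Operator group with placeholder names follows:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>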
Warning: Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
- <operatorgroup_name>-admin
- <operatorgroup_name>-edit
- <operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.7.10:
Subscription with a specific starting Operator version
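A sketch of such a sub.yaml follows; the startingCSV value reflects version 3.7.10 named above, while the namespace and channel values are illustrative assumptions:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: quay # assumed namespace for this example
spec:
  channel: stable-3.7 # assumed channel name; verify with oc describe packagemanifests
  installPlanApproval: Manual # 1
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: quay-operator.v3.7.10 # 2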
1. Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
2. Set a specific version of an Operator CSV.
Create the Subscription object:
$ oc apply -f sub.yaml
- Manually approve the pending install plan to complete the Operator installation.
Chapter 4. Administrator tasks Copy linkLink copied to clipboard!
4.1. Adding Operators to a cluster Copy linkLink copied to clipboard!
Using Operator Lifecycle Manager (OLM), cluster administrators can install OLM-based Operators to an OpenShift Container Platform cluster.
For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.
4.1.1. About Operator installation with OperatorHub Copy linkLink copied to clipboard!
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI.
During installation, you must determine the following initial settings for the Operator:
- Installation Mode
- Choose a specific namespace in which to install the Operator.
- Update Channel
- If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
- Approval Strategy
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
4.1.2. Installing from OperatorHub using the web console Copy linkLink copied to clipboard!
You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.
You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
Note: Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.
- Read the information about the Operator and click Install.
On the Install Operator page:
Select one of the following:
- All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
- A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
If the cluster is in AWS STS mode, enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field.
To create the role’s ARN, follow the procedure described in Preparing AWS account.
- If more than one update channel is available, select an Update channel.
Select Automatic or Manual approval strategy, as described earlier.
Important: If the web console shows that the cluster is in "STS mode", you must set Update approval to Manual.
Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
- If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.
Note: For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.
If it does not:
- Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
4.1.3. Installing from OperatorHub by using the CLI Copy linkLink copied to clipboard!
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- You have installed the OpenShift CLI (oc).
Procedure
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
Example output
Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.
Note:
- The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.
- You can only have one Operator group per namespace. For more information, see "Operator groups".
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
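A sketch of an Operator group for SingleNamespace mode follows; the Operator group and namespace names are placeholders:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>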
Warning: Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
- <operatorgroup_name>-admin
- <operatorgroup_name>-edit
- <operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:
Example Subscription object
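A sketch of a Subscription that matches the callouts below follows; the names in angle brackets are placeholders, and the values shown for the optional spec.config fields are illustrative:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators # 1
spec:
  channel: <channel_name> # 2
  name: <operator_name> # 3
  source: redhat-operators # 4
  sourceNamespace: openshift-marketplace # 5
  config:
    env: # 6
    - name: ARGS
      value: "-v=10"
    envFrom: # 7
    - secretRef:
        name: license-secret
    volumes: # 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: # 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: # 10
    - operator: "Exists"
    resources: # 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: # 12
      foo: bar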
1. For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
2. Name of the channel to subscribe to.
3. Name of the Operator to subscribe to.
4. Name of the catalog source that provides the Operator.
5. Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6. The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
7. The envFrom parameter defines a list of sources to populate environment variables in the container.
8. The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
9. The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10. The tolerations parameter defines a list of tolerations for the pod created by OLM.
11. The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12. The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
If the cluster is in STS mode, include the following fields in the Subscription object:
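A sketch of those fields follows, assuming the role ARN is passed to the Operator as a ROLEARN environment variable under spec.config (the exact variable name an Operator expects can vary):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: <namespace>
spec:
  installPlanApproval: Manual # 1
  config:
    env:
    - name: ROLEARN # assumed variable name; check your Operator's documentation
      value: "<role_arn>" # 2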
1. Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
2. Include the role ARN details.
Create the Subscription object:
$ oc apply -f sub.yaml
At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
4.1.4. Installing a specific version of an Operator Copy linkLink copied to clipboard!
You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
- You have installed the OpenShift CLI (oc).
Procedure
Look up the available versions and channels of the Operator you want to install by running the following command:
Command syntax
$ oc describe packagemanifests <operator_name> -n <catalog_namespace>
For example, the following command prints the available channels and versions of the Red Hat Quay Operator from OperatorHub:
Example command
$ oc describe packagemanifests quay-operator -n openshift-marketplace
Example 4.1. Example output
Tip: You can print an Operator’s version and channel information in the YAML format by running the following command:
$ oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:
$ oc get packagemanifest \
    --selector=catalog=<catalogsource_name> \
    --field-selector metadata.name=<operator_name> \
    -n <catalog_namespace> -o yaml
Important: If you do not specify the Operator’s catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met:
- Multiple catalogs are installed in the same namespace.
- The catalogs contain the same Operators or Operators with the same name.
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required role-based access control (RBAC) access for all Operators in the same namespace as the Operator group.
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one:
Create an OperatorGroup object YAML file, for example operatorgroup.yaml:
Example OperatorGroup object
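As in the previous procedure, a sketch of a SingleNamespace Operator group with placeholder names follows:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>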
Warning: Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
- <operatorgroup_name>-admin
- <operatorgroup_name>-edit
- <operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
Create the OperatorGroup object:
$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.7.10:
Subscription with a specific starting Operator version
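A sketch of such a sub.yaml follows; the startingCSV value reflects version 3.7.10 named above, while the namespace and channel values are illustrative assumptions:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: quay # assumed namespace for this example
spec:
  channel: stable-3.7 # assumed channel name; verify with oc describe packagemanifests
  installPlanApproval: Manual # 1
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: quay-operator.v3.7.10 # 2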
1. Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
2. Set a specific version of an Operator CSV.
Create the Subscription object:
$ oc apply -f sub.yaml
- Manually approve the pending install plan to complete the Operator installation.
4.1.5. Installing a specific version of an Operator in the web console Copy linkLink copied to clipboard!
You can install a specific version of an Operator by using the OperatorHub in the web console. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install.
Prerequisites
- You must have administrator privileges.
Procedure
- From the web console, click Operators → OperatorHub.
- Select an Operator you want to install.
From the selected Operator, you can select a Channel and Version from the lists.
Note: The version selection defaults to the latest version for the selected channel. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, a Manual approval strategy is required when you do not install the latest version for the selected channel.
Manual approval applies to all Operators installed in a namespace.
Installing an Operator with manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. Install Operators into separate namespaces to update them independently.
- Click Install.
Verification
When the Operator is installed, the metadata indicates which channel and version are installed.
Note: The channel and version dropdown menus are still available for viewing other version metadata in this catalog context.
4.1.6. Preparing for multiple instances of an Operator for multitenant clusters Copy linkLink copied to clipboard!
As a cluster administrator, you can add multiple instances of an Operator for use in multitenant clusters. This is an alternative solution to either using the standard All namespaces install mode, which can be considered to violate the principle of least privilege, or the Multinamespace mode, which is not widely adopted. For more information, see "Operators in multitenant clusters".
In the following procedure, the tenant is a user or group of users that share common access and privileges for a set of deployed workloads. The tenant Operator is the instance of an Operator that is intended for use by only that tenant.
Prerequisites
All instances of the Operator you want to install must be the same version across a given cluster.
Important: For more information on this and other limitations, see "Operators in multitenant clusters".
Procedure
Before installing the Operator, create a namespace for the tenant Operator that is separate from the tenant’s namespace. For example, if the tenant’s namespace is team1, you might create a team1-operator namespace:
Define a Namespace resource and save the YAML file, for example, team1-operator.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: team1-operator

Create the namespace by running the following command:
$ oc create -f team1-operator.yaml
Create an Operator group for the tenant Operator scoped to the tenant’s namespace, with only that one namespace entry in the spec.targetNamespaces list:
Define an OperatorGroup resource and save the YAML file, for example, team1-operatorgroup.yaml:
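A sketch of team1-operatorgroup.yaml follows, using the team1 and team1-operator names from this procedure:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: team1-operatorgroup
  namespace: team1-operator
spec:
  targetNamespaces:
  - team1 # 1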
1. Define only the tenant’s namespace in the spec.targetNamespaces list.
Create the Operator group by running the following command:
$ oc create -f team1-operatorgroup.yaml
Next steps
Install the Operator in the tenant Operator namespace. This task is more easily performed by using the OperatorHub in the web console instead of the CLI; for a detailed procedure, "Installing from OperatorHub using the web console".
Note: After completing the Operator installation, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator's pod nor its service account is visible or usable by the tenant.
4.1.7. Installing global Operators in custom namespaces
When installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. This can cause issues related to shared install plans and update policies between all Operators in the namespace. For more details on these limitations, see "Multitenancy and Operator colocation".
As a cluster administrator, you can bypass this default behavior manually by creating a custom global namespace and using that namespace to install your individual or scoped set of Operators and their dependencies.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Before installing the Operator, create a namespace for the installation of your desired Operator. This installation namespace will become the custom global namespace:
Define a Namespace resource and save the YAML file, for example, global-operators.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: global-operators

Create the namespace by running the following command:
$ oc create -f global-operators.yaml
Create a custom global Operator group, which is an Operator group that watches all namespaces:
Define an OperatorGroup resource and save the YAML file, for example, global-operatorgroup.yaml. Omit both the spec.selector and spec.targetNamespaces fields to make it a global Operator group, which selects all namespaces:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-operatorgroup
  namespace: global-operators

Note: The status.namespaces of a created global Operator group contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces.
Create the Operator group by running the following command:
$ oc create -f global-operatorgroup.yaml
Next steps
Install the desired Operator in your custom global namespace. Because the web console does not populate the Installed Namespace menu during Operator installation with custom global namespaces, the install task can only be performed with the OpenShift CLI (oc). For a detailed installation procedure, see "Installing from OperatorHub by using the CLI".
Note: When you initiate the Operator installation, if the Operator has dependencies, the dependencies are also automatically installed in the custom global namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.
4.1.8. Pod placement of Operator workloads
By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.
Controlling pod placement of Operator and Operand workloads has the following prerequisites:
- Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a compute machine set or by editing the node directly. You will use this label in a later step as the node selector on your project.
If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a compute machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a
myoperator:NoScheduletaint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain. - Create a project that is configured with a default node selector and, if you added a taint, a matching toleration.
At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:
- For Operator pods
Administrators can create a Subscription object in the project as described in the following section. As a result, the Operator pods are placed on the specified nodes.
- For Operand pods
- Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.
4.1.9. Controlling where an Operator is installed
By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.
The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:
- If an Operator requires a particular platform, such as amd64 or arm64
- If you want Operators that work together scheduled on the same host or on hosts located on the same rack
- If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues
You can control where an Operator pod is installed by adding node affinity, pod affinity, or pod anti-affinity constraints to the Operator’s Subscription object. Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. Pod affinity enables you to ensure that related pods are scheduled to the same node. Pod anti-affinity allows you to prevent a pod from being scheduled on a node.
The following examples show how to use node affinity or pod anti-affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster:
Node affinity example that places the Operator pod on a specific node
This example uses a node affinity that requires the Operator's pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal.
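A minimal sketch of such a Subscription, assuming the Custom Metrics Autoscaler Operator is subscribed in an openshift-keda namespace (the package, source, and namespace names are illustrative) and that the affinity is set under spec.config:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package                # illustrative package name
  source: my-operators            # illustrative catalog source
  sourceNamespace: operator-registries
  config:
    affinity:
      nodeAffinity:               # requires scheduling on the named node
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-163-94.us-west-2.compute.internal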
Node affinity example that places the Operator pod on a node with a specific platform
This example uses a node affinity that requires the Operator's pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels.
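A sketch of the corresponding stanza, which replaces the spec.config.affinity section of the previous Subscription sketch:

  config:
    affinity:
      nodeAffinity:               # requires an arm64 Linux node
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
              - arm64
            - key: kubernetes.io/os
              operator: In
              values:
              - linux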
Pod affinity example that places the Operator pod on one or more specific nodes
This example uses a pod affinity that places the Operator's pod on a node that has pods with the app=test label.
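A sketch of the corresponding spec.config.affinity stanza for this pod affinity (the topology key is an assumption):

  config:
    affinity:
      podAffinity:                # co-locate with pods labeled app=test
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - test
          topologyKey: kubernetes.io/hostname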
Pod anti-affinity example that prevents the Operator pod from being placed on one or more specific nodes
This example uses a pod anti-affinity that prevents the Operator's pod from being scheduled on a node that has pods with the cpu=high label.
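A sketch of the corresponding spec.config.affinity stanza for this pod anti-affinity (the topology key is an assumption):

  config:
    affinity:
      podAntiAffinity:            # avoid nodes running pods labeled cpu=high
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: cpu
              operator: In
              values:
              - high
          topologyKey: kubernetes.io/hostname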
Procedure
To control the placement of an Operator pod, complete the following steps:
- Install the Operator as usual.
- If needed, ensure that your nodes are labeled to properly respond to the affinity.
Edit the Operator Subscription object to add an affinity. Add a nodeAffinity, podAffinity, or podAntiAffinity. See the Additional resources section that follows for information about creating the affinity.
Verification
To ensure that the pod is deployed on the specific node, run the following command:
$ oc get pods -o wide
Example output
NAME                                                  READY   STATUS    RESTARTS   AGE   IP            NODE                           NOMINATED NODE   READINESS GATES
custom-metrics-autoscaler-operator-5dcc45d656-bhshg   1/1     Running   0          50s   10.131.0.20   ip-10-0-185-229.ec2.internal   <none>           <none>
4.2. Updating installed Operators
As a cluster administrator, you can update Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.
For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.
4.2.1. Preparing for an Operator update
The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel.
The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a release frequency (stable, fast).
You cannot change installed Operators to a channel that is older than the current channel.
Red Hat Customer Portal Labs include an application that helps administrators prepare to update their Operators. You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included.
4.2.2. Changing the update channel for an Operator
You can change the update channel for an Operator by using the OpenShift Container Platform web console.
If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the web console, navigate to Operators → Installed Operators.
- Click the name of the Operator you want to change the update channel for.
- Click the Subscription tab.
- Click the name of the update channel under Update channel.
- Click the newer update channel that you want to change to, then click Save.
For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab.
4.2.3. Manually approving a pending Operator update
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
- Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status. For example, it might display 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for update. When satisfied, click Approve.
- Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
4.3. Deleting Operators from a cluster
The following describes how to delete, or uninstall, Operators that were previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.
You must fully uninstall an Operator before attempting to reinstall the same Operator. Failing to fully uninstall the Operator can leave resources, such as a project or namespace, stuck in a Terminating state and cause "error resolving resource" messages when you try to reinstall the Operator.
For more information, see Reinstalling Operators after failed uninstallation.
4.3.1. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
On the right side of the Operator Details page, select Uninstall Operator from the Actions list.
An Uninstall Operator? dialog box is displayed.
Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console, and off-cluster resources that continue to run, might need manual cleanup. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
4.3.2. Deleting Operators from a cluster using the CLI
Cluster administrators can delete installed Operators from a selected namespace by using the CLI.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- The OpenShift CLI (oc) is installed on your workstation.
Procedure
Ensure the latest version of the subscribed Operator (for example, serverless-operator) is identified in the currentCSV field:
$ oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV
Example output
currentCSV: serverless-operator.v1.28.0

Delete the subscription (for example, serverless-operator):
$ oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless
Example output
subscription.operators.coreos.com "serverless-operator" deleted

Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:
$ oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless
Example output
clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted
4.3.3. Refreshing failing subscriptions
In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors:
Example output
ImagePullBackOff for
Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"
Example output
rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.
Prerequisites
- You have a failing subscription that is unable to pull an inaccessible bundle image.
- You have confirmed that the correct bundle image is accessible.
Procedure
Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed:
$ oc get sub,csv -n <namespace>
Example output
NAME                                                        PACKAGE                  SOURCE             CHANNEL
subscription.operators.coreos.com/elasticsearch-operator    elasticsearch-operator   redhat-operators   5.0

NAME                                                                          DISPLAY                            VERSION    REPLACES   PHASE
clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   OpenShift Elasticsearch Operator   5.0.0-65              Succeeded

Delete the subscription:
$ oc delete subscription <subscription_name> -n <namespace>

Delete the cluster service version:
$ oc delete csv <csv_name> -n <namespace>

Get the names of any failing jobs and related config maps in the openshift-marketplace namespace:
$ oc get job,configmap -n openshift-marketplace
Example output
NAME                                                                        COMPLETIONS   DURATION   AGE
job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s

NAME                                                                        DATA   AGE
configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s

Delete the job:
$ oc delete job <job_name> -n openshift-marketplace
This ensures pods that try to pull the inaccessible image are not recreated.
Delete the config map:
$ oc delete configmap <configmap_name> -n openshift-marketplace
- Reinstall the Operator using OperatorHub in the web console.
Verification
Check that the Operator has been reinstalled successfully:
$ oc get sub,csv,installplan -n <namespace>
4.4. Configuring Operator Lifecycle Manager features
The Operator Lifecycle Manager (OLM) controller is configured by an OLMConfig custom resource (CR) named cluster. Cluster administrators can modify this resource to enable or disable certain features.
This document outlines the features currently supported by OLM that are configured by the OLMConfig resource.
4.4.1. Disabling copied CSVs
When an Operator is installed by Operator Lifecycle Manager (OLM), a simplified copy of its cluster service version (CSV) is created by default in every namespace that the Operator is configured to watch. These CSVs are known as copied CSVs and communicate to users which controllers are actively reconciling resource events in a given namespace.
When an Operator is configured to use the AllNamespaces install mode, versus targeting a single or specified set of namespaces, a copied CSV for the Operator is created in every namespace on the cluster. On especially large clusters, with namespaces and installed Operators potentially in the hundreds or thousands, copied CSVs consume an untenable amount of resources, such as OLM’s memory usage, cluster etcd limits, and networking.
To support these larger clusters, cluster administrators can disable copied CSVs for Operators globally installed with the AllNamespaces mode.
If you disable copied CSVs, an Operator installed in AllNamespaces mode has its CSV copied only to the openshift namespace, instead of every namespace on the cluster. In disabled copied CSVs mode, the behavior differs between the web console and CLI:
- In the web console, the default behavior is modified to show copied CSVs from the openshift namespace in every namespace, even though the CSVs are not actually copied to every namespace. This allows regular users to still be able to view the details of these Operators in their namespaces and create related custom resources (CRs).
- In the OpenShift CLI (oc), regular users can view Operators installed directly in their namespaces by using the oc get csvs command, but the copied CSVs from the openshift namespace are not visible in their namespaces. Operators affected by this limitation are still available and continue to reconcile events in the user's namespace.
To view a full list of installed global Operators, similar to the web console behavior, all authenticated users can run the following command:
$ oc get csvs -n openshift
Procedure
Edit the OLMConfig object named cluster and set the spec.features.disableCopiedCSVs field to true. This disables copied CSVs for AllNamespaces install mode Operators.
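A minimal sketch of the resulting object:

apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true     # disables copied CSVs for AllNamespaces install mode Operators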
Verification
When copied CSVs are disabled, OLM captures this information in an event in the Operator’s namespace:
$ oc get events
Example output
LAST SEEN   TYPE      REASON               OBJECT                                MESSAGE
85s         Warning   DisabledCopiedCSVs   clusterserviceversion/my-csv.v1.0.0   CSV copying disabled for operators/my-csv.v1.0.0

When the spec.features.disableCopiedCSVs field is missing or set to false, OLM recreates the copied CSVs for all Operators installed with the AllNamespaces mode and deletes the previously mentioned events.
Additional resources
4.5. Configuring proxy support in Operator Lifecycle Manager
If a global proxy is configured on the OpenShift Container Platform cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate.
- Configuring a custom PKI (custom CA certificate)
- Developing Operators that support proxy settings for Go, Ansible, and Helm
4.5.1. Overriding proxy settings of an Operator
If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Cluster administrators can also override these proxy settings by configuring the subscription of an Operator.
Operators must handle setting environment variables for proxy settings in the pods for any managed Operands.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
Procedure
- Navigate in the web console to the Operators → OperatorHub page.
- Select the Operator and click Install.
On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
For example, you can define a Subscription object with proxy setting overrides, as shown in the sketch after this note.
Note: These environment variables can also be unset by using an empty value to remove any previously set cluster-wide or custom proxy settings. OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator.
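A minimal sketch of such a Subscription, assuming an etcd Operator subscription in the openshift-operators namespace (the channel, source, and proxy values are illustrative):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: openshift-operators
spec:
  config:
    env:                          # proxy overrides applied to the Operator deployment
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test_no_proxy
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace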
- Click Install to make the Operator available to the selected namespaces.
After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI:
$ oc get deployment -n openshift-operators \
    etcd-operator -o yaml \
    | grep -i "PROXY" -A 2
Example output
4.5.2. Injecting a custom CA certificate
When a cluster administrator adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- Custom CA certificate added to the cluster using a config map.
- Desired Operator installed and running on OLM.
Procedure
Create an empty config map in the namespace where the subscription for your Operator exists and include the following label:
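A minimal sketch of such a config map; the trusted-ca name matches the config map that the next step mounts, and the label shown requests injection of the merged bundle:

apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca                # referenced by the Subscription in the next step
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"   # requests injection of the merged CA bundle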
After creating this config map, it is immediately populated with the certificate contents of the merged bundle.
Update the Subscription object to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA, as shown in the sketch after this note.
Note: Deployments of an Operator can fail to validate the authority and display a x509 certificate signed by unknown authority error. This error can occur even after injecting a custom CA when using the subscription of an Operator. In this case, you can set the mountPath as /etc/ssl/certs for trusted-ca by using the subscription of an Operator.
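A sketch of such a spec.config section, assuming an etcd package subscription; the key, path, and mount path shown are common conventions for the injected bundle and are assumptions:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
spec:
  name: etcd                      # illustrative package name
  channel: alpha
  config:
    volumes:
    - name: trusted-ca
      configMap:
        name: trusted-ca
        items:
        - key: ca-bundle.crt               # key populated by the injected bundle
          path: tls-ca-bundle.pem
    volumeMounts:
    - name: trusted-ca
      mountPath: /etc/pki/ca-trust/extracted/pem
      readOnly: true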
4.6. Viewing Operator status
Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the health of their Operators.
4.6.1. Operator subscription condition types
Subscriptions can report the following condition types:
| Condition | Description |
|---|---|
| CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy. |
| InstallPlanMissing | An install plan for a subscription is missing. |
| InstallPlanPending | An install plan for a subscription is pending installation. |
| InstallPlanFailed | An install plan for a subscription has failed. |
| ResolutionFailed | The dependency resolution for a subscription has failed. |
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.6.2. Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List Operator subscriptions:
$ oc get subs -n <operator_namespace>

Use the oc describe command to inspect a Subscription resource:
$ oc describe sub <subscription_name> -n <operator_namespace>

In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:
Example output
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.6.3. Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:
$ oc get catalogsources -n openshift-marketplace
Example output

Use the oc describe command to get more details and status about a catalog source:
$ oc describe catalogsource example-catalog -n openshift-marketplace
Example output

In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.
List the pods in the namespace where your catalog source was created:
$ oc get pods -n openshift-marketplace
Example output

When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source's index image.
Use the oc describe command to inspect a pod for more detailed information:
$ oc describe pod example-catalog-bwt8z -n openshift-marketplace
Example output

In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
4.7. Managing Operator conditions
As a cluster administrator, you can manage Operator conditions by using Operator Lifecycle Manager (OLM).
4.7.1. Overriding Operator conditions
As a cluster administrator, you might want to ignore a supported Operator condition reported by an Operator. When present, Operator conditions in the Spec.Overrides array override the conditions in the Spec.Conditions array, allowing cluster administrators to deal with situations where an Operator is incorrectly reporting a state to Operator Lifecycle Manager (OLM).
By default, the Spec.Overrides array is not present in an OperatorCondition object until it is added by a cluster administrator. The Spec.Conditions array is also not present until it is either added by a user or added as a result of custom Operator logic.
For example, consider a known version of an Operator that always communicates that it is not upgradeable. In this instance, you might want to upgrade the Operator despite the Operator communicating that it is not upgradeable. This could be accomplished by overriding the Operator condition by adding the condition type and status to the Spec.Overrides array in the OperatorCondition object.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- An Operator with an OperatorCondition object, installed using OLM.
Procedure
Edit the OperatorCondition object for the Operator:
$ oc edit operatorcondition <name>

Add a Spec.Overrides array to the object. The following example Operator condition override allows the cluster administrator to change the upgrade readiness to True.
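A sketch of such an override; the apiVersion and the reason and message values are assumptions:

apiVersion: operators.coreos.com/v2
kind: OperatorCondition
metadata:
  name: my-operator
  namespace: operators
spec:
  overrides:
  - type: Upgradeable             # overrides the condition reported by the Operator
    status: "True"
    reason: "upgradeIsSafe"
    message: "This known issue does not block upgrades."
  conditions:
  - type: Upgradeable             # condition as reported by the Operator
    status: "False"
    reason: "migration"
    message: "The Operator is performing a migration."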
4.7.2. Updating your Operator to use Operator conditions
Operator Lifecycle Manager (OLM) automatically creates an OperatorCondition resource for each ClusterServiceVersion resource that it reconciles. All service accounts in the CSV are granted the RBAC to interact with the OperatorCondition owned by the Operator.
An Operator author can develop their Operator to use the operator-lib library such that, after the Operator has been deployed by OLM, it can set its own conditions. For more resources about setting Operator conditions as an Operator author, see the Enabling Operator conditions page.
4.7.2.1. Setting defaults
In an effort to remain backwards compatible, OLM treats the absence of an OperatorCondition resource as opting out of the condition. Therefore, an Operator that opts in to using Operator conditions should set default conditions before the ready probe for the pod is set to true. This provides the Operator with a grace period to update the condition to the correct state.
4.8. Allowing non-cluster administrators to install Operators
Cluster administrators can use Operator groups to allow regular users to install Operators.
4.8.1. Understanding Operator installation policy
Operators can require wide privileges to run, and the required privileges can change between versions. Operator Lifecycle Manager (OLM) runs with cluster-admin privileges. By default, Operator authors can specify any set of permissions in the cluster service version (CSV), and OLM consequently grants it to the Operator.
To ensure that an Operator cannot achieve cluster-scoped privileges and that users cannot escalate privileges using OLM, cluster administrators can manually audit Operators before they are added to the cluster. Cluster administrators are also provided tools for determining and constraining which actions are allowed during an Operator installation or upgrade by using service accounts.
Cluster administrators can associate an Operator group with a service account that has a set of privileges granted to it. The service account sets policy on Operators to ensure they only run within predetermined boundaries by using role-based access control (RBAC) rules. As a result, the Operator is unable to do anything that is not explicitly permitted by those rules.
By employing Operator groups, users with enough privileges can install Operators with a limited scope. As a result, more of the Operator Framework tools can safely be made available to more users, providing a richer experience for building applications with Operators.
Role-based access control (RBAC) for Subscription objects is automatically granted to every user with the edit or admin role in a namespace. However, RBAC does not exist on OperatorGroup objects; this absence is what prevents regular users from installing Operators. Preinstalling Operator groups is effectively what gives installation privileges.
Keep the following points in mind when associating an Operator group with a service account:
- The APIService and CustomResourceDefinition resources are always created by OLM using the cluster-admin role. A service account associated with an Operator group should never be granted privileges to write these resources.
- Any Operator tied to this Operator group is now confined to the permissions granted to the specified service account. If the Operator asks for permissions that are outside the scope of the service account, the install fails with appropriate errors so the cluster administrator can troubleshoot and resolve the issue.
4.8.1.1. Installation scenarios
When determining whether an Operator can be installed or upgraded on a cluster, Operator Lifecycle Manager (OLM) considers the following scenarios:
- A cluster administrator creates a new Operator group and specifies a service account. All Operator(s) associated with this Operator group are installed and run against the privileges granted to the service account.
- A cluster administrator creates a new Operator group and does not specify any service account. OpenShift Container Platform maintains backward compatibility, so the default behavior remains and Operator installs and upgrades are permitted.
- For existing Operator groups that do not specify a service account, the default behavior remains and Operator installs and upgrades are permitted.
- A cluster administrator updates an existing Operator group and specifies a service account. OLM allows the existing Operator to continue to run with its current privileges. When such an existing Operator goes through an upgrade, it is reinstalled and run against the privileges granted to the service account, like any new Operator.
- A service account specified by an Operator group changes by adding or removing permissions, or the existing service account is swapped with a new one. When existing Operators go through an upgrade, they are reinstalled and run against the privileges granted to the updated service account, like any new Operator.
- A cluster administrator removes the service account from an Operator group. The default behavior remains and Operator installs and upgrades are permitted.
4.8.1.2. Installation workflow
When an Operator group is tied to a service account and an Operator is installed or upgraded, Operator Lifecycle Manager (OLM) uses the following workflow:
- The given Subscription object is picked up by OLM.
- OLM fetches the Operator group tied to this subscription.
- OLM determines that the Operator group has a service account specified.
- OLM creates a client scoped to the service account and uses the scoped client to install the Operator. This ensures that any permission requested by the Operator is always confined to that of the service account in the Operator group.
- OLM creates a new service account with the set of permissions specified in the CSV and assigns it to the Operator. The Operator runs as the assigned service account.
4.8.2. Scoping Operator installations
To provide scoping rules to Operator installations and upgrades on Operator Lifecycle Manager (OLM), associate a service account with an Operator group.
Using this example, a cluster administrator can confine a set of Operators to a designated namespace.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Create a new namespace:
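A minimal sketch, assuming a designated namespace named scoped (the name is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: scoped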
Allocate permissions that you want the Operator(s) to be confined to. This involves creating a new service account, relevant role(s), and role binding(s).
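A minimal sketch of the service account, reusing the assumed scoped namespace and name:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: scoped
  namespace: scoped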
The following example grants the service account permissions to do anything in the designated namespace for simplicity. In a production environment, you should create a more fine-grained set of permissions:
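A sketch of such a broad role and its binding, again assuming the scoped namespace and service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: scoped
  namespace: scoped
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]                     # broad permissions for simplicity; narrow these in production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: scoped-bindings
  namespace: scoped
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: scoped
subjects:
- kind: ServiceAccount
  name: scoped
  namespace: scoped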
Create an OperatorGroup object in the designated namespace. This Operator group targets the designated namespace to ensure that its tenancy is confined to it.
In addition, Operator groups allow a user to specify a service account. Specify the service account created in the previous step:
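A sketch of such an Operator group, with the service account specified through spec.serviceAccountName (names reuse the assumed scoped example):

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: scoped
  namespace: scoped
spec:
  serviceAccountName: scoped      # ties installs in this Operator group to the service account
  targetNamespaces:
  - scoped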
Any Operator installed in the designated namespace is tied to this Operator group and therefore to the service account specified.
Warning: Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:
- <operatorgroup_name>-admin
- <operatorgroup_name>-edit
- <operatorgroup_name>-view
When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
- Create a Subscription object in the designated namespace to install an Operator, as shown in the sketch after this step.
Any Operator tied to this Operator group is confined to the permissions granted to the specified service account. If the Operator requests permissions that are outside the scope of the service account, the installation fails with relevant errors.
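A sketch of such a Subscription, assuming the etcd package from a catalog source of your choice (the channel and source values are placeholders):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: scoped
spec:
  channel: singlenamespace-alpha
  name: etcd
  source: <catalog_source_name>
  sourceNamespace: <catalog_source_namespace>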
4.8.2.1. Fine-grained permissions
Operator Lifecycle Manager (OLM) uses the service account specified in an Operator group to create or update the following resources related to the Operator being installed:
- ClusterServiceVersion
- Subscription
- Secret
- ServiceAccount
- Service
- ClusterRole and ClusterRoleBinding
- Role and RoleBinding
To confine Operators to a designated namespace, cluster administrators can start by granting the following permissions to the service account:
The following role is a generic example and additional rules might be required based on the specific Operator.
In addition, if any Operator specifies a pull secret, the service account must also be granted permission to get that secret from the OLM namespace.
4.8.3. Operator catalog access control
When an Operator catalog is created in the global catalog namespace openshift-marketplace, the catalog’s Operators are made available cluster-wide to all namespaces. A catalog created in other namespaces only makes its Operators available in that same namespace of the catalog.
On clusters where non-cluster administrator users have been delegated Operator installation privileges, cluster administrators might want to further control or restrict the set of Operators those users are allowed to install. This can be achieved with the following actions:
- Disable all of the default global catalogs.
- Enable custom, curated catalogs in the same namespace where the relevant Operator groups have been preinstalled.
4.8.4. Troubleshooting permission failures
If an Operator installation fails due to lack of permissions, identify the errors using the following procedure.
Procedure
Review the Subscription object. Its status has an object reference installPlanRef that points to the InstallPlan object that attempted to create the necessary [Cluster]Role[Binding] object(s) for the Operator.
Check the status of the InstallPlan object for any errors. The error message tells you:
- The type of resource it failed to create, including the API group of the resource. In this case, it was clusterroles in the rbac.authorization.k8s.io group.
- The name of the resource.
- The type of error: is forbidden tells you that the user does not have enough permission to do the operation.
- The name of the user who attempted to create or update the resource. In this case, it refers to the service account specified in the Operator group.
- The scope of the operation: cluster scope or not.
The user can add the missing permission to the service account and then iterate.
Note: Operator Lifecycle Manager (OLM) does not currently provide the complete list of errors on the first try.
4.9. Managing custom catalogs
Cluster administrators and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform.
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API.
If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.
4.9.1. Prerequisites
- You have installed the opm CLI.
4.9.2. File-based catalogs
File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). They are a plain text-based (JSON or YAML), declarative config evolution of the earlier SQLite database format, and they are fully backwards compatible.
As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format.
The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Operator Framework packaging format and Mirroring images for a disconnected installation using the oc-mirror plugin.
4.9.2.1. Creating a file-based catalog image
You can use the opm CLI to create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format.
Prerequisites
- You have installed the opm CLI.
- You have podman version 1.9.3+.
- A bundle image is built and pushed to a registry that supports Docker v2-2.
Procedure
Initialize the catalog:
Create a directory for the catalog by running the following command:
$ mkdir <catalog_dir>

Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command:
$ opm generate dockerfile <catalog_dir> \
    -i registry.redhat.io/openshift4/ose-operator-registry:v4.14

Specify the official Red Hat base image by using the -i flag; otherwise, the Dockerfile uses the default upstream image.
The Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
Example directory structure
.
├── <catalog_dir>
└── <catalog_dir>.Dockerfile

Populate the catalog with the package definition for your Operator by running the opm init command. This command generates an olm.package declarative config blob in the specified catalog configuration file. A sketch of the command follows.
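A typical invocation looks like the following sketch; the flag values are placeholders, and the description and icon flags are optional:

$ opm init <operator_name> \
    --default-channel=preview \
    --description=./README.md \
    --icon=./<operator_icon>.svg \
    --output yaml > <catalog_dir>/index.yaml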
Add a bundle to the catalog by running the opm render command:
$ opm render <registry>/<namespace>/<bundle_image_name>:<tag> \
    --output=yaml \
    >> <catalog_dir>/index.yaml

Note: Channels must contain at least one bundle.
Add a channel entry for the bundle. For example, modify the following example channel entry to your specifications, and add it to your <catalog_dir>/index.yaml file. Ensure that you include the period (.) after <operator_name> but before the v in the version. Otherwise, the entry fails to pass the opm validate command.
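A sketch of such a channel entry, assuming a channel named preview and a single bundle version (names are placeholders):

---
schema: olm.channel
package: <operator_name>
name: preview
entries:
- name: <operator_name>.v0.1.0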
Validate the file-based catalog:
Run the opm validate command against the catalog directory:
$ opm validate <catalog_dir>

Check that the error code is 0:
$ echo $?
Example output
0
Build the catalog image by running the podman build command:
$ podman build . \
    -f <catalog_dir>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>

Push the catalog image to a registry:
If required, authenticate with your target registry by running the podman login command:
$ podman login <registry>

Push the catalog image by running the podman push command:
$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
4.9.2.2. Updating or filtering a file-based catalog image
You can use the opm CLI to update or filter (also known as prune) a catalog image that uses the file-based catalog format. By extracting and modifying the contents of an existing catalog image, you can update, add, or remove one or more Operator package entries from the catalog. You can then rebuild the image as an updated version of the catalog.
Alternatively, if you already have a catalog image on a mirror registry, you can use the oc-mirror CLI plugin to automatically prune any removed images from an updated source version of that catalog image while mirroring it to the target registry.
For more information about the oc-mirror plugin and this use case, see the "Keeping your mirror registry content updated" section, and specifically the "Pruning images" subsection, of "Mirroring images for a disconnected installation using the oc-mirror plugin".
Prerequisites
You have the following on your workstation:
- The opm CLI.
- podman version 1.9.3+.
- A file-based catalog image.
- A catalog directory structure recently initialized on your workstation related to this catalog.
If you do not have an initialized catalog directory, create the directory and generate the Dockerfile. For more information, see the "Initialize the catalog" step from the "Creating a file-based catalog image" procedure.
Procedure
Extract the contents of the catalog image in YAML format to an index.yaml file in your catalog directory:
$ opm render <registry>/<namespace>/<catalog_image_name>:<tag> \
    -o yaml > <catalog_dir>/index.yaml

Note: Alternatively, you can use the -o json flag to output in JSON format.
Modify the contents of the resulting index.yaml file to your specifications by updating, adding, or removing one or more Operator package entries.
Important: After a bundle has been published in a catalog, assume that one of your users has installed it. Ensure that all previously published bundles in a catalog have an update path to the current or newer channel head to avoid stranding users that have that version installed.
For example, if you wanted to remove an Operator package, the package's olm.package, olm.channel, and olm.bundle blobs must be deleted to remove the package from the catalog.
Example 4.2. Example removed entries
- Save your changes to the index.yaml file.
Validate the catalog:
$ opm validate <catalog_dir>

Rebuild the catalog:
$ podman build . \
    -f <catalog_dir>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>

Push the updated catalog image to a registry:
$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
Verification
- In the web console, navigate to the OperatorHub configuration resource in the Administration → Cluster Settings → Configuration page.
Add the catalog source or update the existing catalog source to use the pull spec for your updated catalog image.
For more information, see "Adding a catalog source to a cluster" in the "Additional resources" of this section.
- After the catalog source is in a READY state, navigate to the Operators → OperatorHub page and check that the changes you made are reflected in the list of Operators.
4.9.3. SQLite-based catalogs
The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
4.9.3.1. Creating a SQLite-based index image
You can create an index image based on the SQLite database format by using the opm CLI.
Prerequisites
- You have installed the opm CLI.
- You have podman version 1.9.3+.
- A bundle image is built and pushed to a registry that supports Docker v2-2.
Procedure
Start a new index:
$ opm index add \
    --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \
    --tag <registry>/<namespace>/<index_image_name>:<tag> \
    [--binary-image <registry_base_image>]
Push the index image to a registry.
If required, authenticate with your target registry:
$ podman login <registry>
Push the index image:
$ podman push <registry>/<namespace>/<index_image_name>:<tag>
4.9.3.2. Updating a SQLite-based index image
After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up-to-date by adding bundle images to the index image.
You can update an existing index image using the opm index add command.
Prerequisites
- You have installed the opm CLI.
- You have podman version 1.9.3+.
- An index image is built and pushed to a registry.
- You have an existing catalog source referencing the index image.
Procedure
Update the existing index by adding bundle images:
$ opm index add \
    --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \
    --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \
    --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \
    --pull-tool podman
- The --bundles flag specifies a comma-separated list of additional bundle images to add to the index.
- The --from-index flag specifies the previously pushed index.
- The --tag flag specifies the image tag to apply to the updated index image.
- The --pull-tool flag specifies the tool used to pull container images.
where:
<registry> - Specifies the hostname of the registry, such as quay.io or mirror.example.com.
<namespace> - Specifies the namespace of the registry, such as ocs-dev or abc.
<new_bundle_image> - Specifies the new bundle image to add to the registry, such as ocs-operator.
<digest> - Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41.
<existing_index_image> - Specifies the previously pushed image, such as abc-redhat-operator-index.
<existing_tag> - Specifies a previously pushed image tag, such as 4.14.
<updated_tag> - Specifies the image tag to apply to the updated index image, such as 4.14.1.
Example command
$ opm index add \
    --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \
    --from-index mirror.example.com/abc/abc-redhat-operator-index:4.14 \
    --tag mirror.example.com/abc/abc-redhat-operator-index:4.14.1 \
    --pull-tool podman
Push the updated index image:
$ podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>
After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added:
$ oc get packagemanifests -n openshift-marketplace
4.9.3.3. Filtering a SQLite-based index image
An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune, an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want.
Prerequisites
- You have podman version 1.9.3+.
- You have grpcurl (third-party command-line tool).
- You have installed the opm CLI.
- You have access to a registry that supports Docker v2-2.
Procedure
Authenticate with your target registry:
$ podman login <target_registry>
Determine the list of packages you want to include in your pruned index.
Run the source index image that you want to prune in a container. For example:
$ podman run -p50051:50051 \
    -it registry.redhat.io/redhat/redhat-operator-index:v4.14
Example output
Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.14...
Getting image source signatures
Copying blob ae8a0c23f5b1 done
...
INFO[0000] serving registry database=/database/index.db port=50051
In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:
$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:
Example snippets of packages list
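The file contains a stream of JSON objects, one per package. A few illustrative entries, matching the packages kept by the prune command later in this procedure, might look like the following:
{
  "name": "advanced-cluster-management"
}
...
{
  "name": "jaeger-product"
}
...
{
  "name": "quay-operator"
}
...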
In the terminal session where you executed the podman run command, press Ctrl+C to stop the container process.
Run the following command to prune the source index of all but the specified packages:
$ opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4.14 \
    -p advanced-cluster-management,jaeger-product,quay-operator \
    [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \
    -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.14
Run the following command to push the new index image to your target registry:
$ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.14
where <namespace> is any existing namespace on the registry.
4.9.4. Catalog sources and pod security admission
Pod security admission was introduced in OpenShift Container Platform 4.11 to ensure pod security standards. Catalog sources built using the SQLite-based catalog format and a version of the opm CLI tool released before OpenShift Container Platform 4.11 cannot run under restricted pod security enforcement.
In OpenShift Container Platform 4.14, namespaces do not have restricted pod security enforcement by default and the default catalog source security mode is set to legacy.
Default restricted enforcement for all namespaces is planned for inclusion in a future OpenShift Container Platform release. When restricted enforcement occurs, the security context of the pod specification for catalog source pods must match the restricted pod security standard. If your catalog source image requires a different pod security standard, the pod security admissions label for the namespace must be explicitly set.
If you do not want to run your SQLite-based catalog source pods as restricted, you do not need to update your catalog source in OpenShift Container Platform 4.14.
However, it is recommended that you take action now to ensure your catalog sources run under restricted pod security enforcement. If you do not take action to ensure your catalog sources run under restricted pod security enforcement, your catalog sources might not run in future OpenShift Container Platform releases.
As a catalog author, you can enable compatibility with restricted pod security enforcement by completing either of the following actions:
- Migrate your catalog to the file-based catalog format.
- Update your catalog image with a version of the opm CLI tool released with OpenShift Container Platform 4.11 or later.
The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.
If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can configure your catalog to run with elevated permissions.
4.9.4.1. Migrating SQLite database catalogs to the file-based catalog format
You can update your deprecated SQLite database format catalogs to the file-based catalog format.
Prerequisites
- You have a SQLite database catalog source.
- You have access to the cluster as a user with the cluster-admin role.
- You have the latest version of the opm CLI tool released with OpenShift Container Platform 4.14 on your workstation.
Procedure
Migrate your SQLite database catalog to a file-based catalog by running the following command:
$ opm migrate <registry_image> <fbc_directory>
Generate a Dockerfile for your file-based catalog by running the following command:
$ opm generate dockerfile <fbc_directory> \
    --binary-image registry.redhat.io/openshift4/ose-operator-registry:v4.14
Next steps
- The generated Dockerfile can be built, tagged, and pushed to your registry.
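For example, assuming the Dockerfile generated in the previous step is named <fbc_directory>.Dockerfile, the build and push might look like the following; the registry and image names are placeholders:
$ podman build . \
    -f <fbc_directory>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>
$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>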
4.9.4.2. Rebuilding SQLite database catalog images
You can rebuild your SQLite database catalog image with the latest version of the opm CLI tool that is released with your version of OpenShift Container Platform.
Prerequisites
- You have a SQLite database catalog source.
- You have access to the cluster as a user with the cluster-admin role.
- You have the latest version of the opm CLI tool released with OpenShift Container Platform 4.14 on your workstation.
Procedure
Run the following command to rebuild your catalog with a more recent version of the opm CLI tool:
$ opm index add --binary-image \
    registry.redhat.io/openshift4/ose-operator-registry:v4.14 \
    --from-index <your_registry_image> \
    --bundles "" \
    -t <your_registry_image>
4.9.4.3. Configuring catalogs to run with elevated permissions
If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can perform the following actions to ensure your catalog source runs when the default pod security enforcement changes to restricted:
- Manually set the catalog security mode to legacy in your catalog source definition. This action ensures your catalog runs with legacy permissions even if the default catalog security mode changes to restricted.
- Label the catalog source namespace for baseline or privileged pod security enforcement.
The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.
Prerequisites
- You have a SQLite database catalog source.
- You have access to the cluster as a user with the cluster-admin role.
- You have a target namespace that supports running pods with the elevated pod security admission standard of baseline or privileged.
Procedure
Edit the CatalogSource definition by setting the spec.grpcPodConfig.securityContextConfig label to legacy, as shown in the following example:
Example CatalogSource definition
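A minimal sketch, with the catalog name, image, and publisher as placeholders:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: legacy
  image: <registry>/<namespace>/<catalog_image_name>:<tag>
  displayName: My Operator Catalog
  publisher: <publisher_name>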
Tip: In OpenShift Container Platform 4.14, the spec.grpcPodConfig.securityContextConfig field is set to legacy by default. In a future release of OpenShift Container Platform, it is planned that the default setting will change to restricted. If your catalog cannot run under restricted enforcement, it is recommended that you manually set this field to legacy.
Edit your <namespace>.yaml file to add elevated pod security admission standards to your catalog source namespace, as shown in the following example:
Example <namespace>.yaml file
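A sketch, with the namespace name as a placeholder; the two labels are described after the example:
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace_name>
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "false"
    pod-security.kubernetes.io/enforce: baseline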
- Turn off pod security label synchronization by adding the security.openshift.io/scc.podSecurityLabelSync=false label to the namespace.
- Apply the pod security admission pod-security.kubernetes.io/enforce label. Set the label to baseline or privileged. Use the baseline pod security profile unless other workloads in the namespace require a privileged profile.
4.9.5. Adding a catalog source to a cluster
Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
Prerequisites
- You built and pushed an index image to a registry.
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a CatalogSource object that references your index image.
Modify the following to your specifications and save it as a catalogSource.yaml file:
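A sketch, with placeholder or illustrative values; the fields are described after the example:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
  annotations:
    olm.catalogImageTemplate: "<registry>/<namespace>/<catalog_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode>
  image: <registry>/<namespace>/<catalog_image_name>:<tag>
  displayName: My Operator Catalog
  publisher: <publisher_name>
  updateStrategy:
    registryPoll:
      interval: 30m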
- If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
- Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag.
- Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
- Specify your index image. If you specify a tag after the image name, for example :v4.14, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
- Specify your name or an organization name publishing the catalog.
- Catalog sources can automatically check for new versions to keep up to date.
Use the file to create the CatalogSource object:
$ oc apply -f catalogSource.yaml
Verify the following resources are created successfully.
Check the pods:
$ oc get pods -n openshift-marketplace
Example output
NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h
Check the catalog source:
$ oc get catalogsource -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s
Check the package manifest:
$ oc get packagemanifest -n openshift-marketplace
Example output
NAME             CATALOG               AGE
jaeger-product   My Operator Catalog   93s
You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
4.9.6. Accessing images for Operators from private registries
If certain images relevant to Operators managed by Operator Lifecycle Manager (OLM) are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default. To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more pull secrets in a catalog source, OLM can handle placing the secrets in the Operator and catalog namespace to allow installation.
Other images required by an Operator or its Operands might require access to private registries as well. OLM does not handle placing the secrets in target tenant namespaces for this scenario, but authentication credentials can be added to the global cluster pull secret or individual namespace service accounts to enable the required access.
The following types of images should be considered when determining whether Operators managed by OLM have appropriate pull access:
- Index images
- A CatalogSource object can reference an index image, which uses the Operator bundle format and is a catalog packaged as a container image hosted in an image registry. If an index image is hosted in a private registry, a secret can be used to enable pull access.
- Bundle images
- Operator bundle images are metadata and manifests packaged as container images that represent a unique version of an Operator. If any bundle images referenced in a catalog source are hosted in one or more private registries, a secret can be used to enable pull access.
- Operator and Operand images
If an Operator installed from a catalog source uses a private image, either for the Operator image itself or one of the Operand images it watches, the Operator will fail to install because the deployment will not have access to the required registry authentication. Referencing secrets in a catalog source does not enable OLM to place the secrets in target tenant namespaces in which Operands are installed.
Instead, the authentication details can be added to the global cluster pull secret in the openshift-config namespace, which provides access to all namespaces on the cluster. Alternatively, if providing access to the entire cluster is not permissible, the pull secret can be added to the default service accounts of the target tenant namespaces.
Prerequisites
You have at least one of the following hosted in a private registry:
- An index image or catalog image.
- An Operator bundle image.
- An Operator or Operand image.
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a secret for each required private registry.
Log in to the private registry to create or update your registry credentials file:
$ podman login <registry>:<port>
Note: The file path of your registry credentials can be different depending on the container tool used to log in to the registry. For the podman CLI, the default location is ${XDG_RUNTIME_DIR}/containers/auth.json. For the docker CLI, the default location is /root/.docker/config.json.
It is recommended to include credentials for only one registry per secret, and manage credentials for multiple registries in separate secrets. Multiple secrets can be included in a CatalogSource object in later steps, and OpenShift Container Platform will merge the secrets into a single virtual credentials file for use during an image pull.
A registry credentials file can, by default, store details for more than one registry or for multiple repositories in one registry. Verify the current contents of your file. For example:
File storing credentials for multiple registries
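For illustration, a credentials file covering two registries might look like the following; the registry hostnames and tokens are placeholders:
{
  "auths": {
    "registry.redhat.io": {
      "auth": "<token1>"
    },
    "quay.io": {
      "auth": "<token2>"
    }
  }
}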
Because this file is used to create secrets in later steps, ensure that you are storing details for only one registry per file. This can be accomplished by using either of the following methods:
- Use the podman logout <registry> command to remove credentials for additional registries until only the one registry you want remains.
- Edit your registry credentials file and separate the registry details to be stored in multiple files. For example:
File storing credentials for one registry
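For illustration, a single-registry file might look like the following; the hostname and token are placeholders:
{
  "auths": {
    "registry.redhat.io": {
      "auth": "<token1>"
    }
  }
}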
File storing credentials for another registry
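The second file then contains only the remaining registry; again, the hostname and token are placeholders:
{
  "auths": {
    "quay.io": {
      "auth": "<token2>"
    }
  }
}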
Create a secret in the openshift-marketplace namespace that contains the authentication credentials for a private registry:
$ oc create secret generic <secret_name> \
    -n openshift-marketplace \
    --from-file=.dockerconfigjson=<path/to/registry/credentials> \
    --type=kubernetes.io/dockerconfigjson
Repeat this step to create additional secrets for any other required private registries, updating the --from-file flag to specify another registry credentials file path.
Create or update an existing CatalogSource object to reference one or more secrets:
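A sketch, with placeholder secret names, image, and publisher; the relevant fields are described after the example:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  secrets:
  - "<secret_name_1>"
  - "<secret_name_2>"
  grpcPodConfig:
    securityContextConfig: <security_mode>
  image: <registry>/<namespace>/<catalog_image_name>:<tag>
  displayName: My Operator Catalog
  publisher: <publisher_name>
  updateStrategy:
    registryPoll:
      interval: 30m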
- Add a spec.secrets section and specify any required secrets.
- Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
If any Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can provide access either to all namespaces in the cluster or to individual target tenant namespaces.
To provide access to all namespaces in the cluster, add authentication details to the global cluster pull secret in the
openshift-config namespace.
Warning: Cluster resources must adjust to the new global pull secret, which can temporarily limit the usability of the cluster.
Extract the .dockerconfigjson file from the global pull secret:
$ oc extract secret/pull-secret -n openshift-config --confirm
Update the .dockerconfigjson file with your authentication credentials for the required private registry or registries and save it as a new file:
$ cat .dockerconfigjson | \
    jq --compact-output '.auths["<registry>:<port>/<namespace>/"] |= . + {"auth":"<token>"}' \
    > new_dockerconfigjson
Replace <registry>:<port>/<namespace> with the private registry details and <token> with your authentication credentials.
Update the global pull secret with the new file:
$ oc set data secret/pull-secret -n openshift-config \
    --from-file=.dockerconfigjson=new_dockerconfigjson
To update an individual namespace, add a pull secret to the service account for the Operator that requires access in the target tenant namespace.
Recreate the secret that you created for the openshift-marketplace namespace in the tenant namespace:
$ oc create secret generic <secret_name> \
    -n <tenant_namespace> \
    --from-file=.dockerconfigjson=<path/to/registry/credentials> \
    --type=kubernetes.io/dockerconfigjson
Verify the name of the service account for the Operator by searching the tenant namespace. If the Operator was installed in an individual namespace, search that namespace. If the Operator was installed for all namespaces, search the openshift-operators namespace.
$ oc get sa -n <tenant_namespace>
Example output
NAME            SECRETS   AGE
builder         2         6m1s
default         2         6m1s
deployer        2         6m1s
etcd-operator   2         5m18s
In this output, etcd-operator is the service account for an installed etcd Operator.
Link the secret to the service account for the Operator:
$ oc secrets link <operator_sa> \
    -n <tenant_namespace> \
    <secret_name> \
    --for=pull
4.9.7. Disabling the default OperatorHub catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. As a cluster administrator, you can disable the set of default catalogs.
Procedure
Disable the sources for the default catalogs by adding
disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
4.9.8. Removing custom catalogs
As a cluster administrator, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
- In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
- Click the Configuration tab, and then click OperatorHub.
- Click the Sources tab.
- Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource.
4.10. Using Operator Lifecycle Manager on restricted networks
For OpenShift Container Platform clusters that are installed on restricted networks, also known as disconnected clusters, Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity.
However, as a cluster administrator you can still enable your cluster to use OLM in a restricted network if you have a workstation that has full internet access. The workstation, which requires full internet access to pull the remote OperatorHub content, is used to prepare local mirrors of the remote sources, and push the content to a mirror registry.
The mirror registry can be located on a bastion host, which requires connectivity to both your workstation and the disconnected cluster, or a completely disconnected, or airgapped, host, which requires removable media to physically move the mirrored content to the disconnected environment.
This guide describes the following process that is required to enable OLM in restricted networks:
- Disable the default remote OperatorHub sources for OLM.
- Use a workstation with full internet access to create and push local mirrors of the OperatorHub content to a mirror registry.
- Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources.
After enabling OLM in a restricted network, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released.
While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a restricted network still depends on the Operator itself meeting the following criteria:
- List any related images, or other container images that the Operator might require to perform its functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object.
- Reference all specified images by a digest (SHA) and not by a tag.
You can search software on the Red Hat Ecosystem Catalog for a list of Red Hat Operators that support running in disconnected mode by filtering with the following selections:
| Type | Containerized application |
| Deployment method | Operator |
| Infrastructure features | Disconnected |
4.10.1. Prerequisites
- Log in to your OpenShift Container Platform cluster as a user with cluster-admin privileges.
If you are using OLM in a restricted network on IBM Z®, you must have at least 12 GB allocated to the directory where you place your registry.
4.10.2. Disabling the default OperatorHub catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. You can then configure OperatorHub to use local catalog sources.
Procedure
Disable the sources for the default catalogs by adding
disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
4.10.3. Mirroring an Operator catalog
For instructions about mirroring Operator catalogs for use with disconnected clusters, see Installing → Mirroring images for a disconnected installation.
As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 were released in the deprecated SQLite database format.
The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Operator Framework packaging format, Managing custom catalogs, and Mirroring images for a disconnected installation using the oc-mirror plugin.
4.10.4. Adding a catalog source to a cluster
Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
Prerequisites
- You built and pushed an index image to a registry.
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point.
Modify the following to your specifications and save it as a catalogSource.yaml file:
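A sketch, with placeholder or illustrative values; the fields are described after the example:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode>
  image: <registry>/<namespace>/<catalog_image_name>:<tag>
  displayName: My Operator Catalog
  publisher: <publisher_name>
  updateStrategy:
    registryPoll:
      interval: 30m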
- If you mirrored content to local files before uploading to a registry, remove any slash (/) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object.
- If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
- Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
- Specify your index image. If you specify a tag after the image name, for example :v4.14, the catalog source pod uses an image pull policy of Always, meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id>, the image pull policy is IfNotPresent, meaning the pod pulls the image only if it does not already exist on the node.
- Specify your name or an organization name publishing the catalog.
- Catalog sources can automatically check for new versions to keep up to date.
Use the file to create the CatalogSource object:
$ oc apply -f catalogSource.yaml
Verify the following resources are created successfully.
Check the pods:
$ oc get pods -n openshift-marketplace
Example output
NAME                                   READY   STATUS    RESTARTS   AGE
my-operator-catalog-6njx6              1/1     Running   0          28s
marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h
Check the catalog source:
$ oc get catalogsource -n openshift-marketplace
Example output
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
my-operator-catalog   My Operator Catalog   grpc               5s
Check the package manifest:
$ oc get packagemanifest -n openshift-marketplace
Example output
NAME             CATALOG               AGE
jaeger-product   My Operator Catalog   93s
You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
4.11. Catalog source pod scheduling
When an Operator Lifecycle Manager (OLM) catalog source of source type grpc defines a spec.image, the Catalog Operator creates a pod that serves the defined image content. By default, this pod defines the following in its specification:
- Only the kubernetes.io/os=linux node selector.
- The default priority class name: system-cluster-critical.
- No tolerations.
As an administrator, you can override these values by modifying fields in the CatalogSource object’s optional spec.grpcPodConfig section.
The Marketplace Operator, openshift-marketplace, manages the default OperatorHub custom resource (CR). This CR manages CatalogSource objects. If you modify fields in the spec.grpcPodConfig section of a default CatalogSource object, the Marketplace Operator automatically reverts these changes.
To apply persistent changes to a CatalogSource object, you must first disable the default CatalogSource object.
4.11.1. Disabling default CatalogSource objects at a local level
You can apply persistent changes to a CatalogSource object, such as changes to its catalog source pod configuration, at a local level by disabling a default CatalogSource object. Consider disabling a default CatalogSource object when its default configuration does not meet your organization's needs. By default, if you modify fields in the spec.grpcPodConfig section of a default CatalogSource object, the Marketplace Operator automatically reverts these changes.
The Marketplace Operator, openshift-marketplace, manages the default custom resources (CRs) of the OperatorHub, and the OperatorHub CR manages CatalogSource objects.
To apply persistent changes to a CatalogSource object, you must first disable the default CatalogSource object.
Procedure
To disable all the default
CatalogSource objects at a local level, enter the following command:
$ oc patch operatorhub cluster -p '{"spec": {"disableAllDefaultSources": true}}' --type=merge
Note: You can also configure the default OperatorHub CR to either disable all CatalogSource objects or disable a specific object.
4.11.2. Overriding the node selector for catalog source pods
Prerequisites
- A CatalogSource object of source type grpc with spec.image is defined.
Procedure
Edit the
CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:
grpcPodConfig:
  nodeSelector:
    custom_label: <label>
where <label> is the label for the node selector that you want catalog source pods to use for scheduling.
4.11.3. Overriding the priority class name for catalog source pods
Prerequisites
- A CatalogSource object of source type grpc with spec.image is defined.
Procedure
Edit the
CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:
grpcPodConfig:
  priorityClassName: <priority_class>
where <priority_class> is one of the following:
- One of the default priority classes provided by Kubernetes: system-cluster-critical or system-node-critical
- An empty set ("") to assign the default priority
- A pre-existing and custom defined priority class
Previously, the only pod scheduling parameter that could be overridden was priorityClassName. This was done by adding the operatorframework.io/priorityclass annotation to the CatalogSource object. For example:
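An illustrative sketch of the annotation-based approach, with a placeholder catalog name and image:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: <catalog_source_name>
  namespace: openshift-marketplace
  annotations:
    operatorframework.io/priorityclass: system-cluster-critical
spec:
  sourceType: grpc
  image: <registry>/<namespace>/<catalog_image_name>:<tag>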
If a CatalogSource object defines both the annotation and spec.grpcPodConfig.priorityClassName, the annotation takes precedence over the configuration parameter.
4.11.4. Overriding tolerations for catalog source pods
Prerequisites
- A CatalogSource object of source type grpc with spec.image is defined.
Procedure
Edit the
CatalogSource object and add or modify the spec.grpcPodConfig section to include the following:
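A sketch, with placeholder taint key, value, and effect matching your own node taints:
grpcPodConfig:
  tolerations:
  - key: "<key_name>"
    operator: "Equal"
    value: "<value>"
    effect: "NoSchedule"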
4.12. Managing platform Operators (Technology Preview)
A platform Operator is an OLM-based Operator that can be installed during or after an OpenShift Container Platform cluster’s Day 0 operations and participates in the cluster’s lifecycle. As a cluster administrator, you can manage platform Operators by using the PlatformOperator API.
The platform Operator type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.12.1. About platform Operators
Operator Lifecycle Manager (OLM) introduces a new type of Operator called platform Operators. A platform Operator is an OLM-based Operator that can be installed during or after an OpenShift Container Platform cluster’s Day 0 operations and participates in the cluster’s lifecycle. As a cluster administrator, you can use platform Operators to further customize your OpenShift Container Platform installation to meet your requirements and use cases.
Using the existing cluster capabilities feature in OpenShift Container Platform, cluster administrators can already disable a subset of Cluster Version Operator-based (CVO) components considered non-essential to the initial payload prior to cluster installation. Platform Operators iterate on this model by providing additional customization options. Through the platform Operator mechanism, which relies on resources from the RukPak component, OLM-based Operators can now be installed at cluster installation time and can block cluster rollout if the Operator fails to install successfully.
In OpenShift Container Platform 4.14, this Technology Preview release focuses on the basic platform Operator mechanism and builds a foundation for expanding the concept in upcoming releases. You can use the cluster-wide PlatformOperator API to configure Operators before or after cluster creation on clusters that have enabled the TechPreviewNoUpgrade feature set.
4.12.1.1. Technology Preview restrictions for platform Operators
During the Technology Preview release of the platform Operators feature in OpenShift Container Platform 4.14, the following restrictions determine whether an Operator can be installed through the platform Operators mechanism:
- Kubernetes manifests must be packaged using the Operator Lifecycle Manager (OLM) registry+v1 bundle format.
- The Operator cannot declare package or group/version/kind (GVK) dependencies.
- The Operator cannot specify cluster service version (CSV) install modes other than AllNamespaces.
- The Operator cannot specify any Webhook or APIService definitions.
- All package bundles must be in the redhat-operators catalog source.
After considering these restrictions, the following Operators can be successfully installed:
| 3scale-operator | amq-broker-rhel8 |
| amq-online | amq-streams |
| ansible-cloud-addons-operator | apicast-operator |
| container-security-operator | eap |
| file-integrity-operator | gatekeeper-operator-product |
| integration-operator | jws-operator |
| kiali-ossm | node-healthcheck-operator |
| odf-csi-addons-operator | odr-hub-operator |
| openshift-custom-metrics-autoscaler-operator | openshift-gitops-operator |
| openshift-pipelines-operator-rh | quay-operator |
| red-hat-camel-k | rhpam-kogito-operator |
| service-registry-operator | servicemeshoperator |
| skupper-operator |
The following features are not available during this Technology Preview release:
- Automatically upgrading platform Operator packages after cluster rollout
- Extending the platform Operator mechanism to support any optional, CVO-based components
4.12.2. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- The TechPreviewNoUpgrade feature set enabled on the cluster.
  Warning: Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
- Only the redhat-operators catalog source enabled on the cluster. This is a restriction during the Technology Preview release.
- The oc command installed on your workstation.
4.12.3. Installing platform Operators during cluster creation
As a cluster administrator, you can install platform Operators by providing FeatureGate and PlatformOperator manifests during cluster creation.
Procedure
- Choose a platform Operator from the supported set of OLM-based Operators. For the list of this set and details on current limitations, see "Technology Preview restrictions for platform Operators".
- Select a cluster installation method and follow the instructions through creating an install-config.yaml file. For more details on preparing for a cluster installation, see "Selecting a cluster installation method and preparing it for users".
After you have created the install-config.yaml file and completed any modifications to it, change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
Create a FeatureGate object YAML file in the <installation_directory>/manifests/ directory that enables the TechPreviewNoUpgrade feature set, for example a feature-gate.yaml file:
Example feature-gate.yaml file
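A minimal sketch, assuming the cluster-scoped FeatureGate resource is named cluster:
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade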
- Enable the TechPreviewNoUpgrade feature set.
Create a PlatformOperator object YAML file for your chosen platform Operator in the <installation_directory>/manifests/ directory, for example a service-mesh-po.yaml file for the Red Hat OpenShift Service Mesh Operator:
Example service-mesh-po.yaml file
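A minimal sketch, assuming the PlatformOperator API takes the package name from the redhat-operators catalog under spec.package.name:
apiVersion: platform.openshift.io/v1alpha1
kind: PlatformOperator
metadata:
  name: service-mesh-po
spec:
  package:
    name: servicemeshoperator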
When you are ready to complete the cluster install, refer to your chosen installation method and continue through running the openshift-install create cluster command.
During cluster creation, your provided manifests are used to enable the TechPreviewNoUpgrade feature set and install your chosen platform Operator.
Important: Failure of the platform Operator to successfully install will block the cluster installation process.
Verification
Check the status of the
service-mesh-po platform Operator by running the following command:
$ oc get platformoperator service-mesh-po -o yaml
In the output, wait until the Installed status condition reports True.
Verify that the
platform-operators-aggregated cluster Operator is reporting an Available=True status:
$ oc get clusteroperator platform-operators-aggregated -o yaml
4.12.4. Installing platform Operators after cluster creation
As a cluster administrator, you can install platform Operators after cluster creation on clusters that have enabled the TechPreviewNoUpgrade feature set by using the cluster-wide PlatformOperator API.
Procedure
- Choose a platform Operator from the supported set of OLM-based Operators. For the list of this set and details on current limitations, see "Technology Preview restrictions for platform Operators".
Create a PlatformOperator object YAML file for your chosen platform Operator, for example a service-mesh-po.yaml file for the Red Hat OpenShift Service Mesh Operator:
Example service-mesh-po.yaml file
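A minimal sketch, assuming the PlatformOperator API takes the package name from the redhat-operators catalog under spec.package.name:
apiVersion: platform.openshift.io/v1alpha1
kind: PlatformOperator
metadata:
  name: service-mesh-po
spec:
  package:
    name: servicemeshoperator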
Create the PlatformOperator object by running the following command:
$ oc apply -f service-mesh-po.yaml
Note: If your cluster does not have the TechPreviewNoUpgrade feature set enabled, the object creation fails with the following message:
error: resource mapping not found for name: "service-mesh-po" namespace: "" from "service-mesh-po.yaml": no matches for kind "PlatformOperator" in version "platform.openshift.io/v1alpha1"
ensure CRDs are installed first
Verification
Check the status of the
service-mesh-po platform Operator by running the following command:
$ oc get platformoperator service-mesh-po -o yaml
In the output, wait until the Installed status condition reports True.
Verify that the
platform-operators-aggregated cluster Operator is reporting an Available=True status:
$ oc get clusteroperator platform-operators-aggregated -o yaml
4.12.5. Deleting platform Operators
As a cluster administrator, you can delete existing platform Operators. Operator Lifecycle Manager (OLM) performs a cascading deletion. First, OLM removes the bundle deployment for the platform Operator, which then deletes any objects referenced in the registry+v1 type bundle.
The platform Operator manager and bundle deployment provisioner only manage objects that are referenced in the bundle, but not objects subsequently deployed by any bundle workloads themselves. For example, if a bundle workload creates a namespace and the Operator is not configured to clean it up before the Operator is removed, it is outside of the scope of OLM to remove the namespace during platform Operator deletion.
Procedure
Get a list of installed platform Operators and find the name for the Operator you want to delete:
$ oc get platformoperator
Delete the PlatformOperator resource for the chosen Operator, for example, for the Quay Operator:
$ oc delete platformoperator quay-operator
Example output
platformoperator.platform.openshift.io "quay-operator" deleted
Verification
Verify the namespace for the platform Operator is eventually deleted, for example, for the Quay Operator:
$ oc get ns quay-operator-system
Example output
Error from server (NotFound): namespaces "quay-operator-system" not found
Verify the platform-operators-aggregated cluster Operator continues to report an Available=True status:
$ oc get co platform-operators-aggregated
Example output
NAME                            VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
platform-operators-aggregated   4.14.0-0   True        False         False      70s
4.13. Troubleshooting Operator issues
If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis.
4.13.1. Operator subscription condition types
Subscriptions can report the following condition types:
| Condition | Description |
|---|---|
| CatalogSourcesUnhealthy | Some or all of the catalog sources to be used in resolution are unhealthy. |
| InstallPlanMissing | An install plan for a subscription is missing. |
| InstallPlanPending | An install plan for a subscription is pending installation. |
| InstallPlanFailed | An install plan for a subscription has failed. |
| ResolutionFailed | The dependency resolution for a subscription has failed. |
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.13.2. Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List Operator subscriptions:
$ oc get subs -n <operator_namespace>
Use the oc describe command to inspect a Subscription resource:
$ oc describe sub <subscription_name> -n <operator_namespace>
In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:
Example output
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
4.13.3. Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources:

$ oc get catalogsources -n openshift-marketplace

Example output

Use the oc describe command to get more details and status about a catalog source:

$ oc describe catalogsource example-catalog -n openshift-marketplace

Example output

In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state indicates that there is a problem establishing a connection for the catalog source.

List the pods in the namespace where your catalog source was created:

$ oc get pods -n openshift-marketplace

Example output

When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff. This status indicates that there is an issue pulling the catalog source’s index image.

Use the oc describe command to inspect a pod for more detailed information:

$ oc describe pod example-catalog-bwt8z -n openshift-marketplace

Example output

In the preceding example output, the error messages indicate that the catalog source’s index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
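The same connection-state check can be scripted. The following hedged Go sketch reads each catalog source with a dynamic client and prints its last observed gRPC connection state; the status.connectionState.lastObservedState field path is an assumption based on the describe output discussed above.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
    dyn, err := dynamic.NewForConfig(ctrl.GetConfigOrDie())
    if err != nil {
        panic(err)
    }

    gvr := schema.GroupVersionResource{
        Group:    "operators.coreos.com",
        Version:  "v1alpha1",
        Resource: "catalogsources",
    }

    list, err := dyn.Resource(gvr).Namespace("openshift-marketplace").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }

    for _, cs := range list.Items {
        // Assumed field path; an empty string means the state was not reported.
        state, _, _ := unstructured.NestedString(cs.Object, "status", "connectionState", "lastObservedState")
        fmt.Printf("%s: %s\n", cs.GetName(), state)
    }
}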
4.13.4. Querying Operator pod status
You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
Procedure
List Operators running in the cluster. The output includes Operator version, availability, and up-time information:
$ oc get clusteroperators

List Operator pods running in the Operator’s namespace, plus pod status, restarts, and age:

$ oc get pod -n <operator_namespace>

Output a detailed Operator pod summary:

$ oc describe pod <operator_pod_name> -n <operator_namespace>

If an Operator issue is node-specific, query Operator container status on that node.

Start a debug pod for the node:

$ oc debug node/my-node

Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

# chroot /host

Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

List details about the node’s containers, including state and associated pod IDs:

# crictl ps

List information about a specific Operator container on the node. The following example lists information about the network-operator container:

# crictl ps --name network-operator

- Exit from the debug shell.
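If you prefer to collect this pod summary from a program instead of oc, a minimal Go sketch using client-go might look like the following; the openshift-network-operator namespace is only a placeholder.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
    clientset, err := kubernetes.NewForConfig(ctrl.GetConfigOrDie())
    if err != nil {
        panic(err)
    }

    // Placeholder namespace; use the namespace of the Operator you are inspecting.
    pods, err := clientset.CoreV1().Pods("openshift-network-operator").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }

    for _, pod := range pods.Items {
        // Sum container restart counts to mirror the RESTARTS column of oc get pod.
        restarts := int32(0)
        for _, cs := range pod.Status.ContainerStatuses {
            restarts += cs.RestartCount
        }
        fmt.Printf("%s\t%s\trestarts=%d\n", pod.Name, pod.Status.Phase, restarts)
    }
}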
4.13.5. Gathering Operator logs
If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have the fully qualified domain names of the control plane or control plane machines.
Procedure
List the Operator pods that are running in the Operator’s namespace, plus the pod status, restarts, and age:
$ oc get pods -n <operator_namespace>

Review logs for an Operator pod:

$ oc logs pod/<pod_name> -n <operator_namespace>

If an Operator pod has multiple containers, the preceding command produces an error that includes the name of each container. Query logs from an individual container:

$ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>

If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.

List pods on each control plane node:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods

For any Operator pods not showing a Ready status, inspect the pod’s status in detail. Replace <operator_pod_id> with the Operator pod’s ID listed in the output of the preceding command:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>

List containers related to an Operator pod:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>

For any Operator container not showing a Ready status, inspect the container’s status in detail. Replace <container_id> with a container ID listed in the output of the preceding command:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>

Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>

Note: OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
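When the API server is reachable, the same container logs can also be streamed from Go with client-go, as in the following hedged sketch; the namespace, pod, and container names are placeholders.

package main

import (
    "context"
    "io"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
    clientset, err := kubernetes.NewForConfig(ctrl.GetConfigOrDie())
    if err != nil {
        panic(err)
    }

    // Equivalent of: oc logs pod/<pod_name> -c <container_name> -n <namespace>
    req := clientset.CoreV1().Pods("openshift-network-operator").GetLogs(
        "network-operator-xxxx", // placeholder pod name
        &corev1.PodLogOptions{Container: "network-operator"},
    )

    stream, err := req.Stream(context.TODO())
    if err != nil {
        panic(err)
    }
    defer stream.Close()

    // Copy the log stream to stdout.
    if _, err := io.Copy(os.Stdout, stream); err != nil {
        panic(err)
    }
}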
4.13.6. Disabling the Machine Config Operator from automatically rebooting
When configuration changes are made by the Machine Config Operator (MCO), Red Hat Enterprise Linux CoreOS (RHCOS) must reboot for the changes to take effect. Whether the configuration change is automatic or manual, an RHCOS node reboots automatically unless it is paused.
When the MCO detects any of the following changes, it applies the update without draining or rebooting the node:
- Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config.
- Changes to the global pull secret or pull secret in the openshift-config namespace.
- Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator.

When the MCO detects changes to the /etc/containers/registries.conf file, such as editing an ImageDigestMirrorSet, ImageTagMirrorSet, or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes:

- The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror.
- The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry.
- The addition of items to the unqualified-search-registries list.
To avoid unwanted disruptions, you can modify the machine config pool (MCP) to prevent automatic rebooting after the Operator makes changes to the machine config.
4.13.6.1. Disabling the Machine Config Operator from automatically rebooting by using the console
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can use the OpenShift Container Platform web console to modify the machine config pool (MCP) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process.
See second NOTE in Disabling the Machine Config Operator from automatically rebooting.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
To pause or unpause automatic MCO update rebooting:
Pause the autoreboot process:
- Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- Click Compute → MachineConfigPools.
- On the MachineConfigPools page, click either master or worker, depending upon which nodes you want to pause rebooting for.
- On the master or worker page, click YAML.
- In the YAML, update the spec.paused field to true to pause rebooting.

Sample MachineConfigPool object
To verify that the MCP is paused, return to the MachineConfigPools page.
On the MachineConfigPools page, the Paused column reports True for the MCP you modified.
If the MCP has pending changes while paused, the Updated column is False and Updating is False. When Updated is True and Updating is False, there are no pending changes.
Important: If there are pending changes (where both the Updated and Updating columns are False), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot.
Unpause the autoreboot process:
- Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
- Click Compute → MachineConfigPools.
- On the MachineConfigPools page, click either master or worker, depending upon which nodes you want to resume rebooting for.
- On the master or worker page, click YAML.
- In the YAML, update the spec.paused field to false to allow rebooting.

Sample MachineConfigPool object
Note: By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed.

To verify that the MCP is unpaused, return to the MachineConfigPools page.
On the MachineConfigPools page, the Paused column reports False for the MCP you modified.
If the MCP is applying any pending changes, the Updated column is False and the Updating column is True. When Updated is True and Updating is False, there are no further changes being made.
4.13.6.2. Disabling the Machine Config Operator from automatically rebooting by using the CLI
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can modify the machine config pool (MCP) using the OpenShift CLI (oc) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process.
See second NOTE in Disabling the Machine Config Operator from automatically rebooting.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
To pause or unpause automatic MCO update rebooting:
Pause the autoreboot process:
Update the MachineConfigPool custom resource to set the spec.paused field to true.

Control plane (master) nodes

$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/master

Worker nodes

$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker

Verify that the MCP is paused:

Control plane (master) nodes

$ oc get machineconfigpool/master --template='{{.spec.paused}}'

Worker nodes

$ oc get machineconfigpool/worker --template='{{.spec.paused}}'

Example output

true

The spec.paused field is true and the MCP is paused.

Determine if the MCP has pending changes:

$ oc get machineconfigpool

Example output

NAME     CONFIG                                             UPDATED   UPDATING
master   rendered-master-33cf0a1254318755d7b48002c597bf91   True      False
worker   rendered-worker-e405a5bdb0db1295acea08bcca33fa60   False     False

If the UPDATED column is False and UPDATING is False, there are pending changes. When UPDATED is True and UPDATING is False, there are no pending changes. In the previous example, the worker node has pending changes. The control plane node does not have any pending changes.
Important: If there are pending changes (where both the Updated and Updating columns are False), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot.
Unpause the autoreboot process:
Update the MachineConfigPool custom resource to set the spec.paused field to false.

Control plane (master) nodes

$ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/master

Worker nodes

$ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker

Note: By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed.

Verify that the MCP is unpaused:

Control plane (master) nodes

$ oc get machineconfigpool/master --template='{{.spec.paused}}'

Worker nodes

$ oc get machineconfigpool/worker --template='{{.spec.paused}}'

Example output

false

The spec.paused field is false and the MCP is unpaused.

Determine if the MCP has pending changes:

$ oc get machineconfigpool

Example output

NAME     CONFIG                                   UPDATED   UPDATING
master   rendered-master-546383f80705bd5aeaba93   True      False
worker   rendered-worker-b4c51bb33ccaae6fc4a6a5   False     True

If the MCP is applying any pending changes, the UPDATED column is False and the UPDATING column is True. When UPDATED is True and UPDATING is False, there are no further changes being made. In the previous example, the MCO is updating the worker node.
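The oc patch commands above can also be issued from Go with a dynamic client, as in the following hedged sketch that pauses the worker pool; switch the resource name to master, or the patch to "paused":false, for the other cases.

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/dynamic"
    ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
    dyn, err := dynamic.NewForConfig(ctrl.GetConfigOrDie())
    if err != nil {
        panic(err)
    }

    gvr := schema.GroupVersionResource{
        Group:    "machineconfiguration.openshift.io",
        Version:  "v1",
        Resource: "machineconfigpools",
    }

    // Equivalent of: oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker
    patch := []byte(`{"spec":{"paused":true}}`)
    if _, err := dyn.Resource(gvr).Patch(context.TODO(), "worker", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }
}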
4.13.7. Refreshing failing subscriptions
In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors:
Example output
ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"

Example output

rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.
Prerequisites
- You have a failing subscription that is unable to pull an inaccessible bundle image.
- You have confirmed that the correct bundle image is accessible.
Procedure
Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed:

$ oc get sub,csv -n <namespace>

Example output
NAME                                                        PACKAGE                  SOURCE             CHANNEL
subscription.operators.coreos.com/elasticsearch-operator   elasticsearch-operator   redhat-operators   5.0

NAME                                                                          DISPLAY                            VERSION    REPLACES   PHASE
clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   OpenShift Elasticsearch Operator   5.0.0-65              Succeeded

Delete the subscription:
$ oc delete subscription <subscription_name> -n <namespace>

Delete the cluster service version:

$ oc delete csv <csv_name> -n <namespace>

Get the names of any failing jobs and related config maps in the openshift-marketplace namespace:

$ oc get job,configmap -n openshift-marketplace

Example output
NAME                                                                        COMPLETIONS   DURATION   AGE
job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s

NAME                                                                        DATA   AGE
configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s

Delete the job:
$ oc delete job <job_name> -n openshift-marketplace

This ensures pods that try to pull the inaccessible image are not recreated.

Delete the config map:

$ oc delete configmap <configmap_name> -n openshift-marketplace

- Reinstall the Operator using OperatorHub in the web console.
Verification
Check that the Operator has been reinstalled successfully:
$ oc get sub,csv,installplan -n <namespace>
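If you are cleaning up many failing subscriptions, the deletions can be scripted. The following hedged Go sketch deletes a Subscription and its ClusterServiceVersion with the controller-runtime client, using the names from the example output above; the namespace is a placeholder and the github.com/operator-framework/api module is assumed.

package main

import (
    "context"

    operatorsv1alpha1 "github.com/operator-framework/api/pkg/operators/v1alpha1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
    scheme := runtime.NewScheme()
    if err := operatorsv1alpha1.AddToScheme(scheme); err != nil {
        panic(err)
    }
    c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
    if err != nil {
        panic(err)
    }

    ns := "openshift-operators-redhat" // placeholder namespace

    // Equivalent of: oc delete subscription elasticsearch-operator -n <namespace>
    sub := &operatorsv1alpha1.Subscription{
        ObjectMeta: metav1.ObjectMeta{Name: "elasticsearch-operator", Namespace: ns},
    }
    if err := c.Delete(context.TODO(), sub); err != nil {
        panic(err)
    }

    // Equivalent of: oc delete csv elasticsearch-operator.5.0.0-65 -n <namespace>
    csv := &operatorsv1alpha1.ClusterServiceVersion{
        ObjectMeta: metav1.ObjectMeta{Name: "elasticsearch-operator.5.0.0-65", Namespace: ns},
    }
    if err := c.Delete(context.TODO(), csv); err != nil {
        panic(err)
    }
}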
4.13.8. Reinstalling Operators after failed uninstallation
You must fully and successfully uninstall an Operator before attempting to reinstall the same Operator. Failing to fully uninstall the Operator can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages. For example:
Example Project resource description
...
message: 'Failed to delete all resource types, 1 remaining: Internal error occurred:
error resolving resource'
...
These types of issues can prevent an Operator from being reinstalled successfully.
Forced deletion of a namespace is not likely to resolve "Terminating" state issues and can lead to unstable or unpredictable cluster behavior, so it is better to try to find related resources that might be preventing the namespace from being deleted. For more information, see the Red Hat Knowledgebase Solution #4165791, paying careful attention to the cautions and warnings.
The following procedure shows how to troubleshoot when an Operator cannot be reinstalled because an existing custom resource definition (CRD) from a previous installation of the Operator is preventing a related namespace from deleting successfully.
Procedure
Check if there are any namespaces related to the Operator that are stuck in "Terminating" state:
$ oc get namespaces

Example output
operator-ns-1    Terminating

Check if there are any CRDs related to the Operator that are still present after the failed uninstallation:
$ oc get crds

Note: CRDs are global cluster definitions; the actual custom resource (CR) instances related to the CRDs could be in other namespaces or be global cluster instances.
If there are any CRDs that you know were provided or managed by the Operator and that should have been deleted after uninstallation, delete the CRD:
$ oc delete crd <crd_name>

Check if there are any remaining CR instances related to the Operator that are still present after uninstallation, and if so, delete the CRs:
The type of CRs to search for can be difficult to determine after uninstallation and can require knowing what CRDs the Operator manages. For example, if you are troubleshooting an uninstallation of the etcd Operator, which provides the
EtcdCluster CRD, you can search for remaining EtcdCluster CRs in a namespace:

$ oc get EtcdCluster -n <namespace_name>

Alternatively, you can search across all namespaces:

$ oc get EtcdCluster --all-namespaces

If there are any remaining CRs that should be removed, delete the instances:

$ oc delete <cr_name> <cr_instance_name> -n <namespace_name>
Check that the namespace deletion has successfully resolved:
$ oc get namespace <namespace_name>

Important: If the namespace or other Operator resources are still not uninstalled cleanly, contact Red Hat Support.
- Reinstall the Operator using OperatorHub in the web console.
Verification
Check that the Operator has been reinstalled successfully:
$ oc get sub,csv,installplan -n <namespace>
Chapter 5. Developing Operators
5.1. About the Operator SDK
The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.
Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.
The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator.
Why use the Operator SDK?
The Operator SDK simplifies this process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring.
The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features:
- High-level APIs and abstractions to write the operational logic more intuitively
- Tools for scaffolding and code generation to quickly bootstrap a new project
- Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster
- Extensions to cover common Operator use cases
- Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
OpenShift Container Platform 4.14 supports Operator SDK 1.31.0.
5.1.1. What are Operators?
For an overview about basic Operator concepts and terminology, see Understanding Operators.
5.1.2. Development workflow
The Operator SDK provides the following workflow to develop a new Operator:
- Create an Operator project by using the Operator SDK command-line interface (CLI).
- Define new resource APIs by adding custom resource definitions (CRDs).
- Specify resources to watch by using the Operator SDK API.
- Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
- Use the Operator SDK CLI to build and generate the Operator deployment manifests.
Figure 5.1. Operator SDK workflow
At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
5.2. Installing the Operator SDK CLI
The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators.
Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
OpenShift Container Platform 4.14 supports Operator SDK 1.31.0.
5.2.1. Installing the Operator SDK CLI on Linux
You can install the Operator SDK CLI tool on Linux.
Prerequisites
- Go v1.19+
- docker v17.03+, podman v1.9.3+, or buildah v1.7+
Procedure
- Navigate to the OpenShift mirror site.
- From the latest 4.14 directory, download the latest version of the tarball for Linux.
Unpack the archive:
$ tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz

Make the file executable:

$ chmod +x operator-sdk

Move the extracted operator-sdk binary to a directory that is on your PATH.

Tip: To check your PATH:

$ echo $PATH

$ sudo mv ./operator-sdk /usr/local/bin/operator-sdk
Verification
After you install the Operator SDK CLI, verify that it is available:
$ operator-sdk version

Example output

operator-sdk version: "v1.31.0-ocp", ...
5.2.2. Installing the Operator SDK CLI on macOS
You can install the Operator SDK CLI tool on macOS.
Prerequisites
- Go v1.19+
- docker v17.03+, podman v1.9.3+, or buildah v1.7+
Procedure
- For the amd64 and arm64 architectures, navigate to the OpenShift mirror site for the amd64 architecture and the OpenShift mirror site for the arm64 architecture, respectively.
- From the latest 4.14 directory, download the latest version of the tarball for macOS.
Unpack the Operator SDK archive for the amd64 architecture by running the following command:

$ tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz

Unpack the Operator SDK archive for the arm64 architecture by running the following command:

$ tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz

Make the file executable by running the following command:

$ chmod +x operator-sdk

Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command:

Tip: Check your PATH by running the following command:

$ echo $PATH

$ sudo mv ./operator-sdk /usr/local/bin/operator-sdk
Verification
After you install the Operator SDK CLI, verify that it is available by running the following command:
$ operator-sdk version

Example output

operator-sdk version: "v1.31.0-ocp", ...
5.3. Go-based Operators
5.3.1. Getting started with Operator SDK for Go-based Operators
To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.3.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Go 1.21+
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.3.1.2. Creating and deploying Go-based Operators
You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator

Change into the project directory:

$ cd memcached-operator

Run the operator-sdk init command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator

The command uses the Go plugin by default.

Create an API.

Create a simple Memcached API:

$ operator-sdk create api \
    --resource=true \
    --controller=true \
    --group=cache \
    --version=v1 \
    --kind=Memcached

Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.

Install the CRD:

$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

Create a sample custom resource (CR).

Create a sample CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

Watch for the CR to reconcile the Operator:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system

Delete a CR.

Delete a CR by running the following command:

$ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system

Clean up.

Run the following command to clean up the resources that have been created as part of this procedure:

$ make undeploy
5.3.1.3. Next steps
- See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator.
5.3.2. Operator SDK tutorial for Go-based Operators
Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK
The operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM)
- Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators.
5.3.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Go 1.21+
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.3.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator

Change to the directory:

$ cd $HOME/projects/memcached-operator

Activate support for Go modules:

$ export GO111MODULE=on

Run the operator-sdk init command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator

Note: The operator-sdk init command uses the Go plugin by default.

The operator-sdk init command generates a go.mod file to be used with Go modules. The --repo flag is required when creating a project outside of $GOPATH/src/, because generated files require a valid module path.
5.3.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Go. For example:
5.3.2.2.2. About the Manager
The main program for the Operator is the main.go file, which initializes and runs the Manager. The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks.
The Manager can restrict the namespace that all controllers watch for resources:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})
By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})
You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces:
var namespaces []string
mgr, err := ctrl.NewManager(cfg, manager.Options{
NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})
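For context, a minimal main() that wires one of these options together might look like the following hedged sketch; it assumes a WATCH_NAMESPACE environment variable and omits scheme registration and controller setup.

package main

import (
    "fmt"
    "os"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
    // Restrict the Manager cache to a single namespace; leave Namespace empty to watch all namespaces.
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), manager.Options{
        Namespace: os.Getenv("WATCH_NAMESPACE"), // assumed convention for passing the namespace
    })
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }

    // Scheme registration and controller setup (SetupWithManager) are omitted in this sketch.

    // Start the Manager and block until a termination signal is received.
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}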
5.3.2.2.3. About multi-group APIs
Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command:
$ operator-sdk edit --multigroup=true
This command updates the PROJECT file, which should look like the following example:
domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...
For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly.
Additional resource
- For more details on migrating to a multi-group project, see the Kubebuilder documentation.
5.3.2.3. Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
Run the following command to create an API with group
cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --group=cache \
    --version=v1 \
    --kind=Memcached

When prompted, enter y for creating both the resource and controller:

Create Resource [y/n] y
Create Controller [y/n] y

Example output

Writing scaffold for you to edit...
api/v1/memcached_types.go
controllers/memcached_controller.go
...
This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go.
5.3.2.3.1. Defining the API
Define the API for the Memcached custom resource (CR).
Procedure
Modify the Go type definitions at
api/v1/memcached_types.go to have the following spec and status (see the sketch after this procedure):

Update the generated code for the resource type:

$ make generate

Tip: After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.
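A hedged sketch of the edited api/v1/memcached_types.go is shown below. The Size and Nodes field names are assumptions based on the reconciliation logic described later in this tutorial; the MemcachedList type and the deep-copy functions produced by make generate are omitted.

package v1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MemcachedSpec defines the desired state of Memcached.
type MemcachedSpec struct {
    // Size is the number of memcached pods to run.
    Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached.
type MemcachedStatus struct {
    // Nodes holds the names of the memcached pods.
    Nodes []string `json:"nodes"`
}

// Memcached is the Schema for the memcacheds API.
type Memcached struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MemcachedSpec   `json:"spec,omitempty"`
    Status MemcachedStatus `json:"status,omitempty"`
}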
5.3.2.3.2. Generating CRD manifests
After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests.
Procedure
Run the following command to generate and update CRD manifests:
$ make manifests

This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.
5.3.2.3.2.1. About OpenAPI validation
OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated.
Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix.
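For example, a hedged sketch of validation markers on the Size field might look like the following; the bounds are illustrative only.

type MemcachedSpec struct {
    // Size must fall within the validated range; the limits below are example values.
    // +kubebuilder:validation:Minimum=0
    // +kubebuilder:validation:Maximum=5
    Size int32 `json:"size"`
}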
5.3.2.4. Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
For this example, replace the generated controller file
controllers/memcached_controller.go with the following example implementation:

Example 5.1. Example memcached_controller.go

The example controller runs the following reconciliation logic for each Memcached custom resource (CR), as summarized by the condensed sketch after this list:

- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the memcached pods.
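The following condensed, hedged sketch illustrates that flow. It is not the full Example 5.1 implementation; the deploymentForMemcached helper, the import block, and the status update are assumed or omitted.

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Fetch the Memcached instance named in the request.
    memcached := &cachev1.Memcached{}
    if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
        // The CR was deleted or cannot be read; do not requeue on NotFound.
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // 1. Create a Memcached deployment if it does not exist.
    found := &appsv1.Deployment{}
    err := r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
    if err != nil && apierrors.IsNotFound(err) {
        dep := r.deploymentForMemcached(memcached) // assumed helper that builds the Deployment
        if err := r.Create(ctx, dep); err != nil {
            return ctrl.Result{}, err
        }
        return ctrl.Result{Requeue: true}, nil
    } else if err != nil {
        return ctrl.Result{}, err
    }

    // 2. Ensure the deployment size matches the size specified in the CR spec.
    size := memcached.Spec.Size
    if *found.Spec.Replicas != size {
        found.Spec.Replicas = &size
        if err := r.Update(ctx, found); err != nil {
            return ctrl.Result{}, err
        }
    }

    // 3. Updating the CR status with the pod names is omitted here for brevity.
    return ctrl.Result{}, nil
}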
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
5.3.2.4.1. Resources watched by the controller
The SetupWithManager() function in controllers/memcached_controller.go specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller.
NewControllerManagedBy() provides a controller builder that allows various controller configurations.
For(&cachev1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Add, Update, or Delete event for a Memcached type, the reconcile loop is sent a reconcile Request argument, which consists of a namespace and name key, for that Memcached object.
Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add, Update, or Delete event, the event handler maps each event to a reconcile request for the owner of the deployment. In this case, the owner is the Memcached object for which the deployment was created.
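Put together, a hedged sketch of SetupWithManager() for this controller looks like the following.

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        // Memcached is the primary resource to watch.
        For(&cachev1.Memcached{}).
        // Deployments created for a Memcached CR are the secondary resource to watch.
        Owns(&appsv1.Deployment{}).
        Complete(r)
}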
5.3.2.4.2. Controller configurations
You can initialize a controller by using many other useful configurations. For example:
Set the maximum number of concurrent reconciles for the controller by using the
MaxConcurrentReconciles option, which defaults to 1 (see the sketch after this list):
- Filter watch events using predicates.
- Choose the type of EventHandler to change how a watch event translates to reconcile requests for the reconcile loop. For Operator relationships that are more complex than primary and secondary resources, you can use the EnqueueRequestsFromMapFunc handler to transform a watch event into an arbitrary set of reconcile requests.
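A hedged sketch that combines two of these configurations is shown below; controller and predicate refer to the sigs.k8s.io/controller-runtime/pkg/controller and sigs.k8s.io/controller-runtime/pkg/predicate packages.

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&cachev1.Memcached{}).
        Owns(&appsv1.Deployment{}).
        // Allow up to two concurrent reconciles; the default is 1.
        WithOptions(controller.Options{MaxConcurrentReconciles: 2}).
        // Filter watch events with a predicate: ignore updates that do not change the spec.
        WithEventFilter(predicate.GenerationChangedPredicate{}).
        Complete(r)
}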
For more details on these and other configurations, see the upstream Builder and Controller GoDocs.
5.3.2.4.3. Reconcile loop
Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a namespace and name key used to find the primary resource object, Memcached, from the cache. Based on the return values, result, and error, the request might be requeued and the reconcile loop might be triggered again, as illustrated by the sketch below.
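A hedged sketch of the Reconcile() signature and the common return patterns:

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // req.NamespacedName is the namespace/name key of the Memcached object to look up.
    memcached := &cachev1.Memcached{}
    if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Common return patterns:
    //   return ctrl.Result{}, err              // an error occurred; requeue with backoff
    //   return ctrl.Result{Requeue: true}, nil // no error, but requeue the request
    //   return ctrl.Result{}, nil              // done; do not requeue
    return ctrl.Result{}, nil
}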
You can set the Result.RequeueAfter to requeue the request after a grace period as well:
import "time"
// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
import "time"
// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
You can return Result with RequeueAfter set to periodically reconcile a CR.
For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.
5.3.2.4.4. Permissions and RBAC manifests
The controller requires certain RBAC permissions to interact with the resources it manages. These are specified using RBAC markers, such as the following:
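A hedged example of such markers, placed above the Reconcile() function in controllers/memcached_controller.go and using the group and resources from this tutorial's API, is shown below; the exact set in your generated controller may differ.

//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch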
The ClusterRole object manifest at config/rbac/role.yaml is generated from the previous markers by using the controller-gen utility whenever the make manifests command is run.
5.3.2.5. Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
This tutorial uses HTTP_PROXY as an example environment variable.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Edit the controllers/memcached_controller.go file to include the following:

Import the proxy package from the operator-lib library:

import (
    ...
    "github.com/operator-framework/operator-lib/proxy"
)

Add the proxy.ReadProxyVarsFromEnv helper function to the reconcile loop and append the results to the Operand environments:

for i, container := range dep.Spec.Template.Spec.Containers {
    dep.Spec.Template.Spec.Containers[i].Env = append(container.Env,
        proxy.ReadProxyVarsFromEnv()...)
}
...

Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:
5.3.2.6. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
Before running your Go-based Operator as either a deployment on OpenShift Container Platform or as a bundle that uses OLM, ensure that your project has been updated to use supported images.
5.3.2.6.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your
~/.kube/config file and run the Operator locally:

$ make install run

Example output
5.3.2.6.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Prerequisites
- Prepared your Go-based Operator to run on OpenShift Container Platform by updating the project to use supported images
Procedure
Run the following
make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg will need to be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.3.2.6.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.3.2.6.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.14+ installed
- Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following
make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg will need to be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the
make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.

Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.3.2.6.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.14)
- Logged in to the cluster with oc using an account with cluster-admin permissions
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    -n <namespace> \
    <registry>/<user>/<bundle_image_name>:<tag>

- The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
- Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
- If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.

Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.3.2.7. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator by using the make deploy command:

$ oc project memcached-operator-system

Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:
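The following is a minimal sketch of that specification, assuming the cache group, v1 version, and example.com domain used throughout this tutorial; spec.size sets the desired number of Memcached replicas:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 3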
Create the CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml

Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           8m
memcached-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods

Example output

NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:

$ oc get memcached/memcached-sample -o yaml
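As a rough sketch, assuming the tutorial Operator records the pod names in a status.nodes field, the relevant part of the returned YAML looks similar to the following:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 3
status:
  nodes:
  - memcached-sample-6fd7c98d8-7dqdr
  - memcached-sample-6fd7c98d8-g5k7v
  - memcached-sample-6fd7c98d8-m7vn7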
Update the deployment size.
Change the spec.size field in the Memcached CR from 3 to 5, either by editing the config/samples/cache_v1_memcached.yaml file and reapplying it, or by patching the resource directly:

$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           10m
memcached-sample                        5/5     5            5           3m
Delete the CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml

Clean up the resources that have been created as part of this tutorial.

If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.3.3. Project layout for Go-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.3.3.1. Go-based project layout
Go-based Operator projects, the default type, generated using the operator-sdk init command contain the following files and directories:
| File or directory | Purpose |
|---|---|
| main.go | Main program of the Operator. This instantiates a new manager that registers all custom resource definitions (CRDs) in the api/ directory and starts all controllers in the controllers/ directory. |
| api/ | Directory tree that defines the APIs of the CRDs. You must edit the api/<version>/<kind>_types.go files to define the API for each resource type and import these packages in your controllers to watch for these resource types. |
| controllers/ | Controller implementations. Edit the controllers/<kind>_controller.go files to define the reconcile logic of the controller for handling a resource type of the specified kind. |
| config/ | Kubernetes manifests used to deploy your controller on a cluster, including CRDs, RBAC, and certificates. |
| Makefile | Targets used to build and deploy your controller. |
| Dockerfile | Instructions used by a container engine to build your Operator. |
| config/default/ | Kubernetes manifests for registering CRDs, setting up RBAC, and deploying the Operator as a deployment. |
5.3.4. Updating Go-based Operator projects for newer Operator SDK versions
OpenShift Container Platform 4.14 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.3.4.1. Updating Go-based Operator projects for Operator SDK 1.31.0
The following procedure updates an existing Go-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Edit your Operator project’s Makefile to add the OPERATOR_SDK_VERSION field and set it to v1.31.0-ocp, as shown in the following example:

Example Makefile

# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0-ocp
5.4. Ansible-based Operators
5.4.1. Getting started with Operator SDK for Ansible-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running an Ansible-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Ansible-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.4.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Ansible 2.15.0
- Ansible Runner 2.3.3+
- Ansible Runner HTTP Event Emitter plugin 1.0.0+
- Python 3.9+
- Python Kubernetes client
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.4.1.2. Creating and deploying Ansible-based Operators
You can build and deploy a simple Ansible-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator

Change into the project directory:

$ cd memcached-operator

Run the operator-sdk init command with the ansible plugin to initialize the project:

$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role

The --generate-role flag generates an Ansible role for the API.
Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.

Install the CRD:

$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

Watch for the Operator to reconcile the CR in the controller manager logs:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Delete a CR.
Delete a CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system

Clean up.

Run the following command to clean up the resources that have been created as part of this procedure:

$ make undeploy
5.4.1.3. Next steps
- See Operator SDK tutorial for Ansible-based Operators for a more in-depth walkthrough on building an Ansible-based Operator.
5.4.2. Operator SDK tutorial for Ansible-based Operators
Operator developers can take advantage of Ansible support in the Operator SDK to build an example Ansible-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This tutorial walks through the following process:
- Create a Memcached deployment
- Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
- Update the Memcached CR status using the status writer with the names of the memcached pods
This process is accomplished by using two centerpieces of the Operator Framework:
- Operator SDK: the operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM): installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Ansible-based Operators.
5.4.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Ansible 2.15.0
- Ansible Runner 2.3.3+
- Ansible Runner HTTP Event Emitter plugin 1.0.0+
- Python 3.9+
- Python Kubernetes client
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.4.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator

Change to the directory:

$ cd $HOME/projects/memcached-operator

Run the operator-sdk init command with the ansible plugin to initialize the project:

$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
5.4.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, and their help output, read this file when run from the project root and are aware that the project type is Ansible. For example:
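A representative PROJECT file for this project, sketched from the standard Ansible scaffolding; the exact plugin entries can vary by SDK version:

domain: example.com
layout:
- ansible.sdk.operatorframework.io/v1
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
projectName: memcached-operator
version: "3"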
5.4.2.3. Creating an API
Use the Operator SDK CLI to create a Memcached API.
Procedure
Run the following command to create an API with group cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role

The --generate-role flag generates an Ansible role for the API.
After creating the API, your Operator project updates with the following structure:
- Memcached CRD: includes a sample Memcached resource
- Manager: program that reconciles the state of the cluster to the desired state by using:
  - A reconciler, either an Ansible role or playbook
  - A watches.yaml file, which connects the Memcached resource to the memcached Ansible role
5.4.2.4. Modifying the manager
Update your Operator project to provide the reconcile logic, in the form of an Ansible role, which runs every time a Memcached resource is created, updated, or deleted.
Procedure
Update the roles/memcached/tasks/main.yml file with the following structure:
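A sketch of that structure, assuming the kubernetes.core.k8s module and the size variable that is given a default in the next step; the image and container options shown are illustrative:

---
# tasks file for Memcached
- name: start memcached
  kubernetes.core.k8s:
    definition:
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: '{{ ansible_operator_meta.name }}-memcached'
        namespace: '{{ ansible_operator_meta.namespace }}'
      spec:
        replicas: "{{ size }}"
        selector:
          matchLabels:
            app: memcached
        template:
          metadata:
            labels:
              app: memcached
          spec:
            containers:
            - name: memcached
              command:
              - memcached
              - -m=64
              - -o
              - modern
              - -v
              image: "docker.io/memcached:1.4.36-alpine"
              ports:
              - containerPort: 11211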
This memcached role ensures a memcached deployment exists and sets the deployment size.

Set default values for variables used in your Ansible role by editing the roles/memcached/defaults/main.yml file:

---
# defaults file for Memcached
size: 1

Update the Memcached sample resource in the config/samples/cache_v1_memcached.yaml file with the following structure. The key-value pairs in the custom resource (CR) spec are passed to Ansible as extra variables.
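A minimal sketch of the sample resource, assuming the cache.example.com/v1 API created earlier; the size key is passed to the role as the size variable:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 3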
The names of all variables in the spec field are converted to snake case, meaning lowercase with an underscore, by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.
You can disable this case conversion by setting the snakeCaseParameters option to false in your watches.yaml file. It is recommended that you perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.
5.4.2.5. Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
This tutorial uses HTTP_PROXY as an example environment variable.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Add the environment variables to the deployment by updating the roles/memcached/tasks/main.yml file with the following:
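A sketch of the relevant addition, placed inside the container definition of the memcached deployment from the earlier tasks file; the env entries read the proxy value from the Operator environment and pass it to the Operand:

# Excerpt: within the memcached container definition in the deployment
env:
- name: HTTP_PROXY
  value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
- name: http_proxy
  value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'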
Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:
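A sketch of the corresponding change to the manager deployment; the HTTP_PROXY value shown is an illustrative placeholder, since on a real cluster OLM injects the proxy variables configured by the cluster administrator:

# Excerpt: config/manager/manager.yaml
containers:
- name: manager
  image: controller:latest
  env:
  - name: "HTTP_PROXY"
    value: "http_proxy_test"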
5.4.2.6. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.4.2.6.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run
5.4.2.6.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.4.2.6.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.4.2.6.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.14+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.

Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.4.2.6.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.14)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    -n <namespace> \
    <registry>/<user>/<bundle_image_name>:<tag>

- The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
- Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
- If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.

Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.4.2.7. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator by using the make deploy command:

$ oc project memcached-operator-system

Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to set the spec.size field to 3.

Create the CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml

Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           8m
memcached-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods

Example output

NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:

$ oc get memcached/memcached-sample -o yaml
Update the deployment size.
Change the spec.size field in the Memcached CR from 3 to 5, either by editing the config/samples/cache_v1_memcached.yaml file and reapplying it, or by patching the resource directly:

$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           10m
memcached-sample                        5/5     5            5           3m
Delete the CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml

Clean up the resources that have been created as part of this tutorial.

If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.4.3. Project layout for Ansible-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.4.3.1. Ansible-based project layout
Ansible-based Operator projects generated using the operator-sdk init --plugins ansible command contain the following directories and files:
| File or directory | Purpose |
|---|---|
| Dockerfile | Dockerfile for building the container image for the Operator. |
| Makefile | Targets for building, publishing, deploying the container image that wraps the Operator binary, and targets for installing and uninstalling the custom resource definition (CRD). |
| PROJECT | YAML file containing metadata information for the Operator. |
| config/crd | Base CRD files and the kustomization.yaml settings. |
| config/default | Collects all Operator manifests for deployment. Used by the make deploy command. |
| config/manager | Controller manager deployment. |
| config/rbac | Role and role binding for leader election and authentication proxy. |
| config/samples | Sample resources created for the CRDs. |
| config/testing | Sample configurations for testing. |
| playbooks/ | A subdirectory for the playbooks to run. |
| roles/ | Subdirectory for the roles tree to run. |
| watches.yaml | Group/version/kind (GVK) of the resources to watch, and the Ansible invocation method. New entries are added by using the create api command. |
| requirements.yml | YAML file containing the Ansible collections and role dependencies to install during a build. |
| molecule/ | Molecule scenarios for end-to-end testing of your role and Operator. |
5.4.4. Updating projects for newer Operator SDK versions
OpenShift Container Platform 4.14 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.4.4.1. Updating Ansible-based Operator projects for Operator SDK 1.31.0
The following procedure updates an existing Ansible-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Make the following changes to your Operator’s Dockerfile:
Replace the ansible-operator-2.11-preview base image with the ansible-operator base image and update the version to 1.31.0, as shown in the following example:

Example Dockerfile

FROM quay.io/operator-framework/ansible-operator:v1.31.0

The update to Ansible 2.15.0 in version 1.30.0 of the Ansible Operator removed the following preinstalled Python modules:
- ipaddress
- openshift
- jmespath
- cryptography
- oauthlib

If your Operator depends on one of these removed Python modules, update your Dockerfile to install the required modules using the pip install command.
Edit your Operator project’s Makefile to add the OPERATOR_SDK_VERSION field and set it to v1.31.0-ocp, as shown in the following example:

Example Makefile

# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0-ocp

Update your requirements.yaml file to remove the community.kubernetes collection and to update the operator_sdk.util collection to version 0.5.0, as shown in the following example:

Example requirements.yaml file
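A minimal sketch of the updated file; collections other than operator_sdk.util are shown as they appear in a typical scaffold, and their exact versions can differ:

---
collections:
  - name: operator_sdk.util
    version: "0.5.0"
  - name: kubernetes.core
    version: "2.4.0"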
Remove all instances of the lint field from your molecule/kind/molecule.yml and molecule/default/molecule.yml files.
5.4.5. Ansible support in Operator SDK
5.4.5.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
| Field | Description |
|---|---|
| apiVersion | Version of the CR to be created. |
| kind | Kind of the CR to be created. |
| metadata | Kubernetes-specific metadata to be created. |
| spec | Key-value list of variables which are passed to Ansible. This field is empty by default. |
| status | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and updated by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information to the status of the CR. |
| annotations | Kubernetes-specific annotations to be appended to the CR. |
The following list of CR annotations modify the behavior of the Operator:
| Annotation | Description |
|---|---|
| ansible.sdk.operatorframework.io/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds. |
Example Ansible-based Operator annotation
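A sketch of such an annotation on a CR; the API group and kind are illustrative, and only the annotation key and value matter:

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
  annotations:
    ansible.sdk.operatorframework.io/reconcile-period: "30s"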
5.4.5.2. watches.yaml file
A group/version/kind (GVK) is a unique identifier for a Kubernetes API. The watches.yaml file contains a list of mappings from custom resources (CRs), identified by its GVK, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location at /opt/ansible/watches.yaml.
| Field | Description |
|---|---|
| group | Group of CR to watch. |
| version | Version of CR to watch. |
| kind | Kind of CR to watch. |
| role | Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value is /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field. |
| playbook | Path to the Ansible playbook added to the container. This playbook is expected to be a way to call roles. This field is mutually exclusive with the role field. |
| reconcilePeriod (optional) | The reconciliation interval, how often the role or playbook is run, for a given CR. |
| manageStatus (optional) | When set to true (the default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller. |
Example watches.yaml file
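A minimal sketch of such a file, using the Memcached resource and role from this tutorial:

---
- version: v1
  group: cache.example.com
  kind: Memcached
  role: memcached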
5.4.5.2.1. Advanced options
Advanced features can be enabled by adding them to your watches.yaml file per GVK. They can go below the group, version, kind and playbook or role fields.
Some features can be overridden per resource using an annotation on that CR. The options that can be overridden have the annotation specified below.
| Feature | YAML key | Description | Annotation for override | Default value |
|---|---|---|---|---|
| Reconcile period | reconcilePeriod | Time between reconcile runs for a particular CR. | ansible.sdk.operatorframework.io/reconcile-period | 1m |
| Manage status | manageStatus | Allows the Operator to manage the conditions section of each CR status section. |  | true |
| Watch dependent resources | watchDependentResources | Allows the Operator to dynamically watch resources that are created by Ansible. |  | true |
| Watch cluster-scoped resources | watchClusterScopedResources | Allows the Operator to watch cluster-scoped resources that are created by Ansible. |  | false |
| Max runner artifacts | maxRunnerArtifacts | Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. | ansible.sdk.operatorframework.io/max-runner-artifacts | 20 |
Example watches.yaml file with advanced options
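A sketch of a watches.yaml entry that sets several of these options; the group, kind, and values are illustrative:

- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False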
5.4.5.3. Extra variables sent to Ansible
Extra variables can be sent to Ansible, which are then managed by the Operator. The spec section of the custom resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook command.
The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.
For the following CR example:
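A minimal sketch of such a CR, assuming an illustrative Database kind in the app.example.com group:

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world 2"
  newParameter: "newParam"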
The structure passed to Ansible as extra variables is:
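Sketched as YAML for readability, and assuming the CR above was created in the default namespace, the extra variables look roughly like this; note the snake-case conversion of newParameter:

meta:
  name: "example"
  namespace: "default"
message: "Hello world 2"
new_parameter: "newParam"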
The message and newParameter fields are set in the top level as extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:
---
- debug:
msg: "name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}"
5.4.5.4. Ansible Runner directory
Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.
5.4.6. Kubernetes Collection for Ansible
To manage the lifecycle of your application on Kubernetes using Ansible, you can use the Kubernetes Collection for Ansible. This collection of Ansible modules allows a developer to either leverage their existing Kubernetes resource files written in YAML or express the lifecycle management in native Ansible.
One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
This section goes into detail on usage of the Kubernetes Collection. To get started, install the collection on your local workstation and test it using a playbook before moving on to using it within an Operator.
5.4.6.1. Installing the Kubernetes Collection for Ansible
You can install the Kubernetes Collection for Ansible on your local workstation.
Procedure
Install Ansible 2.15+:
$ sudo dnf install ansible

Install the Python Kubernetes client package:

$ pip install kubernetes

Install the Kubernetes Collection using one of the following methods:

You can install the collection directly from Ansible Galaxy:

$ ansible-galaxy collection install community.kubernetes

If you have already initialized your Operator, you might have a requirements.yml file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the community.kubernetes collection as well as the operator_sdk.util collection, which provides modules and plugins for Operator-specific functions.

To install the dependent modules from the requirements.yml file:

$ ansible-galaxy collection install -r requirements.yml
5.4.6.2. Testing the Kubernetes Collection locally
Operator developers can run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.
Prerequisites
- Initialize an Ansible-based Operator project and create an API that has a generated Ansible role by using the Operator SDK
- Install the Kubernetes Collection for Ansible
Procedure
In your Ansible-based Operator project directory, modify the roles/<kind>/tasks/main.yml file with the Ansible logic that you want. The roles/<kind>/ directory is created when you use the --generate-role flag while creating an API. The <kind> replaceable matches the kind that you specified for the API.

The following example creates and deletes a config map based on the value of a variable named state:
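A sketch of such a task, using the community.kubernetes.k8s module installed earlier; replace <operator_namespace> with a namespace that you can write to:

---
- name: set ConfigMap example-config to {{ state }}
  community.kubernetes.k8s:
    api_version: v1
    kind: ConfigMap
    name: example-config
    namespace: <operator_namespace>
    state: "{{ state }}"
  ignore_errors: true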
Modify the roles/<kind>/defaults/main.yml file to set state to present by default:

---
state: present

Create an Ansible playbook by creating a playbook.yml file in the top level of your project directory, and include your <kind> role:

---
- hosts: localhost
  roles:
    - <kind>

Run the playbook:

$ ansible-playbook playbook.yml
Verify that the config map was created:

$ oc get configmaps

Example output

NAME             DATA   AGE
example-config   0      2m1s

Rerun the playbook setting state to absent:

$ ansible-playbook playbook.yml --extra-vars state=absent

Verify that the config map was deleted:

$ oc get configmaps
5.4.6.3. Next steps
- See Using Ansible inside an Operator for details on triggering your custom Ansible logic inside of an Operator when a custom resource (CR) changes.
5.4.7. Using Ansible inside an Operator
After you are familiar with using the Kubernetes Collection for Ansible locally, you can trigger the same Ansible logic inside of an Operator when a custom resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the watches.yaml file.
5.4.7.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
| Field | Description |
|---|---|
| apiVersion | Version of the CR to be created. |
| kind | Kind of the CR to be created. |
| metadata | Kubernetes-specific metadata to be created. |
| spec | Key-value list of variables which are passed to Ansible. This field is empty by default. |
| status | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and updated by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information to the status of the CR. |
| annotations | Kubernetes-specific annotations to be appended to the CR. |
The following list of CR annotations modify the behavior of the Operator:
| Annotation | Description |
|---|---|
| ansible.sdk.operatorframework.io/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds. |
Example Ansible-based Operator annotation
5.4.7.2. Testing an Ansible-based Operator locally
You can test the logic inside of an Ansible-based Operator running locally by using the make run command from the top-level directory of your Operator project. The make run Makefile target runs the ansible-operator binary locally, which reads from the watches.yaml file and uses your ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s modules do.
You can customize the roles path by setting the ANSIBLE_ROLES_PATH environment variable or by using the --ansible-roles-path flag. If the role is not found in the ANSIBLE_ROLES_PATH value, the Operator looks for it in {{current directory}}/roles.
Prerequisites
- Ansible Runner v2.3.3+
- Ansible Runner HTTP Event Emitter plugin v1.0.0+
- Performed the previous steps for testing the Kubernetes Collection locally
Procedure
Install your custom resource definition (CRD) and proper role-based access control (RBAC) definitions for your custom resource (CR):
$ make install

Example output

/usr/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

Run the make run command:

$ make run

With the Operator now watching your CR for events, the creation of a CR triggers your Ansible role to run.
Note: Consider an example config/samples/<gvk>.yaml CR manifest:

apiVersion: <group>.example.com/v1alpha1
kind: <kind>
metadata:
  name: "<kind>-sample"

Because the spec field is not set, Ansible is invoked with no extra variables. Passing extra variables from a CR to Ansible is covered in another section. It is important to set reasonable defaults for the Operator.

Create an instance of your CR with the default variable state set to present:

$ oc apply -f config/samples/<gvk>.yaml
Check that the example-config config map was created:

$ oc get configmaps

Example output

NAME             STATUS   AGE
example-config   Active   3s
Modify your config/samples/<gvk>.yaml file to set the state field to absent. For example:
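Building on the sample manifest shown in the preceding note, a sketch of the modified CR:

apiVersion: <group>.example.com/v1alpha1
kind: <kind>
metadata:
  name: "<kind>-sample"
spec:
  state: absent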
Apply the changes:

$ oc apply -f config/samples/<gvk>.yaml

Confirm that the config map is deleted:

$ oc get configmap
5.4.7.3. Testing an Ansible-based Operator on the cluster
After you have tested your custom Ansible logic locally inside of an Operator, you can test the Operator inside of a pod on an OpenShift Container Platform cluster, which is preferred for production use.
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.4.7.4. Ansible logs
Ansible-based Operators provide logs about the Ansible run, which can be useful for debugging your Ansible tasks. The logs can also contain detailed information about the internals of the Operator and its interactions with Kubernetes.
5.4.7.4.1. Viewing Ansible logs
Prerequisites
- Ansible-based Operator running as a deployment on a cluster
Procedure
To view logs from an Ansible-based Operator, run the following command:
$ oc logs deployment/<project_name>-controller-manager \
    -c manager \
    -n <namespace>
5.4.7.4.2. Enabling full Ansible results in logs
You can set the environment variable ANSIBLE_DEBUG_LOGS to True to enable checking the full Ansible result in logs, which can be helpful when debugging.
Procedure
Edit the config/manager/manager.yaml and config/default/manager_auth_proxy_patch.yaml files to include the following configuration:

containers:
- name: manager
  env:
  - name: ANSIBLE_DEBUG_LOGS
    value: "True"
5.4.7.4.3. Enabling verbose debugging in logs
While developing an Ansible-based Operator, it can be helpful to enable additional debugging in logs.
Procedure
Add the ansible.sdk.operatorframework.io/verbosity annotation to your custom resource to enable the verbosity level that you want. For example:
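A sketch of a CR with the verbosity annotation applied; the kind and verbosity level shown are illustrative:

apiVersion: "cache.example.com/v1"
kind: "Memcached"
metadata:
  name: "memcached-sample"
  annotations:
    "ansible.sdk.operatorframework.io/verbosity": "4"
spec:
  size: 3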
5.4.8. Custom resource status management
5.4.8.1. About custom resource status in Ansible-based Operators
Ansible-based Operators automatically update custom resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:
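A sketch of the shape of that generic status, with illustrative values; the ansibleResult block summarizes the previous run and the condition reports its outcome:

status:
  conditions:
  - ansibleResult:
      changed: 3
      completion: 2024-12-03T13:45:57.13329
      failures: 1
      ok: 6
      skipped: 0
    lastTransitionTime: "2024-12-03T13:45:57Z"
    message: 'Status code was -1 and not [200]: Request failed'
    reason: Failed
    status: "False"
    type: Failure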
Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.
By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.
5.4.8.2. Tracking custom resource status manually
You can use the operator_sdk.util collection to modify your Ansible-based Operator to track custom resource (CR) status manually from your application.
Prerequisites
- Ansible-based Operator project created by using the Operator SDK
Procedure
Update the watches.yaml file with a manageStatus field set to false:

  - version: v1
    group: api.example.com
    kind: <kind>
    role: <role>
    manageStatus: false

Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key test and value data, operator_sdk.util can be used as shown in the sketch at the end of this procedure.

You can declare collections in the meta/main.yml file for the role, which is included for scaffolded Ansible-based Operators:

  collections:
    - operator_sdk.util

After declaring collections in the role meta, you can invoke the k8s_status module directly:

  k8s_status:
    ...
    status:
      key1: value1
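The following is a minimal sketch of the fully qualified operator_sdk.util.k8s_status task referenced earlier in this procedure; the api_version and the ansible_operator_meta variable lookups are assumptions based on a typical scaffolded role:

  - operator_sdk.util.k8s_status:
      api_version: api.example.com/v1
      kind: <kind>
      name: "{{ ansible_operator_meta.name }}"
      namespace: "{{ ansible_operator_meta.namespace }}"
      status:
        test: data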
5.5. Helm-based Operators Copy linkLink copied to clipboard!
5.5.1. Getting started with Operator SDK for Helm-based Operators Copy linkLink copied to clipboard!
The Operator SDK includes options for generating an Operator project that leverages existing Helm charts to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running a Helm-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Helm-based Operator for Nginx and deploy it to a cluster.
5.5.1.1. Prerequisites Copy linkLink copied to clipboard!
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.1.2. Creating and deploying Helm-based Operators Copy linkLink copied to clipboard!
You can build and deploy a simple Helm-based Operator for Nginx by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir nginx-operator

Change into the project directory:

$ cd nginx-operator

Run the operator-sdk init command with the helm plugin to initialize the project:

$ operator-sdk init \
    --plugins=helm
Create an API.
Create a simple Nginx API:
$ operator-sdk create api \
    --group demo \
    --version v1 \
    --kind Nginx

This API uses the built-in Helm chart boilerplate from the helm create command.

Build and push the Operator image.

Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.
Install the CRD:
$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

Add a security context constraint (SCC).

The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following SCC to the service account for the nginx-sample pod:

$ oc adm policy add-scc-to-user \
    anyuid system:serviceaccount:nginx-operator-system:nginx-sample

Create a sample custom resource (CR).

Create a sample CR:

$ oc apply -f config/samples/demo_v1_nginx.yaml \
    -n nginx-operator-system

Watch for the CR to be reconciled by the Operator:

$ oc logs deployment.apps/nginx-operator-controller-manager \
    -c manager \
    -n nginx-operator-system
Delete a CR.
Delete a CR by running the following command:
$ oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system

Clean up.

Run the following command to clean up the resources that have been created as part of this procedure:

$ make undeploy
5.5.1.3. Next steps Copy linkLink copied to clipboard!
- See Operator SDK tutorial for Helm-based Operators for a more in-depth walkthrough on building a Helm-based Operator.
5.5.2. Operator SDK tutorial for Helm-based Operators Copy linkLink copied to clipboard!
Operator developers can take advantage of Helm support in the Operator SDK to build an example Helm-based Operator for Nginx and manage its lifecycle. This tutorial walks through the following process:
- Create an Nginx deployment
- Ensure that the deployment size is the same as specified by the Nginx custom resource (CR) spec
- Update the Nginx CR status using the status writer with the names of the nginx pods
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK
  - The operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM)
  - Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Helm-based Operators.
5.5.2.1. Prerequisites Copy linkLink copied to clipboard!
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.2.2. Creating a project Copy linkLink copied to clipboard!
Use the Operator SDK CLI to create a project called nginx-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/nginx-operator

Change to the directory:

$ cd $HOME/projects/nginx-operator

Run the operator-sdk init command with the helm plugin to initialize the project:
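A minimal sketch of such an init command follows, assuming the example.com domain and the demo group, v1 version, and Nginx kind referenced in the note below; the exact flags used by the original example may differ:

$ operator-sdk init \
    --plugins=helm \
    --domain=example.com \
    --group=demo \
    --version=v1 \
    --kind=Nginx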
Note: By default, the helm plugin initializes a project using a boilerplate Helm chart. You can use additional flags, such as the --helm-chart flag, to initialize a project using an existing Helm chart.

The init command creates the nginx-operator project specifically for watching a resource with API version example.com/v1 and kind Nginx.

For Helm-based projects, the init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator.
5.5.2.2.1. Existing Helm charts Copy linkLink copied to clipboard!
Instead of creating your project with a boilerplate Helm chart, you can alternatively use an existing chart, either from your local file system or a remote chart repository, by using the following flags:
- --helm-chart
- --helm-chart-repo
- --helm-chart-version
If the --helm-chart flag is specified, the --group, --version, and --kind flags become optional. If left unset, the following default values are used:
| Flag | Value |
|---|---|
| --kind | Deduced from the specified chart |
If the --helm-chart flag specifies a local chart archive, for example example-chart-1.2.0.tgz, or directory, the chart is validated and unpacked or copied into the project. Otherwise, the Operator SDK attempts to fetch the chart from a remote repository.
If a custom repository URL is not specified by the --helm-chart-repo flag, the following chart reference formats are supported:
| Format | Description |
|---|---|
| <repo>/<name> | Fetch the Helm chart named <name> from the chart repository <repo>. |
| <url> | Fetch the Helm chart archive at the specified URL. |
If a custom repository URL is specified by --helm-chart-repo, the following chart reference format is supported:
| Format | Description |
|---|---|
| <name> | Fetch the Helm chart named <name> from the repository specified by the --helm-chart-repo URL value. |
If the --helm-chart-version flag is unset, the Operator SDK fetches the latest available version of the Helm chart. Otherwise, it fetches the specified version. The optional --helm-chart-version flag is not used when the chart specified with the --helm-chart flag refers to a specific version, for example when it is a local path or a URL.
For more details and examples, run:
$ operator-sdk init --plugins helm --help
5.5.2.2.2. PROJECT file Copy linkLink copied to clipboard!
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Helm. For example:
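A minimal sketch of what such a PROJECT file can look like for this Helm-based project; the layout key and field values are assumptions based on the project created earlier in this tutorial:

domain: example.com
layout:
- helm.sdk.operatorframework.io/v1
projectName: nginx-operator
version: "3"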
5.5.2.3. Understanding the Operator logic Copy linkLink copied to clipboard!
For this example, the nginx-operator project executes the following reconciliation logic for each Nginx custom resource (CR):
- Create an Nginx deployment if it does not exist.
- Create an Nginx service if it does not exist.
- Create an Nginx ingress if it is enabled and does not exist.
- Ensure that the deployment, service, and optional ingress match the desired configuration as specified by the Nginx CR, for example the replica count, image, and service type.
By default, the nginx-operator project watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:
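A minimal sketch of such a watches.yaml entry; the group value demo.example.com and the chart path are assumptions consistent with the API created in this tutorial:

- group: demo.example.com
  version: v1
  kind: Nginx
  chart: helm-charts/nginx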
5.5.2.3.1. Sample Helm chart Copy linkLink copied to clipboard!
When a Helm Operator project is created, the Operator SDK creates a sample Helm chart that contains a set of templates for a simple Nginx release.
For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.
If you are not already familiar with Helm charts, review the Helm developer documentation.
5.5.2.3.2. Modifying the custom resource spec Copy linkLink copied to clipboard!
Helm uses a concept called values to provide customizations to the defaults of a Helm chart, which are defined in the values.yaml file.
You can override these defaults by setting the desired values in the custom resource (CR) spec. You can use the number of replicas as an example.
Procedure
The helm-charts/nginx/values.yaml file has a value called replicaCount set to 1 by default. To have two Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

Edit the config/samples/demo_v1_nginx.yaml file to set replicaCount: 2. Similarly, the default service port is set to 80. To use 8080, edit the config/samples/demo_v1_nginx.yaml file to set spec.port: 8080, which adds the service port override:
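A minimal sketch of the edited sample CR with both overrides applied; the apiVersion and the metadata name nginx-sample are assumptions that match the sample used elsewhere in this tutorial:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 2
  port: 8080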
The Helm Operator applies the entire spec as if it were the contents of a values file, similar to the helm install -f ./overrides.yaml command.
5.5.2.4. Enabling proxy support Copy linkLink copied to clipboard!
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
This tutorial uses HTTP_PROXY as an example environment variable.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Edit the watches.yaml file to include overrides based on an environment variable by adding the overrideValues field:
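A minimal sketch of such an overrideValues entry; the group and chart values are assumptions consistent with this tutorial's project:

- group: demo.example.com
  version: v1
  kind: Nginx
  chart: helm-charts/nginx
  overrideValues:
    proxy.http: $HTTP_PROXY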
Add the proxy.http value in the helm-charts/nginx/values.yaml file:

  ...
  proxy:
    http: ""
    https: ""
    no_proxy: ""

To make sure the chart template supports using the variables, edit the chart template in the helm-charts/nginx/templates/deployment.yaml file to contain the following:
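A minimal sketch of the kind of container env entry the chart template needs; the surrounding template structure is abbreviated and the lowercase http_proxy variable name is an assumption:

  containers:
    - name: {{ .Chart.Name }}
      ...
      env:
        - name: http_proxy
          value: "{{ .Values.proxy.http }}"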
Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:
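A minimal sketch of the corresponding env entry on the Operator deployment; the proxy URL value is a placeholder:

  containers:
    - name: manager
      ...
      env:
        - name: HTTP_PROXY
          value: "http://proxy.example.com:8080"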
5.5.2.5. Running the Operator Copy linkLink copied to clipboard!
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.5.2.5.1. Running locally outside the cluster Copy linkLink copied to clipboard!
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your
~/.kube/config file and run the Operator locally:

$ make install run
5.5.2.5.2. Running as a deployment on the cluster Copy linkLink copied to clipboard!
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by the --platform flag. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.5.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager Copy linkLink copied to clipboard!
5.5.2.5.3.1. Bundling an Operator Copy linkLink copied to clipboard!
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.14+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by the --platform flag. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the
make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.

Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.5.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.14)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    -n <namespace> \
    <registry>/<user>/<bundle_image_name>:<tag>

- The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
- Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
- If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.5.2.6. Creating a custom resource Copy linkLink copied to clipboard!
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Nginx Operator, which provides the Nginx CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the
make deploy command:

$ oc project nginx-operator-system

Edit the sample Nginx CR manifest at config/samples/demo_v1_nginx.yaml to contain the following specification:
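A minimal sketch of such a specification; the apiVersion is an assumption, and replicaCount: 3 matches the three-replica deployment shown in the verification output later in this procedure:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 3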
The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following security context constraint (SCC) to the service account for the nginx-sample pod:

$ oc adm policy add-scc-to-user \
    anyuid system:serviceaccount:nginx-operator-system:nginx-sample

Create the CR:

$ oc apply -f config/samples/demo_v1_nginx.yaml
Ensure that the Nginx Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           8m
nginx-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Nginx pod names.
Check the pods:
$ oc get pods

Example output

NAME                           READY   STATUS    RESTARTS   AGE
nginx-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
nginx-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
nginx-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:

$ oc get nginx/nginx-sample -o yaml
Update the deployment size.
Update the config/samples/demo_v1_nginx.yaml file to change the spec.replicaCount field in the Nginx CR from 3 to 5:

$ oc patch nginx nginx-sample \
    -p '{"spec":{"replicaCount": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:

$ oc get deployments

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           10m
nginx-sample                        5/5     5            5           3m
Delete the CR by running the following command:
$ oc delete -f config/samples/demo_v1_nginx.yaml

Clean up the resources that have been created as part of this tutorial.

If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.5.3. Project layout for Helm-based Operators Copy linkLink copied to clipboard!
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.5.3.1. Helm-based project layout Copy linkLink copied to clipboard!
Helm-based Operator projects generated using the operator-sdk init --plugins helm command contain the following directories and files:
| File/folders | Purpose |
|---|---|
| config/ | Kustomize manifests for deploying the Operator on a Kubernetes cluster. |
| helm-charts/ | Helm chart initialized with the operator-sdk create api command. |
| Dockerfile | Used to build the Operator image with the make docker-build command. |
| watches.yaml | Group/version/kind (GVK) and Helm chart location. |
| Makefile | Targets used to manage the project. |
| PROJECT | YAML file containing metadata information for the Operator. |
5.5.4. Updating Helm-based projects for newer Operator SDK versions Copy linkLink copied to clipboard!
OpenShift Container Platform 4.14 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.5.4.1. Updating Helm-based Operator projects for Operator SDK 1.31.0 Copy linkLink copied to clipboard!
The following procedure updates an existing Helm-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Edit your Operator’s Dockerfile to update the Helm Operator version to 1.31.0, as shown in the following example:
Example Dockerfile
FROM quay.io/operator-framework/helm-operator:v1.31.0

Update the Helm Operator version from 1.28.0 to 1.31.0.
Edit your Operator project’s Makefile to add the
OPERATOR_SDK_VERSION field and set it to v1.31.0-ocp, as shown in the following example:

Example Makefile

# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0-ocp

If you use a custom service account for deployment, define the following role to require a watch operation on your secrets resource. Add the rules stanza to create a watch operation for your secrets resource, as shown in the following example:

Example config/rbac/role.yaml file
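A minimal sketch of such a rules stanza; only the secrets watch rule is shown, and the rest of the role is abbreviated:

  rules:
    ...
    # Watch operation required for the secrets resource
    - apiGroups:
        - ""
      resources:
        - secrets
      verbs:
        - watch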
5.5.5. Helm support in Operator SDK Copy linkLink copied to clipboard!
5.5.5.1. Helm charts Copy linkLink copied to clipboard!
One of the Operator SDK options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but can be sufficient for a surprising amount of use-cases as shown by the proliferation of Helm charts built by the Kubernetes community.
The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the spec field of the object is a list of configuration options that are typically described in the Helm values.yaml file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml), you can express them within a custom resource (CR), which, as a native Kubernetes object, enables the benefits of RBAC applied to it and an audit trail.
Consider the following example of a simple CR called Tomcat:
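A minimal sketch of such a CR; the apiVersion and name are placeholders, and replicaCount: 2 matches the value discussed next:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2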
The replicaCount value, 2 in this case, is propagated into the template of the chart where the following is used:
{{ .Values.replicaCount }}
After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:
$ oc get Tomcats --all-namespaces
There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a custom resource definition (CRD). Because it obeys RBAC, you can more easily prevent production changes.
5.5.6. Operator SDK tutorial for Hybrid Helm Operators Copy linkLink copied to clipboard!
The standard Helm-based Operator support in the Operator SDK has limited functionality compared to the Go-based and Ansible-based Operator support that has reached the Auto Pilot capability (level V) in the Operator maturity model.
The Hybrid Helm Operator enhances the existing Helm-based support’s abilities through Go APIs. With this hybrid approach of Helm and Go, the Operator SDK enables Operator authors to use the following process:
- Generate a default structure for, or scaffold, a Go API in the same project as Helm.
- Configure the Helm reconciler in the main.go file of the project, through the libraries provided by the Hybrid Helm Operator.
The Hybrid Helm Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
This tutorial walks through the following process using the Hybrid Helm Operator:
- Create a Memcached deployment through a Helm chart if it does not exist
- Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
- Create a MemcachedBackup deployment by using the Go API
5.5.6.1. Prerequisites Copy linkLink copied to clipboard!
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.6.2. Creating a project Copy linkLink copied to clipboard!
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/github.com/example/memcached-operator

Change to the directory:

$ cd $HOME/github.com/example/memcached-operator

Run the operator-sdk init command to initialize the project. This example uses a domain of my.domain so that all API groups are <group>.my.domain:

$ operator-sdk init \
    --plugins=hybrid.helm.sdk.operatorframework.io \
    --project-version="3" \
    --domain my.domain \
    --repo=github.com/example/memcached-operator

The init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the chart's default manifests. Verify that the rules generated in the config/rbac/role.yaml file meet your Operator's permission requirements.
Additional resources
- This procedure creates a project structure that is compatible with both Helm and Go APIs. To learn more about the project directory structure, see Project layout.
5.5.6.3. Creating a Helm API Copy linkLink copied to clipboard!
Use the Operator SDK CLI to create a Helm API.
Procedure
Run the following command to create a Helm API with group
cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --plugins helm.sdk.operatorframework.io/v1 \
    --group cache \
    --version v1 \
    --kind Memcached
This procedure also configures your Operator project to watch the Memcached resource with API version v1 and scaffolds a boilerplate Helm chart. Instead of creating the project from the boilerplate Helm chart scaffolded by the Operator SDK, you can alternatively use an existing chart from your local file system or remote chart repository.
For more details and examples for creating Helm API based on existing or new charts, run the following command:
$ operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help
Additional resources
5.5.6.3.1. Operator logic for the Helm API Copy linkLink copied to clipboard!
By default, your scaffolded Operator project watches Memcached resource events as shown in the watches.yaml file and executes Helm releases using the specified chart.
Example 5.2. Example watches.yaml file
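A minimal sketch of such a watches.yaml entry; the group cache.my.domain and the chart path are assumptions based on the API created earlier in this tutorial:

- group: cache.my.domain
  version: v1
  kind: Memcached
  chart: helm-charts/memcached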
Additional resources
- For detailed documentation on customizing the Helm Operator logic through the chart, see Understanding the Operator logic.
5.5.6.3.2. Custom Helm reconciler configurations using provided library APIs Copy linkLink copied to clipboard!
A disadvantage of existing Helm-based Operators is the inability to configure the Helm reconciler, because it is abstracted from users. For a Helm-based Operator to reach the Seamless Upgrades capability (level II and later) that reuses an already existing Helm chart, a hybrid between the Go and Helm Operator types adds value.
The APIs provided in the helm-operator-plugins library allow Operator authors to make the following configurations:
- Customize value mapping based on cluster state
- Execute code in specific events by configuring the reconciler’s event recorder
- Customize the reconciler’s logger
- Set up Install, Upgrade, and Uninstall annotations to enable Helm's actions to be configured based on the annotations found in custom resources watched by the reconciler
- Configure the reconciler to run with Pre and Post hooks
The above configurations to the reconciler can be done in the main.go file:
Example main.go file
5.5.6.4. Creating a Go API Copy linkLink copied to clipboard!
Use the Operator SDK CLI to create a Go API.
Procedure
Run the following command to create a Go API with group
cache, version v1, and kind MemcachedBackup:
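A minimal sketch of such a command; because the next step shows interactive prompts for the resource and controller, the --resource and --controller flags are assumed to be omitted:

$ operator-sdk create api \
    --group=cache \
    --version=v1 \
    --kind=MemcachedBackup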
When prompted, enter y for creating both resource and controller:

Create Resource [y/n]
y
Create Controller [y/n]
y
This procedure generates the MemcachedBackup resource API at api/v1/memcachedbackup_types.go and the controller at controllers/memcachedbackup_controller.go.
5.5.6.4.1. Defining the API Copy linkLink copied to clipboard!
Define the API for the MemcachedBackup custom resource (CR).
Represent this Go API by defining the MemcachedBackup type, which will have a MemcachedBackupSpec.Size field to set the quantity of Memcached backup instances (CRs) to be deployed, and a MemcachedBackupStatus.Nodes field to store a CR’s pod names.
The Node field is used to illustrate an example of a Status field.
Procedure
Define the API for the
MemcachedBackup CR by modifying the Go type definitions in the api/v1/memcachedbackup_types.go file to have the following spec and status:

Example 5.3. Example api/v1/memcachedbackup_types.go file

Update the generated code for the resource type:
$ make generate

Tip: After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

After the API is defined with spec and status fields and CRD validation markers, generate and update the CRD manifests:

$ make manifests
This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.my.domain_memcachedbackups.yaml file.
5.5.6.4.2. Controller implementation Copy linkLink copied to clipboard!
The controller in this tutorial performs the following actions:
- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the memcached pods.
For a detailed explanation on how to configure the controller to perform the above mentioned actions, see Implementing the controller in the Operator SDK tutorial for standard Go-based Operators.
5.5.6.4.3. Differences in main.go Copy linkLink copied to clipboard!
For standard Go-based Operators and the Hybrid Helm Operator, the main.go file handles the initialization and running of the Manager program for the Go API. For the Hybrid Helm Operator, however, the main.go file also exposes the logic for loading the watches.yaml file and configuring the Helm reconciler.
Example 5.4. Example main.go file
The manager is initialized with both Helm and Go reconcilers:
Example 5.5. Example Helm and Go reconcilers
5.5.6.4.4. Permissions and RBAC manifests Copy linkLink copied to clipboard!
The controller requires certain role-based access control (RBAC) permissions to interact with the resources it manages. For the Go API, these are specified with RBAC markers, as shown in the Operator SDK tutorial for standard Go-based Operators.
For the Helm API, the permissions are scaffolded by default in roles.yaml. Currently, however, due to a known issue when the Go API is scaffolded, the permissions for the Helm API are overwritten. As a result of this issue, ensure that the permissions defined in roles.yaml match your requirements.
This known issue is being tracked in https://github.com/operator-framework/helm-operator-plugins/issues/142.
The following is an example role.yaml for a Memcached Operator:
Example 5.6. Example role.yaml for a Memcached Operator
Additional resources
5.5.6.5. Running locally outside the cluster Copy linkLink copied to clipboard!
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your
~/.kube/config file and run the Operator locally:

$ make install run
5.5.6.6. Running as a deployment on the cluster Copy linkLink copied to clipboard!
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following
make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by the --platform flag. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.5.6.7. Creating custom resources Copy linkLink copied to clipboard!
After your Operator is installed, you can test it by creating custom resources (CRs) that are now provided on the cluster by the Operator.
Procedure
Change to the namespace where your Operator is installed:
$ oc project <project_name>-system

Update the sample Memcached CR manifest at the config/samples/cache_v1_memcached.yaml file by updating the replicaCount field to 3:

Example 5.7. Example config/samples/cache_v1_memcached.yaml file
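A minimal sketch of the updated sample manifest; the apiVersion and name are assumptions consistent with the cache group, the my.domain domain, and the pod names shown in the verification output:

apiVersion: cache.my.domain/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  ...
  replicaCount: 3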
Create the Memcached CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml

Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get pods

Example output

NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          18m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          18m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          18m

Update the sample MemcachedBackup CR manifest at the config/samples/cache_v1_memcachedbackup.yaml file by updating the size to 2:

Example 5.8. Example config/samples/cache_v1_memcachedbackup.yaml file
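A minimal sketch of the updated sample manifest; the apiVersion and name are assumptions consistent with the Go API created in this tutorial:

apiVersion: cache.my.domain/v1
kind: MemcachedBackup
metadata:
  name: memcachedbackup-sample
spec:
  size: 2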
Create the MemcachedBackup CR:

$ oc apply -f config/samples/cache_v1_memcachedbackup.yaml

Ensure that the count of memcachedbackup pods is the same as specified in the CR:

$ oc get pods

Example output

NAME                                      READY   STATUS    RESTARTS   AGE
memcachedbackup-sample-8649699989-4bbzg   1/1     Running   0          22m
memcachedbackup-sample-8649699989-mq6mx   1/1     Running   0          22m
You can update the spec in each of the above CRs, and then apply them again. The controller reconciles again and ensures that the size of the pods is as specified in the spec of the respective CRs.

Clean up the resources that have been created as part of this tutorial:

Delete the Memcached resource:

$ oc delete -f config/samples/cache_v1_memcached.yaml

Delete the MemcachedBackup resource:

$ oc delete -f config/samples/cache_v1_memcachedbackup.yaml

If you used the make deploy command to test the Operator, run the following command:

$ make undeploy
5.5.6.8. Project layout Copy linkLink copied to clipboard!
The Hybrid Helm Operator scaffolding is customized to be compatible with both Helm and Go APIs.
| File/folders | Purpose |
|---|---|
| Dockerfile | Instructions used by a container engine to build your Operator image with the make docker-build target. |
| Makefile | Build file with helper targets to help you work with your project. |
| PROJECT | YAML file containing metadata information for the Operator. Represents the project's configuration and is used to track useful information for the CLI and plugins. |
| bin/ | Contains useful binaries, such as the manager binary used to run your project locally and the kustomize utility used for project configuration. |
| config/ | Contains configuration files, including all Kustomize manifests, to launch your Operator project on a cluster. Plugins might use it to provide functionality. For example, for the Operator SDK to help create your Operator bundle, the CLI looks up the CRDs and CRs which are scaffolded in this directory. |
| api/ | Contains the Go API definition. |
| controllers/ | Contains the controllers for the Go API. |
| hack/ | Contains utility files, such as the file used to scaffold the license header for your project files. |
| main.go | Main program of the Operator. Instantiates a new manager that registers all custom resource definitions (CRDs) in the api/ directory and starts all controllers. |
| helm-charts/ | Contains the Helm charts which can be specified using the create api command with the Helm plugin. |
| watches.yaml | Contains group/version/kind (GVK) and Helm chart location. Used to configure the Helm watches. |
5.5.7. Updating Hybrid Helm-based projects for newer Operator SDK versions Copy linkLink copied to clipboard!
OpenShift Container Platform 4.14 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.5.7.1. Updating Hybrid Helm-based Operator projects for Operator SDK 1.31.0 Copy linkLink copied to clipboard!
The following procedure updates an existing Hybrid Helm-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Edit your Operator project’s Makefile to add the
OPERATOR_SDK_VERSION field and set it to v1.31.0-ocp, as shown in the following example:

Example Makefile

# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0-ocp
5.6. Java-based Operators Copy linkLink copied to clipboard!
5.6.1. Getting started with Operator SDK for Java-based Operators Copy linkLink copied to clipboard!
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To demonstrate the basics of setting up and running a Java-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Java-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.6.1.1. Prerequisites Copy linkLink copied to clipboard!
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Java 11+
- Maven 3.6.3+
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.6.1.2. Creating and deploying Java-based Operators Copy linkLink copied to clipboard!
You can build and deploy a simple Java-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator

Change into the project directory:

$ cd memcached-operator

Run the operator-sdk init command with the quarkus plugin to initialize the project:

$ operator-sdk init \
    --plugins=quarkus \
    --domain=example.com \
    --project-name=memcached-operator
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --plugins quarkus \
    --group cache \
    --version v1 \
    --kind Memcached

Build and push the Operator image.

Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.
Install the CRD:
$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

Create a sample custom resource (CR).

Create a sample CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

Watch for the CR to be reconciled by the Operator:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Delete a CR.
Delete a CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system

Clean up.

Run the following command to clean up the resources that have been created as part of this procedure:

$ make undeploy
5.6.1.3. Next steps Copy linkLink copied to clipboard!
- See Operator SDK tutorial for Java-based Operators for a more in-depth walkthrough on building a Java-based Operator.
5.6.2. Operator SDK tutorial for Java-based Operators Copy linkLink copied to clipboard!
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator developers can take advantage of Java programming language support in the Operator SDK to build an example Java-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK
  - The operator-sdk CLI tool and java-operator-sdk library API
- Operator Lifecycle Manager (OLM)
  - Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Java-based Operators.
5.6.2.1. Prerequisites Copy linkLink copied to clipboard!
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.14+ installed
- Java 11+
- Maven 3.6.3+
- Logged into an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.6.2.2. Creating a project

Use the Operator SDK CLI to create a project called memcached-operator.

Procedure

Create a directory for the project:

$ mkdir -p $HOME/projects/memcached-operator

Change to the directory:

$ cd $HOME/projects/memcached-operator

Run the operator-sdk init command with the quarkus plugin to initialize the project:

$ operator-sdk init \
    --plugins=quarkus \
    --domain=example.com \
    --project-name=memcached-operator
5.6.2.2.1. PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Java. For example:

domain: example.com
layout:
- quarkus.javaoperatorsdk.io/v1-alpha
projectName: memcached-operator
version: "3"
5.6.2.3. Creating an API and controller

Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.

Procedure

Run the following command to create an API:

$ operator-sdk create api \
    --plugins=quarkus \
    --group=cache \
    --version=v1 \
    --kind=Memcached

Verification

Run the tree command to view the file structure:

$ tree
5.6.2.3.1. Defining the API

Define the API for the Memcached custom resource (CR).

Procedure

Edit the following files that were generated as part of the create api process:

Update the following attributes in the MemcachedSpec.java file to define the desired state of the Memcached CR.

Update the following attributes in the MemcachedStatus.java file to define the observed state of the Memcached CR.

Note: The example below illustrates a Node status field. It is recommended that you use typical status properties in practice.

Update the Memcached.java file to define the Schema for Memcached APIs, which extends both the MemcachedSpec.java and MemcachedStatus.java files:

@Version("v1")
@Group("cache.example.com")
public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
5.6.2.3.2. Generating CRD manifests

After the API is defined with the MemcachedSpec and MemcachedStatus files, you can generate CRD manifests.

Procedure

Run the following command from the memcached-operator directory to generate the CRD:

$ mvn clean install

Verification

Verify the contents of the CRD in the target/kubernetes/memcacheds.cache.example.com-v1.yml file as shown in the following example:

$ cat target/kubernetes/memcacheds.cache.example.com-v1.yml
5.6.2.3.3. Creating a Custom Resource

After generating the CRD manifests, you can create the custom resource (CR).

Procedure

Create a Memcached CR called memcached-sample.yaml.
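A minimal sketch of what memcached-sample.yaml might contain, assuming the spec exposes only the size attribute defined in MemcachedSpec.java; the size value here is arbitrary:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Assumed field name: matches the size attribute from MemcachedSpec.java
  size: 1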
5.6.2.4. Implementing the controller

After creating a new API and controller, you can implement the controller logic.

Procedure

Append the following dependency to the pom.xml file:

<dependency>
  <groupId>commons-collections</groupId>
  <artifactId>commons-collections</artifactId>
  <version>3.2.2</version>
</dependency>

For this example, replace the generated controller file MemcachedReconciler.java with the example implementation (Example 5.9).

The example controller runs the following reconciliation logic for each Memcached custom resource (CR):

- Creates a Memcached deployment if it does not exist.
- Ensures that the deployment size matches the size specified by the Memcached CR spec.
- Updates the Memcached CR status with the names of the memcached pods.
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
5.6.2.4.1. Reconcile loop

Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Deployment argument, as shown in the following example:

Deployment deployment = client.apps()
        .deployments()
        .inNamespace(resource.getMetadata().getNamespace())
        .withName(resource.getMetadata().getName())
        .get();

As shown in the following example, if the Deployment is null, the deployment needs to be created. After you create the Deployment, you can determine if reconciliation is necessary. If there is no need for reconciliation, return the value of UpdateControl.noUpdate(); otherwise, return the value of UpdateControl.updateStatus(resource):

if (deployment == null) {
    Deployment newDeployment = createMemcachedDeployment(resource);
    client.apps().deployments().create(newDeployment);
    return UpdateControl.noUpdate();
}

After getting the Deployment, get the current and required replicas, as shown in the following example:

int currentReplicas = deployment.getSpec().getReplicas();
int requiredReplicas = resource.getSpec().getSize();

If currentReplicas does not match requiredReplicas, you must update the Deployment, as shown in the following example:

if (currentReplicas != requiredReplicas) {
    deployment.getSpec().setReplicas(requiredReplicas);
    client.apps().deployments().createOrReplace(deployment);
    return UpdateControl.noUpdate();
}

The following example shows how to obtain the list of pods and their names:

Check if the resources were created and verify the pod names against the Memcached resource. If a mismatch exists in either of these conditions, perform a reconciliation as shown in the following example:
5.6.2.4.2. Defining labelsForMemcached

labelsForMemcached is a utility method that returns the map of labels to attach to the resources.

5.6.2.4.3. Defining createMemcachedDeployment

The createMemcachedDeployment method uses the fabric8 DeploymentBuilder class.

5.6.2.5. Running the Operator

There are three ways you can use the Operator SDK CLI to build and run your Operator:

- Run locally outside the cluster as a Java program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.6.2.5.1. Running locally outside the cluster

You can run your Operator project as a Java program outside of the cluster. This is useful for development purposes to speed up deployment and testing.

Procedure

Run the following command to compile the Operator:

$ mvn clean install

Run the following command to install the CRD to the default namespace:

$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml

Example output

customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

Create a file called rbac.yaml as shown in the following example:
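A minimal sketch of a possible rbac.yaml, assuming the Operator runs under a service account named memcached-quarkus-operator-operator in the default namespace; the binding name is arbitrary:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  # Arbitrary name for the binding
  name: memcached-operator-admin
subjects:
# Assumed service account name and namespace; match them to your deployment
- kind: ServiceAccount
  name: memcached-quarkus-operator-operator
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io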
Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator by applying the rbac.yaml file:

$ oc apply -f rbac.yaml

Enter the following command to run the Operator:

$ java -jar target/quarkus-app/quarkus-run.jar

Note: The java command runs the Operator and remains running until you end the process. You need another terminal to complete the rest of these commands.

Apply the memcached-sample.yaml file with the following command:

$ kubectl apply -f memcached-sample.yaml

Example output

memcached.cache.example.com/memcached-sample created

Verification

Run the following command to confirm that the pod has started:

$ oc get all

Example output

NAME                                READY   STATUS    RESTARTS   AGE
pod/memcached-sample-6c765df685-mfqnz   1/1     Running   0          18s
5.6.2.5.2. Running as a deployment on the cluster

You can run your Operator project as a deployment on your cluster.

Procedure

Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

Run the following command to install the CRD to the default namespace:

$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml

Example output

customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

Create a file called rbac.yaml as shown in the following example:
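As in the previous section, a minimal sketch of rbac.yaml, assuming the memcached-quarkus-operator-operator service account in the default namespace and an arbitrary binding name:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: memcached-operator-admin
subjects:
# Assumed service account name and namespace; match them to your deployment
- kind: ServiceAccount
  name: memcached-quarkus-operator-operator
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io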
Important: The rbac.yaml file will be applied at a later step.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator by applying the rbac.yaml file created in a previous step:

$ oc apply -f rbac.yaml

Run the following command to verify that the Operator is running:

$ oc get all -n default

Example output

NAME                                                    READY   UP-TO-DATE   AVAILABLE   AGE
pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm   0/1     Running      0           18s

Run the following command to apply the memcached-sample.yaml and create the memcached-sample pod:

$ oc apply -f memcached-sample.yaml

Example output

memcached.cache.example.com/memcached-sample created

Verification

Run the following command to confirm the pods have started:

$ oc get all

Example output

NAME                                                    READY   STATUS    RESTARTS   AGE
pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt   1/1     Running   1          79s
pod/memcached-sample-6c765df685-mfqnz                       1/1     Running   0          18s
5.6.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager

5.6.2.5.3.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.14+ installed
- Operator project initialized by using the Operator SDK

Procedure

Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>

Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.6.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.14)
- Logged in to the cluster with oc using an account with cluster-admin permissions

Procedure

Enter the following command to run the Operator on the cluster:

$ operator-sdk run bundle \
    -n <namespace> \
    <registry>/<user>/<bundle_image_name>:<tag>

- The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
- Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
- If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.

Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:

- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.6.3. Project layout for Java-based Operators
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.6.3.1. Java-based project layout

Java-based Operator projects generated by the operator-sdk init command contain the following files and directories:

| File or directory | Purpose |
|---|---|
| pom.xml | File that contains the dependencies required to run the Operator. |
|  | Directory that contains the files that represent the API. |
| MemcachedReconciler.java | Java file that defines controller implementations. |
| MemcachedSpec.java | Java file that defines the desired state of the Memcached CR. |
| MemcachedStatus.java | Java file that defines the observed state of the Memcached CR. |
| Memcached.java | Java file that defines the Schema for Memcached APIs. |
| target/kubernetes | Directory that contains the CRD yaml files. |
5.6.4. Updating projects for newer Operator SDK versions
OpenShift Container Platform 4.14 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.6.4.1. Updating Java-based Operator projects for Operator SDK 1.31.0
The following procedure updates an existing Java-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Edit your Operator project's Makefile to add the OPERATOR_SDK_VERSION field and set it to v1.31.0-ocp, as shown in the following example:

Example Makefile

# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0-ocp
5.7. Defining cluster service versions (CSVs)
A cluster service version (CSV), defined by a ClusterServiceVersion object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.
The Operator SDK includes the CSV generator to generate a CSV for the current Operator project, customized using information contained in YAML manifests and Operator source files.
Generating the CSV with a command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.
5.7.1. How CSV generation works
Operator bundle manifests, which include cluster service versions (CSVs), describe how to display, create, and manage an application with Operator Lifecycle Manager (OLM). The CSV generator in the Operator SDK, called by the generate bundle subcommand, is the first step towards publishing your Operator to a catalog and deploying it with OLM. The subcommand requires certain input manifests to construct a CSV manifest; all inputs are read when the command is invoked, along with a CSV base, to idempotently generate or regenerate a CSV.
Typically, the generate kustomize manifests subcommand would be run first to generate the input Kustomize bases that are consumed by the generate bundle subcommand. However, the Operator SDK provides the make bundle command, which automates several tasks, including running the following subcommands in order:
- generate kustomize manifests
- generate bundle
- bundle validate
5.7.1.1. Generated files and resources
The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion (CSV) object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
The following resources are typically included in a CSV:
- Role: Defines Operator permissions within a namespace.
- ClusterRole: Defines cluster-wide Operator permissions.
- Deployment: Defines how an Operand of an Operator is run in pods.
- CustomResourceDefinition (CRD): Defines custom resources that your Operator reconciles.
- Custom resource examples: Examples of resources adhering to the spec of a particular CRD.
5.7.1.2. Version management
The --version flag for the generate bundle subcommand supplies a semantic version for your bundle when creating one for the first time and when upgrading an existing one.
By setting the VERSION variable in your Makefile, the --version flag is automatically invoked using that value when the generate bundle subcommand is run by the make bundle command. The CSV version is the same as the Operator version, and a new CSV is generated when upgrading Operator versions.
5.7.2. Manually-defined CSV fields
Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).
Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK warns during CSV generation when data is missing from any of the required fields.
The following tables detail which manually-defined CSV fields are required and which are optional.
| Field | Description |
|---|---|
| metadata.name | A unique name for this CSV. Operator version should be included in the name to ensure uniqueness. |
| metadata.annotations.capabilities | The capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot. |
| spec.displayName | A public name to identify the Operator. |
| spec.description | A short description of the functionality of the Operator. |
| spec.keywords | Keywords describing the Operator. |
| spec.maintainers | Human or organizational entities maintaining the Operator, with a name and email. |
| spec.provider | The provider of the Operator (usually an organization), with a name. |
| spec.labels | Key-value pairs to be used by Operator internals. |
| spec.version | Semantic version of the Operator. |
| spec.customresourcedefinitions | Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in the project. |
| Field | Description |
|---|---|
| spec.replaces | The name of the CSV being replaced by this CSV. |
| spec.links | URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url. |
| spec.selector | Selectors by which the Operator can pair resources in a cluster. |
| spec.icon | A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype. |
| spec.maturity | The level of maturity the software has achieved at this version. |
Further details on what data each field above should hold are found in the CSV spec.
Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
5.7.3. Operator metadata annotations
Operator developers can set certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub or the Red Hat Ecosystem Catalog. Operator metadata annotations are manually defined by setting the metadata.annotations field in the CSV YAML file.
5.7.3.1. Infrastructure features annotations
Annotations in the features.operators.openshift.io group detail the infrastructure features that an Operator might support, specified by setting a "true" or "false" value. Users can view and filter by these features when discovering Operators through OperatorHub in the web console or on the Red Hat Ecosystem Catalog. These annotations are supported in OpenShift Container Platform 4.10 and later.
The features.operators.openshift.io infrastructure feature annotations deprecate the operators.openshift.io/infrastructure-features annotations used in earlier versions of OpenShift Container Platform. See "Deprecated infrastructure feature annotations" for more information.
| Annotation | Description | Valid values [1] |
|---|---|---|
| features.operators.openshift.io/disconnected | Specify whether an Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. | "true" or "false" |
| features.operators.openshift.io/fips-compliant | Specify whether an Operator accepts the FIPS-140 configuration of the underlying platform and works on nodes that are booted into FIPS mode. In this mode, the Operator and any workloads it manages (operands) are solely calling the Red Hat Enterprise Linux (RHEL) cryptographic library submitted for FIPS-140 validation. | "true" or "false" |
| features.operators.openshift.io/proxy-aware | Specify whether an Operator supports running on a cluster behind a proxy by accepting the standard HTTP_PROXY and HTTPS_PROXY environment variables. | "true" or "false" |
| features.operators.openshift.io/tls-profiles | Specify whether an Operator implements well-known tunables to modify the TLS cipher suite used by the Operator and, if applicable, any of the workloads it manages (operands). | "true" or "false" |
| features.operators.openshift.io/token-auth-aws | Specify whether an Operator supports configuration for tokenized authentication with AWS APIs via AWS Secure Token Service (STS) by using the Cloud Credential Operator (CCO). | "true" or "false" |
| features.operators.openshift.io/token-auth-azure | Specify whether an Operator supports configuration for tokenized authentication with Azure APIs via Azure Managed Identity by using the Cloud Credential Operator (CCO). | "true" or "false" |
| features.operators.openshift.io/token-auth-gcp | Specify whether an Operator supports configuration for tokenized authentication with Google Cloud APIs via GCP Workload Identity Foundation (WIF) by using the Cloud Credential Operator (CCO). | "true" or "false" |
| features.operators.openshift.io/cnf | Specify whether an Operator provides a Cloud-Native Network Function (CNF) Kubernetes plugin. | "true" or "false" |
| features.operators.openshift.io/cni | Specify whether an Operator provides a Container Network Interface (CNI) Kubernetes plugin. | "true" or "false" |
| features.operators.openshift.io/csi | Specify whether an Operator provides a Container Storage Interface (CSI) Kubernetes plugin. | "true" or "false" |

1. Valid values are shown intentionally with double quotes, because Kubernetes annotations must be strings.
Example CSV with infrastructure feature annotations
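A minimal sketch of such a CSV, using the annotation names from the preceding table; the true and false values here are chosen arbitrarily for illustration:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    # Values must be quoted strings, as noted in the table footnote
    features.operators.openshift.io/disconnected: "true"
    features.operators.openshift.io/fips-compliant: "false"
    features.operators.openshift.io/proxy-aware: "false"
    features.operators.openshift.io/tls-profiles: "false"
    features.operators.openshift.io/token-auth-aws: "false"
    features.operators.openshift.io/token-auth-azure: "false"
    features.operators.openshift.io/token-auth-gcp: "false"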
5.7.3.2. Deprecated infrastructure feature annotations
Starting in OpenShift Container Platform 4.14, the operators.openshift.io/infrastructure-features group of annotations are deprecated by the group of annotations with the features.operators.openshift.io namespace. While you are encouraged to use the newer annotations, both groups are currently accepted when used in parallel.
These annotations detail the infrastructure features that an Operator supports. Users can view and filter by these features when discovering Operators through OperatorHub in the web console or on the Red Hat Ecosystem Catalog.
| Valid annotation values | Description |
|---|---|
| disconnected | Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. All related images required for mirroring are listed by the Operator. |
| cnf | Operator provides a Cloud-native Network Functions (CNF) Kubernetes plugin. |
| cni | Operator provides a Container Network Interface (CNI) Kubernetes plugin. |
| csi | Operator provides a Container Storage Interface (CSI) Kubernetes plugin. |
| fips | Operator accepts the FIPS mode of the underlying platform and works on nodes that are booted into FIPS mode. Important: When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. |
| proxy-aware | Operator supports running on a cluster behind a proxy. Operator accepts the standard proxy environment variables HTTP_PROXY and HTTPS_PROXY. |
Example CSV with disconnected and proxy-aware support

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
5.7.3.3. Other optional annotations

The following Operator annotations are optional.

| Annotation | Description |
|---|---|
| alm-examples | Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize. |
| operatorframework.io/initialization-resource | Specify a single required custom resource that must be created when the Operator is installed. |
| operatorframework.io/suggested-namespace | Set a suggested namespace where the Operator should be deployed. |
| operatorframework.io/suggested-namespace-template | Set a manifest for a Namespace object. |
| operators.openshift.io/valid-subscription | Free-form array for listing any specific subscriptions that are required to use the Operator, for example '["3Scale Commercial License", "Red Hat Managed Integration"]'. |
| operators.operatorframework.io/internal-objects | Hides CRDs in the UI that are not meant for user manipulation. |
Example CSV with an OpenShift Container Platform license requirement

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
Example CSV with a 3scale license requirement

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'
5.7.4. Enabling your Operator for restricted network environments

As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.

Operator requirements for supporting disconnected mode

- Replace hard-coded image references with environment variables.
- In the cluster service version (CSV) of your Operator:
  - List any related images, or other container images that your Operator might require to perform their functions.
  - Reference all specified images by a digest (SHA) and not by a tag.
- All dependencies of your Operator must also support running in a disconnected mode.
- Your Operator must not require any off-cluster resources.
Prerequisites
- An Operator project with a CSV. The following procedure uses the Memcached Operator as an example for Go-, Ansible-, and Helm-based projects.
Procedure
Set an environment variable for the additional image references used by the Operator in the config/manager/manager.yaml file:
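A minimal sketch of the change, assuming a RELATED_IMAGE_MEMCACHED variable name and an arbitrary memcached image; substitute whatever <related_image_environment_variable> and image your Operator actually needs:

# config/manager/manager.yaml (fragment)
spec:
  template:
    spec:
      containers:
      - name: manager
        env:
        # Assumed variable name; your Operator code must read the same name
        - name: RELATED_IMAGE_MEMCACHED
          value: docker.io/memcached:1.4.36-alpine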
Replace hard-coded image references with environment variables in the relevant file for your Operator project type:

For Go-based Operator projects, add the environment variable to the controllers/memcached_controller.go file (Example 5.11).

Note: The os.Getenv function returns an empty string if a variable is not set. Set the <related_image_environment_variable> before changing the file.

For Ansible-based Operator projects, add the environment variable to the roles/memcached/tasks/main.yml file.

For Helm-based Operator projects, add the overrideValues field to the watches.yaml file (Example 5.13).

Add the value of the overrideValues field to the helm-charts/memcached/values.yaml file as shown in the following example:

Example helm-charts/memcached/values.yaml file

...
relatedImage: ""

Edit the chart template in the helm-charts/memcached/templates/deployment.yaml file.
Add the BUNDLE_GEN_FLAGS variable definition to your Makefile.

To update your Operator image to use a digest (SHA) and not a tag, run the make bundle command and set USE_IMAGE_DIGESTS to true:

$ make bundle USE_IMAGE_DIGESTS=true

Add the disconnected annotation, which indicates that the Operator works in a disconnected environment:

metadata:
  annotations:
    operators.openshift.io/infrastructure-features: '["disconnected"]'

Operators can be filtered in OperatorHub by this infrastructure feature.
5.7.5. Enabling your Operator for multiple architectures and operating systems
Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.
If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:
labels:
  operatorframework.io/arch.<arch>: supported
  operatorframework.io/os.<os>: supported
Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.
If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:
labels:
  operatorframework.io/os.linux: supported
If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:
labels:
  operatorframework.io/arch.amd64: supported
If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.
Prerequisites
- An Operator project with a CSV.
- To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
- For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.
Procedure
Add a label in the metadata.labels of your CSV for each supported architecture and operating system that your Operator supports:

labels:
  operatorframework.io/arch.s390x: supported
  operatorframework.io/os.zos: supported
  operatorframework.io/os.linux: supported
  operatorframework.io/arch.amd64: supported
5.7.5.1. Architecture and operating system support for Operators

The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:

| Architecture | String |
|---|---|
| AMD64 | amd64 |
| ARM64 | arm64 |
| IBM Power® | ppc64le |
| IBM Z® | s390x |

| Operating system | String |
|---|---|
| Linux | linux |
| z/OS | zos |
Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.
5.7.6. Setting a suggested namespace

Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.

As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.
Procedure
In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

metadata:
  annotations:
    operatorframework.io/suggested-namespace: <namespace>

Replace <namespace> with your suggested namespace.
5.7.7. Setting a suggested namespace with default node selector

Some Operators expect to run only on control plane nodes, which the Operator itself can enforce by setting a nodeSelector in the Pod spec of its workloads.

To avoid duplicating the cluster-wide default nodeSelector, and to prevent potential conflicts with it, you can set a default node selector on the namespace where the Operator runs. The default node selector takes precedence over the cluster default, so the cluster default is not applied to the pods in the Operator's namespace.
When adding the Operator to a cluster using OperatorHub, the web console auto-populates the suggested namespace for the cluster administrator during the installation process. The suggested namespace is created using the namespace manifest in YAML which is included in the cluster service version (CSV).
Procedure
In your CSV, set the operatorframework.io/suggested-namespace-template annotation with a manifest for a Namespace object. The following sample is a manifest for an example Namespace with the namespace default node selector specified. Replace the namespace name with your suggested namespace.
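A minimal sketch of the annotation, assuming the Namespace manifest is embedded as JSON and an empty openshift.io/node-selector annotation serves as the namespace default node selector; replace <namespace> with your suggested namespace:

metadata:
  annotations:
    operatorframework.io/suggested-namespace-template: >-
      {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
          "name": "<namespace>",
          "annotations": {
            "openshift.io/node-selector": ""
          }
        }
      }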
Note: If both suggested-namespace and suggested-namespace-template annotations are present in the CSV, suggested-namespace-template should take precedence.
5.7.8. Enabling Operator conditions
Operator Lifecycle Manager (OLM) provides Operators with a channel to communicate complex states that influence OLM behavior while managing the Operator. By default, OLM creates an OperatorCondition custom resource definition (CRD) when it installs an Operator. Based on the conditions set in the OperatorCondition custom resource (CR), the behavior of OLM changes accordingly.
To support Operator conditions, an Operator must be able to read the OperatorCondition CR created by OLM and have the ability to complete the following tasks:
- Get the specific condition.
- Set the status of a specific condition.
This can be accomplished by using the operator-lib library. An Operator author can provide a controller-runtime client in their Operator for the library to access the OperatorCondition CR owned by the Operator in the cluster.
The library provides a generic Conditions interface, which has the following methods to Get and Set a conditionType in the OperatorCondition CR:
- Get: To get the specific condition, the library uses the client.Get function from controller-runtime, which requires an ObjectKey of type types.NamespacedName present in conditionAccessor.
- Set: To update the status of the specific condition, the library uses the client.Update function from controller-runtime. An error occurs if the conditionType is not present in the CRD.
The Operator is allowed to modify only the status subresource of the CR. Operators can either delete or update the status.conditions array to include the condition. For more details on the format and description of the fields present in the conditions, see the upstream Condition GoDocs.
Operator SDK 1.31.0 supports operator-lib v0.11.0.
Prerequisites
- An Operator project generated using the Operator SDK.
Procedure
To enable Operator conditions in your Operator project:
In the go.mod file of your Operator project, add operator-framework/operator-lib as a required library.
-
Accepts a
controller-runtimeclient. -
Accepts a
conditionType. -
Returns a
Conditioninterface to update or add conditions.
Because OLM currently supports the
Upgradeablecondition, you can create an interface that has methods to access theUpgradeablecondition. For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow In this example, the
NewUpgradeableconstructor is further used to create a variablecondof typeCondition. Thecondvariable would in turn haveGetandSetmethods, which can be used for handling the OLMUpgradeablecondition.-
Accepts a
5.7.9. Defining webhooks
Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.
The cluster service version (CSV) resource of an Operator can include a webhookdefinitions section to define the following types of webhooks:
- Admission webhooks (validating and mutating)
- Conversion webhooks
Procedure
Add a webhookdefinitions section to the spec section of the CSV of your Operator and include any webhook definitions using a type of ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook.

CSV containing webhooks
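A rough sketch of how a single webhookdefinitions entry might look. Only the type and deploymentName fields are described in this section; the remaining field names are assumptions based on the upstream OLM webhook description schema, and the group, resource, and path values are placeholders:

spec:
  webhookdefinitions:
  - type: ValidatingAdmissionWebhook
    # deploymentName must match a deployment defined in this CSV (see constraints below)
    deploymentName: memcached-operator-webhook
    generateName: vmemcached.example.com
    admissionReviewVersions:
    - v1
    sideEffects: None
    containerPort: 443
    failurePolicy: Fail
    webhookPath: /validate-cache-example-com-v1-memcached
    rules:
    - apiGroups:
      - cache.example.com
      apiVersions:
      - v1
      operations:
      - CREATE
      - UPDATE
      resources:
      - memcacheds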
5.7.9.1. Webhook considerations for OLM

When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:

- The type field must be set to either ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook, or the CSV will be placed in a failed phase.
- The CSV must contain a deployment whose name is equivalent to the value supplied in the deploymentName field of the webhookdefinition.
When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.
5.7.9.1.1. Certificate authority constraints

OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:

- The TLS certificate file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.crt.
- The TLS key file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.key.
5.7.9.1.2. Admission webhook rules constraints

To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:

- Requests that target all groups
- Requests that target the operators.coreos.com group
- Requests that target the ValidatingWebhookConfigurations or MutatingWebhookConfigurations resources
5.7.9.1.3. Conversion webhook constraints

OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:

- CSVs featuring a conversion webhook can only support the AllNamespaces install mode.
- The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil.
- The conversion webhook defined in the CSV must target an owned CRD.
- There can only be one conversion webhook on the entire cluster for a given CRD.
5.7.10. Understanding your custom resource definitions (CRDs)

There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.

5.7.10.1. Owned CRDs
The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.
It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.
| Field | Description | Required/optional |
|---|---|---|
| Name | The full name of your CRD. | Required |
| Version | The version of that object API. | Required |
| Kind | The machine readable name of your CRD. | Required |
| DisplayName | A human readable version of your CRD name. | Required |
| Description | A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD. | Required |
| Group | The API group that this CRD belongs to. | Optional |
| Resources | Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| Descriptors | These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs. There are three types of descriptors: spec, status, and action descriptors. All descriptors accept the following fields: DisplayName, Description, Path, and X-Descriptors. Also see the openshift/console project for more information on Descriptors in general. | Optional |
The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a secret and config map, and orchestrates services, stateful sets, pods and config maps:
Example owned CRD
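A rough sketch of what such an owned entry could look like under spec.customresourcedefinitions; all resource names, descriptor paths, and the x-descriptors hints here are hypothetical illustrations of the structure described above:

customresourcedefinitions:
  owned:
  - name: mongodbstandalones.mongodb.com
    version: v1
    kind: MongoDbStandalone
    displayName: MongoDB Standalone
    description: Deploys a single-instance MongoDB, configured through a user-provided secret and config map.
    resources:
    - kind: Service
      version: v1
    - kind: StatefulSet
      version: v1
    - kind: Pod
      version: v1
    - kind: ConfigMap
      version: v1
    specDescriptors:
    - displayName: Credentials
      description: The secret that holds the database credentials.
      path: credentials
      x-descriptors:
      - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
    - displayName: Configuration
      description: The config map that holds the database configuration.
      path: config
      x-descriptors:
      - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'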
5.7.10.2. Required CRDs
Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.
An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.
Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a service account created for each Operator to create, watch, and modify the Kubernetes resources required.
| Field | Description | Required/optional |
|---|---|---|
| Name | The full name of the CRD you require. | Required |
| Version | The version of that object API. | Required |
| Kind | The Kubernetes object kind. | Required |
| DisplayName | A human readable version of the CRD. | Required |
| Description | A summary of how the component fits in your larger architecture. | Required |
Example required CRD
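A rough sketch of a required entry under spec.customresourcedefinitions, using the etcd cluster case described above as an assumed example:

customresourcedefinitions:
  required:
  - name: etcdclusters.etcd.database.coreos.com
    version: v1beta2
    kind: EtcdCluster
    displayName: etcd Cluster
    description: Represents a cluster of etcd nodes used by the application for distributed locking.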
5.7.10.3. CRD upgrades
OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:
- All existing serving versions in the current CRD are present in the new CRD.
- All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
5.7.10.3.1. Adding a new CRD version

Procedure

To add a new version of a CRD to your Operator:

Add a new entry in the CRD resource under the versions section of your CSV. For example, if the current CRD has a version v1alpha1 and you want to add a new version v1beta1 and mark it as the new storage version, add a new entry for v1beta1:
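A minimal sketch of the resulting versions list, with v1beta1 added as the new storage version alongside the existing v1alpha1 entry:

versions:
- name: v1beta1   # new entry
  served: true
  storage: true
- name: v1alpha1
  served: true
  storage: false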
Ensure the referencing version of the CRD in the owned section of your CSV is updated if the CSV intends to use the new version.

Push the updated CRD and CSV to your bundle.
5.7.10.3.2. Deprecating or removing a CRD version
Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must be first disabled by setting the served field in the CRD to false. Then, the non-serving version can be removed on the subsequent CRD upgrade.
Procedure
To deprecate and remove a specific version of a CRD:
Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:

versions:
- name: v1alpha1
  served: false
  storage: true
Switch the storage version to a serving version if the version to be deprecated is currently the storage version, as shown in the sketch after this procedure.

Note: To remove a specific version that is or was the storage version from a CRD, that version must be removed from the storedVersions field in the status of the CRD. OLM attempts to do this for you if it detects that a stored version no longer exists in the new CRD.

- Upgrade the CRD with the above changes.
In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:
versions:
  - name: v1beta1
    served: true
    storage: true
Ensure the referencing CRD version in the owned section of your CSV is updated accordingly if that version is removed from the CRD.
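For illustration, switching the storage version as referenced in the procedure above might look like the following minimal sketch, assuming v1alpha1 is being deprecated and v1beta1 becomes the new storage version:

versions:
  - name: v1alpha1
    served: false
    storage: false      # no longer the storage version
  - name: v1beta1
    served: true
    storage: true       # promoted to storage version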
5.7.10.4. CRD templates Copy linkLink copied to clipboard!
Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples. Compatible UIs will pre-fill this template for users to further customize.
The annotation consists of a list of entries identified by kind, for example, the CRD name, together with the corresponding metadata and spec of the Kubernetes object.
The following full example provides templates for EtcdCluster, EtcdBackup, and EtcdRestore:
metadata:
annotations:
alm-examples: >-
[{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"<operator_namespace>"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]
5.7.10.5. Hiding internal objects Copy linkLink copied to clipboard!
It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.
As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects annotation to the cluster service version (CSV) of your Operator.
Procedure
- Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec block of your CR, if applicable to your Operator.
- Add the operators.operatorframework.io/internal-objects annotation to the CSV of your Operator to specify any internal objects to hide in the user interface. Set the annotation value to an array of strings that lists the internal CRDs, as shown in the sketch after this procedure.
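For illustration, the annotation might look like the following minimal sketch, assuming a hypothetical internal Replication CRD named replications.example.com:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v1.2.3                                           # placeholder CSV name
  annotations:
    # Internal CRDs are listed as a JSON array of strings
    operators.operatorframework.io/internal-objects: '["replications.example.com"]'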
5.7.10.6. Initializing required custom resources Copy linkLink copied to clipboard!
An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.
As an Operator developer, you can specify a single required custom resource by adding operatorframework.io/initialization-resource to the cluster service version (CSV) during Operator installation. You are then prompted to create the custom resource through a template that is provided in the CSV. The annotation must include a template that contains a complete YAML definition that is required to initialize the resource during installation.
If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.
Procedure
- Add the operatorframework.io/initialization-resource annotation to the CSV of your Operator to specify a required custom resource. For example, an annotation that requires the creation of a StorageCluster resource provides its full YAML definition, as shown in the sketch after this procedure.
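For illustration, the annotation might look like the following minimal sketch; the StorageCluster API group shown here and the empty spec are assumptions to adapt to your resource:

metadata:
  annotations:
    operatorframework.io/initialization-resource: |-
      {
        "apiVersion": "ocs.openshift.io/v1",
        "kind": "StorageCluster",
        "metadata": {
          "name": "example-storagecluster"
        },
        "spec": {}
      }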
5.7.11. Understanding your API services Copy linkLink copied to clipboard!
As with CRDs, there are two types of API services that your Operator may use: owned and required.
5.7.11.1. Owned API services Copy linkLink copied to clipboard!
When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server that backs it and the group/version/kind (GVK) it provides.
An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.
| Field | Description | Required/optional |
|---|---|---|
| Group | Group that the API service provides, for example database.example.com. | Required |
| Version | Version of the API service, for example v1alpha1. | Required |
| Kind | A kind that the API service is expected to provide. | Required |
| Name | The plural name for the API service provided. | Required |
| DeploymentName | Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the install strategy of your CSV for a deployment with a matching name. | Required |
| DisplayName | A human readable version of your API service name. | Required |
| Description | A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
| Resources | Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| Descriptors | Essentially the same as for owned CRDs. | Optional |
5.7.11.1.1. API service resource creation Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:
- Service pod selectors are copied from the CSV deployment matching the DeploymentName field of the API service description.
- A new CA key/certificate pair is generated for each installation, and the base64-encoded CA bundle is embedded in the respective API service resource.
5.7.11.1.2. API service serving certificates Copy linkLink copied to clipboard!
OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.
The certificate is stored as a type kubernetes.io/tls secret in the deployment namespace, and a volume named apiservice-cert is automatically appended to the volumes section of the deployment in the CSV matching the DeploymentName field of the API service description.
If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates and any existing volume mounts with the same path are replaced.
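As an illustration of the behavior described above, the appended volume and volume mount in the deployment would resemble the following sketch; the secret and container names are placeholders, and the secret itself is generated and managed by OLM:

volumes:
  - name: apiservice-cert
    secret:
      secretName: <generated_serving_cert_secret>   # placeholder; created by OLM
containers:
  - name: <api_server_container>                    # placeholder container name
    volumeMounts:
      - name: apiservice-cert
        mountPath: /apiserver.local.config/certificates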
5.7.11.2. Required API services Copy linkLink copied to clipboard!
OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.
| Field | Description | Required/optional |
|---|---|---|
| Group | Group that the API service provides, for example database.example.com. | Required |
| Version | Version of the API service, for example v1alpha1. | Required |
| Kind | A kind that the API service is expected to provide. | Required |
| DisplayName | A human readable version of your API service name. | Required |
| Description | A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
5.8. Working with bundle images Copy linkLink copied to clipboard!
You can use the Operator SDK to package, deploy, and upgrade Operators in the bundle format for use on Operator Lifecycle Manager (OLM).
5.8.1. Bundling an Operator Copy linkLink copied to clipboard!
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.14+ installed
- Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.8.2. Deploying an Operator with Operator Lifecycle Manager Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.14)
- Logged in to the cluster with oc using an account with cluster-admin permissions
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    -n <namespace> \
    <registry>/<user>/<bundle_image_name>:<tag>

- The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
- Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
- If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.8.3. Publishing a catalog containing a bundled Operator Copy linkLink copied to clipboard!
To install and manage Operators, Operator Lifecycle Manager (OLM) requires that Operator bundles are listed in an index image, which is referenced by a catalog on the cluster. As an Operator author, you can use the Operator SDK to create an index containing the bundle for your Operator and all of its dependencies. This is useful for testing on remote clusters and publishing to container registries.
The Operator SDK uses the opm CLI to facilitate index image creation. Experience with the opm command is not required. For advanced use cases, the opm command can be used directly instead of the Operator SDK.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.14)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Run the following make command in your Operator project directory to build an index image containing your Operator bundle:

$ make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>

where the CATALOG_IMG argument references a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Push the built index image to a repository:

$ make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>

Tip: You can use Operator SDK make commands together if you would rather perform multiple actions in sequence at once. For example, if you had not yet built a bundle image for your Operator project, you can build and push both a bundle image and an index image with the following syntax:

$ make bundle-build bundle-push catalog-build catalog-push \
    BUNDLE_IMG=<bundle_image_pull_spec> \
    CATALOG_IMG=<index_image_pull_spec>

Alternatively, you can set the IMAGE_TAG_BASE field in your Makefile to an existing repository:

IMAGE_TAG_BASE=quay.io/example/my-operator

You can then use the following syntax to build and push images with automatically generated names, such as quay.io/example/my-operator-bundle:v0.0.1 for the bundle image and quay.io/example/my-operator-catalog:v0.0.1 for the index image:

$ make bundle-build bundle-push catalog-build catalog-push

Define a CatalogSource object that references the index image you just generated, and then create the object by using the oc apply command or web console:

Example CatalogSource YAML
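A minimal sketch of such an object follows; the metadata values reuse the catalog source shown in the verification output below, the namespace is a placeholder, and the two highlighted fields correspond to the notes after the sketch:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-memcached                # matches the catalog source in the verification output
  namespace: <namespace>            # placeholder namespace
spec:
  displayName: My Test
  publisher: Company
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode>   # legacy or restricted; see the first note below
  image: <registry>/<user>/<index_image_name>:<tag>   # see the second note below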
- Specify the value of legacy or restricted for the security context configuration. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
- Set image to the image pull spec you used previously with the CATALOG_IMG argument.
Check the catalog source:
$ oc get catalogsource

Example output

NAME           DISPLAY   TYPE   PUBLISHER   AGE
cs-memcached   My Test   grpc   Company     4h31m
Verification
Install the Operator using your catalog:
- Define an OperatorGroup object and create it by using the oc apply command or web console.
- Define a Subscription object and create it by using the oc apply command or web console.

Sketches of both objects follow this list.
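Minimal sketches of the two objects follow; they reuse the cs-memcached catalog source created earlier and the my-test Operator group name that appears in the verification output below, while the namespace, channel, and subscription name are placeholders:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-test                  # matches the Operator group in the verification output
  namespace: <namespace>
spec:
  targetNamespaces:
    - <namespace>

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: memcached-operator-sub   # placeholder subscription name
  namespace: <namespace>
spec:
  channel: <channel>             # placeholder channel, for example alpha
  name: memcached-operator       # package name consistent with the CSV shown below
  source: cs-memcached           # the catalog source created earlier
  sourceNamespace: <namespace>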
Verify the installed Operator is running:
Check the Operator group:
$ oc get og

Example output

NAME      AGE
my-test   4h40m

Check the cluster service version (CSV):

$ oc get csv

Example output

NAME                        DISPLAY   VERSION   REPLACES   PHASE
memcached-operator.v0.0.1   Test      0.0.1                Succeeded

Check the pods for the Operator:

$ oc get pods

Example output

NAME                                                              READY   STATUS      RESTARTS   AGE
9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6   0/1     Completed   0          4h33m
catalog-controller-manager-7fd5b7b987-69s4n                       2/2     Running     0          4h32m
cs-memcached-7622r                                                1/1     Running     0          4h33m
5.8.4. Testing an Operator upgrade on Operator Lifecycle Manager Copy linkLink copied to clipboard!
You can quickly test upgrading your Operator by using Operator Lifecycle Manager (OLM) integration in the Operator SDK, without requiring you to manually manage index images and catalog sources.
The run bundle-upgrade subcommand automates triggering an installed Operator to upgrade to a later version by specifying a bundle image for the later version.
Prerequisites
- Operator installed with OLM either by using the run bundle subcommand or with traditional OLM installation
- A bundle image that represents a later version of the installed Operator
Procedure
If your Operator has not already been installed with OLM, install the earlier version either by using the run bundle subcommand or with traditional OLM installation.

Note: If the earlier version of the bundle was installed traditionally using OLM, the newer bundle that you intend to upgrade to must not exist in the index image referenced by the catalog source. Otherwise, running the run bundle-upgrade subcommand causes the registry pod to fail because the newer bundle is already referenced by the index that provides the package and cluster service version (CSV).

For example, you can use the following run bundle subcommand for a Memcached Operator by specifying the earlier bundle image:

$ operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1

Upgrade the installed Operator by specifying the bundle image for the later Operator version:
$ operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2

Clean up the installed Operators:

$ operator-sdk cleanup memcached-operator
5.8.5. Controlling Operator compatibility with OpenShift Container Platform versions Copy linkLink copied to clipboard!
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If your Operator is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is upgraded to the Kubernetes version where the API has been removed.
As an Operator author, it is strongly recommended that you review the Deprecated API Migration Guide in Kubernetes documentation and keep your Operator projects up to date to avoid using deprecated and removed APIs. Ideally, you should update your Operator before the release of a future version of OpenShift Container Platform that would make the Operator incompatible.
When an API is removed from an OpenShift Container Platform version, Operators running on that cluster version that are still using removed APIs will no longer work properly. As an Operator author, you should plan to update your Operator projects to accommodate API deprecation and removal to avoid interruptions for users of your Operator.
You can check the event alerts of your Operators to find whether there are any warnings about APIs currently in use. The following alerts fire when they detect an API in use that will be removed in the next release:
APIRemovedInNextReleaseInUse- APIs that will be removed in the next OpenShift Container Platform release.
APIRemovedInNextEUSReleaseInUse- APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.
If a cluster administrator has installed your Operator, before they upgrade to the next version of OpenShift Container Platform, they must ensure a version of your Operator is installed that is compatible with that next cluster version. While it is recommended that you update your Operator projects to no longer use deprecated or removed APIs, if you still need to publish your Operator bundles with removed APIs for continued use on earlier versions of OpenShift Container Platform, ensure that the bundle is configured accordingly.
The following procedure helps prevent administrators from installing versions of your Operator on an incompatible version of OpenShift Container Platform. These steps also prevent administrators from upgrading to a newer version of OpenShift Container Platform that is incompatible with the version of your Operator that is currently installed on their cluster.
This procedure is also useful when you know that the current version of your Operator will not work well, for any reason, on a specific OpenShift Container Platform version. By defining the cluster versions where the Operator should be distributed, you ensure that the Operator does not appear in a catalog of a cluster version which is outside of the allowed range.
Operators that use deprecated APIs can adversely impact critical workloads when cluster administrators upgrade to a future version of OpenShift Container Platform where the API is no longer supported. If your Operator is using deprecated APIs, you should configure the following settings in your Operator project as soon as possible.
Prerequisites
- An existing Operator project
Procedure
If you know that a specific bundle of your Operator is not supported and will not work correctly on OpenShift Container Platform later than a certain cluster version, configure the maximum version of OpenShift Container Platform that your Operator is compatible with. In your Operator project's cluster service version (CSV), set the olm.maxOpenShiftVersion annotation to prevent administrators from upgrading their cluster before upgrading the installed Operator to a compatible version:

Important: Use the olm.maxOpenShiftVersion annotation only if your Operator bundle version cannot work in later versions. Be aware that cluster administrators cannot upgrade their clusters while your solution is installed. If you do not provide a later version and a valid upgrade path, administrators might uninstall your Operator so that they can upgrade the cluster version.

Example CSV with olm.maxOpenShiftVersion annotation

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]'

- Specify the maximum cluster version of OpenShift Container Platform that your Operator is compatible with. For example, setting value to 4.9 prevents cluster upgrades to OpenShift Container Platform versions later than 4.9 when this bundle is installed on a cluster.
If your bundle is intended for distribution in a Red Hat-provided Operator catalog, configure the compatible versions of OpenShift Container Platform for your Operator by setting the following properties. This configuration ensures your Operator is only included in catalogs that target compatible versions of OpenShift Container Platform:
Note: This step is only valid when publishing Operators in Red Hat-provided catalogs. If your bundle is only intended for distribution in a custom catalog, you can skip this step. For more details, see "Red Hat-provided Operator catalogs".

Set the com.redhat.openshift.versions annotation in your project's bundle/metadata/annotations.yaml file:

Example bundle/metadata/annotations.yaml file with compatible versions

com.redhat.openshift.versions: "v4.7-v4.9"

- Set to a range or single version.
To prevent your bundle from being carried on to an incompatible version of OpenShift Container Platform, ensure that the index image is generated with the proper com.redhat.openshift.versions label in your Operator's bundle image. For example, if your project was generated using the Operator SDK, update the bundle.Dockerfile file:

Example bundle.Dockerfile with compatible versions

LABEL com.redhat.openshift.versions="<versions>"

- Set to a range or single version, for example, v4.7-v4.9. This setting defines the cluster versions where the Operator should be distributed, and the Operator does not appear in a catalog of a cluster version which is outside of the range.
You can now bundle a new version of your Operator and publish the updated version to a catalog for distribution.
5.9. Complying with pod security admission Copy linkLink copied to clipboard!
Pod security admission is an implementation of the Kubernetes pod security standards. Pod security admission restricts the behavior of pods. Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run.
If your Operator project does not require escalated permissions to run, you can ensure your workloads run in namespaces set to the restricted pod security level. If your Operator project requires escalated permissions to run, you must set the following security context configurations:
- The allowed pod security admission level for the Operator’s namespace
- The allowed security context constraints (SCC) for the workload’s service account
For more information, see Understanding and managing pod security admission.
5.9.1. About pod security admission Copy linkLink copied to clipboard!
OpenShift Container Platform includes Kubernetes pod security admission. Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run.
Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits.
You can also configure the pod security admission settings at the namespace level.
Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components.
The following default projects are considered highly privileged: default, kube-public, kube-system, openshift, openshift-infra, openshift-node, and other system-created projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
5.9.1.1. Pod security admission modes Copy linkLink copied to clipboard!
You can configure the following pod security admission modes for a namespace:
| Mode | Label | Description |
|---|---|---|
| enforce | pod-security.kubernetes.io/enforce | Rejects a pod from admission if it does not comply with the set profile |
| audit | pod-security.kubernetes.io/audit | Logs audit events if a pod does not comply with the set profile |
| warn | pod-security.kubernetes.io/warn | Displays warnings if a pod does not comply with the set profile |
5.9.1.2. Pod security admission profiles Copy linkLink copied to clipboard!
You can set each of the pod security admission modes to one of the following profiles:
| Profile | Description |
|---|---|
| privileged | Least restrictive policy; allows for known privilege escalation |
| baseline | Minimally restrictive policy; prevents known privilege escalations |
| restricted | Most restrictive policy; follows current pod hardening best practices |
5.9.1.3. Privileged namespaces Copy linkLink copied to clipboard!
The following system namespaces are always set to the privileged pod security admission profile:
- default
- kube-public
- kube-system
You cannot change the pod security profile for these privileged namespaces.
Example privileged namespace configuration
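For illustration, such a namespace is labeled as in the following minimal sketch, using kube-system as the example; the label values mirror the privileged profile described above:

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged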
5.9.2. About pod security admission synchronization Copy linkLink copied to clipboard!
In addition to the global pod security admission control configuration, a controller applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace.
The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created.
Namespace labeling is based on consideration of namespace-local service account privileges.
Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling.
5.9.2.1. Pod security admission synchronization namespace exclusions Copy linkLink copied to clipboard!
Pod security admission synchronization is permanently disabled on most system-created namespaces. Synchronization is also initially disabled on user-created openshift-* prefixed namespaces, but you can enable synchronization on them later.
If a pod security admission label (pod-security.kubernetes.io/<mode>) is manually modified from the automatically labeled value on a label-synchronized namespace, synchronization is disabled for that label.
If necessary, you can enable synchronization again by using one of the following methods:
- By removing the modified pod security admission label from the namespace
- By setting the security.openshift.io/scc.podSecurityLabelSync label to true

If you force synchronization by adding this label, then any modified pod security admission labels are overwritten.
5.9.2.1.1. Permanently disabled namespaces Copy linkLink copied to clipboard!
Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled:
- default
- kube-node-lease
- kube-system
- kube-public
- openshift
- All system-created namespaces that are prefixed with openshift-, except for openshift-operators
5.9.2.1.2. Initially disabled namespaces Copy linkLink copied to clipboard!
By default, all namespaces that have an openshift- prefix have pod security admission synchronization disabled initially. You can enable synchronization for user-created openshift-* namespaces and for the openshift-operators namespace.
You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators.
If an Operator is installed in a user-created openshift-* namespace, synchronization is enabled automatically after a cluster service version (CSV) is created in the namespace. The synchronized label is derived from the permissions of the service accounts in the namespace.
5.9.3. Ensuring Operator workloads run in namespaces set to the restricted pod security level Copy linkLink copied to clipboard!
To ensure your Operator project can run on a wide variety of deployments and environments, configure the Operator’s workloads to run in namespaces set to the restricted pod security level.
You must leave the runAsUser field empty. If your image requires a specific user, it cannot be run under restricted security context constraints (SCC) and restricted pod security enforcement.
Procedure
To configure Operator workloads to run in namespaces set to the restricted pod security level, edit your Operator's namespace definition similar to the following examples:

Important: It is recommended that you set the seccomp profile in your Operator's namespace definition. However, setting the seccomp profile is not supported in OpenShift Container Platform 4.10.

For Operator projects that must run in only OpenShift Container Platform 4.11 and later, edit your Operator's namespace definition similar to the following example:

Example config/manager/manager.yaml file

- By setting the seccomp profile type to RuntimeDefault, the SCC defaults to the pod security profile of the namespace.
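For illustration, the relevant part of the Deployment pod template might look like the following minimal sketch; the manager container name is assumed from the default Operator SDK scaffolding:

spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault        # defers to the pod security profile of the namespace
      containers:
        - name: manager               # assumed container name from SDK scaffolding
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - "ALL"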
For Operator projects that must also run in OpenShift Container Platform 4.10, edit your Operator’s namespace definition similar to the following example:
Example config/manager/manager.yaml file

- Leaving the seccomp profile type unset ensures your Operator project can run in OpenShift Container Platform 4.10.
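For illustration, the same pod template with the seccomp profile left unset might look like the following minimal sketch, again assuming the default manager container name:

spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        # seccompProfile intentionally left unset so the workload can also run on 4.10
      containers:
        - name: manager               # assumed container name from SDK scaffolding
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - "ALL"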
5.9.4. Managing pod security admission for Operator workloads that require escalated permissions Copy linkLink copied to clipboard!
If your Operator project requires escalated permissions to run, you must edit your Operator’s cluster service version (CSV).
Procedure
Set the security context configuration to the required permission level in your Operator’s CSV, similar to the following example:
Example <operator_name>.clusterserviceversion.yaml file with network administrator privileges

Set the service account privileges that allow your Operator's workloads to use the required security context constraints (SCC), similar to the following example:

Example <operator_name>.clusterserviceversion.yaml file

Edit your Operator's CSV description to explain why your Operator project requires escalated permissions, similar to the following example:

Example <operator_name>.clusterserviceversion.yaml file

...
spec:
  apiservicedefinitions: {}
  ...
  description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.
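For illustration, the first two snippets above might take the following minimal shapes; the deployment, container, and service account names are placeholders, and the NET_ADMIN capability and privileged SCC are assumptions rather than documented values:

# Sketch: security context configuration in the CSV deployment
spec:
  install:
    spec:
      deployments:
        - name: <operator_name>-controller-manager   # placeholder deployment name
          spec:
            template:
              spec:
                containers:
                  - name: manager                     # placeholder container name
                    securityContext:
                      capabilities:
                        add:
                          - "NET_ADMIN"               # assumed escalated permission

# Sketch: service account privileges to use the required SCC
spec:
  install:
    spec:
      clusterPermissions:
        - serviceAccountName: <operator_name>-agent   # placeholder service account
          rules:
            - apiGroups:
                - security.openshift.io
              resources:
                - securitycontextconstraints
              resourceNames:
                - privileged                          # assumed SCC name
              verbs:
                - use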
5.10. Token authentication for Operators on cloud providers Copy linkLink copied to clipboard!
Many cloud providers can enable authentication by using account tokens that provide short-term, limited-privilege security credentials.
OpenShift Container Platform includes the Cloud Credential Operator (CCO) to manage cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with any specific permissions required.
Previously, on clusters where the CCO is in manual mode, Operators managed by Operator Lifecycle Manager (OLM) often provided detailed instructions in the OperatorHub for how users could manually provision any required cloud credentials.
Starting in OpenShift Container Platform 4.14, the CCO can detect when it is running on clusters enabled to use short-term credentials on certain cloud providers. It can then semi-automate provisioning certain credentials, provided that the Operator author has enabled their Operator to support the updated CCO.
5.10.1. CCO-based workflow for OLM-managed Operators with AWS STS Copy linkLink copied to clipboard!
When an OpenShift Container Platform cluster running on AWS is in Security Token Service (STS) mode, it means the cluster is utilizing features of AWS and OpenShift Container Platform to use IAM roles at an application level. STS enables applications to provide a JSON Web Token (JWT) that can assume an IAM role.
The JWT includes an Amazon Resource Name (ARN) for the sts:AssumeRoleWithWebIdentity IAM action to allow temporarily-granted permission for the service account. The JWT contains the signing keys for the ProjectedServiceAccountToken that AWS IAM can validate. The service account token itself, which is signed, is used as the JWT required for assuming the AWS role.
The Cloud Credential Operator (CCO) is a cluster Operator installed by default in OpenShift Container Platform clusters running on cloud providers. For the purposes of STS, the CCO provides the following functions:
- Detects when it is running on an STS-enabled cluster
- Checks for the presence of fields in the CredentialsRequest object that provide the required information for granting Operators access to AWS resources
The CCO performs this detection even when in manual mode. When properly configured, the CCO projects a Secret object with the required access information into the Operator namespace.
Starting in OpenShift Container Platform 4.14, the CCO can semi-automate this task through an expanded use of CredentialsRequest objects, which can request the creation of Secrets that contain the information required for STS workflows. Users can provide a role ARN when installing the Operator from either the web console or CLI.
Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
As an Operator author preparing an Operator for use alongside the updated CCO in OpenShift Container Platform 4.14, you should instruct users and add code to handle the divergence from earlier CCO versions, in addition to handling STS token authentication (if your Operator is not already STS-enabled). The recommended method is to provide a CredentialsRequest object with correctly filled STS-related fields and let the CCO create the Secret for you.
If you plan to support OpenShift Container Platform clusters earlier than version 4.14, consider providing users with instructions on how to manually create a secret with the STS-enabling information by using the CCO utility (ccoctl). Earlier CCO versions are unaware of STS mode on the cluster and cannot create secrets for you.
Your code should check for secrets that never appear and warn users to follow the fallback instructions you have provided. For more information, see the "Alternative method" subsection.
5.10.1.1. Enabling Operators to support CCO-based workflows with AWS STS Copy linkLink copied to clipboard!
As an Operator author designing your project to run on Operator Lifecycle Manager (OLM), you can enable your Operator to authenticate against AWS on STS-enabled OpenShift Container Platform clusters by customizing your project to support the Cloud Credential Operator (CCO).
With this method, the Operator is responsible for creating the CredentialsRequest object, which means the Operator requires RBAC permission to create these objects. Then, the Operator must be able to read the resulting Secret object.
By default, pods related to the Operator deployment mount a serviceAccountToken volume so that the service account token can be referenced in the resulting Secret object.
Prerequisites
- OpenShift Container Platform 4.14 or later
- Cluster in STS mode
- OLM-based Operator project
Procedure
Update your Operator project's ClusterServiceVersion (CSV) object:

Ensure your Operator has RBAC permission to create CredentialsRequest objects (a sketch follows the annotation below):

Example 5.15. Example clusterPermissions list

Add the following annotation to claim support for this method of CCO-based workflow with AWS STS:
# ...
metadata:
  annotations:
    features.operators.openshift.io/token-auth-aws: "true"
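For illustration, the clusterPermissions list referenced in Example 5.15 might look like the following minimal sketch; the service account name and the exact set of verbs are assumptions:

install:
  spec:
    clusterPermissions:
      - serviceAccountName: <operator_service_account>   # placeholder
        rules:
          - apiGroups:
              - "cloudcredential.openshift.io"
            resources:
              - credentialsrequests
            verbs:
              - create
              - get
              - list
              - update
              - watch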
Update your Operator project code:
Get the role ARN from the environment variable set on the pod by the Subscription object. For example:

// Get ENV var
roleARN := os.Getenv("ROLEARN")
setupLog.Info("getting role ARN", "role ARN = ", roleARN)
webIdentityTokenPath := "/var/run/secrets/openshift/serviceaccount/token"

Ensure you have a CredentialsRequest object ready to be patched and applied. For example:

Example 5.16. Example CredentialsRequest object creation

Alternatively, if you are starting from a CredentialsRequest object in YAML form (for example, as part of your Operator project code), you can handle it differently:

Example 5.17. Example CredentialsRequest object creation in YAML form

Note: Adding a CredentialsRequest object to the Operator bundle is not currently supported.

Add the role ARN and web identity token path to the credentials request and apply it during Operator initialization:

Example 5.18. Example applying CredentialsRequest object during Operator initialization

Ensure your Operator can wait for a Secret object to show up from the CCO, as shown in the following example, which is called along with the other items you are reconciling in your Operator:

Example 5.19. Example wait for Secret object

- The timeout value is based on an estimate of how fast the CCO might detect an added CredentialsRequest object and generate a Secret object. You might consider lowering the time or creating custom feedback for cluster administrators that could be wondering why the Operator is not yet accessing the cloud resources.

Set up the AWS configuration by reading the secret created by the CCO from the credentials request and creating the AWS config file containing the data from that secret:

Example 5.20. Example AWS configuration creation

Important: The secret is assumed to exist, but your Operator code should wait and retry when using this secret to give time to the CCO to create the secret. Additionally, the wait period should eventually time out and warn users that the OpenShift Container Platform cluster version, and therefore the CCO, might be an earlier version that does not support the CredentialsRequest object workflow with STS detection. In such cases, instruct users that they must add a secret by using another method.

Configure the AWS SDK session, for example:

Example 5.21. Example AWS SDK session configuration
5.10.1.2. Role specification Copy linkLink copied to clipboard!
The Operator description should contain the specifics of the role required to be created before installation, ideally in the form of a script that the administrator can run. For example:
Example 5.22. Example role creation script
5.10.1.3. Troubleshooting Copy linkLink copied to clipboard!
5.10.1.3.1. Authentication failure Copy linkLink copied to clipboard!
If authentication was not successful, ensure you can assume the role with web identity by using the token provided to the Operator.
Procedure
Extract the token from the pod:
$ oc exec operator-pod -n <namespace_name> \
    -- cat /var/run/secrets/openshift/serviceaccount/token

Extract the role ARN from the pod:
$ oc exec operator-pod -n <namespace_name> \
    -- cat /<path>/<to>/<secret_name>

- Do not use root for the path.
Try assuming the role with the web identity token:
$ aws sts assume-role-with-web-identity \
    --role-arn $ROLEARN \
    --role-session-name <session_name> \
    --web-identity-token $TOKEN
5.10.1.3.2. Secret not mounting correctly Copy linkLink copied to clipboard!
Pods that run as non-root users cannot write to the /root directory where the AWS shared credentials file is expected to exist by default. If the secret is not mounting correctly to the AWS credentials file path, consider mounting the secret to a different location and enabling the shared credentials file option in the AWS SDK.
5.10.1.4. Alternative method Copy linkLink copied to clipboard!
As an alternative method for Operator authors, you can indicate that the user is responsible for creating the CredentialsRequest object for the Cloud Credential Operator (CCO) before installing the Operator.
The Operator instructions must indicate the following to users:
- Provide a YAML version of a CredentialsRequest object, either by providing the YAML inline in the instructions or pointing users to a download location (a sketch of such an object follows below)
- Instruct the user to create the CredentialsRequest object
In OpenShift Container Platform 4.14 and later, after the CredentialsRequest object appears on the cluster with the appropriate STS information added, the Operator can then read the CCO-generated Secret or mount it, having defined the mount in the cluster service version (CSV).
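For illustration, a YAML-form CredentialsRequest object of the kind described above might look like the following minimal sketch; the names and permissions are placeholders, and the STS-specific fields (stsIAMRoleARN, cloudTokenPath) reflect the 4.14 workflow as understood here and should be verified against the CredentialsRequest API on your cluster:

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <operator_name>-credentials-request     # placeholder name
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
      - effect: Allow
        action:
          - "s3:ListBucket"                      # placeholder permissions
        resource: "*"
    stsIAMRoleARN: "<role_arn>"                  # role ARN provided by the administrator
  secretRef:
    name: <secret_name>                          # the CCO writes the credentials Secret here
    namespace: <operator_namespace>
  serviceAccountNames:
    - <operator_service_account>
  cloudTokenPath: /var/run/secrets/openshift/serviceaccount/token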
For earlier versions of OpenShift Container Platform, the Operator instructions must also indicate the following to users:
- Use the CCO utility (ccoctl) to generate the Secret YAML object from the CredentialsRequest object
- Apply the Secret object to the cluster in the appropriate namespace
The Operator still must be able to consume the resulting secret to communicate with cloud APIs. Because in this case the secret is created by the user before the Operator is installed, the Operator can do either of the following:
- Define an explicit mount in the Deployment object within the CSV
- Programmatically read the Secret object from the API server, as shown in the recommended "Enabling Operators to support CCO-based workflows with AWS STS" method
5.11. Validating Operators using the scorecard tool Copy linkLink copied to clipboard!
As an Operator author, you can use the scorecard tool in the Operator SDK to do the following tasks:
- Validate that your Operator project is free of syntax errors and packaged correctly
- Review suggestions about ways you can improve your Operator
5.11.1. About the scorecard tool Copy linkLink copied to clipboard!
While the Operator SDK bundle validate subcommand can validate local bundle directories and remote bundle images for content and structure, you can use the scorecard command to run tests on your Operator based on a configuration file and test images. These tests are implemented within test images that are configured and constructed to be executed by the scorecard.
The scorecard assumes it is run with access to a configured Kubernetes cluster, such as OpenShift Container Platform. The scorecard runs each test within a pod, from which pod logs are aggregated and test results are sent to the console. The scorecard has built-in basic and Operator Lifecycle Manager (OLM) tests and also provides a means to execute custom test definitions.
Scorecard workflow
- Create all resources required by any related custom resources (CRs) and the Operator
- Create a proxy container in the deployment of the Operator to record calls to the API server and run tests
- Examine parameters in the CRs
The scorecard tests make no assumptions as to the state of the Operator being tested. Creating Operators and CRs for an Operator is beyond the scope of the scorecard itself. Scorecard tests can, however, create whatever resources they require if the tests are designed for resource creation.
scorecard command syntax
$ operator-sdk scorecard <bundle_dir_or_image> [flags]
The scorecard requires a positional argument for either the on-disk path to your Operator bundle or the name of a bundle image.
For further information about the flags, run:
$ operator-sdk scorecard -h
5.11.2. Scorecard configuration Copy linkLink copied to clipboard!
The scorecard tool uses a configuration that allows you to configure internal plugins, as well as several global configuration options. Tests are driven by a configuration file named config.yaml, which is generated by the make bundle command, located in your bundle/ directory:
./bundle
...
└── tests
    └── scorecard
        └── config.yaml
Example scorecard configuration file
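For illustration, a bundle/tests/scorecard/config.yaml file might look like the following minimal sketch; the test image tag is an assumption to replace with the version that matches your Operator SDK release:

apiVersion: scorecard.operatorframework.io/v1alpha3
kind: Configuration
metadata:
  name: config
stages:
  - parallel: true
    tests:
      - image: quay.io/operator-framework/scorecard-test:<tag>   # assumed tag
        entrypoint:
          - scorecard-test
          - basic-check-spec
        labels:
          suite: basic
          test: basic-check-spec-test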
The configuration file defines each test that scorecard can execute. The following fields of the scorecard configuration file define the test as follows:
| Configuration field | Description |
|---|---|
| image | Test container image name that implements a test |
| entrypoint | Command and arguments that are invoked in the test image to execute a test |
| labels | Scorecard-defined or custom labels that select which tests to run |
5.11.3. Built-in scorecard tests Copy linkLink copied to clipboard!
The scorecard ships with pre-defined tests that are arranged into suites: the basic test suite and the Operator Lifecycle Manager (OLM) suite.
Basic test suite:

| Test | Description | Short name |
|---|---|---|
| Spec Block Exists | This test checks the custom resource (CR) created in the cluster to make sure that all CRs have a spec block. | basic-check-spec-test |

OLM test suite:

| Test | Description | Short name |
|---|---|---|
| Bundle Validation | This test validates the bundle manifests found in the bundle that is passed into scorecard. If the bundle contents contain errors, then the test result output includes the validator log as well as error messages from the validation library. | olm-bundle-validation-test |
| Provided APIs Have Validation | This test verifies that the custom resource definitions (CRDs) for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR. | olm-crds-have-validation-test |
| Owned CRDs Have Resources Listed | This test makes sure that the CRDs for each CR provided have a resources subsection in the owned CRDs section of the CSV. | olm-crds-have-resources-test |
| Spec Fields With Descriptors | This test verifies that every field in the CRs spec section has a corresponding descriptor listed in the CSV. | olm-spec-descriptors-test |
| Status Fields With Descriptors | This test verifies that every field in the CRs status section has a corresponding descriptor listed in the CSV. | olm-status-descriptors-test |
5.11.4. Running the scorecard tool Copy linkLink copied to clipboard!
A default set of Kustomize files is generated by the Operator SDK after running the init command. The default bundle/tests/scorecard/config.yaml file that is generated can be used immediately to run the scorecard tool against your Operator, or you can modify this file to your test specifications.
Prerequisites
- Operator project generated by using the Operator SDK
Procedure
Generate or regenerate your bundle manifests and metadata for your Operator:
$ make bundle

This command automatically adds scorecard annotations to your bundle metadata, which are used by the scorecard command to run tests.

Run the scorecard against the on-disk path to your Operator bundle or the name of a bundle image:

$ operator-sdk scorecard <bundle_dir_or_image>
5.11.5. Scorecard output Copy linkLink copied to clipboard!
The --output flag for the scorecard command specifies the scorecard results output format: either text or json.
Example 5.23. Example JSON output snippet
Example 5.24. Example text output snippet
The output format spec matches the Test type layout.
5.11.6. Selecting tests Copy linkLink copied to clipboard!
Scorecard tests are selected by setting the --selector CLI flag to a set of label strings. If a selector flag is not supplied, then all of the tests within the scorecard configuration file are run.
Tests are run serially with test results being aggregated by the scorecard and written to standard output, or stdout.
Procedure
To select a single test, for example basic-check-spec-test, specify the test by using the --selector flag:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector=test=basic-check-spec-test

To select a suite of tests, for example olm, specify a label that is used by all of the OLM tests:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector=suite=olm

To select multiple tests, specify the test names by using the --selector flag with the following syntax:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'
5.11.7. Enabling parallel testing Copy linkLink copied to clipboard!
As an Operator author, you can define separate stages for your tests using the scorecard configuration file. Stages run sequentially in the order they are defined in the configuration file. A stage contains a list of tests and a configurable parallel setting.
By default, or when a stage explicitly sets parallel to false, tests in a stage are run sequentially in the order they are defined in the configuration file. Running tests one at a time is helpful to guarantee that no two tests interact and conflict with each other.
However, if tests are designed to be fully isolated, they can be parallelized.
Procedure
To run a set of isolated tests in parallel, include them in the same stage and set parallel to true, as shown in the sketch that follows.
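A minimal sketch of a stage with parallel testing enabled; the test name and image are placeholders carried over from the generated configuration:

stages:
  - parallel: true   # Enables parallel testing
    tests:
      - image: quay.io/operator-framework/scorecard-test:v1.31.0
        entrypoint:
          - scorecard-test
          - basic-check-spec
        labels:
          suite: basic
          test: basic-check-spec-test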
All tests in a parallel stage are executed simultaneously, and scorecard waits for all of them to finish before proceeding to the next stage. This can make your tests run much faster.
5.11.8. Custom scorecard tests Copy linkLink copied to clipboard!
The scorecard tool can run custom tests that follow these mandated conventions:
- Tests are implemented within a container image
- Tests accept an entrypoint, which includes a command and arguments
- Tests produce v1alpha3 scorecard output in JSON format with no extraneous logging in the test output
- Tests can obtain the bundle contents at a shared mount point of /bundle
- Tests can access the Kubernetes API using an in-cluster client connection
Writing custom tests in other programming languages is possible if the test image follows the above guidelines.
The following example shows a custom test image written in Go:
Example 5.25. Example custom scorecard test
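The original example is not reproduced here. The following is a hedged sketch of a custom test binary that reads the bundle from the /bundle mount point and prints v1alpha3-style JSON. The struct definitions mirror the expected output shape instead of importing the scorecard API types, so verify the field names against the v1alpha3 specification before relying on them:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Minimal stand-ins for the v1alpha3 scorecard output types; verify field
// names against the scorecard v1alpha3 API.
type TestResult struct {
	Name        string   `json:"name"`
	State       string   `json:"state"`
	Errors      []string `json:"errors,omitempty"`
	Suggestions []string `json:"suggestions,omitempty"`
}

type TestStatus struct {
	Results []TestResult `json:"results"`
}

func main() {
	result := TestResult{Name: "customtest", State: "pass"}

	// The bundle contents are mounted at /bundle; fail the test if the
	// metadata cannot be read.
	if _, err := os.Stat("/bundle/metadata/annotations.yaml"); err != nil {
		result.State = "fail"
		result.Errors = append(result.Errors, fmt.Sprintf("could not read bundle metadata: %v", err))
	}

	// Emit only the JSON result, with no extraneous logging.
	out, _ := json.MarshalIndent(TestStatus{Results: []TestResult{result}}, "", "  ")
	fmt.Println(string(out))
}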
5.12. Validating Operator bundles Copy linkLink copied to clipboard!
As an Operator author, you can run the bundle validate command in the Operator SDK to validate the content and format of an Operator bundle. You can run the command on a remote Operator bundle image or a local Operator bundle directory.
5.12.1. About the bundle validate command Copy linkLink copied to clipboard!
While the Operator SDK scorecard command can run tests on your Operator based on a configuration file and test images, the bundle validate subcommand can validate local bundle directories and remote bundle images for content and structure.
bundle validate command syntax
$ operator-sdk bundle validate <bundle_dir_or_image> <flags>
The bundle validate command runs automatically when you build your bundle using the make bundle command.
Bundle images are pulled from a remote registry and built locally before they are validated. Local bundle directories must contain Operator metadata and manifests. The bundle metadata and manifests must have a structure similar to the following bundle layout:
Example bundle layout
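The original layout is not reproduced here. A typical layout, using hypothetical memcached-operator file names, looks similar to the following:

./bundle
├── manifests
│   ├── cache.example.com_memcacheds.yaml
│   └── memcached-operator.clusterserviceversion.yaml
└── metadata
    └── annotations.yaml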
Bundle tests pass validation and finish with an exit code of 0 if no errors are detected.
Example output
INFO[0000] All validation tests have completed successfully
Tests fail validation and finish with an exit code of 1 if errors are detected.
Example output
ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD "cache.example.com/v1alpha1, Kind=Memcached" is present in bundle "" but not defined in CSV
Bundle tests that result in warnings can still pass validation with an exit code of 0 as long as no errors are detected. Tests only fail on errors.
Example output
WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found
INFO[0000] All validation tests have completed successfully
For further information about the bundle validate subcommand, run:
$ operator-sdk bundle validate -h
5.12.2. Built-in bundle validate tests Copy linkLink copied to clipboard!
The Operator SDK ships with pre-defined validators arranged into suites. If you run the bundle validate command without specifying a validator, the default test runs. The default test verifies that a bundle adheres to the specifications defined by the Operator Framework community. For more information, see "Bundle format".
You can run optional validators to test for issues such as OperatorHub compatibility or deprecated Kubernetes APIs. Optional validators always run in addition to the default test.
bundle validate command syntax for optional test suites
$ operator-sdk bundle validate <bundle_dir_or_image> \
    --select-optional <test_label>
| Name | Description | Label |
|---|---|---|
| Operator Framework | This validator tests an Operator bundle against the entire suite of validators provided by the Operator Framework. | suite=operatorframework |
| OperatorHub | This validator tests an Operator bundle for compatibility with OperatorHub. | name=operatorhub |
| Good Practices | This validator tests whether an Operator bundle complies with good practices as defined by the Operator Framework. It checks for issues, such as an empty CRD description or unsupported Operator Lifecycle Manager (OLM) resources. | name=good-practices |
5.12.3. Running the bundle validate command Copy linkLink copied to clipboard!
The default validator runs a test every time you enter the bundle validate command. You can run optional validators using the --select-optional flag. Optional validators run tests in addition to the default test.
Prerequisites
- Operator project generated by using the Operator SDK
Procedure
If you want to run the default validator against a local bundle directory, enter the following command from your Operator project directory:
$ operator-sdk bundle validate ./bundle

If you want to run the default validator against a remote Operator bundle image, enter the following command:

$ operator-sdk bundle validate \
    <bundle_registry>/<bundle_image_name>:<tag>

where:
- <bundle_registry>: Specifies the registry where the bundle is hosted, such as quay.io/example.
- <bundle_image_name>: Specifies the name of the bundle image, such as memcached-operator.
- <tag>: Specifies the tag of the bundle image, such as v1.31.0.

Note: If you want to validate an Operator bundle image, you must host your image in a remote registry. The Operator SDK pulls the image and builds it locally before running tests. The bundle validate command does not support testing local bundle images.
If you want to run an additional validator against an Operator bundle, enter the following command:
$ operator-sdk bundle validate \
    <bundle_dir_or_image> \
    --select-optional <test_label>

where:

- <bundle_dir_or_image>: Specifies the local bundle directory or remote bundle image, such as ~/projects/memcached or quay.io/example/memcached-operator:v1.31.0.
- <test_label>: Specifies the name of the validator you want to run, such as name=good-practices.

Example output

ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object
WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD "k8sevents.k8s.k8sevent.com" has an empty description
5.12.4. Validating your Operator’s multi-platform readiness Copy linkLink copied to clipboard!
You can validate your Operator’s multi-platform readiness by running the bundle validate command. The command verifies that your Operator project meets the following conditions:
- Your Operator’s manager image supports the platforms labeled in the cluster service version (CSV) file.
- Your Operator’s CSV has labels for the supported platforms for Operator Lifecycle Manager (OLM) and OperatorHub.
Procedure
Run the following command to validate your Operator project for multiple architecture readiness:
$ operator-sdk bundle validate ./bundle \
    --select-optional name=multiarch

Example validation message

INFO[0020] All validation tests have completed successfully

Example error message for missing CSV labels in the manager image

ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]
ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]
ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]
ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]

Example error message for missing OperatorHub flags

WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): ["amd64" "arm64" "ppc64le" "s390x"]. Be aware that your Operator manager image ["quay.io/example-org/test-operator:v1alpha1"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures
5.13. High-availability or single-node cluster detection and support Copy linkLink copied to clipboard!
An OpenShift Container Platform cluster can be configured in high-availability (HA) mode, which uses multiple nodes, or in non-HA mode, which uses a single node. A single-node cluster, also known as single-node OpenShift, is likely to have more conservative resource constraints. Therefore, it is important that Operators installed on a single-node cluster can adjust accordingly and still run well.
By accessing the cluster high-availability mode API provided in OpenShift Container Platform, Operator authors can use the Operator SDK to enable their Operator to detect a cluster’s infrastructure topology, either HA or non-HA mode. Custom Operator logic can be developed that uses the detected cluster topology to automatically switch the resource requirements, both for the Operator and for any Operands or workloads it manages, to a profile that best fits the topology.
5.13.1. About the cluster high-availability mode API Copy linkLink copied to clipboard!
OpenShift Container Platform provides a cluster high-availability mode API that can be used by Operators to help detect infrastructure topology. The Infrastructure API holds cluster-wide information regarding infrastructure. Operators managed by Operator Lifecycle Manager (OLM) can use the Infrastructure API if they need to configure an Operand or managed workload differently based on the high-availability mode.
In the Infrastructure API, the infrastructureTopology status expresses the expectations for infrastructure services that do not run on control plane nodes, usually indicated by a node selector for a role value other than master. The controlPlaneTopology status expresses the expectations for Operands that normally run on control plane nodes.
The default setting for either status is HighlyAvailable, which represents the behavior Operators have in multiple node clusters. The SingleReplica setting is used in single-node clusters, also known as single-node OpenShift, and indicates that Operators should not configure their Operands for high-availability operation.
The OpenShift Container Platform installer sets the controlPlaneTopology and infrastructureTopology status fields based on the replica counts for the cluster when it is created, according to the following rules:
- When the control plane replica count is less than 3, the controlPlaneTopology status is set to SingleReplica. Otherwise, it is set to HighlyAvailable.
- When the worker replica count is 0, the control plane nodes are also configured as workers. Therefore, the infrastructureTopology status is the same as the controlPlaneTopology status.
- When the worker replica count is 1, the infrastructureTopology status is set to SingleReplica. Otherwise, it is set to HighlyAvailable.
5.13.2. Example API usage in Operator projects Copy linkLink copied to clipboard!
As an Operator author, you can update your Operator project to access the Infrastructure API by using normal Kubernetes constructs and the controller-runtime library, as shown in the following examples:
controller-runtime library example
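The original example is not included here. The following minimal sketch shows one way to read the topology fields with a controller-runtime client; the MemcachedReconciler receiver and the assumption that the openshift/api config/v1 scheme is registered with the manager are not from the original:

import (
	"context"

	configv1 "github.com/openshift/api/config/v1"
	"k8s.io/apimachinery/pkg/types"
)

// getTopology reads the cluster-scoped Infrastructure resource named
// "cluster" and returns the control plane and infrastructure topology values.
func (r *MemcachedReconciler) getTopology(ctx context.Context) (configv1.TopologyMode, configv1.TopologyMode, error) {
	infra := &configv1.Infrastructure{}
	if err := r.Client.Get(ctx, types.NamespacedName{Name: "cluster"}, infra); err != nil {
		return "", "", err
	}
	return infra.Status.ControlPlaneTopology, infra.Status.InfrastructureTopology, nil
}

Comparing the returned values against configv1.SingleReplicaTopologyMode lets the Operator switch its Operand configuration to a single-node profile.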
Kubernetes constructs example
5.14. Configuring built-in monitoring with Prometheus Copy linkLink copied to clipboard!
This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for authors of Go-based and Ansible-based Operators.
5.14.1. Prometheus Operator support Copy linkLink copied to clipboard!
Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.
Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.
5.14.2. Exposing custom metrics for Go-based Operators Copy linkLink copied to clipboard!
As an Operator author, you can publish custom metrics by using the global Prometheus registry from the controller-runtime/pkg/metrics library.
Prerequisites
- Go-based Operator generated using the Operator SDK
- Prometheus Operator, which is deployed by default on OpenShift Container Platform clusters
Procedure
In your Operator SDK project, uncomment the following line in the config/default/kustomization.yaml file:

../prometheus

Create a custom controller class to publish additional metrics from the Operator. The following example declares the widgets and widgetFailures collectors as global variables, and then registers them with the init() function in the controller's package:

Example 5.26. controllers/memcached_controller_test_metrics.go file
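The original file is not reproduced here. A minimal sketch of such a file, registering the collectors with the controller-runtime global Prometheus registry, follows:

package controllers

import (
	"github.com/prometheus/client_golang/prometheus"
	"sigs.k8s.io/controller-runtime/pkg/metrics"
)

var (
	widgets = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "widgets_total",
			Help: "Number of widgets processed",
		},
	)
	widgetFailures = prometheus.NewCounter(
		prometheus.CounterOpts{
			Name: "widget_failures_total",
			Help: "Number of failed widgets",
		},
	)
)

func init() {
	// Register the custom metrics with the global registry that
	// controller-runtime exposes on the metrics endpoint.
	metrics.Registry.MustRegister(widgets, widgetFailures)
}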
Record to these collectors from any part of the reconcile loop in the main controller class, which determines the business logic for the metric:

Example 5.27. controllers/memcached_controller.go file
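The original reconcile loop is not reproduced here. A hedged sketch of the metric calls inside an existing Reconcile method follows; only the Inc() calls are the point of the example, the rest is placeholder structure:

package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Count every reconciliation of a widget.
	widgets.Inc()

	// ... existing business logic ...

	// On failure paths, also record the failure before returning:
	// widgetFailures.Inc()

	return ctrl.Result{}, nil
}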
Build and push the Operator:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

Create role and role binding definitions to allow the service monitor of the Operator to be scraped by the Prometheus instance of the OpenShift Container Platform cluster.
Roles must be assigned so that service accounts have the permissions to scrape the metrics of the namespace:
Example 5.28. config/prometheus/role.yaml role

Example 5.29. config/prometheus/rolebinding.yaml role binding
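The original role and role binding files are not reproduced here. The following hedged sketch shows what Examples 5.28 and 5.29 typically contain: read access for the prometheus-k8s service account in openshift-monitoring to services, endpoints, and pods in the Operator namespace. The object names and the <operator_namespace> value are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s-role
  namespace: <operator_namespace>
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s-rolebinding
  namespace: <operator_namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s-role
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: openshift-monitoring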
Apply the roles and role bindings for the deployed Operator:

$ oc apply -f config/prometheus/role.yaml

$ oc apply -f config/prometheus/rolebinding.yaml

Set the labels for the namespace that you want to scrape, which enables OpenShift cluster monitoring for that namespace:
$ oc label namespace <operator_namespace> openshift.io/cluster-monitoring="true"
Verification
- Query and view the metrics in the OpenShift Container Platform web console. You can use the names that were set in the custom controller class, for example widgets_total and widget_failures_total.
5.14.3. Exposing custom metrics for Ansible-based Operators Copy linkLink copied to clipboard!
As an Operator author creating Ansible-based Operators, you can use the Operator SDK’s osdk_metrics module to expose custom Operator and Operand metrics, emit events, and support logging.
Prerequisites
- Ansible-based Operator generated using the Operator SDK
- Prometheus Operator, which is deployed by default on OpenShift Container Platform clusters
Procedure
Generate an Ansible-based Operator. This example uses a testmetrics.com domain:

$ operator-sdk init \
    --plugins=ansible \
    --domain=testmetrics.com

Create a metrics API. This example uses a kind named Testmetrics:

$ operator-sdk create api \
    --group metrics \
    --version v1 \
    --kind Testmetrics \
    --generate-role

Edit the roles/testmetrics/tasks/main.yml file and use the osdk_metrics module to create custom metrics for your Operator project:

Example 5.30. Example roles/testmetrics/tasks/main.yml file
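The original tasks file is not reproduced here. The following hedged sketch shows the shape such tasks can take; the metric names and descriptions match the verification output later in this procedure, but the module invocation and its parameters are assumptions and should be confirmed against the osdk_metrics module documentation for your Operator SDK version:

---
# Hypothetical osdk_metrics invocations; confirm module name and parameters
# for your Operator SDK version.
- name: Counter example
  osdk_metrics:
    name: my_counter_metric
    description: Add 3.14 to the counter
    counter: {}

- name: Gauge example
  osdk_metrics:
    name: my_gauge_metric
    description: Create my gauge and set it to 2.
    gauge:
      set: 2

- name: Histogram example
  osdk_metrics:
    name: my_histogram_metric
    description: Observe my histogram
    histogram:
      observe: 2

- name: Summary example
  osdk_metrics:
    name: my_summary_metric
    description: Observe my summary
    summary:
      observe: 2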
Verification
Run your Operator on a cluster. For example, to use the "run as a deployment" method:
Build the Operator image and push it to a registry:
$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Install the Operator on a cluster:

$ make install

Deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a Testmetrics custom resource (CR):

Define the CR spec:

Example 5.31. Example config/samples/metrics_v1_testmetrics.yaml file
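The original sample file is not shown here. Based on the group, version, and kind created earlier in this procedure, a minimal sketch looks like the following; the spec field is a placeholder for whatever variables your Ansible role consumes:

apiVersion: metrics.testmetrics.com/v1
kind: Testmetrics
metadata:
  name: testmetrics-sample
spec:
  size: 1   # placeholder field; replace with the variables your role uses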
Create the object:

$ oc create -f config/samples/metrics_v1_testmetrics.yaml
Get the pod details:
$ oc get pods

Example output

NAME                                       READY   STATUS    RESTARTS   AGE
ansiblemetrics-controller-manager-<id>     2/2     Running   0          149m
testmetrics-sample-memcached-<id>          1/1     Running   0          147m

Get the endpoint details:

$ oc get ep

Example output

NAME                                                 ENDPOINTS          AGE
ansiblemetrics-controller-manager-metrics-service    10.129.2.70:8443   150m

Request a custom metrics token:

$ token=`oc create token prometheus-k8s -n openshift-monitoring`

Check the metrics values:
Check the my_counter_metric value:

$ oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H "Authorization: Bearer $token" 'https://10.129.2.70:8443/metrics' | grep my_counter

HELP my_counter_metric Add 3.14 to the counter
TYPE my_counter_metric counter
my_counter_metric 2

Check the my_gauge_metric value:

$ oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H "Authorization: Bearer $token" 'https://10.129.2.70:8443/metrics' | grep gauge

HELP my_gauge_metric Create my gauge and set it to 2.

Check the my_histogram_metric and my_summary_metric values:

$ oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H "Authorization: Bearer $token" 'https://10.129.2.70:8443/metrics' | grep Observe

HELP my_histogram_metric Observe my histogram
HELP my_summary_metric Observe my summary
5.15. Configuring leader election Copy linkLink copied to clipboard!
During the lifecycle of an Operator, it is possible that there may be more than one instance running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, it is necessary to avoid contention between multiple Operator instances using leader election. This ensures only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.
There are two different leader election implementations to choose from, each with its own trade-off:
- Leader-for-life
The leader pod only gives up leadership, using garbage collection, when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders, a state also known as split brain. However, this method can be subject to a delay in electing a new leader. For example, when the leader pod is on an unresponsive or partitioned node, you can specify node.kubernetes.io/unreachable and node.kubernetes.io/not-ready tolerations on the leader pod and use the tolerationSeconds value to dictate how long it takes for the leader pod to be deleted from the node and step down. These tolerations are added to the pod by default on admission with a tolerationSeconds value of 5 minutes. See the Leader-for-life Go documentation for more information.
- The leader pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.
By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.
5.15.1. Operator leader election examples Copy linkLink copied to clipboard!
The following examples illustrate how to use the two leader election options for an Operator, Leader-for-life and Leader-with-lease.
5.15.1.1. Leader-for-life election Copy linkLink copied to clipboard!
With the Leader-for-life election implementation, a call to leader.Become() blocks the Operator as it retries until it can become the leader by creating the config map named memcached-operator-lock:
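A minimal sketch of that call in main.go, assuming the operator-lib leader package; error handling is simplified:

package main

import (
	"context"
	"os"

	"github.com/operator-framework/operator-lib/leader"
)

func main() {
	// Block until this instance becomes the leader. The lock is a config map
	// named memcached-operator-lock in the Operator's namespace.
	if err := leader.Become(context.TODO(), "memcached-operator-lock"); err != nil {
		os.Exit(1)
	}
	// ... continue with manager setup ...
}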
If the Operator is not running inside a cluster, leader.Become() simply returns without error to skip the leader election since it cannot detect the namespace of the Operator.
5.15.1.2. Leader-with-lease election Copy linkLink copied to clipboard!
The Leader-with-lease implementation can be enabled using the Manager Options for leader election:
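A minimal sketch using the controller-runtime manager options; the leader election ID shown here is an assumption:

package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Enable leader-with-lease election through the Manager options.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:   true,
		LeaderElectionID: "memcached-operator-lock",
	})
	if err != nil {
		os.Exit(1)
	}
	// ... register controllers and start mgr ...
	_ = mgr
}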
When the Operator is not running in a cluster, the Manager returns an error when starting because it cannot detect the namespace of the Operator to create the config map for leader election. You can override this namespace by setting the LeaderElectionNamespace option for the Manager.
5.16. Configuring Operator projects for multi-platform support Copy linkLink copied to clipboard!
Operator projects that support multiple architectures and operating systems, or platforms, can run on more Kubernetes and OpenShift Container Platform clusters than Operator projects that support only a single platform. Example architectures include amd64, arm64, ppc64le, and s390x. Example operating systems include Linux and Windows.
Perform the following actions to ensure your Operator project can run on multiple OpenShift Container Platform platforms:
- Build a manifest list that specifies the platforms that your Operator supports.
- Set your Operator’s node affinity to support multi-architecture compute machines.
5.16.1. Building a manifest list of the platforms your Operator supports Copy linkLink copied to clipboard!
You can use the make docker-buildx command to build a manifest list of the platforms supported by your Operator and operands. A manifest list references specific image manifests for one or more architectures. An image manifest specifies the platforms that an image supports.
For more information, see OpenContainers Image Index Spec or Image Manifest v2, Schema 2.
If your Operator project deploys an application or other workload resources, the following procedure assumes the application’s multi-platform images are built during the application release process.
Prerequisites
- An Operator project built using the Operator SDK version 1.31.0 or later
- Docker installed
Procedure
Inspect the image manifests of your Operator and operands to find which platforms your Operator project can support. Run the following command to inspect an image manifest:
$ docker manifest inspect <image_manifest>

<image_manifest>: Specifies an image manifest, such as redhat/ubi9:latest.
The platforms that your Operator and operands mutually support determine the platform compatibility of your Operator project.
Example output
If the previous command does not output platform information, then the specified base image might be a single image instead of an image manifest. You can find which architectures an image supports by running the following command:
$ docker inspect <image>

For Go-based Operator projects, the Operator SDK explicitly references the amd64 architecture in your project's Dockerfile. Make the following change to your Dockerfile to set an environment variable to the value specified by the platform flag:

Example Dockerfile

FROM golang:1.19 as builder
ARG TARGETOS
ARG TARGETARCH
...
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager main.go

Change the GOARCH field from amd64 to $TARGETARCH.
Your Operator project's makefile defines the PLATFORMS environment variable. If your Operator's images do not support all of the platforms set by default, edit the variable to specify the supported platforms. The following example defines the supported platforms as linux/arm64 and linux/amd64:

Example makefile

# ...
PLATFORMS ?= linux/arm64,linux/amd64
.PHONY: docker-buildx
# ...

The following PLATFORMS values are set by default: linux/arm64, linux/amd64, linux/s390x, and linux/ppc64le.
When you run the make docker-buildx command to generate a manifest list, the Operator SDK creates an image manifest for each of the platforms specified by the PLATFORMS variable.

Run the following command from your Operator project directory to build your manager image. Running the command builds a manager image with multi-platform support and pushes the manifest list to your registry:

$ make docker-buildx \
    IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>
5.16.2. About node affinity rules for multi-architecture compute machines and Operator workloads Copy linkLink copied to clipboard!
You must set node affinity rules to ensure your Operator workloads can run on multi-architecture compute machines. Node affinity is a set of rules used by the scheduler to define a pod’s placement. Setting node affinity rules ensures your Operator’s workloads are scheduled to compute machines with compatible architectures.
If your Operator performs better on particular architectures, you can set preferred node affinity rules to schedule pods to machines with the specified architectures.
For more information, see "About clusters with multi-architecture compute machines" and "Controlling pod placement on nodes using node affinity rules".
5.16.2.1. Using required node affinity rules to support multi-architecture compute machines for Operator projects Copy linkLink copied to clipboard!
If you want your Operator to support multi-architecture compute machines, you must define your Operator’s required node affinity rules.
Prerequisites
- An Operator project created or maintained with Operator SDK 1.31.0 or later.
- A manifest list defining the platforms your Operator supports.
Procedure
Search your Operator project for Kubernetes manifests that define pod spec and pod template spec objects.
Important: Because object type names are not declared in YAML files, look for the mandatory containers field in your Kubernetes manifests. The containers field is required when specifying both pod spec and pod template spec objects.

You must set node affinity rules in all Kubernetes manifests that define a pod spec or pod template spec, including objects such as Pod, Deployment, DaemonSet, and StatefulSet.

Example Kubernetes manifest

Set the required node affinity rules in the Kubernetes manifests that define pod spec and pod template spec objects, similar to the following example:

Example Kubernetes manifest
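The original manifest is not reproduced here. The following hedged sketch of a Deployment pod template shows the shape of the required rules described by the callouts below; the object names, image reference, and platform values are assumptions based on the default supported platform list:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:   # 1
            nodeSelectorTerms:                              # 2
              - matchExpressions:                           # 3
                  - key: kubernetes.io/arch
                    operator: In
                    values:                                 # 4
                      - amd64
                      - arm64
                      - ppc64le
                      - s390x
                  - key: kubernetes.io/os
                    operator: In
                    values:                                 # 5
                      - linux
      containers:
        - name: manager
          image: <registry>/<operator_image>:<tag>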
1. Defines a required rule.
2. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied.
3. If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions are satisfied.
4. Specifies the architectures defined in the manifest list.
5. Specifies the operating systems defined in the manifest list.
Go-based Operator projects that use dynamically created workloads might embed pod spec and pod template spec objects in the Operator’s logic.
If your project embeds pod spec or pod template spec objects in the Operator's logic, edit your Operator's logic similar to the following example. The following example shows how to update a PodSpec object by using the Go API:
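The original snippet is not included here. A minimal sketch using the Kubernetes Go API types follows; the helper function name, image parameter, and platform values are assumptions:

import (
	corev1 "k8s.io/api/core/v1"
)

// buildPodSpec returns a PodSpec with required node affinity rules that
// restrict scheduling to the architectures and operating systems in the
// manifest list.
func buildPodSpec(image string) corev1.PodSpec {
	return corev1.PodSpec{
		Affinity: &corev1.Affinity{
			NodeAffinity: &corev1.NodeAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{
						{
							MatchExpressions: []corev1.NodeSelectorRequirement{
								{
									Key:      "kubernetes.io/arch",
									Operator: corev1.NodeSelectorOpIn,
									Values:   []string{"amd64", "arm64", "ppc64le", "s390x"},
								},
								{
									Key:      "kubernetes.io/os",
									Operator: corev1.NodeSelectorOpIn,
									Values:   []string{"linux"},
								},
							},
						},
					},
				},
			},
		},
		Containers: []corev1.Container{
			{Name: "manager", Image: image},
		},
	}
}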
where:

- RequiredDuringSchedulingIgnoredDuringExecution: Defines a required rule.
- NodeSelectorTerms: If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied.
- MatchExpressions: If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions are satisfied.
- kubernetes.io/arch: Specifies the architectures defined in the manifest list.
- kubernetes.io/os: Specifies the operating systems defined in the manifest list.
If you do not set node affinity rules and a container is scheduled to a compute machine with an incompatible architecture, the pod fails and triggers one of the following events:
- CrashLoopBackOff: Occurs when an image manifest's entry point fails to run and an exec format error message is printed in the logs.
- ImagePullBackOff: Occurs when a manifest list does not include a manifest for the architecture where a pod is scheduled or the node affinity terms are set to the wrong values.
5.16.2.2. Using preferred node affinity rules to configure support for multi-architecture compute machines for Operator projects Copy linkLink copied to clipboard!
If your Operator performs better on particular architectures, you can configure preferred node affinity rules to schedule pods to nodes with the specified architectures.
Prerequisites
- An Operator project created or maintained with Operator SDK 1.31.0 or later.
- A manifest list defining the platforms your Operator supports.
- Required node affinity rules are set for your Operator project.
Procedure
Search your Operator project for Kubernetes manifests that define pod spec and pod template spec objects.
Example Kubernetes manifest
Set your Operator's preferred node affinity rules in the Kubernetes manifests that define pod spec and pod template spec objects, similar to the following example:
Example Kubernetes manifest
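The original manifest is not reproduced here. The following hedged sketch shows the shape of the preferred rules described by the callouts below; the object names, image reference, weight, and architecture values are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:   # 1
            - preference:
                matchExpressions:                            # 2
                  - key: kubernetes.io/arch                  # 3
                    operator: In                             # 4
                    values:
                      - amd64
                      - arm64
              weight: 90                                     # 5
      containers:
        - name: manager
          image: <registry>/<operator_image>:<tag>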
1. Defines a preferred rule.
2. If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions are satisfied.
3. Specifies the architectures defined in the manifest list.
4. Specifies an operator. The operator can be In, NotIn, Exists, or DoesNotExist. For example, use the value of In to require the label to be in the node.
5. Specifies a weight for the node. Valid values are 1-100. The node with the highest weight is preferred.
5.16.3. Next steps Copy linkLink copied to clipboard!
5.17. Object pruning utility for Go-based Operators Copy linkLink copied to clipboard!
The operator-lib pruning utility lets Go-based Operators clean up, or prune, objects when they are no longer needed. Operator authors can also use the utility to create custom hooks and strategies.
5.17.1. About the operator-lib pruning utility Copy linkLink copied to clipboard!
Objects, such as jobs or pods, are created as a normal part of the Operator life cycle. If the cluster administrator or the Operator does not remove these objects, they can stay in the cluster and consume resources.
Previously, the following options were available for pruning unnecessary objects:
- Operator authors had to create a unique pruning solution for their Operators.
- Cluster administrators had to clean up objects on their own.
The operator-lib pruning utility removes objects from a Kubernetes cluster for a given namespace. The library was added in version 0.9.0 of the operator-lib library as part of the Operator Framework.
5.17.2. Pruning utility configuration Copy linkLink copied to clipboard!
The operator-lib pruning utility is written in Go and includes common pruning strategies for Go-based Operators.
Example configuration
The pruning utility configuration defines pruning actions by using the following fields:

| Configuration field | Description |
|---|---|
| log | Logger used to handle library log messages. |
| DryRun | Boolean that determines whether resources should be removed. If set to true, the utility runs but does not remove resources. |
| Clientset | Client-go Kubernetes ClientSet used for Kubernetes API calls. |
| LabelSelector | Kubernetes label selector expression used to find resources to prune. |
| Resources | Kubernetes resource kinds. Pod and job kinds are currently supported. |
| Namespaces | List of Kubernetes namespaces to search for resources. |
| Strategy | Pruning strategy to run. |
| Strategy.Mode | Pruning mode: maximum count, maximum age, or a custom strategy. |
| Strategy.MaxCountSetting | Integer value for the maximum count strategy that specifies how many resources should remain after the pruning utility runs. |
| Strategy.MaxAgeSetting | Go time.Duration string value, such as 48h, for the maximum age strategy that specifies the age of resources to prune. |
| Strategy.CustomSettings | Go map of values that can be passed into a custom strategy function. |
| PreDeleteHook | Optional: Go function to call before pruning a resource. |
| CustomStrategy | Optional: Go function that implements a custom pruning strategy. |
Pruning execution
You can call the pruning action by running the execute function on the pruning configuration.
err := cfg.Execute(ctx)
You can also call a pruning action by using a cron package or by calling the pruning utility with a triggering event.
5.18. Migrating package manifest projects to bundle format Copy linkLink copied to clipboard!
Support for the legacy package manifest format for Operators is removed in OpenShift Container Platform 4.8 and later. If you have an Operator project that was initially created using the package manifest format, you can use the Operator SDK to migrate the project to the bundle format. The bundle format is the preferred packaging format for Operator Lifecycle Manager (OLM) starting in OpenShift Container Platform 4.6.
5.18.1. About packaging format migration Copy linkLink copied to clipboard!
The Operator SDK pkgman-to-bundle command helps in migrating Operator Lifecycle Manager (OLM) package manifests to bundles. The command takes an input package manifest directory and generates bundles for each of the versions of manifests present in the input directory. You can also then build bundle images for each of the generated bundles.
For example, consider the following packagemanifests/ directory for a project in the package manifest format:
Example package manifest format layout
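The original tree is not reproduced here. For a package named etcd with versions 0.0.1 and 0.0.2, which match the bundle images listed later in this section, the input layout typically looks similar to the following; the individual file names are assumptions:

packagemanifests/
└── etcd
    ├── 0.0.1
    │   ├── etcdcluster.crd.yaml
    │   └── etcdoperator.clusterserviceversion.yaml
    ├── 0.0.2
    │   ├── etcdcluster.crd.yaml
    │   └── etcdoperator.clusterserviceversion.yaml
    └── etcd.package.yaml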
After running the migration, the following bundles are generated in the bundle/ directory:
Example bundle format layout
Based on this generated layout, bundle images for both of the bundles are also built with the following names:
- quay.io/example/etcd:0.0.1
- quay.io/example/etcd:0.0.2
5.18.2. Migrating a package manifest project to bundle format Copy linkLink copied to clipboard!
Operator authors can use the Operator SDK to migrate a package manifest format Operator project to a bundle format project.
Prerequisites
- Operator SDK CLI installed
- Operator project initially generated using the Operator SDK in package manifest format
Procedure
Use the Operator SDK to migrate your package manifest project to the bundle format and generate bundle images:
$ operator-sdk pkgman-to-bundle <package_manifests_dir> \
    [--output-dir <directory>] \
    --image-tag-base <image_name_base>

1. Specify the location of the package manifests directory for the project, such as packagemanifests/ or manifests/.
2. Optional: By default, the generated bundles are written locally to disk to the bundle/ directory. You can use the --output-dir flag to specify an alternative location.
3. Set the --image-tag-base flag to provide the base of the image name, such as quay.io/example/etcd, that will be used for the bundles. Provide the name without a tag, because the tag for the images is set according to the bundle version. For example, the full bundle image names are generated in the format <image_name_base>:<bundle_version>.
Verification
Verify that the generated bundle image runs successfully:
$ operator-sdk run bundle <bundle_image_name>:<tag>

Example output
5.19. Operator SDK CLI reference Copy linkLink copied to clipboard!
The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier.
Operator SDK CLI syntax
$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
5.19.1. bundle Copy linkLink copied to clipboard!
The operator-sdk bundle command manages Operator bundle metadata.
5.19.1.1. validate Copy linkLink copied to clipboard!
The bundle validate subcommand validates an Operator bundle.
| Flag | Description |
|---|---|
|
|
Help output for the |
|
|
Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are |
|
| List all optional validators available. When set, no validators are run. |
|
|
Label selector to select optional validators to run. When run with the |
5.19.2. cleanup Copy linkLink copied to clipboard!
The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command.
| Flag | Description |
|---|---|
|
|
Help output for the |
|
|
Path to the |
|
| If present, namespace in which to run the CLI request. |
|
|
Time to wait for the command to complete before failing. The default value is |
5.19.3. completion Copy linkLink copied to clipboard!
The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier.
| Subcommand | Description |
|---|---|
|
| Generate bash completions. |
|
| Generate zsh completions. |
| Flag | Description |
|---|---|
|
| Usage help output. |
For example:
$ operator-sdk completion bash
Example output
# bash completion for operator-sdk -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh
5.19.4. create Copy linkLink copied to clipboard!
The operator-sdk create command is used to create, or scaffold, a Kubernetes API.
5.19.4.1. api Copy linkLink copied to clipboard!
The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command.
| Flag | Description |
|---|---|
|
|
Help output for the |
5.19.5. generate Copy linkLink copied to clipboard!
The operator-sdk generate command invokes a specific generator to generate code or manifests.
5.19.5.1. bundle Copy linkLink copied to clipboard!
The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project.
Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence.
| Flag | Description |
|---|---|
|
|
Comma-separated list of channels to which the bundle belongs. The default value is |
|
|
Root directory for |
|
| The default channel for the bundle. |
|
|
Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the |
|
|
Help for |
|
|
Directory from which to read an existing bundle. This directory is the parent of your bundle |
|
|
Directory containing Kustomize bases and a |
|
| Generate bundle manifests. |
|
| Generate bundle metadata and Dockerfile. |
|
| Directory to write the bundle to. |
|
|
Overwrite the bundle metadata and Dockerfile if they exist. The default value is |
|
| Package name for the bundle. |
|
| Run in quiet mode. |
|
| Write bundle manifest to standard out. |
|
| Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. |
5.19.5.2. kustomize Copy linkLink copied to clipboard!
The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator.
5.19.5.2.1. manifests Copy linkLink copied to clipboard!
The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag.
| Flag | Description |
|---|---|
|
| Root directory for API type definitions. |
|
|
Help for |
|
| Directory containing existing Kustomize files. |
|
|
When set to |
|
| Directory where to write Kustomize files. |
|
| Package name. |
|
| Run in quiet mode. |
5.19.6. init Copy linkLink copied to clipboard!
The operator-sdk init command initializes an Operator project and generates, or scaffolds, a default project directory layout for the given plugin.
This command writes the following files:
- Boilerplate license file
- PROJECT file with the domain and repository
- Makefile to build the project
- go.mod file with project dependencies
- kustomization.yaml file for customizing manifests
- Patch file for customizing images for manager manifests
- Patch file for enabling Prometheus metrics
- main.go file to run
| Flag | Description |
|---|---|
|
|
Help output for the |
|
|
Name and optionally version of the plugin to initialize the project with. Available plugins are |
|
|
Project version. Available values are |
5.19.7. run Copy linkLink copied to clipboard!
The operator-sdk run command provides options that can launch the Operator in various environments.
5.19.7.1. bundle Copy linkLink copied to clipboard!
The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM).
| Flag | Description |
|---|---|
|
|
Index image in which to inject a bundle. The default image is |
|
|
Install mode supported by the cluster service version (CSV) of the Operator, for example |
|
|
Install timeout. The default value is |
|
|
Path to the |
|
| If present, namespace in which to run the CLI request. |
|
|
Specifies the security context to use for the catalog pod. Allowed values include |
|
|
Help output for the |
The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission".
5.19.7.2. bundle-upgrade Copy linkLink copied to clipboard!
The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM).
| Flag | Description |
|---|---|
|
|
Upgrade timeout. The default value is |
|
|
Path to the |
|
| If present, namespace in which to run the CLI request. |
|
|
Specifies the security context to use for the catalog pod. Allowed values include |
|
|
Help output for the |
The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission".
5.19.8. scorecard Copy linkLink copied to clipboard!
The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely.
| Flag | Description |
|---|---|
|
|
Path to scorecard configuration file. The default path is |
|
|
Help output for the |
|
|
Path to |
|
| List which tests are available to run. |
|
| Namespace in which to run the test images. |
|
|
Output format for results. Available values are |
|
|
Option to run scorecard with the specified security context. Allowed values include |
|
| Label selector to determine which tests are run. |
|
|
Service account to use for tests. The default value is |
|
| Disable resource cleanup after tests are run. |
|
|
Seconds to wait for tests to complete, for example |
The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission".
Chapter 6. Cluster Operators reference Copy linkLink copied to clipboard!
This reference guide indexes the cluster Operators shipped by Red Hat that serve as the architectural foundation for OpenShift Container Platform. Cluster Operators are installed by default, unless otherwise noted, and are managed by the Cluster Version Operator (CVO). For more details on the control plane architecture, see Operators in OpenShift Container Platform.
Cluster administrators can view cluster Operators in the OpenShift Container Platform web console from the Administration → Cluster Settings page.
Cluster Operators are not managed by Operator Lifecycle Manager (OLM) and OperatorHub. OLM and OperatorHub are part of the Operator Framework used in OpenShift Container Platform for installing and running optional add-on Operators.
Some of the following cluster Operators can be disabled prior to installation. For more information see cluster capabilities.
6.1. Cluster Baremetal Operator Copy linkLink copied to clipboard!
The Cluster Baremetal Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing.
The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action.
6.1.1. Project Copy linkLink copied to clipboard!
6.2. Bare Metal Event Relay Copy linkLink copied to clipboard!
The OpenShift Bare Metal Event Relay manages the life-cycle of the Bare Metal Event Relay. The Bare Metal Event Relay enables you to configure the types of cluster event that are monitored using Redfish hardware events.
Configuration objects
You can edit the configuration, for example the webhook port, after installation by using the following command:
$ oc -n [namespace] edit cm hw-event-proxy-operator-manager-config
Project
CRD
The proxy enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure, reported using the HardwareEvent CR.
hardwareevents.event.redhat-cne.org:
- Scope: Namespaced
- CR: HardwareEvent
- Validation: Yes
6.3. Cloud Credential Operator Copy linkLink copied to clipboard!
The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run.
By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string (""), the CCO operates in its default mode.
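For illustration, a minimal sketch of where this parameter is set in the install-config.yaml file follows; the mode and the other field values shown are placeholders, and the full file contains many additional fields:
apiVersion: v1
baseDomain: example.com          # placeholder domain
metadata:
  name: example-cluster          # placeholder cluster name
credentialsMode: Mint            # other documented values include Passthrough and Manual; omit the parameter or set "" for the default mode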
6.3.1. Project Copy linkLink copied to clipboard!
6.3.2. CRDs Copy linkLink copied to clipboard!
credentialsrequests.cloudcredential.openshift.io:
- Scope: Namespaced
- CR: CredentialsRequest
- Validation: Yes
6.3.3. Configuration objects Copy linkLink copied to clipboard!
No configuration required.
6.4. Cluster Authentication Operator Copy linkLink copied to clipboard!
The Cluster Authentication Operator installs and maintains the Authentication custom resource in a cluster and can be viewed with:
$ oc get clusteroperator authentication -o yaml
6.4.1. Project Copy linkLink copied to clipboard!
6.5. Cluster Autoscaler Operator Copy linkLink copied to clipboard!
The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the cluster-api provider.
6.5.1. Project Copy linkLink copied to clipboard!
6.5.2. CRDs Copy linkLink copied to clipboard!
- ClusterAutoscaler: This is a singleton resource that controls the configuration of the autoscaler instance for the cluster. The Operator only responds to the ClusterAutoscaler resource named default in the managed namespace, which is the value of the WATCH_NAMESPACE environment variable.
- MachineAutoscaler: This resource targets a node group and manages the annotations that enable and configure autoscaling for that group, such as the min and max size. Currently, only MachineSet objects can be targeted. A sketch of both resources is shown after this list.
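The following is a minimal sketch of the two resources; the names, namespace, and limits are illustrative placeholders rather than required values:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                        # the Operator only responds to the resource named "default"
spec:
  resourceLimits:
    maxNodesTotal: 24                  # illustrative cluster-wide node limit
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a              # illustrative name
  namespace: openshift-machine-api
spec:
  minReplicas: 1                       # min size for the targeted node group
  maxReplicas: 12                      # max size for the targeted node group
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet                   # currently only MachineSet objects can be targeted
    name: worker-us-east-1a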
6.6. Cluster Cloud Controller Manager Operator Copy linkLink copied to clipboard!
The status of this Operator is General Availability for Amazon Web Services (AWS), IBM Cloud®, global Microsoft Azure, Microsoft Azure Stack Hub, Nutanix, Red Hat OpenStack Platform (RHOSP), and VMware vSphere.
The Operator is available as a Technology Preview for Alibaba Cloud, Google Cloud Platform (GCP), and IBM Cloud® Power VS.
The Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of OpenShift Container Platform. The Operator is based on the Kubebuilder framework and controller-runtime libraries. You can install the Cloud Controller Manager Operator by using the Cluster Version Operator (CVO).
The Cloud Controller Manager Operator includes the following components:
- Operator
- Cloud configuration observer
By default, the Operator exposes Prometheus metrics through the metrics service.
6.6.1. Project Copy linkLink copied to clipboard!
6.7. Cluster CAPI Operator Copy linkLink copied to clipboard!
The Cluster CAPI Operator maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster.
This Operator is available as a Technology Preview for Amazon Web Services (AWS) and Google Cloud Platform (GCP) clusters.
6.7.1. Project Copy linkLink copied to clipboard!
6.7.2. CRDs Copy linkLink copied to clipboard!
awsmachines.infrastructure.cluster.x-k8s.io:
- Scope: Namespaced
- CR: awsmachine
- Validation: No
gcpmachines.infrastructure.cluster.x-k8s.io:
- Scope: Namespaced
- CR: gcpmachine
- Validation: No
awsmachinetemplates.infrastructure.cluster.x-k8s.io:
- Scope: Namespaced
- CR: awsmachinetemplate
- Validation: No
gcpmachinetemplates.infrastructure.cluster.x-k8s.io:
- Scope: Namespaced
- CR: gcpmachinetemplate
- Validation: No
6.8. Cluster Config Operator Copy linkLink copied to clipboard!
The Cluster Config Operator performs the following tasks related to config.openshift.io:
- Creates CRDs.
- Renders the initial custom resources.
- Handles migrations.
6.8.1. Project Copy linkLink copied to clipboard!
6.9. Cluster CSI Snapshot Controller Operator Copy linkLink copied to clipboard!
The Cluster CSI Snapshot Controller Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing.
The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots.
6.9.1. Project Copy linkLink copied to clipboard!
6.10. Cluster Image Registry Operator Copy linkLink copied to clipboard!
The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage.
On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider.
If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing.
The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace.
6.10.1. Project Copy linkLink copied to clipboard!
6.11. Cluster Machine Approver Operator Copy linkLink copied to clipboard!
The Cluster Machine Approver Operator automatically approves the CSRs requested for a new worker node after cluster installation.
For the control plane node, the approve-csr service on the bootstrap node automatically approves all CSRs during the cluster bootstrapping phase.
6.11.1. Project Copy linkLink copied to clipboard!
6.12. Cluster Monitoring Operator Copy linkLink copied to clipboard!
The Cluster Monitoring Operator (CMO) manages and updates the Prometheus-based cluster monitoring stack deployed on top of OpenShift Container Platform.
Project
CRDs
alertmanagers.monitoring.coreos.com:
- Scope: Namespaced
- CR: alertmanager
- Validation: Yes
prometheuses.monitoring.coreos.com:
- Scope: Namespaced
- CR: prometheus
- Validation: Yes
prometheusrules.monitoring.coreos.com:
- Scope: Namespaced
- CR: prometheusrule
- Validation: Yes
servicemonitors.monitoring.coreos.com:
- Scope: Namespaced
- CR: servicemonitor
- Validation: Yes
Configuration objects
$ oc -n openshift-monitoring edit cm cluster-monitoring-config
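As a sketch, the config map that this command opens typically has the following shape; the retention setting shown is only an illustrative example of a monitoring option:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h        # illustrative option; the available keys depend on your monitoring needs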
6.13. Cluster Network Operator Copy linkLink copied to clipboard!
The Cluster Network Operator installs and upgrades the networking components on an OpenShift Container Platform cluster.
6.14. Cluster Samples Operator Copy linkLink copied to clipboard!
The Cluster Samples Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing.
The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace.
On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples.
The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io. Similarly, the templates are those categorized as OpenShift Container Platform templates.
The Cluster Samples Operator deployment is contained within the openshift-cluster-samples-operator namespace. On start up, the install pull secret is used by the image stream import logic in the OpenShift image registry and API server to authenticate with registry.redhat.io. An administrator can create any additional secrets in the openshift namespace if they change the registry used for the sample image streams. If created, those secrets contain the content of a config.json for docker needed to facilitate image import.
The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. After the Cluster Samples Operator creates a sample, it adds an annotation that denotes the OpenShift Container Platform version that it is compatible with. The Operator uses this annotation to ensure that each sample matches the compatible release version. Samples outside of its inventory are ignored, as are skipped samples.
Modifications to any samples that are managed by the Operator are allowed as long as the version annotation is not modified or deleted. However, on an upgrade, because the version annotation changes, those modifications can be replaced when the sample is updated to the newer version. The Jenkins images are part of the image payload from the installation and are tagged into the image streams directly.
The samples resource includes a finalizer, which cleans up the following upon its deletion:
- Operator-managed image streams
- Operator-managed templates
- Operator-generated configuration resources
- Cluster status resources
Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration.
6.14.1. Project Copy linkLink copied to clipboard!
6.15. Cluster Storage Operator Copy linkLink copied to clipboard!
The Cluster Storage Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing.
The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends.
6.15.1. Project Copy linkLink copied to clipboard!
6.15.2. Configuration Copy linkLink copied to clipboard!
No configuration is required.
6.15.3. Notes Copy linkLink copied to clipboard!
- The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs.
6.16. Cluster Version Operator Copy linkLink copied to clipboard!
Cluster Operators manage specific areas of cluster functionality. The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default.
The CVO also checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph by collecting the status of both the cluster version and its cluster Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster.
For more information regarding cluster version condition types, see "Understanding cluster version condition types".
6.16.1. Project Copy linkLink copied to clipboard!
6.17. Console Operator Copy linkLink copied to clipboard!
The Console Operator is an optional cluster capability that can be disabled by cluster administrators during installation. If you disable the Console Operator at installation, your cluster is still supported and upgradable. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing.
The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console.
6.17.1. Project Copy linkLink copied to clipboard!
6.18. Control Plane Machine Set Operator Copy linkLink copied to clipboard!
The Control Plane Machine Set Operator automates the management of control plane machine resources within an OpenShift Container Platform cluster.
This Operator is available for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Nutanix, and VMware vSphere.
6.18.1. Project Copy linkLink copied to clipboard!
6.18.2. CRDs Copy linkLink copied to clipboard!
controlplanemachineset.machine.openshift.io:
- Scope: Namespaced
- CR: ControlPlaneMachineSet
- Validation: Yes
6.19. DNS Operator Copy linkLink copied to clipboard!
The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods that enables DNS-based Kubernetes Service discovery in OpenShift Container Platform.
The Operator creates a working default deployment based on the cluster’s configuration.
- The default cluster domain is cluster.local.
- Configuration of the CoreDNS Corefile or Kubernetes plugin is not yet supported.
The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster.
6.19.1. Project Copy linkLink copied to clipboard!
6.20. etcd cluster Operator Copy linkLink copied to clipboard!
The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures.
6.20.1. Project Copy linkLink copied to clipboard!
6.20.2. CRDs Copy linkLink copied to clipboard!
etcds.operator.openshift.io:
- Scope: Cluster
- CR: etcd
- Validation: Yes
6.20.3. Configuration objects Copy linkLink copied to clipboard!
$ oc edit etcd cluster
6.21. Ingress Operator Copy linkLink copied to clipboard!
The Ingress Operator configures and manages the OpenShift Container Platform router.
6.21.1. Project Copy linkLink copied to clipboard!
6.21.2. CRDs Copy linkLink copied to clipboard!
clusteringresses.ingress.openshift.io:
- Scope: Namespaced
- CR: clusteringresses
- Validation: No
6.21.3. Configuration objects Copy linkLink copied to clipboard!
Cluster config
- Type Name: clusteringresses.ingress.openshift.io
- Instance Name: default
- View Command:
$ oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml
6.21.4. Notes Copy linkLink copied to clipboard!
The Ingress Operator sets up the router in the openshift-ingress project and creates the deployment for the router:
$ oc get deployment -n openshift-ingress
The Ingress Operator uses the clusterNetwork[].cidr from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed Ingress Controller (router) should operate in. For example, if clusterNetwork contains only a v6 cidr, then the Ingress Controller operates in IPv6-only mode.
In the following example, Ingress Controllers managed by the Ingress Operator will run in IPv4-only mode because only one cluster network exists and the network is an IPv4 cidr:
$ oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'
Example output
map[cidr:10.128.0.0/14 hostPrefix:23]
6.22. Insights Operator Copy linkLink copied to clipboard!
The Insights Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing.
The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through the Insights advisor service on console.redhat.com.
6.22.1. Project Copy linkLink copied to clipboard!
6.22.2. Configuration Copy linkLink copied to clipboard!
No configuration is required.
6.22.3. Notes Copy linkLink copied to clipboard!
Insights Operator complements OpenShift Container Platform Telemetry.
6.23. Kubernetes API Server Operator Copy linkLink copied to clipboard!
The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed using the Cluster Version Operator (CVO).
6.23.1. Project Copy linkLink copied to clipboard!
6.23.2. CRDs Copy linkLink copied to clipboard!
kubeapiservers.operator.openshift.io:
- Scope: Cluster
- CR: kubeapiserver
- Validation: Yes
6.23.3. Configuration objects Copy linkLink copied to clipboard!
$ oc edit kubeapiserver
6.24. Kubernetes Controller Manager Operator Copy linkLink copied to clipboard!
The Kubernetes Controller Manager Operator manages and updates the Kubernetes Controller Manager deployed on top of OpenShift Container Platform. The Operator is based on OpenShift Container Platform library-go framework and it is installed via the Cluster Version Operator (CVO).
It contains the following components:
- Operator
- Bootstrap manifest renderer
- Installer based on static pods
- Configuration observer
By default, the Operator exposes Prometheus metrics through the metrics service.
6.24.1. Project Copy linkLink copied to clipboard!
6.25. Kubernetes Scheduler Operator Copy linkLink copied to clipboard!
The Kubernetes Scheduler Operator manages and updates the Kubernetes Scheduler deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed with the Cluster Version Operator (CVO).
The Kubernetes Scheduler Operator contains the following components:
- Operator
- Bootstrap manifest renderer
- Installer based on static pods
- Configuration observer
By default, the Operator exposes Prometheus metrics through the metrics service.
6.25.1. Project Copy linkLink copied to clipboard!
6.25.2. Configuration Copy linkLink copied to clipboard!
The configuration for the Kubernetes Scheduler is the result of merging:
- a default configuration
- an observed configuration from the spec of schedulers.config.openshift.io
All of these are sparse configurations: individually invalid JSON snippets that are merged to form a valid configuration at the end.
6.26. Kubernetes Storage Version Migrator Operator Copy linkLink copied to clipboard!
The Kubernetes Storage Version Migrator Operator detects changes of the default storage version, creates migration requests for resource types when the storage version changes, and processes migration requests.
6.26.1. Project Copy linkLink copied to clipboard!
6.27. Machine API Operator Copy linkLink copied to clipboard!
The Machine API Operator manages the lifecycle of specific-purpose custom resource definitions (CRDs), controllers, and RBAC objects that extend the Kubernetes API. These resources declare the desired state of machines in a cluster.
6.27.1. Project Copy linkLink copied to clipboard!
6.27.2. CRDs Copy linkLink copied to clipboard!
- MachineSet
- Machine
- MachineHealthCheck
6.28. Machine Config Operator Copy linkLink copied to clipboard!
The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet.
There are four components:
- machine-config-server: Provides Ignition configuration to new machines joining the cluster.
- machine-config-controller: Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually.
- machine-config-daemon: Applies the new machine configuration during an update. Validates and verifies the state of the machine against the requested machine configuration.
- machine-config: Provides a complete source of machine configuration at installation, first start up, and updates for a machine.
Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates.
To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies.
Additional resources
6.28.1. Project Copy linkLink copied to clipboard!
6.29. Marketplace Operator Copy linkLink copied to clipboard!
The Marketplace Operator is an optional cluster capability that can be disabled by cluster administrators if it is not needed. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing.
The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster.
6.29.1. Project Copy linkLink copied to clipboard!
6.30. Node Tuning Operator Copy linkLink copied to clipboard!
The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface for node-level sysctls and adds flexibility for custom tuning based on user needs.
The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
Node-level settings applied by the containerized TuneD daemon are rolled back when an event triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal.
The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications.
The cluster administrator configures a performance profile to define node-level settings such as the following:
- Updating the kernel to kernel-rt.
- Choosing CPUs for housekeeping.
- Choosing CPUs for running workloads.
Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles.
The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later.
In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator.
6.30.1. Project Copy linkLink copied to clipboard!
6.31. OpenShift API Server Operator Copy linkLink copied to clipboard!
The OpenShift API Server Operator installs and maintains the openshift-apiserver on a cluster.
6.31.1. Project Copy linkLink copied to clipboard!
6.31.2. CRDs Copy linkLink copied to clipboard!
openshiftapiservers.operator.openshift.io- Scope: Cluster
-
CR:
openshiftapiserver - Validation: Yes
6.32. OpenShift Controller Manager Operator Copy linkLink copied to clipboard!
The OpenShift Controller Manager Operator installs and maintains the OpenShiftControllerManager custom resource in a cluster and can be viewed with:
$ oc get clusteroperator openshift-controller-manager -o yaml
The custom resource definition (CRD) openshiftcontrollermanagers.operator.openshift.io can be viewed in a cluster with:
$ oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml
6.32.1. Project Copy linkLink copied to clipboard!
6.33. Operator Lifecycle Manager Operators Copy linkLink copied to clipboard!
6.33.1. What is Operator Lifecycle Manager? Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework, an open source toolkit designed to manage Operators in an effective, automated, and scalable way.
Figure 6.1. Operator Lifecycle Manager workflow
OLM runs by default in OpenShift Container Platform 4.14, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.
For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it.
6.33.2. CRDs Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator.
Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework:
| Resource | Short name | Owner | Description |
|---|---|---|---|
| ClusterServiceVersion (CSV) | csv | OLM | Application metadata: name, version, icon, required resources, installation, and so on. |
| InstallPlan | ip | Catalog | Calculated list of resources to be created to automatically install or upgrade a CSV. |
| CatalogSource | catsrc | Catalog | A repository of CSVs, CRDs, and packages that define an application. |
| Subscription | sub | Catalog | Used to keep CSVs up to date by tracking a channel in a package. |
| OperatorGroup | og | OLM | Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. |
Each of these Operators is also responsible for creating the following resources:
| Resource | Owner |
|---|---|
| Deployments | OLM |
| ServiceAccounts | OLM |
| (Cluster)Roles | OLM |
| (Cluster)RoleBindings | OLM |
| CustomResourceDefinitions (CRDs) | Catalog |
| ClusterServiceVersions (CSVs) | Catalog |
6.33.3. OLM Operator Copy linkLink copied to clipboard!
The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster.
The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application.
The OLM Operator uses the following workflow:
- Watch for cluster service versions (CSVs) in a namespace and check that requirements are met.
- If requirements are met, run the install strategy for the CSV.
Note: A CSV must be an active member of an Operator group for the install strategy to run.
6.33.4. Catalog Operator Copy linkLink copied to clipboard!
The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions.
To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user.
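For example, a minimal sketch of such a Subscription object might look like the following; the package, channel, and catalog names are placeholders:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  name: example-operator              # package name to track
  channel: stable                     # channel to pull updates from
  source: example-catalog             # CatalogSource object that provides the package
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic      # set to Manual to require approval of generated InstallPlan objects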
The Catalog Operator uses the following workflow:
- Connect to each catalog source in the cluster.
- Watch for unresolved install plans created by a user, and if found:
- Find the CSV matching the name requested and add the CSV as a resolved resource.
- For each managed or required CRD, add the CRD as a resolved resource.
- For each required CRD, find the CSV that manages it.
- Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically.
- Watch for catalog sources and subscriptions and create install plans based on them.
6.33.5. Catalog Registry Copy linkLink copied to clipboard!
The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels.
A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version.
6.34. OpenShift Service CA Operator Copy linkLink copied to clipboard!
The OpenShift Service CA Operator mints and manages serving certificates for Kubernetes services.
6.34.1. Project Copy linkLink copied to clipboard!
6.35. vSphere Problem Detector Operator Copy linkLink copied to clipboard!
The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage.
The vSphere Problem Detector Operator is only started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere.
6.35.1. Configuration Copy linkLink copied to clipboard!
No configuration is required.
6.35.2. Notes Copy linkLink copied to clipboard!
- The Operator supports OpenShift Container Platform installations on vSphere.
-
The Operator uses the
vsphere-cloud-credentialsto communicate with vSphere. - The Operator performs checks that are related to storage.
Chapter 7. OLM 1.0 (Technology Preview) Copy linkLink copied to clipboard!
7.1. About Operator Lifecycle Manager 1.0 (Technology Preview) Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.14 introduces components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as OLM 1.0. This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
During this Technology Preview phase of OLM 1.0 in OpenShift Container Platform 4.14, administrators can explore the following features:
- Fully declarative model that supports GitOps workflows
OLM 1.0 simplifies Operator management through two key APIs:
- A new Operator API, provided as operator.operators.operatorframework.io by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles.
- The Catalog API, provided by the new catalogd component, serves as the foundation for OLM 1.0, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges.
For more information, see Operator Controller and Catalogd.
- Improved control over Operator updates
- With improved insight into catalog content, administrators can specify target versions for installation and updates. This grants administrators more control over the target version of Operator updates. For more information, see Updating an Operator.
- Flexible Operator packaging format
Administrators can use file-based catalogs to install and manage the following types of content:
- OLM-based Operators, similar to the existing OLM experience
- Plain bundles, which are static collections of arbitrary Kubernetes manifests
In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see Installing an Operator from a catalog and Managing plain bundles.
7.1.1. Purpose Copy linkLink copied to clipboard!
The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster and platform-as-a-service (PaaS) administrators throughout the lifecycle of the underlying cluster.
The initial version of OLM, which launched with OpenShift Container Platform 4 and is included by default, focused on providing unique support for these specific needs for a particular type of cluster extension, known as Operators. Operators are classified as one or more Kubernetes controllers, shipping with one or more API extensions, as CustomResourceDefinition (CRD) objects, to provide additional functionality to the cluster.
After running in production clusters for many releases, the next-generation of OLM aims to encompass lifecycles for cluster extensions that are not just Operators.
7.2. Components and architecture Copy linkLink copied to clipboard!
7.2.1. OLM 1.0 components overview (Technology Preview) Copy linkLink copied to clipboard!
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator Lifecycle Manager (OLM) 1.0 comprises the following component projects:
- Operator Controller
- Operator Controller is the central component of OLM 1.0 that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from each of the following components.
- RukPak
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
- Catalogd
- Catalogd is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the OLM 1.0 microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content.
7.2.2. Operator Controller (Technology Preview) Copy linkLink copied to clipboard!
Operator Controller is the central component of Operator Lifecycle Manager (OLM) 1.0 and consumes the other OLM 1.0 components, RukPak and catalogd. It extends Kubernetes with an API through which users can install Operators and extensions.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.2.1. Operator API Copy linkLink copied to clipboard!
Operator Controller provides a new Operator API object, which is a single resource that represents an instance of an installed Operator. This operator.operators.operatorframework.io API streamlines management of installed Operators by consolidating user-facing APIs into a single object.
In OLM 1.0, Operator objects are cluster-scoped. This differs from earlier OLM versions where Operators could be either namespace-scoped or cluster-scoped, depending on the configuration of their related Subscription and OperatorGroup objects.
For more information about the earlier behavior, see Multitenancy and Operator colocation.
Example Operator object
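A minimal sketch of such an object, assuming the v1alpha1 Operator API and using the Quay Operator package purely as a placeholder, might look like the following:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator      # package to install from an available catalog
  version: 3.8.12                 # illustrative target version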
When using the OpenShift CLI (oc), the Operator resource provided with OLM 1.0 during this Technology Preview phase requires specifying the full <resource>.<group> format: operator.operators.operatorframework.io. For example:
$ oc get operator.operators.operatorframework.io
If you specify only the Operator resource without the API group, the CLI returns results for an earlier API (operator.operators.coreos.com) that is unrelated to OLM 1.0.
7.2.2.1.1. About target versions in OLM 1.0 Copy linkLink copied to clipboard!
In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).
If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.
Example CR with a specified channel
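A minimal sketch of such a CR follows, with the Quay Operator used as a placeholder package; the channel field corresponds to callout 1 below:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  channel: stable-3.8             # 1: channel to track for automatic updates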
1. Installs the latest release published to the specified channel. Updates to the channel are automatically installed.
If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
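A minimal sketch of such a CR follows, again using the Quay Operator as a placeholder package; the version field corresponds to callout 1 below:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12                 # 1: pinned target version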
1. Specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.
If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.
In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.
You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:
Command syntax
$ oc get package <catalog_name>-<package_name> -o yaml
After you create or update a CR, create or configure the Operator by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
Troubleshooting
If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml
7.2.3. Rukpak (Technology Preview) Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) 1.0 uses the RukPak component and its resources to manage cloud-native content.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.3.1. About RukPak Copy linkLink copied to clipboard!
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs.
Common terminology
- Bundle
- A collection of Kubernetes manifests that define content to be deployed to a cluster
- Bundle image
- A container image that contains a bundle within its filesystem
- Bundle Git repository
- A Git repository that contains a bundle within a directory
- Provisioner
- Controllers that install and manage content on a Kubernetes cluster
- Bundle deployment
- Generates deployed instances of a bundle
7.2.3.2. About provisioners Copy linkLink copied to clipboard!
RukPak consists of a series of controllers, known as provisioners, that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment. These components work together to bring content onto the cluster and install it, generating resources within the cluster.
Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles.
Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster.
A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources.
7.2.3.3. Bundle Copy linkLink copied to clipboard!
A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for a pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content.
Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type.
Example Bundle object configured to work with the plain provisioner
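A minimal sketch of such an object, assuming the core.rukpak.io/v1alpha1 API and a placeholder image reference, might look like the following:
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: my-bundle
spec:
  source:
    type: image
    image:
      ref: quay.io/example/my-bundle@sha256:<digest>   # placeholder; prefer a digest-based reference
  provisionerClassName: core-rukpak-io-plain           # ID of the plain provisioner that unpacks this bundle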
Bundles are considered immutable after they are created.
7.2.3.3.1. Bundle immutability Copy linkLink copied to clipboard!
After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object.
Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status, are updated during the bundle’s lifecycle; it is only the spec field that is considered immutable.
Applying a Bundle object and then attempting to update its spec should fail. For example, the following command creates a bundle:
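A sketch of what this command might look like, using the combo-tag-ref name from the output below and a Git source with a tag reference (the repository and tag shown are assumptions for illustration), is:
$ oc apply -f - <<EOF
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: combo-tag-ref
spec:
  source:
    type: git
    git:
      ref:
        tag: v0.0.2                                       # assumed initial tag
      repository: https://github.com/operator-framework/combo
  provisionerClassName: core-rukpak-io-plain
EOF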
Example output
bundle.core.rukpak.io/combo-tag-ref created
Then, patching the bundle to point to a newer tag returns an error:
$ oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}'
Example output
Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable
The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating it in-place.
7.2.3.3.1.1. Further immutability considerations Copy linkLink copied to clipboard!
While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario:
1. A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object.
2. The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit.
3. A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod.
If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content.
This is similar to pod behavior, where one of the pod’s container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it.
To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle.
7.2.3.3.2. Plain bundle spec Copy linkLink copied to clipboard!
A plain bundle in RukPak is a collection of static, arbitrary, Kubernetes YAML manifests in a given directory.
The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0, combines the type of bundle (plain) with the current schema version (v0).
The plain+v0 bundle format is at schema version v0, which means it is an experimental format that is subject to change.
For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application.
Example plain+v0 bundle file tree
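An illustrative layout, with placeholder file names, might look like the following:
manifests/
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml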
The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories.
Do not include any content in the manifests/ directory of a plain bundle that is not a static manifest. Otherwise, a failure occurs when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command results in an error. Multi-object YAML or JSON files are valid as well.
7.2.3.3.3. Registry bundle spec Copy linkLink copied to clipboard!
A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format.
7.2.3.4. BundleDeployment Copy linkLink copied to clipboard!
A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and limit access, by using RBAC, to the BundleDeployment API to only those who require those permissions.
The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle.
Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept.
The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle are defined by the provisioner that is configured to watch that bundle deployment.
Example BundleDeployment object configured to work with the plain provisioner
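A minimal sketch of such an object, assuming the core.rukpak.io/v1alpha1 API and a placeholder image reference, might look like the following:
apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-bundle-deployment
spec:
  provisionerClassName: core-rukpak-io-plain       # provisioner that reconciles this bundle deployment
  template:
    metadata: {}
    spec:
      provisionerClassName: core-rukpak-io-plain
      source:
        type: image
        image:
          ref: quay.io/example/my-bundle@sha256:<digest>   # placeholder; prefer a digest-based reference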
7.2.4. Dependency resolution in OLM 1.0 (Technology Preview) Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) 1.0 uses a dependency manager for resolving constraints over catalogs of RukPak bundles.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.4.1. Concepts Copy linkLink copied to clipboard!
Users expect that the package manager never does the following:
- Install a package whose dependencies cannot be fulfilled or that conflict with the dependencies of another package
- Install a package whose constraints cannot be met by the current set of installable packages
- Update a package in a way that breaks another package that depends on it
7.2.4.1.1. Example: Successful resolution Copy linkLink copied to clipboard!
A user wants to install packages A and B that have the following dependencies:
| Package A | Package B |
| ↓ (depends on) | ↓ (depends on) |
| Package C | Package D |
Additionally, the user wants to pin the version of A to v0.1.0.
Packages and constraints passed to OLM 1.0
Packages
- A
- B
Constraints
- A v0.1.0 depends on C v0.1.0
- A pinned to v0.1.0
- B depends on D
Output
Resolution set:
- A v0.1.0
- B latest
- C v0.1.0
- D latest
7.2.4.1.2. Example: Unsuccessful resolution Copy linkLink copied to clipboard!
A user wants to install packages A and B that have the following dependencies:
| Package A | Package B |
| ↓ (depends on) | ↓ (depends on) |
| Package C | Package C |
Additionally, the user wants to pin the version of A to v0.1.0.
Packages and constraints passed to OLM 1.0
Packages
- A
- B
Constraints
- A v0.1.0 depends on C v0.1.0
- A pinned to v0.1.0
- B latest depends on C v0.2.0
Output
Resolution set:
- Unable to resolve because A v0.1.0 requires C v0.1.0, which conflicts with B latest requiring C v0.2.0
7.2.5. Catalogd (Technology Preview) Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) 1.0 uses the catalogd component and its resources to manage Operator and extension catalogs.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.2.5.1. About catalogs in OLM 1.0 Copy linkLink copied to clipboard!
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
7.2.5.1.1. Red Hat-provided Operator catalogs in OLM 1.0 Copy linkLink copied to clipboard!
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show how to create catalog resources for OLM 1.0.
Example Red Hat Operators catalog
Example Certified Operators catalog
Example Community Operators catalog
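A minimal sketch of these Catalog CRs, assuming the catalogd v1alpha1 API and illustrative index image tags, might look like the following:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14
---
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: certified-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/certified-operator-index:v4.14
---
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: community-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/community-operator-index:v4.14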
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml
1. Specifies the catalog CR, such as redhat-operators.yaml.
7.3. Installing an Operator from a catalog in OLM 1.0 (Technology Preview) Copy linkLink copied to clipboard!
Cluster administrators can add catalogs, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.
In the current Technology Preview release of Operator Lifecycle Manager (OLM) 1.0, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
7.3.1. Prerequisites Copy linkLink copied to clipboard!
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions
  Note: For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.
- The TechPreviewNoUpgrade feature set enabled on the cluster
  Warning: Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
- The OpenShift CLI (oc) installed on your workstation
7.3.2. About catalogs in OLM 1.0
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
7.3.2.1. Red Hat-provided Operator catalogs in OLM 1.0
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show how to create catalog resources for OLM 1.0.
Example Red Hat Operators catalog
Example Certified Operators catalog
Example Community Operators catalog
The following command adds a catalog to your cluster:
Command syntax
$ oc apply -f <catalog_name>.yaml
where <catalog_name>.yaml specifies the catalog CR, such as redhat-operators.yaml.
The following procedures use the Red Hat Operators catalog and the Quay Operator as examples.
7.3.3. About target versions in OLM 1.0
In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).
If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.
Example CR with a specified channel
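A minimal sketch of such a CR, assuming the quay-operator package used later in this chapter; the v1alpha1 API version, the packageName field, and the stable-3.8 channel name are assumptions:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  channel: stable-3.8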
Specifying a channel installs the latest release published to that channel. Updates to the channel are automatically installed.
If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying a Operator’s target version pins the Operator’s version to the specified release.
Example CR with the target version specified
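A minimal sketch of such a CR, again assuming the quay-operator package; the v1alpha1 API version and field names are assumptions consistent with the oc patch example later in this chapter:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: "3.8.12"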
The version field specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.
If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.
In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.
You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:
Command syntax
$ oc get package <catalog_name>-<package_name> -o yaml
After you create or update a CR, create or configure the Operator by running the following command:
Command syntax
$ oc apply -f <extension_name>.yaml
Troubleshooting
If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml
7.3.4. Adding a catalog to a cluster
To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster.
Procedure
Create a catalog custom resource (CR), similar to the following example:
Example redhat-operators.yaml
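A minimal sketch of this CR; the v1alpha1 API version and the index image path are assumptions consistent with the catalog resources used in this chapter:
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14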
Specify the catalog’s image in the spec.source.image field.
Add the catalog to your cluster by running the following command:
$ oc apply -f redhat-operators.yaml
Example output
catalog.catalogd.operatorframework.io/redhat-operators created
Verification
Run the following commands to verify the status of your catalog:
Check if your catalog is available by running the following command:
$ oc get catalog
Example output
NAME               AGE
redhat-operators   20s
Check the status of your catalog by running the following command:
$ oc get catalogs.catalogd.operatorframework.io -o yaml
7.3.5. Finding Operators to install from a catalog
After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install.
Prerequisite
- You have added a catalog to your cluster.
Procedure
Get a list of the Operators and extensions in the catalog by running the following command:
$ oc get packages
Inspect the contents of an Operator or extension’s custom resource (CR) by running the following command:
$ oc get package <catalog_name>-<package_name> -o yaml
Example command
$ oc get package redhat-operators-quay-operator -o yaml
7.3.6. Installing an Operator
You can install an Operator from a catalog by creating an Operator custom resource (CR) and applying it to the cluster.
Prerequisites
- You have added a catalog to your cluster.
- You have inspected the details of an Operator to find what version you want to install.
Procedure
Create an Operator CR, similar to the following example:
Example test-operator.yaml CR
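A minimal sketch of the CR, installing the Quay Operator used in these examples; the v1alpha1 API version and field names are assumptions consistent with the oc patch command shown later in this chapter:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: "3.8.12"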
Apply the Operator CR to the cluster by running the following command:
$ oc apply -f test-operator.yaml
Example output
operator.operators.operatorframework.io/quay-example created
Verification
View the Operator’s CR in the YAML format by running the following command:
$ oc get operator.operators.operatorframework.io/quay-example -o yaml
Get information about your Operator’s controller manager pod by running the following command:
$ oc get pod -n quay-operator-system
Example output
NAME                                     READY   STATUS    RESTARTS   AGE
quay-operator.v3.8.12-6677b5c98f-2kdtb   1/1     Running   0          2m28s
7.3.7. Updating an Operator
You can update your Operator by manually editing your Operator’s custom resource (CR) and applying the changes.
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Inspect your Operator’s package contents to find which channels and versions are available for updating by running the following command:
$ oc get package <catalog_name>-<package_name> -o yaml
Example command
$ oc get package redhat-operators-quay-operator -o yaml
Edit your Operator’s CR to update the version to 3.9.1, as shown in the following example:
Example test-operator.yaml CR
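A sketch of the updated CR, under the same assumptions as the installation example:
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: "3.9.1"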
Update the version to 3.9.1.
Apply the update to the cluster by running the following command:
$ oc apply -f test-operator.yaml
Example output
operator.operators.operatorframework.io/quay-example configured
Tip: You can patch and apply the changes to your Operator’s version from the CLI by running the following command:
$ oc patch operator.operators.operatorframework.io/quay-example -p \
    '{"spec":{"version":"3.9.1"}}' \
    --type=merge
Example output
operator.operators.operatorframework.io/quay-example patched
Verification
Verify that the channel and version updates have been applied by running the following command:
$ oc get operator.operators.operatorframework.io/quay-example -o yaml
In the output, verify that the version is updated to 3.9.1.
7.3.8. Deleting an Operator
You can delete an Operator and its custom resource definitions (CRDs) by deleting the Operator’s custom resource (CR).
Prerequisites
- You have a catalog installed.
- You have an Operator installed.
Procedure
Delete an Operator and its CRDs by running the following command:
$ oc delete operator.operators.operatorframework.io quay-example
Example output
operator.operators.operatorframework.io "quay-example" deleted
Verification
Run the following commands to verify that your Operator and its resources were deleted:
Verify the Operator is deleted by running the following command:
$ oc get operator.operators.operatorframework.io
Example output
No resources found
Verify that the Operator’s system namespace is deleted by running the following command:
$ oc get ns quay-operator-system
Example output
Error from server (NotFound): namespaces "quay-operator-system" not found
7.3.9. Deleting a catalog
You can delete a catalog by deleting its custom resource (CR).
Prerequisites
- You have a catalog installed.
Procedure
Delete a catalog by running the following command:
$ oc delete catalog <catalog_name>
Example output
catalog.catalogd.operatorframework.io "my-catalog" deleted
Verification
Verify the catalog is deleted by running the following command:
$ oc get catalog
7.4. Managing plain bundles in OLM 1.0 (Technology Preview)
In Operator Lifecycle Manager (OLM) 1.0, a plain bundle is a static collection of arbitrary Kubernetes manifests in YAML format. The experimental olm.bundle.mediatype property of the olm.bundle schema object differentiates a plain bundle (plain+v0) from a regular (registry+v1) bundle.
OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
As a cluster administrator, you can build and publish a file-based catalog that includes a plain bundle image by completing the following procedures:
- Build a plain bundle image.
- Create a file-based catalog.
- Add the plain bundle image to your file-based catalog.
- Build your catalog as an image.
- Publish your catalog image.
7.4.1. Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions
  Note: For OpenShift Container Platform 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.
- The TechPreviewNoUpgrade feature set enabled on the cluster
  Warning: Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
- The OpenShift CLI (oc) installed on your workstation
- The opm CLI installed on your workstation
- Docker or Podman installed on your workstation
- Push access to a container registry, such as Quay
- Kubernetes manifests for your bundle in a flat directory at the root of your project, similar to the following structure:
  Example directory structure
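A sketch of such a layout; the manifest file names are illustrative:
.
└── manifests
    ├── namespace.yaml
    ├── service_account.yaml
    ├── cluster_role.yaml
    ├── cluster_role_binding.yaml
    └── deployment.yaml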
7.4.2. Building a plain bundle image from an image source
The Operator Controller currently supports installing plain bundles created only from a plain bundle image.
Procedure
At the root of your project, create a Dockerfile that can build a bundle image:
Example plainbundle.Dockerfile
FROM scratch
ADD manifests /manifests
Use the FROM scratch directive to make the size of the image smaller. No other files or directories are required in the bundle image.
Build an Open Container Initiative (OCI)-compliant image by using your preferred build tool, similar to the following example:
$ podman build -f plainbundle.Dockerfile -t \
    quay.io/<organization_name>/<repository_name>:<image_tag> .
Use an image tag that references a repository where you have push access privileges.
Push the image to your remote registry by running the following command:
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
7.4.3. Creating a file-based catalog
If you do not have a file-based catalog, you must perform the following steps to initialize the catalog.
Procedure
Create a directory for the catalog by running the following command:
$ mkdir <catalog_dir>
Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command in the same directory level as the previous step:
$ opm generate dockerfile <catalog_dir> \
    -i registry.redhat.io/openshift4/ose-operator-registry:v4.14
Specify the official Red Hat base image by using the -i flag, otherwise the Dockerfile uses the default upstream image.
Note: The generated Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
Example directory structure
.
├── <catalog_dir>
└── <catalog_dir>.Dockerfile
Populate the catalog with the package definition for your extension by running the opm init command:
$ opm init <extension_name> \
    --output json \
    > <catalog_dir>/index.json
This command generates an olm.package declarative config blob in the specified catalog configuration file.
7.4.4. Adding a plain bundle to a file-based catalog
The opm render command does not support adding plain bundles to catalogs. You must manually add plain bundles to your file-based catalog, as shown in the following procedure.
Procedure
Verify that the index.json or index.yaml file for your catalog is similar to the following example:
Example <catalog_dir>/index.json file
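A sketch of the content generated by opm init, shown here in YAML form (an index.json file carries the same fields); the placeholder values are illustrative:
schema: olm.package
name: <extension_name>
defaultChannel: ""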
To create an olm.bundle blob, edit your index.json or index.yaml file, similar to the following example:
Example <catalog_dir>/index.json file with olm.bundle blob
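A sketch of an olm.bundle blob for a plain bundle, in YAML form; the names, version, and image reference are placeholders, and the experimental olm.bundle.mediatype property marks the bundle as plain+v0 as described at the start of this section:
schema: olm.bundle
name: <extension_name>.v<version>
package: <extension_name>
image: quay.io/<organization_name>/<repository_name>:<image_tag>
properties:
  - type: olm.package
    value:
      packageName: <extension_name>
      version: <bundle_version>
  - type: olm.bundle.mediatype
    value: plain+v0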
To create an olm.channel blob, edit your index.json or index.yaml file, similar to the following example:
Example <catalog_dir>/index.json file with olm.channel blob
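A sketch of an olm.channel blob, in YAML form, that publishes the bundle above to a channel; the channel name is a placeholder:
schema: olm.channel
name: <desired_channel_name>
package: <extension_name>
entries:
  - name: <extension_name>.v<version>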
Verification
Open your index.json or index.yaml file and ensure it is similar to the following example:
Example <catalog_dir>/index.json file
Validate your catalog by running the following command:
$ opm validate <catalog_dir>
7.4.5. Building and publishing a file-based catalog
Procedure
Build your file-based catalog as an image by running the following command:
$ podman build -f <catalog_dir>.Dockerfile -t \
    quay.io/<organization_name>/<repository_name>:<image_tag> .
Push your catalog image by running the following command:
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.