
Chapter 5. Developing Operators


5.1. About the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.

Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.

The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

Why use the Operator SDK?

The Operator SDK simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring.

The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features:

  • High-level APIs and abstractions to write the operational logic more intuitively
  • Tools for scaffolding and code generation to quickly bootstrap a new project
  • Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster
  • Extensions to cover common Operator use cases
  • Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed

Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.

Note

OpenShift Container Platform 4.16 supports Operator SDK 1.31.0.

5.1.1. What are Operators?

For an overview about basic Operator concepts and terminology, see Understanding Operators.

5.1.2. Development workflow

The Operator SDK provides the following workflow to develop a new Operator:

  1. Create an Operator project by using the Operator SDK command-line interface (CLI).
  2. Define new resource APIs by adding custom resource definitions (CRDs).
  3. Specify resources to watch by using the Operator SDK API.
  4. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
  5. Use the Operator SDK CLI to build and generate the Operator deployment manifests.

Figure 5.1. Operator SDK workflow


At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
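
In a Go-based Operator, for example, that handler is the controller's Reconcile() method. The following is a minimal, hedged sketch of the shape of such a handler; the MemcachedReconciler and cachev1 names are illustrative placeholders that match the Memcached example built later in this chapter:

import (
    "context"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"

    cachev1 "github.com/example-inc/memcached-operator/api/v1"
)

// MemcachedReconciler embeds a controller-runtime client, as scaffolded by the SDK.
type MemcachedReconciler struct {
    client.Client
}

// Reconcile is called for every event on a watched resource. It compares the
// desired state declared in the custom resource with the observed cluster
// state and takes actions to converge the two.
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Fetch the custom resource that triggered this event.
    memcached := &cachev1.Memcached{}
    if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
        // The resource might have been deleted; in that case there is nothing to do.
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Create, update, or delete dependent resources here (omitted).
    return ctrl.Result{}, nil
}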

5.1.3. Additional resources

5.2. Installing the Operator SDK CLI

The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.

Note

OpenShift Container Platform 4.16 supports Operator SDK 1.31.0.

5.2.1. Installing the Operator SDK CLI on Linux

You can install the Operator SDK CLI tool on Linux.

Prerequisites

  • Go v1.19+
  • docker v17.03+, podman v1.9.3+, or buildah v1.7+

Procedure

  1. Navigate to the OpenShift mirror site.
  2. From the latest 4.16 directory, download the latest version of the tarball for Linux.
  3. Unpack the archive:

    $ tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz
  4. Make the file executable:

    $ chmod +x operator-sdk
  5. Move the extracted operator-sdk binary to a directory that is on your PATH.

    Tip

    To check your PATH:

    $ echo $PATH

    Example

    $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk

Verification

  • After you install the Operator SDK CLI, verify that it is available:

    $ operator-sdk version

    Example output

    operator-sdk version: "v1.31.0-ocp", ...

5.2.2. Installing the Operator SDK CLI on macOS

You can install the Operator SDK CLI tool on macOS.

Prerequisites

  • Go v1.19+
  • docker v17.03+, podman v1.9.3+, or buildah v1.7+

Procedure

  1. Navigate to the OpenShift mirror site for the amd64 architecture or the OpenShift mirror site for the arm64 architecture, depending on your system architecture.
  2. From the latest 4.16 directory, download the latest version of the tarball for macOS.
  3. Unpack the Operator SDK archive for the amd64 architecture by running the following command:

    $ tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz
  4. Unpack the Operator SDK archive for the arm64 architecture by running the following command:

    $ tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz
  5. Make the file executable by running the following command:

    $ chmod +x operator-sdk
  6. Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command:

    Tip

    Check your PATH by running the following command:

    $ echo $PATH

    Example

    $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk

Verification

  • After you install the Operator SDK CLI, verify that it is available by running the following command:

    $ operator-sdk version

    Example output

    operator-sdk version: "v1.31.0-ocp", ...

5.3. Go-based Operators

5.3.1. Getting started with Operator SDK for Go-based Operators

To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.3.1.1. Prerequisites

  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4.16+ installed
  • Go 1.21+
  • Logged into an OpenShift Container Platform 4.16 cluster with oc with an account that has cluster-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret

5.3.1.2. Creating and deploying Go-based Operators

You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK.

Procedure

  1. Create a project.

    1. Create your project directory:

      $ mkdir memcached-operator
    2. Change into the project directory:

      $ cd memcached-operator
    3. Run the operator-sdk init command to initialize the project:

      $ operator-sdk init \
          --domain=example.com \
          --repo=github.com/example-inc/memcached-operator

      The command uses the Go plugin by default.

  2. Create an API.

    Create a simple Memcached API:

    $ operator-sdk create api \
        --resource=true \
        --controller=true \
        --group cache \
        --version v1 \
        --kind Memcached
  3. Build and push the Operator image.

    Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

    $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
  4. Run the Operator.

    1. Install the CRD:

      $ make install
    2. Deploy the project to the cluster. Set IMG to the image that you pushed:

      $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
  5. Create a sample custom resource (CR).

    1. Create a sample CR:

      $ oc apply -f config/samples/cache_v1_memcached.yaml \
          -n memcached-operator-system
    2. Watch the Operator logs to confirm that the CR is reconciled:

      $ oc logs deployment.apps/memcached-operator-controller-manager \
          -c manager \
          -n memcached-operator-system
  6. Delete a CR.

    Delete a CR by running the following command:

    $ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system
  7. Clean up.

    Run the following command to clean up the resources that have been created as part of this procedure:

    $ make undeploy

5.3.1.3. Next steps

5.3.2. Operator SDK tutorial for Go-based Operators

Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

This process is accomplished using two centerpieces of the Operator Framework:

Operator SDK
The operator-sdk CLI tool and controller-runtime library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
Note

This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators.

5.3.2.1. Prerequisites

  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4.16+ installed
  • Go 1.21+
  • Logged into an OpenShift Container Platform 4.16 cluster with oc with an account that has cluster-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret

5.3.2.2. Creating a project

Use the Operator SDK CLI to create a project called memcached-operator.

Procedure

  1. Create a directory for the project:

    $ mkdir -p $HOME/projects/memcached-operator
  2. Change to the directory:

    $ cd $HOME/projects/memcached-operator
  3. Activate support for Go modules:

    $ export GO111MODULE=on
  4. Run the operator-sdk init command to initialize the project:

    $ operator-sdk init \
        --domain=example.com \
        --repo=github.com/example-inc/memcached-operator
    Note

    The operator-sdk init command uses the Go plugin by default.

    The operator-sdk init command generates a go.mod file to be used with Go modules. The --repo flag is required when creating a project outside of $GOPATH/src/, because generated files require a valid module path.
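
With the --domain and --repo values shown above, the top of the generated go.mod file might look roughly like the following; the exact Go directive and dependency requirements are an assumption here and vary by Operator SDK release:

module github.com/example-inc/memcached-operator

go 1.21

// The require block is populated by the scaffolding with controller-runtime
// and its transitive dependencies; the versions depend on the SDK release.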

5.3.2.2.1. PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Go. For example:

domain: example.com
layout:
- go.kubebuilder.io/v3
projectName: memcached-operator
repo: github.com/example-inc/memcached-operator
version: "3"
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
5.3.2.2.2. About the Manager

The main program for the Operator is the main.go file, which initializes and runs the Manager. The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks.
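
A condensed, hedged sketch of the relevant parts of a scaffolded main.go follows; the exact generated code differs by Operator SDK version:

import (
    "os"

    "k8s.io/apimachinery/pkg/runtime"
    utilruntime "k8s.io/apimachinery/pkg/util/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/manager"

    cachev1 "github.com/example-inc/memcached-operator/api/v1"
)

var scheme = runtime.NewScheme()

func init() {
    // Register the core Kubernetes types and the project's API types so that
    // the Manager's clients and caches can serialize and watch them.
    utilruntime.Must(clientgoscheme.AddToScheme(scheme))
    utilruntime.Must(cachev1.AddToScheme(scheme))
}

func main() {
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), manager.Options{Scheme: scheme})
    if err != nil {
        os.Exit(1)
    }
    // Controller and webhook registration is generated here.
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        os.Exit(1)
    }
}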

The Manager can restrict the namespace that all controllers watch for resources:

mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})

By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty:

mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})

You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces:

var namespaces []string 1
mgr, err := ctrl.NewManager(cfg, manager.Options{ 2
   NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})
1
List of namespaces.
2
Creates a Cmd struct to provide shared dependencies and start components.
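
For example, one common convention, which the scaffolding does not generate for you and is assumed here, is to read the namespace set from a comma-separated environment variable such as WATCH_NAMESPACE:

import (
    "os"
    "strings"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/cache"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

// watchNamespaces reads a comma-separated list from WATCH_NAMESPACE,
// for example "ns1,ns2". An empty value returns nil.
func watchNamespaces() []string {
    raw := os.Getenv("WATCH_NAMESPACE")
    if raw == "" {
        return nil
    }
    return strings.Split(raw, ",")
}

func newManager(opts manager.Options) (manager.Manager, error) {
    if namespaces := watchNamespaces(); len(namespaces) > 0 {
        // Cache and watch only the listed namespaces.
        opts.NewCache = cache.MultiNamespacedCacheBuilder(namespaces)
    }
    return ctrl.NewManager(ctrl.GetConfigOrDie(), opts)
}

This sketch assumes the controller-runtime version used by the scaffolding still provides cache.MultiNamespacedCacheBuilder, as the preceding example does.
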
5.3.2.2.3. About multi-group APIs

Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command:

$ operator-sdk edit --multigroup=true

This command updates the PROJECT file, which should look like the following example:

domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...

For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly.

Additional resource

5.3.2.3. Creating an API and controller

Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.

Procedure

  1. Run the following command to create an API with group cache, version v1, and kind Memcached:

    $ operator-sdk create api \
        --group=cache \
        --version=v1 \
        --kind=Memcached
  2. When prompted, enter y for creating both the resource and controller:

    Create Resource [y/n]
    y
    Create Controller [y/n]
    y

    Example output

    Writing scaffold for you to edit...
    api/v1/memcached_types.go
    controllers/memcached_controller.go
    ...

This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go.
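
Before you edit it, the scaffolded type in api/v1/memcached_types.go looks roughly like the following (doc comments trimmed); the exact scaffolding varies by Operator SDK version:

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
    Foo string `json:"foo,omitempty"` // placeholder field created by the scaffolding
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Memcached is the Schema for the memcacheds API
type Memcached struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MemcachedSpec   `json:"spec,omitempty"`
    Status MemcachedStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// MemcachedList contains a list of Memcached
type MemcachedList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []Memcached `json:"items"`
}

The next subsection replaces the placeholder spec and status fields with the fields that this tutorial actually uses.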

5.3.2.3.1. Defining the API

Define the API for the Memcached custom resource (CR).

Procedure

  1. Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status:

    // MemcachedSpec defines the desired state of Memcached
    type MemcachedSpec struct {
    	// +kubebuilder:validation:Minimum=0
    	// Size is the size of the memcached deployment
    	Size int32 `json:"size"`
    }
    
    // MemcachedStatus defines the observed state of Memcached
    type MemcachedStatus struct {
    	// Nodes are the names of the memcached pods
    	Nodes []string `json:"nodes"`
    }
  2. Update the generated code for the resource type:

    $ make generate
    Tip

    After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

    The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.
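
For reference, the generated file contains methods similar to the following; this code is owned by controller-gen, so review it but do not edit it by hand:

import (
    "k8s.io/apimachinery/pkg/runtime"
)

// DeepCopyObject returns a generically typed deep copy of the Memcached
// object. Implementing this method is what satisfies the runtime.Object
// interface. Generated into api/v1/zz_generated.deepcopy.go.
func (in *Memcached) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}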

5.3.2.3.2. Generating CRD manifests

After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests.

Procedure

  • Run the following command to generate and update CRD manifests:

    $ make manifests

    This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.

5.3.2.3.2.1. About OpenAPI validation

OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated.

Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix.
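
For example, markers added to the spec fields translate directly into OpenAPI constraints in the generated CRD. The Maximum marker below is purely illustrative and is not part of this tutorial's API:

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
    // Size is the size of the memcached deployment
    // +kubebuilder:validation:Minimum=0
    // +kubebuilder:validation:Maximum=1024
    Size int32 `json:"size"`
}

After you add or change markers, run the make manifests command again to regenerate the CRD manifests.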

Additional resources

5.3.2.4. Implementing the controller

After creating a new API and controller, you can implement the controller logic.

Procedure

  • For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation:

    Example 5.1. Example memcached_controller.go

    /*
    Copyright 2020.
    
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
    */
    
    package controllers
    
    import (
            appsv1 "k8s.io/api/apps/v1"
            corev1 "k8s.io/api/core/v1"
            "k8s.io/apimachinery/pkg/api/errors"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/apimachinery/pkg/types"
            "reflect"
    
            "context"
    
            "github.com/go-logr/logr"
            "k8s.io/apimachinery/pkg/runtime"
            ctrl "sigs.k8s.io/controller-runtime"
            "sigs.k8s.io/controller-runtime/pkg/client"
            ctrllog "sigs.k8s.io/controller-runtime/pkg/log"
    
            cachev1 "github.com/example-inc/memcached-operator/api/v1"
    )
    
    // MemcachedReconciler reconciles a Memcached object
    type MemcachedReconciler struct {
            client.Client
            Log    logr.Logger
            Scheme *runtime.Scheme
    }
    
    // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
    // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
    // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
    // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
    // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
    
    // Reconcile is part of the main kubernetes reconciliation loop which aims to
    // move the current state of the cluster closer to the desired state.
    // TODO(user): Modify the Reconcile function to compare the state specified by
    // the Memcached object against the actual cluster state, and then
    // perform operations to make the cluster state reflect the state specified by
    // the user.
    //
    // For more details, check Reconcile and its Result here:
    // - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.7.0/pkg/reconcile
    func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
            //log := r.Log.WithValues("memcached", req.NamespacedName)
            log := ctrllog.FromContext(ctx)
            // Fetch the Memcached instance
            memcached := &cachev1.Memcached{}
            err := r.Get(ctx, req.NamespacedName, memcached)
            if err != nil {
                    if errors.IsNotFound(err) {
                            // Request object not found, could have been deleted after reconcile request.
                            // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
                            // Return and don't requeue
                            log.Info("Memcached resource not found. Ignoring since object must be deleted")
                            return ctrl.Result{}, nil
                    }
                    // Error reading the object - requeue the request.
                    log.Error(err, "Failed to get Memcached")
                    return ctrl.Result{}, err
            }
    
            // Check if the deployment already exists, if not create a new one
            found := &appsv1.Deployment{}
            err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
            if err != nil && errors.IsNotFound(err) {
                    // Define a new deployment
                    dep := r.deploymentForMemcached(memcached)
                    log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
                    err = r.Create(ctx, dep)
                    if err != nil {
                            log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
                            return ctrl.Result{}, err
                    }
                    // Deployment created successfully - return and requeue
                    return ctrl.Result{Requeue: true}, nil
            } else if err != nil {
                    log.Error(err, "Failed to get Deployment")
                    return ctrl.Result{}, err
            }
    
            // Ensure the deployment size is the same as the spec
            size := memcached.Spec.Size
            if *found.Spec.Replicas != size {
                    found.Spec.Replicas = &size
                    err = r.Update(ctx, found)
                    if err != nil {
                            log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
                            return ctrl.Result{}, err
                    }
                    // Spec updated - return and requeue
                    return ctrl.Result{Requeue: true}, nil
            }
    
            // Update the Memcached status with the pod names
            // List the pods for this memcached's deployment
            podList := &corev1.PodList{}
            listOpts := []client.ListOption{
                    client.InNamespace(memcached.Namespace),
                    client.MatchingLabels(labelsForMemcached(memcached.Name)),
            }
            if err = r.List(ctx, podList, listOpts...); err != nil {
                    log.Error(err, "Failed to list pods", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name)
                    return ctrl.Result{}, err
            }
            podNames := getPodNames(podList.Items)
    
            // Update status.Nodes if needed
            if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
                    memcached.Status.Nodes = podNames
                    err := r.Status().Update(ctx, memcached)
                    if err != nil {
                            log.Error(err, "Failed to update Memcached status")
                            return ctrl.Result{}, err
                    }
            }
    
            return ctrl.Result{}, nil
    }
    
    // deploymentForMemcached returns a memcached Deployment object
    func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment {
            ls := labelsForMemcached(m.Name)
            replicas := m.Spec.Size
    
            dep := &appsv1.Deployment{
                    ObjectMeta: metav1.ObjectMeta{
                            Name:      m.Name,
                            Namespace: m.Namespace,
                    },
                    Spec: appsv1.DeploymentSpec{
                            Replicas: &replicas,
                            Selector: &metav1.LabelSelector{
                                    MatchLabels: ls,
                            },
                            Template: corev1.PodTemplateSpec{
                                    ObjectMeta: metav1.ObjectMeta{
                                            Labels: ls,
                                    },
                                    Spec: corev1.PodSpec{
                                            Containers: []corev1.Container{{
                                                    Image:   "memcached:1.4.36-alpine",
                                                    Name:    "memcached",
                                                    Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
                                                    Ports: []corev1.ContainerPort{{
                                                            ContainerPort: 11211,
                                                            Name:          "memcached",
                                                    }},
                                            }},
                                    },
                            },
                    },
            }
            // Set Memcached instance as the owner and controller
            ctrl.SetControllerReference(m, dep, r.Scheme)
            return dep
    }
    
    // labelsForMemcached returns the labels for selecting the resources
    // belonging to the given memcached CR name.
    func labelsForMemcached(name string) map[string]string {
            return map[string]string{"app": "memcached", "memcached_cr": name}
    }
    
    // getPodNames returns the pod names of the array of pods passed in
    func getPodNames(pods []corev1.Pod) []string {
            var podNames []string
            for _, pod := range pods {
                    podNames = append(podNames, pod.Name)
            }
            return podNames
    }
    
    // SetupWithManager sets up the controller with the Manager.
    func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
            return ctrl.NewControllerManagedBy(mgr).
                    For(&cachev1.Memcached{}).
                    Owns(&appsv1.Deployment{}).
                    Complete(r)
    }

    The example controller runs the following reconciliation logic for each Memcached custom resource (CR):

    • Create a Memcached deployment if it does not exist.
    • Ensure that the deployment size is the same as specified by the Memcached CR spec.
    • Update the Memcached CR status with the names of the memcached pods.

The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.

5.3.2.4.1. Resources watched by the controller

The SetupWithManager() function in controllers/memcached_controller.go specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller.

import (
	...
	appsv1 "k8s.io/api/apps/v1"
	...
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		Complete(r)
}

NewControllerManagedBy() provides a controller builder that allows various controller configurations.

For(&cachev1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Add, Update, or Delete event for a Memcached type, the reconcile loop is sent a reconcile Request argument, which consists of a namespace and name key, for that Memcached object.

Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add, Update, or Delete event, the event handler maps each event to a reconcile request for the owner of the deployment. In this case, the owner is the Memcached object for which the deployment was created.

5.3.2.4.2. Controller configurations

You can initialize a controller by using many other useful configurations. For example:

  • Set the maximum number of concurrent reconciles for the controller by using the MaxConcurrentReconciles option, which defaults to 1:

    func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
        return ctrl.NewControllerManagedBy(mgr).
            For(&cachev1.Memcached{}).
            Owns(&appsv1.Deployment{}).
            WithOptions(controller.Options{
                MaxConcurrentReconciles: 2,
            }).
            Complete(r)
    }
  • Filter watch events using predicates.
  • Choose the type of EventHandler to change how a watch event translates to reconcile requests for the reconcile loop. For Operator relationships that are more complex than primary and secondary resources, you can use the EnqueueRequestsFromMapFunc handler to transform a watch event into an arbitrary set of reconcile requests.
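
For example, the following hedged sketch applies the generation-changed predicate from the controller-runtime predicate package as an event filter, so that update events that do not change an object's generation, such as status-only updates, are ignored for all watched resources:

import (
    appsv1 "k8s.io/api/apps/v1"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/predicate"

    cachev1 "github.com/example-inc/memcached-operator/api/v1"
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&cachev1.Memcached{}).
        Owns(&appsv1.Deployment{}).
        // Skip update events where metadata.generation is unchanged,
        // which filters out status-only updates.
        WithEventFilter(predicate.GenerationChangedPredicate{}).
        Complete(r)
}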

For more details on these and other configurations, see the upstream Builder and Controller GoDocs.

5.3.2.4.3. Reconcile loop

Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a namespace and name key used to find the primary resource object, Memcached, from the cache:

import (
	ctrl "sigs.k8s.io/controller-runtime"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
	...
)

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
  // Lookup the Memcached instance for this reconcile request
  memcached := &cachev1.Memcached{}
  err := r.Get(ctx, req.NamespacedName, memcached)
  ...
}

Based on the return values, result, and error, the request might be requeued and the reconcile loop might be triggered again:

// Reconcile successful - don't requeue
return ctrl.Result{}, nil
// Reconcile failed due to error - requeue
return ctrl.Result{}, err
// Requeue for any reason other than an error
return ctrl.Result{Requeue: true}, nil

You can also set Result.RequeueAfter to requeue the request after a grace period:

import "time"

// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
Note

You can return Result with RequeueAfter set to periodically reconcile a CR.

For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.

5.3.2.4.4. Permissions and RBAC manifests

The controller requires certain RBAC permissions to interact with the resources it manages. These are specified using RBAC markers, such as the following:

// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
  ...
}

The ClusterRole object manifest at config/rbac/role.yaml is generated from the previous markers by using the controller-gen utility whenever the make manifests command is run.

5.3.2.5. Enabling proxy support

Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:

  • HTTP_PROXY
  • HTTPS_PROXY
  • NO_PROXY
Note

This tutorial uses HTTP_PROXY as an example environment variable.

Prerequisites

  • A cluster with cluster-wide egress proxy enabled.

Procedure

  1. Edit the controllers/memcached_controller.go file to include the following:

    1. Import the proxy package from the operator-lib library:

      import (
        ...
         "github.com/operator-framework/operator-lib/proxy"
      )
    2. Add the proxy.ReadProxyVarsFromEnv helper function to the reconcile loop and append the results to the Operand environments:

      for i, container := range dep.Spec.Template.Spec.Containers {
      		dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...)
      }
      ...
  2. Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

    containers:
     - args:
       - --leader-elect
       - --leader-election-id=ansible-proxy-demo
       image: controller:latest
       name: manager
       env:
         - name: "HTTP_PROXY"
           value: "http_proxy_test"

5.3.2.6. Running the Operator

There are three ways you can use the Operator SDK CLI to build and run your Operator:

  • Run locally outside the cluster as a Go program.
  • Run as a deployment on the cluster.
  • Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
Note

Before running your Go-based Operator as either a deployment on OpenShift Container Platform or as a bundle that uses OLM, ensure that your project has been updated to use supported images.

5.3.2.6.1. Running locally outside the cluster

You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.

Procedure

  • Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

    $ make install run

    Example output

    ...
    2021-01-10T21:09:29.016-0700	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
    2021-01-10T21:09:29.017-0700	INFO	setup	starting manager
    2021-01-10T21:09:29.017-0700	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
    2021-01-10T21:09:29.018-0700	INFO	controller-runtime.manager.controller.memcached	Starting EventSource	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "source": "kind source: /, Kind="}
    2021-01-10T21:09:29.218-0700	INFO	controller-runtime.manager.controller.memcached	Starting Controller	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached"}
    2021-01-10T21:09:29.218-0700	INFO	controller-runtime.manager.controller.memcached	Starting workers	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "worker count": 1}

5.3.2.6.2. Running as a deployment on the cluster

You can run your Operator project as a deployment on your cluster.

Prerequisites

  • Prepared your Go-based Operator to run on OpenShift Container Platform by updating the project to use supported images

Procedure

  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. For non-AMD64 architectures, you can change this to GOARCH=$TARGETARCH. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

    By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

  3. Run the following command to verify that the Operator is running:

    $ oc get deployment -n <project_name>-system

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    <project_name>-controller-manager       1/1     1            1           8m

5.3.2.6.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.3.2.6.3.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • OpenShift CLI (oc) v4.16+ installed
  • Operator project initialized by using the Operator SDK
  • If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. For non-AMD64 architectures, you can change this to GOARCH=$TARGETARCH. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.3.2.6.3.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.16)
  • Logged in to the cluster with oc using an account with cluster-admin permissions
  • If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform

Procedure

  • Enter the following command to run the Operator on the cluster:

    $ operator-sdk run bundle \1
        -n <namespace> \2
        <registry>/<user>/<bundle_image_name>:<tag> 3
    1
    The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
    2
    Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
    3
    If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
    Important

    As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

    This command performs the following actions:

    • Creates an index image that references your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
    • Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
    • Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.

5.3.2.7. Creating a custom resource

After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.

Prerequisites

  • Example Memcached Operator, which provides the Memcached CR, installed on a cluster

Procedure

  1. Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

    $ oc project memcached-operator-system
  2. Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    ...
    spec:
    ...
      size: 3
  3. Create the CR:

    $ oc apply -f config/samples/cache_v1_memcached.yaml
  4. Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

    $ oc get deployments

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    memcached-operator-controller-manager   1/1     1            1           8m
    memcached-sample                        3/3     3            3           1m

  5. Check the pods and CR status to confirm the status is updated with the Memcached pod names.

    1. Check the pods:

      $ oc get pods

      Example output

      NAME                                  READY     STATUS    RESTARTS   AGE
      memcached-sample-6fd7c98d8-7dqdr      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-g5k7v      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-m7vn7      1/1       Running   0          1m

    2. Check the CR status:

      $ oc get memcached/memcached-sample -o yaml

      Example output

      apiVersion: cache.example.com/v1
      kind: Memcached
      metadata:
      ...
        name: memcached-sample
      ...
      spec:
        size: 3
      status:
        nodes:
        - memcached-sample-6fd7c98d8-7dqdr
        - memcached-sample-6fd7c98d8-g5k7v
        - memcached-sample-6fd7c98d8-m7vn7

  6. Update the deployment size.

    1. Change the spec.size field in the Memcached CR from 3 to 5, either by editing the config/samples/cache_v1_memcached.yaml file and reapplying it or by patching the resource directly:

      $ oc patch memcached memcached-sample \
          -p '{"spec":{"size": 5}}' \
          --type=merge
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployments

      Example output

      NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator-controller-manager   1/1     1            1           10m
      memcached-sample                        5/5     5            5           3m

  7. Delete the CR by running the following command:

    $ oc delete -f config/samples/cache_v1_memcached.yaml
  8. Clean up the resources that have been created as part of this tutorial.

    • If you used the make deploy command to test the Operator, run the following command:

      $ make undeploy
    • If you used the operator-sdk run bundle command to test the Operator, run the following command:

      $ operator-sdk cleanup <project_name>

5.3.2.8. Additional resources

5.3.3. Project layout for Go-based Operators

The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.3.3.1. Go-based project layout

Go-based Operator projects, which are the default type, are generated by using the operator-sdk init command and contain the following files and directories:

File or directory: Purpose

main.go

Main program of the Operator. This instantiates a new manager that registers the scheme for all custom resource definition (CRD) API types defined in the apis/ directory and starts all controllers in the controllers/ directory.

apis/

Directory tree that defines the APIs of the CRDs. You must edit the apis/<version>/<kind>_types.go files to define the API for each resource type and import these packages in your controllers to watch for these resource types.

controllers/

Controller implementations. Edit the controllers/<kind>_controller.go files to define the reconcile logic of the controller for handling a resource type of the specified kind.

config/

Kubernetes manifests used to deploy your controller on a cluster, including CRDs, RBAC, and certificates.

Makefile

Targets used to build and deploy your controller.

Dockerfile

Instructions used by a container engine to build your Operator.

manifests/

Kubernetes manifests for registering CRDs, setting up RBAC, and deploying the Operator as a deployment.

5.3.4. Updating Go-based Operator projects for newer Operator SDK versions

OpenShift Container Platform 4.16 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

However, to ensure that your existing Operator projects maintain compatibility with Operator SDK 1.31.0, you must manually apply the update steps for the breaking changes introduced since 1.28.0 to any Operator project that was previously created or maintained with that version.

5.3.4.1. Updating Go-based Operator projects for Operator SDK 1.31.0

The following procedure updates an existing Go-based Operator project for compatibility with 1.31.0.

Prerequisites

  • Operator SDK 1.31.0 installed
  • An Operator project created or maintained with Operator SDK 1.28.0

Procedure

  1. Edit your Operator project’s Makefile to update the Operator SDK version to v1.31.0-ocp, as shown in the following example:

    Example Makefile

    # Set the Operator SDK version to use. By default, what is installed on the system is used.
    # This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
    OPERATOR_SDK_VERSION ?= v1.31.0-ocp

  2. Find the ose-kube-rbac-proxy pull spec in the following files, and update the image tag to v4.16:

    • config/default/manager_auth_proxy_patch.yaml
    • bundle/manifests/memcached-operator.clusterserviceversion.yaml

    Example ose-kube-rbac-proxy pull spec with v4.16 image tag

    # ...
          containers:
          - name: kube-rbac-proxy
            image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.16
    # ...

5.3.4.2. Additional resources

5.4. Ansible-based Operators

5.4.1. Getting started with Operator SDK for Ansible-based Operators

The Operator SDK includes options for generating an Operator project that leverages existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

To demonstrate the basics of setting up and running an Ansible-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Ansible-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.

5.4.1.1. Prerequisites

5.4.1.2. Creating and deploying Ansible-based Operators

You can build and deploy a simple Ansible-based Operator for Memcached by using the Operator SDK.

Procedure

  1. Create a project.

    1. Create your project directory:

      $ mkdir memcached-operator
    2. Change into the project directory:

      $ cd memcached-operator
    3. Run the operator-sdk init command with the ansible plugin to initialize the project:

      $ operator-sdk init \
          --plugins=ansible \
          --domain=example.com
  2. Create an API.

    Create a simple Memcached API:

    $ operator-sdk create api \
        --group cache \
        --version v1 \
        --kind Memcached \
        --generate-role 1
    1
    Generates an Ansible role for the API.
  3. Build and push the Operator image.

    Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

    $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
  4. Run the Operator.

    1. Install the CRD:

      $ make install
    2. Deploy the project to the cluster. Set IMG to the image that you pushed:

      $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
  5. Create a sample custom resource (CR).

    1. Create a sample CR:

      $ oc apply -f config/samples/cache_v1_memcached.yaml \
          -n memcached-operator-system
    2. Watch for the Operator to reconcile the CR:

      $ oc logs deployment.apps/memcached-operator-controller-manager \
          -c manager \
          -n memcached-operator-system

      Example output

      ...
      I0205 17:48:45.881666       7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator
      {"level":"info","ts":1612547325.8819902,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
      {"level":"info","ts":1612547325.98242,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
      {"level":"info","ts":1612547325.9824686,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}
      {"level":"info","ts":1612547348.8311093,"logger":"runner","msg":"Ansible-runner exited successfully","job":"4037200794235010051","name":"memcached-sample","namespace":"memcached-operator-system"}

  6. Delete a CR.

    Delete a CR by running the following command:

    $ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system
  7. Clean up.

    Run the following command to clean up the resources that have been created as part of this procedure:

    $ make undeploy

5.4.1.3. Next steps

5.4.2. Operator SDK tutorial for Ansible-based Operators

Operator developers can take advantage of Ansible support in the Operator SDK to build an example Ansible-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This tutorial walks through the following process:

  • Create a Memcached deployment
  • Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
  • Update the Memcached CR status using the status writer with the names of the memcached pods
Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

This process is accomplished by using two centerpieces of the Operator Framework:

Operator SDK
The operator-sdk CLI tool and controller-runtime library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
Note

This tutorial goes into greater detail than Getting started with Operator SDK for Ansible-based Operators.

5.4.2.1. Prerequisites

5.4.2.2. Creating a project

Use the Operator SDK CLI to create a project called memcached-operator.

Procedure

  1. Create a directory for the project:

    $ mkdir -p $HOME/projects/memcached-operator
  2. Change to the directory:

    $ cd $HOME/projects/memcached-operator
  3. Run the operator-sdk init command with the ansible plugin to initialize the project:

    $ operator-sdk init \
        --plugins=ansible \
        --domain=example.com
5.4.2.2.1. PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Ansible. For example:

domain: example.com
layout:
- ansible.sdk.operatorframework.io/v1
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
projectName: memcached-operator
version: "3"

5.4.2.3. Creating an API

Use the Operator SDK CLI to create a Memcached API.

Procedure

  • Run the following command to create an API with group cache, version v1, and kind Memcached:

    $ operator-sdk create api \
        --group cache \
        --version v1 \
        --kind Memcached \
        --generate-role 1
    1
    Generates an Ansible role for the API.

After creating the API, your Operator project updates with the following structure:

Memcached CRD
Includes a sample Memcached resource
Manager

Program that reconciles the state of the cluster to the desired state by using:

  • A reconciler, either an Ansible role or playbook
  • A watches.yaml file, which connects the Memcached resource to the memcached Ansible role

5.4.2.4. Modifying the manager

Update your Operator project to provide the reconcile logic, in the form of an Ansible role, which runs every time a Memcached resource is created, updated, or deleted.

Procedure

  1. Update the roles/memcached/tasks/main.yml file with the following structure:

    ---
    - name: start memcached
      k8s:
        definition:
          kind: Deployment
          apiVersion: apps/v1
          metadata:
            name: '{{ ansible_operator_meta.name }}-memcached'
            namespace: '{{ ansible_operator_meta.namespace }}'
          spec:
            replicas: "{{size}}"
            selector:
              matchLabels:
                app: memcached
            template:
              metadata:
                labels:
                  app: memcached
              spec:
                containers:
                - name: memcached
                  command:
                  - memcached
                  - -m=64
                  - -o
                  - modern
                  - -v
                  image: "docker.io/memcached:1.4.36-alpine"
                  ports:
                    - containerPort: 11211

     This memcached role ensures that a memcached deployment exists and sets the deployment size.

  2. Set default values for variables used in your Ansible role by editing the roles/memcached/defaults/main.yml file:

    ---
    # defaults file for Memcached
    size: 1
  3. Update the Memcached sample resource in the config/samples/cache_v1_memcached.yaml file with the following structure:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      labels:
        app.kubernetes.io/name: memcached
        app.kubernetes.io/instance: memcached-sample
        app.kubernetes.io/part-of: memcached-operator
        app.kubernetes.io/managed-by: kustomize
        app.kubernetes.io/created-by: memcached-operator
      name: memcached-sample
    spec:
      size: 3

    The key-value pairs in the custom resource (CR) spec are passed to Ansible as extra variables.

Note

The names of all variables in the spec field are converted to snake case, meaning lowercase with an underscore, by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.

You can disable this case conversion by setting the snakeCaseParameters option to false in your watches.yaml file. It is recommended that you perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.
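
For example, the following watches.yaml entry is a minimal sketch that disables the conversion for the Memcached resource scaffolded earlier in this chapter; the group, version, kind, and role values are taken from those examples:

- version: v1
  group: cache.example.com
  kind: Memcached
  role: memcached
  # Assumption for illustration: keep spec keys exactly as written in the CR.
  snakeCaseParameters: false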

5.4.2.5. Enabling proxy support

Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:

  • HTTP_PROXY
  • HTTPS_PROXY
  • NO_PROXY
Note

This tutorial uses HTTP_PROXY as an example environment variable.

Prerequisites

  • A cluster with cluster-wide egress proxy enabled.

Procedure

  1. Add the environment variables to the deployment by updating the roles/memcached/tasks/main.yml file with the following:

    ...
    env:
       - name: HTTP_PROXY
         value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
       - name: http_proxy
         value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
    ...
  2. Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

    containers:
     - args:
       - --leader-elect
       - --leader-election-id=ansible-proxy-demo
       image: controller:latest
       name: manager
       env:
         - name: "HTTP_PROXY"
           value: "http_proxy_test"

5.4.2.6. Running the Operator

There are three ways you can use the Operator SDK CLI to build and run your Operator:

  • Run locally outside the cluster as a Go program.
  • Run as a deployment on the cluster.
  • Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.4.2.6.1. Running locally outside the cluster

You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.

Procedure

  • Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

    $ make install run

    Example output

    ...
    {"level":"info","ts":1612589622.7888272,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
    {"level":"info","ts":1612589622.7897573,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
    {"level":"info","ts":1612589622.789971,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
    {"level":"info","ts":1612589622.7899997,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
    {"level":"info","ts":1612589622.8904517,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
    {"level":"info","ts":1612589622.8905244,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}

5.4.2.6.2. Running as a deployment on the cluster

You can run your Operator project as a deployment on your cluster.

Procedure

  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

     By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

  3. Run the following command to verify that the Operator is running:

    $ oc get deployment -n <project_name>-system

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    <project_name>-controller-manager       1/1     1            1           8m

5.4.2.6.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.4.2.6.3.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • OpenShift CLI (oc) v4.16+ installed
  • Operator project initialized by using the Operator SDK

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.4.2.6.3.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.16)
  • Logged in to the cluster with oc using an account with cluster-admin permissions

Procedure

  • Enter the following command to run the Operator on the cluster:

    $ operator-sdk run bundle \1
        -n <namespace> \2
        <registry>/<user>/<bundle_image_name>:<tag> 3
    1
    The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
    2
    Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
    3
    If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
    Important

    As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

    This command performs the following actions:

    • Creates an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
    • Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
    • Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.

5.4.2.7. Creating a custom resource

After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.

Prerequisites

  • Example Memcached Operator, which provides the Memcached CR, installed on a cluster

Procedure

  1. Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

    $ oc project memcached-operator-system
  2. Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    ...
    spec:
    ...
      size: 3
  3. Create the CR:

    $ oc apply -f config/samples/cache_v1_memcached.yaml
  4. Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

    $ oc get deployments

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    memcached-operator-controller-manager   1/1     1            1           8m
    memcached-sample                        3/3     3            3           1m

  5. Check the pods and CR status to confirm the status is updated with the Memcached pod names.

    1. Check the pods:

      $ oc get pods

      Example output

      NAME                                  READY     STATUS    RESTARTS   AGE
      memcached-sample-6fd7c98d8-7dqdr      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-g5k7v      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-m7vn7      1/1       Running   0          1m

    2. Check the CR status:

      $ oc get memcached/memcached-sample -o yaml

      Example output

      apiVersion: cache.example.com/v1
      kind: Memcached
      metadata:
      ...
        name: memcached-sample
      ...
      spec:
        size: 3
      status:
        nodes:
        - memcached-sample-6fd7c98d8-7dqdr
        - memcached-sample-6fd7c98d8-g5k7v
        - memcached-sample-6fd7c98d8-m7vn7

  6. Update the deployment size.

    1. Update the Memcached CR to change the spec.size field from 3 to 5 by running the following command:

      $ oc patch memcached memcached-sample \
          -p '{"spec":{"size": 5}}' \
          --type=merge
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployments

      Example output

      NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator-controller-manager   1/1     1            1           10m
      memcached-sample                        5/5     5            5           3m

  7. Delete the CR by running the following command:

    $ oc delete -f config/samples/cache_v1_memcached.yaml
  8. Clean up the resources that have been created as part of this tutorial.

    • If you used the make deploy command to test the Operator, run the following command:

      $ make undeploy
    • If you used the operator-sdk run bundle command to test the Operator, run the following command:

      $ operator-sdk cleanup <project_name>

5.4.2.8. Additional resources

5.4.3. Project layout for Ansible-based Operators

The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.4.3.1. Ansible-based project layout

Ansible-based Operator projects generated using the operator-sdk init --plugins ansible command contain the following directories and files:

File or directoryPurpose

Dockerfile

Dockerfile for building the container image for the Operator.

Makefile

Targets for building, publishing, and deploying the container image that wraps the Operator binary, and for installing and uninstalling the custom resource definition (CRD).

PROJECT

YAML file containing metadata information for the Operator.

config/crd

Base CRD files and the kustomization.yaml file settings.

config/default

Collects all Operator manifests for deployment. Used by the make deploy command.

config/manager

Controller manager deployment.

config/prometheus

ServiceMonitor resource for monitoring the Operator.

config/rbac

Role and role binding for leader election and authentication proxy.

config/samples

Sample resources created for the CRDs.

config/testing

Sample configurations for testing.

playbooks/

A subdirectory for the playbooks to run.

roles/

Subdirectory for the roles tree to run.

watches.yaml

Group/version/kind (GVK) of the resources to watch, and the Ansible invocation method. New entries are added by using the create api command.

requirements.yml

YAML file containing the Ansible collections and role dependencies to install during a build.

molecule/

Molecule scenarios for end-to-end testing of your role and Operator.

5.4.4. Updating projects for newer Operator SDK versions

OpenShift Container Platform 4.16 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.

5.4.4.1. Updating Ansible-based Operator projects for Operator SDK 1.31.0

The following procedure updates an existing Ansible-based Operator project for compatibility with 1.31.0.

Prerequisites

  • Operator SDK 1.31.0 installed
  • An Operator project created or maintained with Operator SDK 1.28.0

Procedure

  1. Edit your Operator project’s Makefile to update the Operator SDK version to v1.31.0-ocp, as shown in the following example:

    Example Makefile

    # Set the Operator SDK version to use. By default, what is installed on the system is used.
    # This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
    OPERATOR_SDK_VERSION ?= v1.31.0-ocp

  2. Find the ose-kube-rbac-proxy pull spec in the following files, and update the image tag to v4.16:

    • config/default/manager_auth_proxy_patch.yaml
    • bundle/manifests/memcached-operator.clusterserviceversion.yaml

    Example ose-kube-rbac-proxy pull spec with v4.16 image tag

    # ...
          containers:
          - name: kube-rbac-proxy
            image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.16
    # ...

  3. Make the following changes to your Operator’s Dockerfile:

    1. Replace the ansible-operator-2.11-preview base image with the Ansible Operator base image and update the version to v4.16, as shown in the following example:

      Example Dockerfile

      FROM registry.redhat.io/openshift4/ose-ansible-operator:v4.16

    2. The update to Ansible 2.15.0 in version 1.30.0 of the Ansible Operator removed the following preinstalled Python modules:

      • ipaddress
      • openshift
      • jmespath
      • cryptography
      • oauthlib

      If your Operator depends on one of these removed Python modules, update your Dockerfile to install the required modules by running the pip install command.

  4. Update your requirements.yaml file to remove the community.kubernetes collection and update the operator_sdk.util collection to version 0.5.0, as shown in the following example:

    Example requirements.yaml file

      collections:
    -  - name: community.kubernetes 1
    -    version: "2.0.1"
       - name: operator_sdk.util
    -    version: "0.4.0"
    +    version: "0.5.0" 2
       - name: kubernetes.core
         version: "2.4.0"
       - name: cloud.common

    1
    Remove the community.kubernetes collection.
    2
    Update the operator_sdk.util collection to version 0.5.0.
  5. Remove all instances of the lint field from your molecule/kind/molecule.yml and molecule/default/molecule.yml files, as shown in the following example:

      ---
      dependency:
        name: galaxy
      driver:
        name: delegated
    -   lint: |
    -     set -e
    -     yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" .
      platforms:
        - name: cluster
          groups:
            - k8s
      provisioner:
        name: ansible
    -     lint: |
    -       set -e
    -       ansible-lint
        inventory:
          group_vars:
            all:
              namespace: ${TEST_OPERATOR_NAMESPACE:-osdk-test}
          host_vars:
            localhost:
              ansible_python_interpreter: '{{ ansible_playbook_python }}'
              config_dir: ${MOLECULE_PROJECT_DIRECTORY}/config
              samples_dir: ${MOLECULE_PROJECT_DIRECTORY}/config/samples
              operator_image: ${OPERATOR_IMAGE:-""}
              operator_pull_policy: ${OPERATOR_PULL_POLICY:-"Always"}
              kustomize: ${KUSTOMIZE_PATH:-kustomize}
        env:
          K8S_AUTH_KUBECONFIG: ${KUBECONFIG:-"~/.kube/config"}
      verifier:
        name: ansible
    -     lint: |
    -       set -e
    -       ansible-lint

5.4.4.2. Additional resources

5.4.5. Ansible support in Operator SDK

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.4.5.1. Custom resource files

Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.

The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:

Table 5.1. Custom resource fields
FieldDescription

apiVersion

Version of the CR to be created.

kind

Kind of the CR to be created.

metadata

Kubernetes-specific metadata to be created.

spec (optional)

Key-value list of variables which are passed to Ansible. This field is empty by default.

status

Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which adds condition information to the CR status.

annotations

Kubernetes-specific annotations to be appended to the CR.

The following CR annotations modify the behavior of the Operator:

Table 5.2. Ansible-based Operator annotations
AnnotationDescription

ansible.operator-sdk/reconcile-period

Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used which applies the default suffix of s, giving the value in seconds.

Example Ansible-based Operator annotation

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
  annotations:
    ansible.operator-sdk/reconcile-period: "30s"
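
By default, the Operator manages the status subresource generically. If your role or playbook must set additional status keys, you can call the operator_sdk.util.k8s_status module directly. The following task is a minimal sketch that assumes the Memcached kind used elsewhere in this chapter; the foo: bar entry is a hypothetical example value:

# Minimal sketch: add a custom key to the CR status from an Ansible task.
# The foo: bar entry is a hypothetical example value.
- operator_sdk.util.k8s_status:
    api_version: cache.example.com/v1
    kind: Memcached
    name: '{{ ansible_operator_meta.name }}'
    namespace: '{{ ansible_operator_meta.namespace }}'
    status:
      foo: bar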

5.4.5.2. watches.yaml file

A group/version/kind (GVK) is a unique identifier for a Kubernetes API. The watches.yaml file contains a list of mappings from custom resources (CRs), each identified by its GVK, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location at /opt/ansible/watches.yaml.

Table 5.3. watches.yaml file mappings
FieldDescription

group

Group of CR to watch.

version

Version of CR to watch.

kind

Kind of CR to watch.

role (default)

Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value would be /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field.

playbook

Path to the Ansible playbook added to the container. This playbook is expected to be a way to call roles. This field is mutually exclusive with the role field.

reconcilePeriod (optional)

The reconciliation interval, which determines how often the role or playbook is run for a given CR.

manageStatus (optional)

When set to true (default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller.

Example watches.yaml file

- version: v1alpha1 1
  group: test1.example.com
  kind: Test1
  role: /opt/ansible/roles/Test1

- version: v1alpha1 2
  group: test2.example.com
  kind: Test2
  playbook: /opt/ansible/playbook.yml

- version: v1alpha1 3
  group: test3.example.com
  kind: Test3
  playbook: /opt/ansible/test3.yml
  reconcilePeriod: 0
  manageStatus: false

1
Simple example mapping Test1 to the test1 role.
2
Simple example mapping Test2 to a playbook.
3
More complex example for the Test3 kind. Disables re-queuing and managing the CR status in the playbook.
5.4.5.2.1. Advanced options

Advanced features can be enabled by adding them to your watches.yaml file per GVK. They can go below the group, version, kind, and playbook or role fields.

Some features can be overridden per resource by using an annotation on that CR. The options that can be overridden list the corresponding annotation in the following table.

Table 5.4. Advanced watches.yaml file options
FeatureYAML keyDescriptionAnnotation for overrideDefault value

Reconcile period

reconcilePeriod

Time between reconcile runs for a particular CR.

ansible.operator-sdk/reconcile-period

1m

Manage status

manageStatus

Allows the Operator to manage the conditions section of each CR status section.

 

true

Watch dependent resources

watchDependentResources

Allows the Operator to dynamically watch resources that are created by Ansible.

 

true

Watch cluster-scoped resources

watchClusterScopedResources

Allows the Operator to watch cluster-scoped resources that are created by Ansible.

 

false

Max runner artifacts

maxRunnerArtifacts

Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource.

ansible.operator-sdk/max-runner-artifacts

20

Example watches.yaml file with advanced options

- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False
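
Options that have an override annotation can also be adjusted for a single resource. The following CR is a minimal sketch that reuses the AppService kind from the preceding example; the resource name is hypothetical:

# Minimal sketch: override maxRunnerArtifacts for one CR by using its annotation.
# The example-appservice name is a hypothetical example value.
apiVersion: app.example.com/v1alpha1
kind: AppService
metadata:
  name: example-appservice
  annotations:
    ansible.operator-sdk/max-runner-artifacts: "10"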

5.4.5.3. Extra variables sent to Ansible

Extra variables, which are managed by the Operator, can be sent to Ansible. The spec section of the custom resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed to the ansible-playbook command.

The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.

For the following CR example:

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world 2"
  newParameter: "newParam"

The structure passed to Ansible as extra variables is:

{ "meta": {
        "name": "<cr_name>",
        "namespace": "<cr_namespace>",
  },
  "message": "Hello world 2",
  "new_parameter": "newParam",
  "_app_example_com_database": {
     <full_crd>
   },
}

The message and newParameter fields are set in the top level as extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:

---
- debug:
    msg: "name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}"

5.4.5.4. Ansible Runner directory

Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.

Additional resources

5.4.6. Kubernetes Collection for Ansible

To manage the lifecycle of your application on Kubernetes using Ansible, you can use the Kubernetes Collection for Ansible. This collection of Ansible modules allows a developer to either leverage their existing Kubernetes resource files written in YAML or express the lifecycle management in native Ansible.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
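
For example, the following task is a minimal sketch that renders a Jinja template into a Kubernetes manifest and applies it with the Kubernetes Collection; the nginx-deployment.yaml.j2 template file and the replica_count variable are assumptions used only for illustration:

---
# Minimal sketch: apply a Jinja-templated manifest with the Kubernetes Collection.
# The template file and the replica_count variable are hypothetical.
- name: Create a templated deployment
  community.kubernetes.k8s:
    state: present
    definition: "{{ lookup('template', 'nginx-deployment.yaml.j2') | from_yaml }}"
  vars:
    replica_count: 2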

This section goes into detail on usage of the Kubernetes Collection. To get started, install the collection on your local workstation and test it using a playbook before moving on to using it within an Operator.

5.4.6.1. Installing the Kubernetes Collection for Ansible

You can install the Kubernetes Collection for Ansible on your local workstation.

Procedure

  1. Install Ansible 2.15+:

    $ sudo dnf install ansible
  2. Install the Python Kubernetes client package:

    $ pip install kubernetes
  3. Install the Kubernetes Collection using one of the following methods:

    • You can install the collection directly from Ansible Galaxy:

      $ ansible-galaxy collection install community.kubernetes
    • If you have already initialized your Operator, you might have a requirements.yml file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the community.kubernetes collection as well as the operator_sdk.util collection, which provides modules and plugins for Operator-specific functions.

      To install the dependent modules from the requirements.yml file:

      $ ansible-galaxy collection install -r requirements.yml

5.4.6.2. Testing the Kubernetes Collection locally

Operator developers can run the Ansible code from their local machine, as opposed to rebuilding and rerunning the Operator for each change.

Prerequisites

  • Initialize an Ansible-based Operator project and create an API that has a generated Ansible role by using the Operator SDK
  • Install the Kubernetes Collection for Ansible

Procedure

  1. In your Ansible-based Operator project directory, modify the roles/<kind>/tasks/main.yml file with the Ansible logic that you want. The roles/<kind>/ directory is created when you use the --generate-role flag while creating an API. The <kind> replaceable matches the kind that you specified for the API.

    The following example creates and deletes a config map based on the value of a variable named state:

    ---
    - name: set ConfigMap example-config to {{ state }}
      community.kubernetes.k8s:
        api_version: v1
        kind: ConfigMap
        name: example-config
        namespace: <operator_namespace> 1
        state: "{{ state }}"
      ignore_errors: true 2
    1
    Specify the namespace where you want the config map created.
    2
    Setting ignore_errors: true ensures that deleting a nonexistent config map does not fail.
  2. Modify the roles/<kind>/defaults/main.yml file to set state to present by default:

    ---
    state: present
  3. Create an Ansible playbook by creating a playbook.yml file in the top-level of your project directory, and include your <kind> role:

    ---
    - hosts: localhost
      roles:
        - <kind>
  4. Run the playbook:

    $ ansible-playbook playbook.yml

    Example output

    [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ********************************************************************************
    
    TASK [Gathering Facts] ********************************************************************************
    ok: [localhost]
    
    TASK [memcached : set ConfigMap example-config to present] ********************************************************************************
    changed: [localhost]
    
    PLAY RECAP ********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  5. Verify that the config map was created:

    $ oc get configmaps

    Example output

    NAME               DATA   AGE
    example-config     0      2m1s

  6. Rerun the playbook setting state to absent:

    $ ansible-playbook playbook.yml --extra-vars state=absent

    Example output

    [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ********************************************************************************
    
    TASK [Gathering Facts] ********************************************************************************
    ok: [localhost]
    
    TASK [memcached : set ConfigMap example-config to absent] ********************************************************************************
    changed: [localhost]
    
    PLAY RECAP ********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  7. Verify that the config map was deleted:

    $ oc get configmaps

5.4.6.3. Next steps

5.4.7. Using Ansible inside an Operator

After you are familiar with using the Kubernetes Collection for Ansible locally, you can trigger the same Ansible logic inside of an Operator when a custom resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the watches.yaml file.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.4.7.1. Custom resource files

Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.

The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:

Table 5.5. Custom resource fields
FieldDescription

apiVersion

Version of the CR to be created.

kind

Kind of the CR to be created.

metadata

Kubernetes-specific metadata to be created.

spec (optional)

Key-value list of variables which are passed to Ansible. This field is empty by default.

status

Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which adds condition information to the CR status.

annotations

Kubernetes-specific annotations to be appended to the CR.

The following CR annotations modify the behavior of the Operator:

Table 5.6. Ansible-based Operator annotations
AnnotationDescription

ansible.operator-sdk/reconcile-period

Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used which applies the default suffix of s, giving the value in seconds.

Example Ansible-based Operator annotation

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
  annotations:
    ansible.operator-sdk/reconcile-period: "30s"

5.4.7.2. Testing an Ansible-based Operator locally

You can test the logic inside of an Ansible-based Operator running locally by using the make run command from the top-level directory of your Operator project. The make run Makefile target runs the ansible-operator binary locally, which reads from the watches.yaml file and uses your ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s modules do.

Note

You can customize the roles path by setting the environment variable ANSIBLE_ROLES_PATH or by using the --ansible-roles-path flag. If the role is not found in the ANSIBLE_ROLES_PATH value, the Operator looks for it in {{current directory}}/roles.

Prerequisites

Procedure

  1. Install your custom resource definition (CRD) and proper role-based access control (RBAC) definitions for your custom resource (CR):

    $ make install

    Example output

    /usr/bin/kustomize build config/crd | kubectl apply -f -
    customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

  2. Run the make run command:

    $ make run

    Example output

    /home/user/memcached-operator/bin/ansible-operator run
    {"level":"info","ts":1612739145.2871568,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.10.1","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
    ...
    {"level":"info","ts":1612739148.347306,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
    {"level":"info","ts":1612739148.3488882,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2}
    {"level":"info","ts":1612739148.3490262,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
    {"level":"info","ts":1612739148.3490646,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
    {"level":"info","ts":1612739148.350217,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
    {"level":"info","ts":1612739148.3506632,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
    {"level":"info","ts":1612739148.350784,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
    {"level":"info","ts":1612739148.5511978,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
    {"level":"info","ts":1612739148.5512562,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}

    With the Operator now watching your CR for events, the creation of a CR will trigger your Ansible role to run.

    Note

    Consider an example config/samples/<gvk>.yaml CR manifest:

    apiVersion: <group>.example.com/v1alpha1
    kind: <kind>
    metadata:
      name: "<kind>-sample"

    Because the spec field is not set, Ansible is invoked with no extra variables. Passing extra variables from a CR to Ansible is covered in another section. It is important to set reasonable defaults for the Operator.

  3. Create an instance of your CR with the default variable state set to present:

    $ oc apply -f config/samples/<gvk>.yaml
  4. Check that the example-config config map was created:

    $ oc get configmaps

    Example output

    NAME                    STATUS    AGE
    example-config          Active    3s

  5. Modify your config/samples/<gvk>.yaml file to set the state field to absent. For example:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    spec:
      state: absent
  6. Apply the changes:

    $ oc apply -f config/samples/<gvk>.yaml
  7. Confirm that the config map is deleted:

    $ oc get configmap

5.4.7.3. Testing an Ansible-based Operator on the cluster

After you have tested your custom Ansible logic locally inside of an Operator, you can test the Operator inside of a pod on an OpenShift Container Platform cluster, which is preferred for production use.

You can run your Operator project as a deployment on your cluster.

Procedure

  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by the --platform flag. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

    By default, this command creates a namespace with the name of your Operator project, in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

  3. Run the following command to verify that the Operator is running:

    $ oc get deployment -n <project_name>-system

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    <project_name>-controller-manager       1/1     1            1           8m

5.4.7.4. Ansible logs

Ansible-based Operators provide logs about the Ansible run, which can be useful for debugging your Ansible tasks. The logs can also contain detailed information about the internals of the Operator and its interactions with Kubernetes.

5.4.7.4.1. Viewing Ansible logs

Prerequisites

  • Ansible-based Operator running as a deployment on a cluster

Procedure

  • To view logs from an Ansible-based Operator, run the following command:

    $ oc logs deployment/<project_name>-controller-manager \
        -c manager \1
        -n <namespace> 2
    1
    View logs from the manager container.
    2
    If you used the make deploy command to run the Operator as a deployment, use the <project_name>-system namespace.

    Example output

    {"level":"info","ts":1612732105.0579333,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.10.1","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
    {"level":"info","ts":1612732105.0587437,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
    I0207 21:08:26.110949       7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s
    {"level":"info","ts":1612732107.768025,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
    {"level":"info","ts":1612732107.768796,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2}
    {"level":"info","ts":1612732107.7688773,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
    {"level":"info","ts":1612732107.7688901,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
    {"level":"info","ts":1612732107.770032,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
    I0207 21:08:27.770185       7 leaderelection.go:243] attempting to acquire leader lease  memcached-operator-system/memcached-operator...
    {"level":"info","ts":1612732107.770202,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
    I0207 21:08:27.784854       7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator
    {"level":"info","ts":1612732107.7850506,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
    {"level":"info","ts":1612732107.8853772,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
    {"level":"info","ts":1612732107.8854098,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}

5.4.7.4.2. Enabling full Ansible results in logs

You can set the environment variable ANSIBLE_DEBUG_LOGS to True to enable checking the full Ansible result in logs, which can be helpful when debugging.

Procedure

  • Edit the config/manager/manager.yaml and config/default/manager_auth_proxy_patch.yaml files to include the following configuration:

          containers:
          - name: manager
            env:
            - name: ANSIBLE_DEBUG_LOGS
              value: "True"
5.4.7.4.3. Enabling verbose debugging in logs

While developing an Ansible-based Operator, it can be helpful to enable additional debugging in logs.

Procedure

  • Add the ansible.sdk.operatorframework.io/verbosity annotation to your custom resource to enable the verbosity level that you want. For example:

    apiVersion: "cache.example.com/v1alpha1"
    kind: "Memcached"
    metadata:
      name: "example-memcached"
      annotations:
        "ansible.sdk.operatorframework.io/verbosity": "4"
    spec:
      size: 4

5.4.8. Custom resource status management

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.4.8.1. About custom resource status in Ansible-based Operators

Ansible-based Operators automatically update custom resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:

status:
  conditions:
  - ansibleResult:
      changed: 3
      completion: 2018-12-03T13:45:57.13329
      failures: 1
      ok: 6
      skipped: 0
    lastTransitionTime: 2018-12-03T13:45:57Z
    message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno
      113] No route to host>'
    reason: Failed
    status: "True"
    type: Failure
  - lastTransitionTime: 2018-12-03T13:46:13Z
    message: Running reconciliation
    reason: Running
    status: "True"
    type: Running

Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.

By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.

5.4.8.2. Tracking custom resource status manually

You can use the operator_sdk.util collection to modify your Ansible-based Operator to track custom resource (CR) status manually from your application.

Prerequisites

  • Ansible-based Operator project created by using the Operator SDK

Procedure

  1. Update the watches.yaml file with a manageStatus field set to false:

    - version: v1
      group: api.example.com
      kind: <kind>
      role: <role>
      manageStatus: false
  2. Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key test and value data, operator_sdk.util can be used as shown:

    - operator_sdk.util.k8s_status:
        api_version: api.example.com/v1
        kind: <kind>
        name: "{{ ansible_operator_meta.name }}"
        namespace: "{{ ansible_operator_meta.namespace }}"
        status:
          test: data
  3. You can declare collections in the meta/main.yml file for the role, which is included for scaffolded Ansible-based Operators:

    collections:
      - operator_sdk.util
  4. After declaring collections in the role meta, you can invoke the k8s_status module directly:

    k8s_status:
      ...
      status:
        key1: value1

5.5. Helm-based Operators

5.5.1. Getting started with Operator SDK for Helm-based Operators

The Operator SDK includes options for generating an Operator project that leverages existing Helm charts to deploy Kubernetes resources as a unified application, without having to write any Go code.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

To demonstrate the basics of setting up and running a Helm-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Helm-based Operator for Nginx and deploy it to a cluster.

5.5.1.1. Prerequisites

  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4.16+ installed
  • Logged into an OpenShift Container Platform 4.16 cluster with oc with an account that has cluster-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret

5.5.1.2. Creating and deploying Helm-based Operators

You can build and deploy a simple Helm-based Operator for Nginx by using the Operator SDK.

Procedure

  1. Create a project.

    1. Create your project directory:

      $ mkdir nginx-operator
    2. Change into the project directory:

      $ cd nginx-operator
    3. Run the operator-sdk init command with the helm plugin to initialize the project:

      $ operator-sdk init \
          --plugins=helm
  2. Create an API.

    Create a simple Nginx API:

    $ operator-sdk create api \
        --group demo \
        --version v1 \
        --kind Nginx

    This API uses the built-in Helm chart boilerplate from the helm create command.

  3. Build and push the Operator image.

    Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

    $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
  4. Run the Operator.

    1. Install the CRD:

      $ make install
    2. Deploy the project to the cluster. Set IMG to the image that you pushed:

      $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
  5. Add a security context constraint (SCC).

    The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following SCC to the service account for the nginx-sample pod:

    $ oc adm policy add-scc-to-user \
        anyuid system:serviceaccount:nginx-operator-system:nginx-sample
  6. Create a sample custom resource (CR).

    1. Create a sample CR:

      $ oc apply -f config/samples/demo_v1_nginx.yaml \
          -n nginx-operator-system
    2. Check the Operator logs to confirm that the Operator reconciles the CR:

      $ oc logs deployment.apps/nginx-operator-controller-manager \
          -c manager \
          -n nginx-operator-system
  7. Delete a CR.

    Delete a CR by running the following command:

    $ oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system
  8. Clean up.

    Run the following command to clean up the resources that have been created as part of this procedure:

    $ make undeploy

5.5.1.3. Next steps

5.5.2. Operator SDK tutorial for Helm-based Operators

Operator developers can take advantage of Helm support in the Operator SDK to build an example Helm-based Operator for Nginx and manage its lifecycle. This tutorial walks through the following process:

  • Create an Nginx deployment
  • Ensure that the deployment size is the same as specified by the Nginx custom resource (CR) spec
  • Update the Nginx CR status using the status writer with the names of the nginx pods
Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

This process is accomplished using two centerpieces of the Operator Framework:

Operator SDK
The operator-sdk CLI tool and controller-runtime library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
Note

This tutorial goes into greater detail than Getting started with Operator SDK for Helm-based Operators.

5.5.2.1. Prerequisites

  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4.16+ installed
  • Logged into an OpenShift Container Platform 4.16 cluster with oc with an account that has cluster-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret

5.5.2.2. Creating a project

Use the Operator SDK CLI to create a project called nginx-operator.

Procedure

  1. Create a directory for the project:

    $ mkdir -p $HOME/projects/nginx-operator
  2. Change to the directory:

    $ cd $HOME/projects/nginx-operator
  3. Run the operator-sdk init command with the helm plugin to initialize the project:

    $ operator-sdk init \
        --plugins=helm \
        --domain=example.com \
        --group=demo \
        --version=v1 \
        --kind=Nginx
    Note

    By default, the helm plugin initializes a project using a boilerplate Helm chart. You can use additional flags, such as the --helm-chart flag, to initialize a project using an existing Helm chart.

    The init command creates the nginx-operator project specifically for watching a resource with API version demo.example.com/v1 and kind Nginx.

  4. For Helm-based projects, the init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator.
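
    For reference, the generated rules for a chart that deploys Deployment and Service objects resemble the following excerpt. This is an illustrative sketch only; the exact rules depend on the templates in your chart:

    # config/rbac/role.yaml (illustrative excerpt)
    - apiGroups:
        - apps
      resources:
        - deployments
      verbs:
        - create
        - delete
        - get
        - list
        - patch
        - update
        - watch
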
5.5.2.2.1. Existing Helm charts

Instead of creating your project with a boilerplate Helm chart, you can alternatively use an existing chart, either from your local file system or a remote chart repository, by using the following flags:

  • --helm-chart
  • --helm-chart-repo
  • --helm-chart-version

If the --helm-chart flag is specified, the --group, --version, and --kind flags become optional. If left unset, the following default values are used:

Flag          Value
--domain      my.domain
--group       charts
--version     v1
--kind        Deduced from the specified chart

If the --helm-chart flag specifies a local chart archive, for example example-chart-1.2.0.tgz, or directory, the chart is validated and unpacked or copied into the project. Otherwise, the Operator SDK attempts to fetch the chart from a remote repository.

If a custom repository URL is not specified by the --helm-chart-repo flag, the following chart reference formats are supported:

Format                      Description
<repo_name>/<chart_name>    Fetch the Helm chart named <chart_name> from the Helm chart repository named <repo_name>, as specified in the $HELM_HOME/repositories/repositories.yaml file. Use the helm repo add command to configure this file.
<url>                       Fetch the Helm chart archive at the specified URL.
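
For example, you can register a chart repository with the helm repo add command, substituting your own repository name and URL:

$ helm repo add <repo_name> <url>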

If a custom repository URL is specified by --helm-chart-repo, the following chart reference format is supported:

Format          Description
<chart_name>    Fetch the Helm chart named <chart_name> in the Helm chart repository specified by the --helm-chart-repo URL value.

If the --helm-chart-version flag is unset, the Operator SDK fetches the latest available version of the Helm chart. Otherwise, it fetches the specified version. The optional --helm-chart-version flag is not used when the chart specified with the --helm-chart flag refers to a specific version, for example when it is a local path or a URL.
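
For example, the following command sketch initializes a project from an existing chart in a remote repository. The flag values shown are placeholders for illustration only:

$ operator-sdk init \
    --plugins=helm \
    --helm-chart=<chart_name> \
    --helm-chart-repo=<repository_url> \
    --helm-chart-version=<chart_version>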

For more details and examples, run:

$ operator-sdk init --plugins helm --help
5.5.2.2.2. PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, run from the project root read this file and are aware that the project type is Helm. For example:

domain: example.com
layout:
- helm.sdk.operatorframework.io/v1
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
projectName: nginx-operator
resources:
- api:
    crdVersion: v1
    namespaced: true
  domain: example.com
  group: demo
  kind: Nginx
  version: v1
version: "3"

5.5.2.3. Understanding the Operator logic

For this example, the nginx-operator project executes the following reconciliation logic for each Nginx custom resource (CR):

  • Create an Nginx deployment if it does not exist.
  • Create an Nginx service if it does not exist.
  • Create an Nginx ingress if it is enabled and does not exist.
  • Ensure that the deployment, service, and optional ingress match the desired configuration as specified by the Nginx CR, for example the replica count, image, and service type.

By default, the nginx-operator project watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:

# Use the 'create api' subcommand to add watches to this file.
- group: demo.example.com
  version: v1
  kind: Nginx
  chart: helm-charts/nginx
# +kubebuilder:scaffold:watch
5.5.2.3.1. Sample Helm chart

When a Helm Operator project is created, the Operator SDK creates a sample Helm chart that contains a set of templates for a simple Nginx release.

For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.
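
The scaffolded chart follows the standard layout created by the helm create command and resembles the following structure, among other files; the exact file set can vary by Operator SDK version:

helm-charts/nginx/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml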

If you are not already familiar with Helm charts, review the Helm developer documentation.

5.5.2.3.2. Modifying the custom resource spec

Helm uses a concept called values to provide customizations to the defaults of a Helm chart, which are defined in the values.yaml file.

You can override these defaults by setting the desired values in the custom resource (CR) spec. You can use the number of replicas as an example.

Procedure

  1. The helm-charts/nginx/values.yaml file has a value called replicaCount set to 1 by default. To have two Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

    Edit the config/samples/demo_v1_nginx.yaml file to set replicaCount: 2:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    ...
    spec:
    ...
      replicaCount: 2
  2. Similarly, the default service port is set to 80. To use 8080 instead, edit the config/samples/demo_v1_nginx.yaml file to add the service port override by setting spec.service.port to 8080:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    spec:
      replicaCount: 2
      service:
        port: 8080

The Helm Operator applies the entire spec as if it were the contents of a values file, just like the helm install -f ./overrides.yaml command.
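
For example, the CR spec shown above corresponds to the following values file, which could be passed to helm install -f. This comparison is illustrative only:

# overrides.yaml (illustrative equivalent of the preceding CR spec)
replicaCount: 2
service:
  port: 8080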

5.5.2.4. Enabling proxy support

Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:

  • HTTP_PROXY
  • HTTPS_PROXY
  • NO_PROXY
Note

This tutorial uses HTTP_PROXY as an example environment variable.

Prerequisites

  • A cluster with cluster-wide egress proxy enabled.

Procedure

  1. Edit the watches.yaml file to include overrides based on an environment variable by adding the overrideValues field:

    ...
    - group: demo.example.com
      version: v1
      kind: Nginx
      chart: helm-charts/nginx
      overrideValues:
        proxy.http: $HTTP_PROXY
    ...
  2. Add the proxy.http value in the helm-charts/nginx/values.yaml file:

    ...
    proxy:
      http: ""
      https: ""
      no_proxy: ""
  3. To make sure the chart template supports using the variables, edit the chart template in the helm-charts/nginx/templates/deployment.yaml file to contain the following:

    containers:
      - name: {{ .Chart.Name }}
        securityContext:
          {{- toYaml .Values.securityContext | nindent 12 }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
          - name: http_proxy
            value: "{{ .Values.proxy.http }}"
  4. Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

    containers:
     - args:
       - --leader-elect
       - --leader-election-id=ansible-proxy-demo
       image: controller:latest
       name: manager
       env:
         - name: "HTTP_PROXY"
           value: "http_proxy_test"

5.5.2.5. Running the Operator

There are three ways you can use the Operator SDK CLI to build and run your Operator:

  • Run locally outside the cluster as a Go program.
  • Run as a deployment on the cluster.
  • Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.5.2.5.1. Running locally outside the cluster

You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.

Procedure

  • Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

    $ make install run

    Example output

    ...
    {"level":"info","ts":1612652419.9289865,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
    {"level":"info","ts":1612652419.9296563,"logger":"helm.controller","msg":"Watching resource","apiVersion":"demo.example.com/v1","kind":"Nginx","namespace":"","reconcilePeriod":"1m0s"}
    {"level":"info","ts":1612652419.929983,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
    {"level":"info","ts":1612652419.930015,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting EventSource","source":"kind source: demo.example.com/v1, Kind=Nginx"}
    {"level":"info","ts":1612652420.2307851,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting Controller"}
    {"level":"info","ts":1612652420.2309358,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting workers","worker count":8}

5.5.2.5.2. Running as a deployment on the cluster

You can run your Operator project as a deployment on your cluster.

Procedure

  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by the --platform flag. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

    By default, this command creates a namespace with the name of your Operator project, in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

  3. Run the following command to verify that the Operator is running:

    $ oc get deployment -n <project_name>-system

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    <project_name>-controller-manager       1/1     1            1           8m

5.5.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.5.2.5.3.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • OpenShift CLI (oc) v4.16+ installed
  • Operator project initialized by using the Operator SDK

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by the --platform flag. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.5.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.16)
  • Logged in to the cluster with oc using an account with cluster-admin permissions

Procedure

  • Enter the following command to run the Operator on the cluster:

    $ operator-sdk run bundle \1
        -n <namespace> \2
        <registry>/<user>/<bundle_image_name>:<tag> 3
    1
    The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
    2
    Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
    3
    If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
    Important

    As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

    This command performs the following actions:

    • Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
    • Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
    • Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
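
    To confirm that these resources were created, you can, for example, list them in the installation namespace:

    $ oc get catalogsource,subscription,csv -n <namespace>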

5.5.2.6. Creating a custom resource

After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.

Prerequisites

  • Example Nginx Operator, which provides the Nginx CR, installed on a cluster

Procedure

  1. Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

    $ oc project nginx-operator-system
  2. Edit the sample Nginx CR manifest at config/samples/demo_v1_nginx.yaml to contain the following specification:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    ...
    spec:
    ...
      replicaCount: 3
  3. The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following security context constraint (SCC) to the service account for the nginx-sample pod:

    $ oc adm policy add-scc-to-user \
        anyuid system:serviceaccount:nginx-operator-system:nginx-sample
  4. Create the CR:

    $ oc apply -f config/samples/demo_v1_nginx.yaml
  5. Ensure that the Nginx Operator creates the deployment for the sample CR with the correct size:

    $ oc get deployments

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-operator-controller-manager       1/1     1            1           8m
    nginx-sample                            3/3     3            3           1m

  6. Check the pods and CR status to confirm the status is updated with the Nginx pod names.

    1. Check the pods:

      $ oc get pods

      Example output

      NAME                                  READY     STATUS    RESTARTS   AGE
      nginx-sample-6fd7c98d8-7dqdr          1/1       Running   0          1m
      nginx-sample-6fd7c98d8-g5k7v          1/1       Running   0          1m
      nginx-sample-6fd7c98d8-m7vn7          1/1       Running   0          1m

    2. Check the CR status:

      $ oc get nginx/nginx-sample -o yaml

      Example output

      apiVersion: demo.example.com/v1
      kind: Nginx
      metadata:
      ...
        name: nginx-sample
      ...
      spec:
        replicaCount: 3
      status:
        nodes:
        - nginx-sample-6fd7c98d8-7dqdr
        - nginx-sample-6fd7c98d8-g5k7v
        - nginx-sample-6fd7c98d8-m7vn7

  7. Update the deployment size.

    1. Patch the Nginx CR to increase the spec.replicaCount field from 3 to 5:

      $ oc patch nginx nginx-sample \
          -p '{"spec":{"replicaCount": 5}}' \
          --type=merge
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployments

      Example output

      NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-operator-controller-manager       1/1     1            1           10m
      nginx-sample                            5/5     5            5           3m

  8. Delete the CR by running the following command:

    $ oc delete -f config/samples/demo_v1_nginx.yaml
  9. Clean up the resources that have been created as part of this tutorial.

    • If you used the make deploy command to test the Operator, run the following command:

      $ make undeploy
    • If you used the operator-sdk run bundle command to test the Operator, run the following command:

      $ operator-sdk cleanup <project_name>

5.5.2.7. Additional resources

5.5.3. Project layout for Helm-based Operators

The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.5.3.1. Helm-based project layout

Helm-based Operator projects generated using the operator-sdk init --plugins helm command contain the following directories and files:

File/folders    Purpose
config/         Kustomize manifests for deploying the Operator on a Kubernetes cluster.
helm-charts/    Helm chart initialized with the operator-sdk create api command.
Dockerfile      Used to build the Operator image with the make docker-build command.
watches.yaml    Group/version/kind (GVK) and Helm chart location.
Makefile        Targets used to manage the project.
PROJECT         YAML file containing metadata information for the Operator.

5.5.4. Updating Helm-based projects for newer Operator SDK versions

OpenShift Container Platform 4.16 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.

5.5.4.1. Updating Helm-based Operator projects for Operator SDK 1.31.0

The following procedure updates an existing Helm-based Operator project for compatibility with 1.31.0.

Prerequisites

  • Operator SDK 1.31.0 installed
  • An Operator project created or maintained with Operator SDK 1.28.0

Procedure

  1. Edit your Operator project’s Makefile to update the Operator SDK version to v1.31.0-ocp, as shown in the following example:

    Example Makefile

    # Set the Operator SDK version to use. By default, what is installed on the system is used.
    # This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
    OPERATOR_SDK_VERSION ?= v1.31.0-ocp

  2. Find the ose-kube-rbac-proxy pull spec in the following files, and update the image tag to v4.16:

    • config/default/manager_auth_proxy_patch.yaml
    • bundle/manifests/memcached-operator.clusterserviceversion.yaml

    Example ose-kube-rbac-proxy pull spec with v4.16 image tag

    # ...
          containers:
          - name: kube-rbac-proxy
            image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.16
    # ...

  3. Edit your Operator’s Dockerfile to update the Helm Operator version to v4.16, as shown in the following example:

    Example Dockerfile

    FROM registry.redhat.io/openshift4/ose-helm-operator:v4.16

  4. If you use a custom service account for deployment, define a role that allows a watch operation on your secrets resource and bind it to the service account, as shown in the following example:

    Example config/rbac/role.yaml file

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: <operator_name>-admin
    subjects:
    - kind: ServiceAccount
      name: <operator_name>
      namespace: <operator_namespace>
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: <operator_name>-secrets-watch
    rules: 1
      - apiGroups:
          - ""
        resources:
          - secrets
        verbs:
          - watch

    1
    Add the rules stanza in a ClusterRole, rather than in the ClusterRoleBinding, to allow a watch operation on your secrets resource. The <operator_name>-secrets-watch name is only an example; use a name that fits your project.

5.5.4.2. Additional resources

5.5.5. Helm support in Operator SDK

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.5.5.1. Helm charts

One of the Operator SDK options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This might sound limiting, but it can be sufficient for a surprising number of use cases, as shown by the proliferation of Helm charts built by the Kubernetes community.

The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the spec field of the object is a list of configuration options that are typically described in the Helm values.yaml file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml), you can express them within a custom resource (CR), which, as a native Kubernetes object, enables the benefits of RBAC applied to it and an audit trail.

For an example of a simple CR called Tomcat:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2

The replicaCount value, 2 in this case, is propagated into the template of the chart where the following is used:

{{ .Values.replicaCount }}
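
In a typical chart, that reference appears in the Deployment template. The following is a minimal illustrative sketch of such a template, not the exact file shipped with any particular chart:

# templates/deployment.yaml (illustrative excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-tomcat
spec:
  replicas: {{ .Values.replicaCount }}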

After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:

$ oc get Tomcats --all-namespaces

There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a custom resource definition (CRD). Because the CR obeys RBAC, you can more easily prevent unauthorized changes in production.

5.5.6. Operator SDK tutorial for Hybrid Helm Operators

The standard Helm-based Operator support in the Operator SDK has limited functionality compared to the Go-based and Ansible-based Operator support that has reached the Auto Pilot capability (level V) in the Operator maturity model.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

The Hybrid Helm Operator enhances the existing Helm-based support’s abilities through Go APIs. With this hybrid approach of Helm and Go, the Operator SDK enables Operator authors to use the following process:

  • Generate a default structure for, or scaffold, a Go API in the same project as Helm.
  • Configure the Helm reconciler in the main.go file of the project, through the libraries provided by the Hybrid Helm Operator.
Important

The Hybrid Helm Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

This tutorial walks through the following process using the Hybrid Helm Operator:

  • Create a Memcached deployment through a Helm chart if it does not exist
  • Ensure that the deployment size is the same as specified by Memcached custom resource (CR) spec
  • Create a MemcachedBackup deployment by using the Go API

5.5.6.1. Prerequisites

  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4.16+ installed
  • Logged into an OpenShift Container Platform 4.16 cluster with oc with an account that has cluster-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret

5.5.6.2. Creating a project

Use the Operator SDK CLI to create a project called memcached-operator.

Procedure

  1. Create a directory for the project:

    $ mkdir -p $HOME/github.com/example/memcached-operator
  2. Change to the directory:

    $ cd $HOME/github.com/example/memcached-operator
  3. Run the operator-sdk init command to initialize the project. This example uses a domain of my.domain so that all API groups are <group>.my.domain:

    $ operator-sdk init \
        --plugins=hybrid.helm.sdk.operatorframework.io \
        --project-version="3" \
        --domain my.domain \
        --repo=github.com/example/memcached-operator

    The init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the chart’s default manifests. Verify that the rules generated in the config/rbac/role.yaml file meet your Operator’s permission requirements.

Additional resources

  • This procedure creates a project structure that is compatible with both Helm and Go APIs. To learn more about the project directory structure, see Project layout.

5.5.6.3. Creating a Helm API

Use the Operator SDK CLI to create a Helm API.

Procedure

  • Run the following command to create a Helm API with group cache, version v1, and kind Memcached:

    $ operator-sdk create api \
        --plugins helm.sdk.operatorframework.io/v1 \
        --group cache \
        --version v1 \
        --kind Memcached
Note

This procedure also configures your Operator project to watch the Memcached resource with API version v1 and scaffolds a boilerplate Helm chart. Instead of creating the project from the boilerplate Helm chart scaffolded by the Operator SDK, you can alternatively use an existing chart from your local file system or remote chart repository.

For more details and examples for creating Helm API based on existing or new charts, run the following command:

$ operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help

Additional resources

5.5.6.3.1. Operator logic for the Helm API

By default, your scaffolded Operator project watches Memcached resource events as shown in the watches.yaml file and executes Helm releases using the specified chart.

Example 5.2. Example watches.yaml file

# Use the 'create api' subcommand to add watches to this file.
- group: cache.my.domain
  version: v1
  kind: Memcached
  chart: helm-charts/memcached
#+kubebuilder:scaffold:watch

Additional resources

5.5.6.3.2. Custom Helm reconciler configurations using provided library APIs

A disadvantage of existing Helm-based Operators is the inability to configure the Helm reconciler, because it is abstracted from users. For an Operator that reuses an existing Helm chart to reach the Seamless Upgrades capability (level II and later), a hybrid between the Go and Helm Operator types adds value.

The APIs provided in the helm-operator-plugins library allow Operator authors to make the following configurations:

  • Customize value mapping based on cluster state
  • Execute code in specific events by configuring the reconciler’s event recorder
  • Customize the reconciler’s logger
  • Set up install, upgrade, and uninstall annotations so that Helm actions can be configured based on the annotations found in custom resources watched by the reconciler
  • Configure the reconciler to run with Pre and Post hooks

The above configurations to the reconciler can be done in the main.go file:

Example main.go file

// Operator's main.go
// With the help of helpers provided in the library, the reconciler can be
// configured here before starting the controller with this reconciler.
reconciler := reconciler.New(
 reconciler.WithChart(*chart),
 reconciler.WithGroupVersionKind(gvk),
)

if err := reconciler.SetupWithManager(mgr); err != nil {
 panic(fmt.Sprintf("unable to create reconciler: %s", err))
}

5.5.6.4. Creating a Go API

Use the Operator SDK CLI to create a Go API.

Procedure

  1. Run the following command to create a Go API with group cache, version v1, and kind MemcachedBackup:

    $ operator-sdk create api \
        --group=cache \
        --version v1 \
        --kind MemcachedBackup \
        --resource \
        --controller \
        --plugins=go/v3
  2. When prompted, enter y for creating both resource and controller:

    $ Create Resource [y/n]
    y
    Create Controller [y/n]
    y

This procedure generates the MemcachedBackup resource API at api/v1/memcachedbackup_types.go and the controller at controllers/memcachedbackup_controller.go.

5.5.6.4.1. Defining the API

Define the API for the MemcachedBackup custom resource (CR).

Represent this Go API by defining the MemcachedBackup type, which will have a MemcachedBackupSpec.Size field to set the quantity of Memcached backup instances (CRs) to be deployed, and a MemcachedBackupStatus.Nodes field to store a CR’s pod names.

Note

The Nodes field is used to illustrate an example of a status field.

Procedure

  1. Define the API for the MemcachedBackup CR by modifying the Go type definitions in the api/v1/memcachedbackup_types.go file to have the following spec and status:

    Example 5.3. Example api/v1/memcachedbackup_types.go file

    // MemcachedBackupSpec defines the desired state of MemcachedBackup
    type MemcachedBackupSpec struct {
    	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    	// Important: Run "make" to regenerate code after modifying this file
    
    	//+kubebuilder:validation:Minimum=0
    	// Size is the size of the memcached deployment
    	Size int32 `json:"size"`
    }
    
    // MemcachedBackupStatus defines the observed state of MemcachedBackup
    type MemcachedBackupStatus struct {
    	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
    	// Important: Run "make" to regenerate code after modifying this file
    	// Nodes are the names of the memcached pods
    	Nodes []string `json:"nodes"`
    }
  2. Update the generated code for the resource type:

    $ make generate
    Tip

    After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

  3. After the API is defined with spec and status fields and CRD validation markers, generate and update the CRD manifests:

    $ make manifests

This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.my.domain_memcachedbackups.yaml file.
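
For reference, the markers that the make manifests target reads live alongside the scaffolded root type in the same api/v1/memcachedbackup_types.go file. The following is a minimal sketch of typical Kubebuilder scaffolding, shown only to connect the markers to the generated CRD; it assumes the metav1 alias for the k8s.io/apimachinery/pkg/apis/meta/v1 package, and the comments and marker set in your generated file might differ slightly:

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// MemcachedBackup is the Schema for the memcachedbackups API
type MemcachedBackup struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MemcachedBackupSpec   `json:"spec,omitempty"`
    Status MemcachedBackupStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// MemcachedBackupList contains a list of MemcachedBackup
type MemcachedBackupList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []MemcachedBackup `json:"items"`
}

func init() {
    // SchemeBuilder is defined in the generated groupversion_info.go file.
    SchemeBuilder.Register(&MemcachedBackup{}, &MemcachedBackupList{})
}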

5.5.6.4.2. Controller implementation

The controller in this tutorial performs the following actions:

  • Create a Memcached deployment if it does not exist.
  • Ensure that the deployment size is the same as specified by the Memcached CR spec.
  • Update the Memcached CR status with the names of the memcached pods.

For a detailed explanation on how to configure the controller to perform the above mentioned actions, see Implementing the controller in the Operator SDK tutorial for standard Go-based Operators.
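
The entry points of that controller follow the standard controller-runtime pattern. The following is a minimal sketch only, with the reconciliation logic elided; it assumes that cachev1 is the import alias for this project's api/v1 package and that the usual controller-runtime (ctrl), client, runtime, and appsv1 packages are imported:

// MemcachedBackupReconciler reconciles a MemcachedBackup object.
type MemcachedBackupReconciler struct {
    client.Client
    Scheme *runtime.Scheme
}

// Reconcile compares the state specified by the MemcachedBackup object
// against the actual cluster state and performs the actions listed above.
func (r *MemcachedBackupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Fetch the MemcachedBackup instance, create or resize its deployment,
    // and update the status with the pod names, as described in the
    // referenced tutorial.
    return ctrl.Result{}, nil
}

// SetupWithManager registers the controller with the manager and watches
// the primary resource and the deployments it owns.
func (r *MemcachedBackupReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&cachev1.MemcachedBackup{}).
        Owns(&appsv1.Deployment{}).
        Complete(r)
}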

5.5.6.4.3. Differences in main.go

For standard Go-based Operators and the Hybrid Helm Operator, the main.go file handles the initialization and running of the Manager program for the Go API. For the Hybrid Helm Operator, however, the main.go file also exposes the logic for loading the watches.yaml file and configuring the Helm reconciler.

Example 5.4. Example main.go file

...
	for _, w := range ws {
		// Register controller with the factory
		reconcilePeriod := defaultReconcilePeriod
		if w.ReconcilePeriod != nil {
			reconcilePeriod = w.ReconcilePeriod.Duration
		}

		maxConcurrentReconciles := defaultMaxConcurrentReconciles
		if w.MaxConcurrentReconciles != nil {
			maxConcurrentReconciles = *w.MaxConcurrentReconciles
		}

		r, err := reconciler.New(
			reconciler.WithChart(*w.Chart),
			reconciler.WithGroupVersionKind(w.GroupVersionKind),
			reconciler.WithOverrideValues(w.OverrideValues),
			reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources),
			reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles),
			reconciler.WithReconcilePeriod(reconcilePeriod),
			reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...),
			reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...),
			reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...),
		)
...

The manager is initialized with both Helm and Go reconcilers:

Example 5.5. Example Helm and Go reconcilers

...
// Setup manager with Go API
   if err = (&controllers.MemcachedBackupReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "MemcachedBackup")
		os.Exit(1)
	}

   ...
// Setup manager with Helm API
	for _, w := range ws {

      ...
		if err := r.SetupWithManager(mgr); err != nil {
			setupLog.Error(err, "unable to create controller", "controller", "Helm")
			os.Exit(1)
		}
		setupLog.Info("configured watch", "gvk", w.GroupVersionKind, "chartPath", w.ChartPath, "maxConcurrentReconciles", maxConcurrentReconciles, "reconcilePeriod", reconcilePeriod)
	}

// Start the manager
   if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
5.5.6.4.4. Permissions and RBAC manifests

The controller requires certain role-based access control (RBAC) permissions to interact with the resources it manages. For the Go API, these are specified with RBAC markers, as shown in the Operator SDK tutorial for standard Go-based Operators.
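
For example, RBAC markers are Go comments placed directly above the Reconcile method of the controller, and controller-gen turns them into rules when you run make manifests. The following is a minimal sketch; the groups, resources, and verbs are illustrative and depend on what your controller actually manages:

//+kubebuilder:rbac:groups=cache.my.domain,resources=memcachedbackups,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cache.my.domain,resources=memcachedbackups/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=cache.my.domain,resources=memcachedbackups/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
func (r *MemcachedBackupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // ...
}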

For the Helm API, the permissions are scaffolded by default in the role.yaml file. Currently, however, due to a known issue, these permissions are overwritten when the Go API is scaffolded. As a result of this issue, ensure that the permissions defined in role.yaml match your requirements.

Note

The following is an example role.yaml for a Memcached Operator:

Example 5.6. Example role.yaml file for a Memcached Operator

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cache.my.domain
  resources:
  - memcachedbackups
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cache.my.domain
  resources:
  - memcachedbackups/finalizers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - services/finalizers
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - serviceaccounts
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cache.my.domain
  resources:
  - memcachedbackups/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - policy
  resources:
  - events
  - poddisruptionbudgets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cache.my.domain
  resources:
  - memcacheds
  - memcacheds/status
  - memcacheds/finalizers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch

5.5.6.5. Running locally outside the cluster

You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.

Procedure

  • Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

    $ make install run

5.5.6.6. Running as a deployment on the cluster

You can run your Operator project as a deployment on your cluster.

Procedure

  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, can also be set in your Makefile for both commands. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

    By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.

  3. Run the following command to verify that the Operator is running:

    $ oc get deployment -n <project_name>-system

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    <project_name>-controller-manager       1/1     1            1           8m

5.5.6.7. Creating custom resources

After your Operator is installed, you can test it by creating custom resources (CRs) that are now provided on the cluster by the Operator.

Procedure

  1. Change to the namespace where your Operator is installed:

    $ oc project <project_name>-system
  2. Update the sample Memcached CR manifest in the config/samples/cache_v1_memcached.yaml file by setting the replicaCount field to 3:

    Example 5.7. Example config/samples/cache_v1_memcached.yaml file

    apiVersion: cache.my.domain/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    spec:
      # Default values copied from <project_dir>/helm-charts/memcached/values.yaml
      affinity: {}
      autoscaling:
        enabled: false
        maxReplicas: 100
        minReplicas: 1
        targetCPUUtilizationPercentage: 80
      fullnameOverride: ""
      image:
        pullPolicy: IfNotPresent
        repository: nginx
        tag: ""
      imagePullSecrets: []
      ingress:
        annotations: {}
        className: ""
        enabled: false
        hosts:
        - host: chart-example.local
          paths:
          - path: /
            pathType: ImplementationSpecific
        tls: []
      nameOverride: ""
      nodeSelector: {}
      podAnnotations: {}
      podSecurityContext: {}
      replicaCount: 3
      resources: {}
      securityContext: {}
      service:
        port: 80
        type: ClusterIP
      serviceAccount:
        annotations: {}
        create: true
        name: ""
      tolerations: []
  3. Create the Memcached CR:

    $ oc apply -f config/samples/cache_v1_memcached.yaml
  4. Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

    $ oc get pods

    Example output

    NAME                                  READY     STATUS    RESTARTS   AGE
    memcached-sample-6fd7c98d8-7dqdr      1/1       Running   0          18m
    memcached-sample-6fd7c98d8-g5k7v      1/1       Running   0          18m
    memcached-sample-6fd7c98d8-m7vn7      1/1       Running   0          18m

  5. Update the sample MemcachedBackup CR manifest in the config/samples/cache_v1_memcachedbackup.yaml file by setting the size field to 2:

    Example 5.8. Example config/samples/cache_v1_memcachedbackup.yaml file

    apiVersion: cache.my.domain/v1
    kind: MemcachedBackup
    metadata:
      name: memcachedbackup-sample
    spec:
      size: 2
  6. Create the MemcachedBackup CR:

    $ oc apply -f config/samples/cache_v1_memcachedbackup.yaml
  7. Ensure that the count of memcachedbackup pods is the same as specified in the CR:

    $ oc get pods

    Example output

    NAME                                        READY     STATUS    RESTARTS   AGE
    memcachedbackup-sample-8649699989-4bbzg     1/1       Running   0          22m
    memcachedbackup-sample-8649699989-mq6mx     1/1       Running   0          22m

  8. You can update the spec in each of the above CRs, and then apply them again. The controller reconciles again and ensures that the number of pods matches the spec of the respective CRs.
  9. Clean up the resources that have been created as part of this tutorial:

    1. Delete the Memcached resource:

      $ oc delete -f config/samples/cache_v1_memcached.yaml
    2. Delete the MemcachedBackup resource:

      $ oc delete -f config/samples/cache_v1_memcachedbackup.yaml
    3. If you used the make deploy command to test the Operator, run the following command:

      $ make undeploy

5.5.6.8. Project layout

The Hybrid Helm Operator scaffolding is customized to be compatible with both Helm and Go APIs.

File/folders and purpose

Dockerfile

Instructions used by a container engine to build your Operator image with the make docker-build command.

Makefile

Build file with helper targets to help you work with your project.

PROJECT

YAML file containing metadata information for the Operator. Represents the project’s configuration and is used to track useful information for the CLI and plugins.

bin/

Contains useful binaries, such as the manager binary that is used to run your project locally and the kustomize utility that is used for the project configuration.

config/

Contains configuration files, including all Kustomize manifests, to launch your Operator project on a cluster. Plugins might use it to provide functionality. For example, for the Operator SDK to help create your Operator bundle, the CLI looks up the CRDs and CRs which are scaffolded in this directory.

config/crd/
Contains custom resource definitions (CRDs).
config/default/
Contains a Kustomize base for launching the controller in a standard configuration.
config/manager/
Contains the manifests to launch your Operator project as pods on the cluster.
config/manifests/
Contains the base to generate your OLM manifests in the bundle/ directory.
config/prometheus/
Contains the manifests required to enable the project to serve metrics to Prometheus, such as the ServiceMonitor resource.
config/scorecard/
Contains the manifests required to allow you to test your project with the scorecard tool.
config/rbac/
Contains the RBAC permissions required to run your project.
config/samples/
Contains samples for custom resources.

api/

Contains the Go API definition.

internal/controllers/

Contains the controllers for the Go API.

hack/

Contains utility files, such as the file used to scaffold the license header for your project files.

main.go

Main program of the Operator. Instantiates a new manager that registers the scheme for the custom resources defined in the api/ directory and starts all controllers in the internal/controllers/ directory.

helm-charts/

Contains the Helm charts which can be specified using the create api command with the Helm plugin.

watches.yaml

Contains group/version/kind (GVK) and Helm chart location. Used to configure the Helm watches.

5.5.7. Updating Hybrid Helm-based projects for newer Operator SDK versions

OpenShift Container Platform 4.16 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.

5.5.7.1. Updating Hybrid Helm-based Operator projects for Operator SDK 1.31.0

The following procedure updates an existing Hybrid Helm-based Operator project for compatibility with 1.31.0.

Prerequisites

  • Operator SDK 1.31.0 installed
  • An Operator project created or maintained with Operator SDK 1.28.0

Procedure

  1. Edit your Operator project’s Makefile to update the Operator SDK version to v1.31.0-ocp, as shown in the following example:

    Example Makefile

    # Set the Operator SDK version to use. By default, what is installed on the system is used.
    # This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
    OPERATOR_SDK_VERSION ?= v1.31.0-ocp

  2. Find the ose-kube-rbac-proxy pull spec in the following files, and update the image tag to v4.16:

    • config/default/manager_auth_proxy_patch.yaml
    • bundle/manifests/memcached-operator.clusterserviceversion.yaml

    Example ose-kube-rbac-proxy pull spec with v4.16 image tag

    # ...
          containers:
          - name: kube-rbac-proxy
            image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.16
    # ...

5.5.7.2. Additional resources

5.6. Java-based Operators

5.6.1. Getting started with Operator SDK for Java-based Operators

Important

Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

To demonstrate the basics of setting up and running a Java-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Java-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.6.1.1. Prerequisites

  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4.16+ installed
  • Java 11+
  • Maven 3.6.3+
  • Logged in to an OpenShift Container Platform 4.16 cluster with oc, using an account that has cluster-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret

5.6.1.2. Creating and deploying Java-based Operators

You can build and deploy a simple Java-based Operator for Memcached by using the Operator SDK.

Procedure

  1. Create a project.

    1. Create your project directory:

      $ mkdir memcached-operator
    2. Change into the project directory:

      $ cd memcached-operator
    3. Run the operator-sdk init command with the quarkus plugin to initialize the project:

      $ operator-sdk init \
          --plugins=quarkus \
          --domain=example.com \
          --project-name=memcached-operator
  2. Create an API.

    Create a simple Memcached API:

    $ operator-sdk create api \
        --plugins quarkus \
        --group cache \
        --version v1 \
        --kind Memcached
  3. Build and push the Operator image.

    Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

    $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
  4. Run the Operator.

    1. Install the CRD:

      $ make install
    2. Deploy the project to the cluster. Set IMG to the image that you pushed:

      $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
  5. Create a sample custom resource (CR).

    1. Create a sample CR:

      $ oc apply -f config/samples/cache_v1_memcached.yaml \
          -n memcached-operator-system
    2. Watch the Operator logs to confirm that the Operator reconciles the CR:

      $ oc logs deployment.apps/memcached-operator-controller-manager \
          -c manager \
          -n memcached-operator-system
  6. Delete a CR.

    Delete a CR by running the following command:

    $ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system
  7. Clean up.

    Run the following command to clean up the resources that have been created as part of this procedure:

    $ make undeploy

5.6.1.3. Next steps

5.6.2. Operator SDK tutorial for Java-based Operators

Important

Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Operator developers can take advantage of Java programming language support in the Operator SDK to build an example Java-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

This process is accomplished using two centerpieces of the Operator Framework:

Operator SDK
The operator-sdk CLI tool and java-operator-sdk library API
Operator Lifecycle Manager (OLM)
Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
Note

This tutorial goes into greater detail than Getting started with Operator SDK for Java-based Operators.

5.6.2.1. Prerequisites

  • Operator SDK CLI installed
  • OpenShift CLI (oc) 4.16+ installed
  • Java 11+
  • Maven 3.6.3+
  • Logged in to an OpenShift Container Platform 4.16 cluster with oc, using an account that has cluster-admin permissions
  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret

5.6.2.2. Creating a project

Use the Operator SDK CLI to create a project called memcached-operator.

Procedure

  1. Create a directory for the project:

    $ mkdir -p $HOME/projects/memcached-operator
  2. Change to the directory:

    $ cd $HOME/projects/memcached-operator
  3. Run the operator-sdk init command with the quarkus plugin to initialize the project:

    $ operator-sdk init \
        --plugins=quarkus \
        --domain=example.com \
        --project-name=memcached-operator
5.6.2.2.1. PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, run from the project root read this file and are aware that the project type is Java. For example:

domain: example.com
layout:
- quarkus.javaoperatorsdk.io/v1-alpha
projectName: memcached-operator
version: "3"

5.6.2.3. Creating an API and controller

Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.

Procedure

  1. Run the following command to create an API:

    $ operator-sdk create api \
        --plugins=quarkus \1
        --group=cache \2
        --version=v1 \3
        --kind=Memcached 4
    1
    Set the plugin flag to quarkus.
    2
    Set the group flag to cache.
    3
    Set the version flag to v1.
    4
    Set the kind flag to Memcached.

Verification

  1. Run the tree command to view the file structure:

    $ tree

    Example output

    .
    ├── Makefile
    ├── PROJECT
    ├── pom.xml
    └── src
        └── main
            ├── java
            │   └── com
            │       └── example
            │           ├── Memcached.java
            │           ├── MemcachedReconciler.java
            │           ├── MemcachedSpec.java
            │           └── MemcachedStatus.java
            └── resources
                └── application.properties
    
    6 directories, 8 files

5.6.2.3.1. Defining the API

Define the API for the Memcached custom resource (CR).

Procedure

  • Edit the following files that were generated as part of the create api process:

    1. Update the following attributes in the MemcachedSpec.java file to define the desired state of the Memcached CR:

      public class MemcachedSpec {
      
          private Integer size;
      
          public Integer getSize() {
              return size;
          }
      
          public void setSize(Integer size) {
              this.size = size;
          }
      }
    2. Update the following attributes in the MemcachedStatus.java file to define the observed state of the Memcached CR:

      Note

      The example below illustrates a nodes status field. It is recommended that you use typical status properties in practice.

      import java.util.ArrayList;
      import java.util.List;
      
      public class MemcachedStatus {
      
          // Add Status information here
          // Nodes are the names of the memcached pods
          private List<String> nodes;
      
          public List<String> getNodes() {
              if (nodes == null) {
                  nodes = new ArrayList<>();
              }
              return nodes;
          }
      
          public void setNodes(List<String> nodes) {
              this.nodes = nodes;
          }
      }
    3. Update the Memcached.java file to define the schema for the Memcached API, which uses both the MemcachedSpec.java and MemcachedStatus.java files:

      @Version("v1")
      @Group("cache.example.com")
      public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
5.6.2.3.2. Generating CRD manifests

After the API is defined with MemcachedSpec and MemcachedStatus files, you can generate CRD manifests.

Procedure

  • Run the following command from the memcached-operator directory to generate the CRD:

    $ mvn clean install

Verification

  • Verify the contents of the CRD in the target/kubernetes/memcacheds.cache.example.com-v1.yml file as shown in the following example:

    $ cat target/kubernetes/memcacheds.cache.example.com-v1.yml

    Example output

    # Generated by Fabric8 CRDGenerator, manual edits might get overwritten!
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: memcacheds.cache.example.com
    spec:
      group: cache.example.com
      names:
        kind: Memcached
        plural: memcacheds
        singular: memcached
      scope: Namespaced
      versions:
      - name: v1
        schema:
          openAPIV3Schema:
            properties:
              spec:
                properties:
                  size:
                    type: integer
                type: object
              status:
                properties:
                  nodes:
                    items:
                      type: string
                    type: array
                type: object
            type: object
        served: true
        storage: true
        subresources:
          status: {}

5.6.2.3.3. Creating a custom resource

After generating the CRD manifests, you can create the custom resource (CR).

Procedure

  • Create a Memcached CR called memcached-sample.yaml:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    spec:
      # Add spec fields here
      size: 1

5.6.2.4. Implementing the controller

After creating a new API and controller, you can implement the controller logic.

Procedure

  1. Append the following dependency to the pom.xml file:

        <dependency>
          <groupId>commons-collections</groupId>
          <artifactId>commons-collections</artifactId>
          <version>3.2.2</version>
        </dependency>
  2. For this example, replace the generated controller file MemcachedReconciler.java with the following example implementation:

    Example 5.9. Example MemcachedReconciler.java

    package com.example;
    
    import io.fabric8.kubernetes.client.KubernetesClient;
    import io.javaoperatorsdk.operator.api.reconciler.Context;
    import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
    import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;
    import io.fabric8.kubernetes.api.model.ContainerBuilder;
    import io.fabric8.kubernetes.api.model.ContainerPortBuilder;
    import io.fabric8.kubernetes.api.model.LabelSelectorBuilder;
    import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
    import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
    import io.fabric8.kubernetes.api.model.Pod;
    import io.fabric8.kubernetes.api.model.PodSpecBuilder;
    import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder;
    import io.fabric8.kubernetes.api.model.apps.Deployment;
    import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
    import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder;
    import org.apache.commons.collections.CollectionUtils;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;
    
    public class MemcachedReconciler implements Reconciler<Memcached> {
      private final KubernetesClient client;
    
      public MemcachedReconciler(KubernetesClient client) {
        this.client = client;
      }
    
      // Reconciles each Memcached custom resource against the observed cluster state.
      @Override
      public UpdateControl<Memcached> reconcile(
          Memcached resource, Context context) {
          // Fetch the deployment that backs this Memcached resource, if it exists.
          Deployment deployment = client.apps()
                  .deployments()
                  .inNamespace(resource.getMetadata().getNamespace())
                  .withName(resource.getMetadata().getName())
                  .get();
    
          if (deployment == null) {
              Deployment newDeployment = createMemcachedDeployment(resource);
              client.apps().deployments().create(newDeployment);
              return UpdateControl.noUpdate();
          }
    
          int currentReplicas = deployment.getSpec().getReplicas();
          int requiredReplicas = resource.getSpec().getSize();
    
          if (currentReplicas != requiredReplicas) {
              deployment.getSpec().setReplicas(requiredReplicas);
              client.apps().deployments().createOrReplace(deployment);
              return UpdateControl.noUpdate();
          }
    
          List<Pod> pods = client.pods()
              .inNamespace(resource.getMetadata().getNamespace())
              .withLabels(labelsForMemcached(resource))
              .list()
              .getItems();
    
          List<String> podNames =
              pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());
    
    
          if (resource.getStatus() == null
                   || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
               if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
               resource.getStatus().setNodes(podNames);
               return UpdateControl.updateResource(resource);
          }
    
          return UpdateControl.noUpdate();
      }
    
      private Map<String, String> labelsForMemcached(Memcached m) {
        Map<String, String> labels = new HashMap<>();
        labels.put("app", "memcached");
        labels.put("memcached_cr", m.getMetadata().getName());
        return labels;
      }
    
      private Deployment createMemcachedDeployment(Memcached m) {
          Deployment deployment = new DeploymentBuilder()
              .withMetadata(
                  new ObjectMetaBuilder()
                      .withName(m.getMetadata().getName())
                      .withNamespace(m.getMetadata().getNamespace())
                      .build())
              .withSpec(
                  new DeploymentSpecBuilder()
                      .withReplicas(m.getSpec().getSize())
                      .withSelector(
                          new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
                      .withTemplate(
                          new PodTemplateSpecBuilder()
                              .withMetadata(
                                  new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
                              .withSpec(
                                  new PodSpecBuilder()
                                      .withContainers(
                                          new ContainerBuilder()
                                              .withImage("memcached:1.4.36-alpine")
                                              .withName("memcached")
                                              .withCommand("memcached", "-m=64", "-o", "modern", "-v")
                                              .withPorts(
                                                  new ContainerPortBuilder()
                                                      .withContainerPort(11211)
                                                      .withName("memcached")
                                                      .build())
                                              .build())
                                      .build())
                              .build())
                      .build())
              .build();
        deployment.addOwnerReference(m);
        return deployment;
      }
    }

    The example controller runs the following reconciliation logic for each Memcached custom resource (CR):

    • Creates a Memcached deployment if it does not exist.
    • Ensures that the deployment size matches the size specified by the Memcached CR spec.
    • Updates the Memcached CR status with the names of the memcached pods.

The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.

5.6.2.4.1. Reconcile loop
  1. Every controller has a reconciler object with a reconcile() method that implements the reconcile loop. The reconcile loop begins by fetching the Deployment for the resource, as shown in the following example:

            Deployment deployment = client.apps()
                    .deployments()
                    .inNamespace(resource.getMetadata().getNamespace())
                    .withName(resource.getMetadata().getName())
                    .get();
  2. As shown in the following example, if the Deployment is null, you must create it. After you create the Deployment, you can determine whether reconciliation is necessary. If reconciliation is not necessary, return the value of UpdateControl.noUpdate(); otherwise, return the value of UpdateControl.updateResource(resource):

            if (deployment == null) {
                Deployment newDeployment = createMemcachedDeployment(resource);
                client.apps().deployments().create(newDeployment);
                return UpdateControl.noUpdate();
            }
  3. After getting the Deployment, get the current and required replicas, as shown in the following example:

            int currentReplicas = deployment.getSpec().getReplicas();
            int requiredReplicas = resource.getSpec().getSize();
  4. If currentReplicas does not match the requiredReplicas, you must update the Deployment, as shown in the following example:

            if (currentReplicas != requiredReplicas) {
                deployment.getSpec().setReplicas(requiredReplicas);
                client.apps().deployments().createOrReplace(deployment);
                return UpdateControl.noUpdate();
            }
  5. The following example shows how to obtain the list of pods and their names:

            List<Pod> pods = client.pods()
                .inNamespace(resource.getMetadata().getNamespace())
                .withLabels(labelsForMemcached(resource))
                .list()
                .getItems();
    
            List<String> podNames =
                pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());
  6. Check whether the status exists and whether the pod names match the nodes recorded in the Memcached resource status. If a mismatch exists in either of these conditions, update the status as shown in the following example:

            if (resource.getStatus() == null
                    || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
                if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
                resource.getStatus().setNodes(podNames);
                return UpdateControl.updateResource(resource);
            }
5.6.2.4.2. Defining labelsForMemcached

labelsForMemcached is a utility to return a map of the labels to attach to the resources:

    private Map<String, String> labelsForMemcached(Memcached m) {
        Map<String, String> labels = new HashMap<>();
        labels.put("app", "memcached");
        labels.put("memcached_cr", m.getMetadata().getName());
        return labels;
    }
5.6.2.4.3. Defining createMemcachedDeployment

The createMemcachedDeployment method uses the fabric8 DeploymentBuilder class:

    private Deployment createMemcachedDeployment(Memcached m) {
        Deployment deployment = new DeploymentBuilder()
            .withMetadata(
                new ObjectMetaBuilder()
                    .withName(m.getMetadata().getName())
                    .withNamespace(m.getMetadata().getNamespace())
                    .build())
            .withSpec(
                new DeploymentSpecBuilder()
                    .withReplicas(m.getSpec().getSize())
                    .withSelector(
                        new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
                    .withTemplate(
                        new PodTemplateSpecBuilder()
                            .withMetadata(
                                new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
                            .withSpec(
                                new PodSpecBuilder()
                                    .withContainers(
                                        new ContainerBuilder()
                                            .withImage("memcached:1.4.36-alpine")
                                            .withName("memcached")
                                            .withCommand("memcached", "-m=64", "-o", "modern", "-v")
                                            .withPorts(
                                                new ContainerPortBuilder()
                                                    .withContainerPort(11211)
                                                    .withName("memcached")
                                                    .build())
                                            .build())
                                    .build())
                            .build())
                    .build())
            .build();
      deployment.addOwnerReference(m);
      return deployment;
    }

5.6.2.5. Running the Operator

There are three ways you can use the Operator SDK CLI to build and run your Operator:

  • Run locally outside the cluster as a Java program.
  • Run as a deployment on the cluster.
  • Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.6.2.5.1. Running locally outside the cluster

You can run your Operator project as a Java program outside of the cluster. This is useful for development purposes to speed up deployment and testing.

Procedure

  1. Run the following command to compile the Operator:

    $ mvn clean install

    Example output

    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time:  11.193 s
    [INFO] Finished at: 2021-05-26T12:16:54-04:00
    [INFO] ------------------------------------------------------------------------

  2. Run the following command to install the CRD to the default namespace:

    $ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml

    Example output

    customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

  3. Create a file called rbac.yaml as shown in the following example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: memcached-operator-admin
    subjects:
    - kind: ServiceAccount
      name: memcached-quarkus-operator-operator
      namespace: <operator_namespace>
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: ""
  4. Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator by applying the rbac.yaml file:

    $ oc apply -f rbac.yaml
  5. Enter the following command to run the Operator:

    $ java -jar target/quarkus-app/quarkus-run.jar
    Note

    The java command will run the Operator and remain running until you end the process. You will need another terminal to complete the rest of these commands.

  6. Apply the memcached-sample.yaml file with the following command:

    $ kubectl apply -f memcached-sample.yaml

    Example output

    memcached.cache.example.com/memcached-sample created

Verification

  • Run the following command to confirm that the pod has started:

    $ oc get all

    Example output

    NAME                                                       READY   STATUS    RESTARTS   AGE
    pod/memcached-sample-6c765df685-mfqnz                      1/1     Running   0          18s

5.6.2.5.2. Running as a deployment on the cluster

You can run your Operator project as a deployment on your cluster.

Procedure

  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, can also be set in your Makefile for both commands. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to install the CRD to the default namespace:

    $ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml

    Example output

    customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

  3. Create a file called rbac.yaml as shown in the following example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: memcached-operator-admin
    subjects:
    - kind: ServiceAccount
      name: memcached-quarkus-operator-operator
      namespace: <operator_namespace>
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: ""
    Important

    The rbac.yaml file will be applied at a later step.

  4. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
  5. Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator by applying the rbac.yaml file created in a previous step:

    $ oc apply -f rbac.yaml
  6. Run the following command to verify that the Operator is running:

    $ oc get all -n default

    Example output

    NAME                                                      READY   STATUS    RESTARTS   AGE
    pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm   0/1     Running   0          18s

  7. Run the following command to apply the memcached-sample.yaml and create the memcached-sample pod:

    $ oc apply -f memcached-sample.yaml

    Example output

    memcached.cache.example.com/memcached-sample created

Verification

  • Run the following command to confirm the pods have started:

    $ oc get all

    Example output

    NAME                                                       READY   STATUS    RESTARTS   AGE
    pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt   1/1     Running   1          79s
    pod/memcached-sample-6c765df685-mfqnz                      1/1     Running   0          18s

5.6.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.6.2.5.3.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • OpenShift CLI (oc) v4.16+ installed
  • Operator project initialized by using the Operator SDK

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.6.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.16)
  • Logged in to the cluster with oc using an account with cluster-admin permissions

Procedure

  • Enter the following command to run the Operator on the cluster:

    $ operator-sdk run bundle \1
        -n <namespace> \2
        <registry>/<user>/<bundle_image_name>:<tag> 3
    1
    The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
    2
    Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
    3
    If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
    Important

    As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

    This command performs the following actions:

    • Creates an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
    • Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
    • Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.

5.6.2.6. Additional resources

5.6.3. Project layout for Java-based Operators

Important

Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.6.3.1. Java-based project layout

Java-based Operator projects generated by the operator-sdk init command contain the following files and directories:

File or directory and purpose

pom.xml

File that contains the dependencies required to run the Operator.

<domain>/

Directory that contains the files that represent the API. If the domain is example.com, these files are located in the com/example/ directory.

MemcachedReconciler.java

Java file that defines controller implementations.

MemcachedSpec.java

Java file that defines the desired state of the Memcached CR.

MemcachedStatus.java

Java file that defines the observed state of the Memcached CR.

Memcached.java

Java file that defines the Schema for Memcached APIs.

target/kubernetes/

Directory that contains the CRD yaml files.

5.6.4. Updating projects for newer Operator SDK versions

OpenShift Container Platform 4.16 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.

5.6.4.1. Updating Java-based Operator projects for Operator SDK 1.31.0

The following procedure updates an existing Java-based Operator project for compatibility with 1.31.0.

Prerequisites

  • Operator SDK 1.31.0 installed
  • An Operator project created or maintained with Operator SDK 1.28.0

Procedure

  1. Edit your Operator project’s Makefile to update the Operator SDK version to v1.31.0-ocp, as shown in the following example:

    Example Makefile

    # Set the Operator SDK version to use. By default, what is installed on the system is used.
    # This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
    OPERATOR_SDK_VERSION ?= v1.31.0-ocp

  2. Find the ose-kube-rbac-proxy pull spec in the following files, and update the image tag to v4.16:

    • config/default/manager_auth_proxy_patch.yaml
    • bundle/manifests/memcached-operator.clusterserviceversion.yaml

    Example ose-kube-rbac-proxy pull spec with v4.16 image tag

    # ...
          containers:
          - name: kube-rbac-proxy
            image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.16
    # ...

5.6.4.2. Additional resources

5.7. Defining cluster service versions (CSVs)

A cluster service version (CSV), defined by a ClusterServiceVersion object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.

The Operator SDK includes the CSV generator to generate a CSV for the current Operator project, customized using information contained in YAML manifests and Operator source files.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

A CSV-generating command removes the responsibility of Operator authors having in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.

5.7.1. How CSV generation works

Operator bundle manifests, which include cluster service versions (CSVs), describe how to display, create, and manage an application with Operator Lifecycle Manager (OLM). The CSV generator in the Operator SDK, called by the generate bundle subcommand, is the first step towards publishing your Operator to a catalog and deploying it with OLM. The subcommand requires certain input manifests to construct a CSV manifest; all inputs are read when the command is invoked, along with a CSV base, to idempotently generate or regenerate a CSV.

Typically, the generate kustomize manifests subcommand would be run first to generate the input Kustomize bases that are consumed by the generate bundle subcommand. However, the Operator SDK provides the make bundle command, which automates several tasks, including running the following subcommands in order:

  1. generate kustomize manifests
  2. generate bundle
  3. bundle validate

Additional resources

5.7.1.1. Generated files and resources

The make bundle command creates the following files and directories in your Operator project:

  • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion (CSV) object
  • A bundle metadata directory named bundle/metadata
  • All custom resource definitions (CRDs) in a config/crd directory
  • A Dockerfile bundle.Dockerfile

The following resources are typically included in a CSV:

Role
Defines Operator permissions within a namespace.
ClusterRole
Defines cluster-wide Operator permissions.
Deployment
Defines how an Operand of an Operator is run in pods.
CustomResourceDefinition (CRD)
Defines custom resources that your Operator reconciles.
Custom resource examples
Examples of resources adhering to the spec of a particular CRD.
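
The following is a minimal, hypothetical sketch of where these resources are expressed in a CSV: namespace-scoped rules under spec.install.spec.permissions become a Role, cluster-wide rules under spec.install.spec.clusterPermissions become a ClusterRole, entries under spec.install.spec.deployments define the Operator deployment, and owned CRDs are listed under spec.customresourcedefinitions.owned. All names, rules, and values shown are placeholders, not generated output.

Example CSV fragment (sketch)

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: memcached-operator.v0.0.1
spec:
  customresourcedefinitions:
    owned:
    - name: memcacheds.cache.example.com  # CRD that the Operator reconciles
      version: v1
      kind: Memcached
  install:
    strategy: deployment
    spec:
      permissions:  # rendered as a namespaced Role
      - serviceAccountName: memcached-operator-controller-manager
        rules:
        - apiGroups: [""]
          resources: ["configmaps", "services"]
          verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
      clusterPermissions:  # rendered as a ClusterRole
      - serviceAccountName: memcached-operator-controller-manager
        rules:
        - apiGroups: ["apps"]
          resources: ["deployments"]
          verbs: ["get", "list", "watch"]
      deployments:  # the Operator Deployment
      - name: memcached-operator-controller-manager
        spec:
          replicas: 1
          # ...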

5.7.1.2. Version management

The --version flag for the generate bundle subcommand supplies a semantic version for your bundle when creating one for the first time and when upgrading an existing one.

By setting the VERSION variable in your Makefile, the --version flag is automatically invoked using that value when the generate bundle subcommand is run by the make bundle command. The CSV version is the same as the Operator version, and a new CSV is generated when upgrading Operator versions.

5.7.2. Manually-defined CSV fields

Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).

Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK warns during CSV generation if it detects missing data in any of the required fields.

The following tables detail which manually-defined CSV fields are required and which are optional.

Table 5.7. Required CSV fields
Field | Description

metadata.name

A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1.

metadata.capabilities

The capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot.

spec.displayName

A public name to identify the Operator.

spec.description

A short description of the functionality of the Operator.

spec.keywords

Keywords describing the Operator.

spec.maintainers

Human or organizational entities maintaining the Operator, with a name and email.

spec.provider

The provider of the Operator (usually an organization), with a name.

spec.labels

Key-value pairs to be used by Operator internals.

spec.version

Semantic version of the Operator, for example 0.1.1.

spec.customresourcedefinitions

Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in deploy/. However, several fields not in the CRD manifest spec require user input:

  • description: description of the CRD.
  • resources: any Kubernetes resources leveraged by the CRD, for example Pod and StatefulSet objects.
  • specDescriptors: UI hints for inputs and outputs of the Operator.
Table 5.8. Optional CSV fields
Field | Description

spec.replaces

The name of the CSV being replaced by this CSV.

spec.links

URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url.

spec.selector

Selectors by which the Operator can pair resources in a cluster.

spec.icon

A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype.

spec.maturity

The level of maturity the software has achieved at this version. Options include planning, pre-alpha, alpha, beta, stable, mature, inactive, and deprecated.

Further details on what data each field above should hold are found in the CSV spec.

Note

Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
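
As a quick reference, the following hypothetical sketch shows how the required metadata and spec fields from Table 5.7 might be filled in; every value is a placeholder, and the capability level and spec.customresourcedefinitions entries are omitted for brevity.

Example manually-defined required fields (sketch)

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: app-operator.v0.1.1
spec:
  displayName: App Operator
  description: Installs and manages instances of the example App workload.
  keywords:
  - app
  - example
  maintainers:
  - name: Example Maintainer
    email: maintainer@example.com
  provider:
    name: Example, Inc.
  labels:
    example.com/managed-by: app-operator
  version: 0.1.1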

Additional resources

5.7.3. Operator metadata annotations

Operator developers can set certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub or the Red Hat Ecosystem Catalog. Operator metadata annotations are manually defined by setting the metadata.annotations field in the CSV YAML file.

5.7.3.1. Infrastructure features annotations

Annotations in the features.operators.openshift.io group detail the infrastructure features that an Operator might support, specified by setting a "true" or "false" value. Users can view and filter by these features when discovering Operators through OperatorHub in the web console or on the Red Hat Ecosystem Catalog. These annotations are supported in OpenShift Container Platform 4.10 and later.

Important

The features.operators.openshift.io infrastructure feature annotations deprecate the operators.openshift.io/infrastructure-features annotations used in earlier versions of OpenShift Container Platform. See "Deprecated infrastructure feature annotations" for more information.

Table 5.9. Infrastructure features annotations
Annotation | Description | Valid values[1]

features.operators.openshift.io/disconnected

Specify whether an Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. The Operator leverages the spec.relatedImages CSV field to refer to any related image by its digest.

"true" or "false"

features.operators.openshift.io/fips-compliant

Specify whether an Operator accepts the FIPS-140 configuration of the underlying platform and works on nodes that are booted into FIPS mode. In this mode, the Operator and any workloads it manages (operands) are solely calling the Red Hat Enterprise Linux (RHEL) cryptographic library submitted for FIPS-140 validation.

"true" or "false"

features.operators.openshift.io/proxy-aware

Specify whether an Operator supports running on a cluster behind a proxy by accepting the standard HTTP_PROXY and HTTPS_PROXY proxy environment variables. If applicable, the Operator passes this information to the workload it manages (operands).

"true" or "false"

features.operators.openshift.io/tls-profiles

Specify whether an Operator implements well-known tunables to modify the TLS cipher suite used by the Operator and, if applicable, any of the workloads it manages (operands).

"true" or "false"

features.operators.openshift.io/token-auth-aws

Specify whether an Operator supports configuration for tokenized authentication with AWS APIs via AWS Secure Token Service (STS) by using the Cloud Credential Operator (CCO).

"true" or "false"

features.operators.openshift.io/token-auth-azure

Specify whether an Operator supports configuration for tokenized authentication with Azure APIs via Azure Managed Identity by using the Cloud Credential Operator (CCO).

"true" or "false"

features.operators.openshift.io/token-auth-gcp

Specify whether an Operator supports configuration for tokenized authentication with Google Cloud APIs via GCP Workload Identity Foundation (WIF) by using the Cloud Credential Operator (CCO).

"true" or "false"

features.operators.openshift.io/cnf

Specify whether an Operator provides a Cloud-Native Network Function (CNF) Kubernetes plugin.

"true" or "false"

features.operators.openshift.io/cni

Specify whether an Operator provides a Container Network Interface (CNI) Kubernetes plugin.

"true" or "false"

features.operators.openshift.io/csi

Specify whether an Operator provides a Container Storage Interface (CSI) Kubernetes plugin.

"true" or "false"

  1. Valid values are shown intentionally with double quotes, because Kubernetes annotations must be strings.

Example CSV with infrastructure feature annotations

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    features.operators.openshift.io/disconnected: "true"
    features.operators.openshift.io/fips-compliant: "false"
    features.operators.openshift.io/proxy-aware: "false"
    features.operators.openshift.io/tls-profiles: "false"
    features.operators.openshift.io/token-auth-aws: "false"
    features.operators.openshift.io/token-auth-azure: "false"
    features.operators.openshift.io/token-auth-gcp: "false"

5.7.3.2. Deprecated infrastructure feature annotations

Starting in OpenShift Container Platform 4.14, the operators.openshift.io/infrastructure-features group of annotations is deprecated in favor of the group of annotations with the features.operators.openshift.io namespace. While you are encouraged to use the newer annotations, both groups are currently accepted when used in parallel.

These annotations detail the infrastructure features that an Operator supports. Users can view and filter by these features when discovering Operators through OperatorHub in the web console or on the Red Hat Ecosystem Catalog.

Table 5.10. Deprecated operators.openshift.io/infrastructure-features annotations
Valid annotation values | Description

disconnected

Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. All related images required for mirroring are listed by the Operator.

cnf

Operator provides a Cloud-native Network Functions (CNF) Kubernetes plugin.

cni

Operator provides a Container Network Interface (CNI) Kubernetes plugin.

csi

Operator provides a Container Storage Interface (CSI) Kubernetes plugin.

fips

Operator accepts the FIPS mode of the underlying platform and works on nodes that are booted into FIPS mode.

Important

When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.

proxy-aware

Operator supports running on a cluster behind a proxy. Operator accepts the standard proxy environment variables HTTP_PROXY and HTTPS_PROXY, which Operator Lifecycle Manager (OLM) provides to the Operator automatically when the cluster is configured to use a proxy. Required environment variables are passed down to Operands for managed workloads.

Example CSV with disconnected and proxy-aware support

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'

5.7.3.3. Other optional annotations

The following Operator annotations are optional.

Table 5.11. Other optional annotations
Annotation | Description

alm-examples

Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize.

operatorframework.io/initialization-resource

Specify a single required custom resource by adding operatorframework.io/initialization-resource annotation to the cluster service version (CSV) during Operator installation. The user is then prompted to create the custom resource through a template provided in the CSV. Must include a template that contains a complete YAML definition.

operatorframework.io/suggested-namespace

Set a suggested namespace where the Operator should be deployed.

operatorframework.io/suggested-namespace-template

Set a manifest for a Namespace object with the default node selector for the namespace specified.

operators.openshift.io/valid-subscription

Free-form array for listing any specific subscriptions that are required to use the Operator. For example, '["3Scale Commercial License", "Red Hat Managed Integration"]'.

operators.operatorframework.io/internal-objects

Hides CRDs in the UI that are not meant for user manipulation.

Example CSV with an OpenShift Container Platform license requirement

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'

Example CSV with a 3scale license requirement

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'

5.7.4. Enabling your Operator for restricted network environments

As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.

Operator requirements for supporting disconnected mode

  • Replace hard-coded image references with environment variables.
  • In the cluster service version (CSV) of your Operator:

    • List any related images, or other container images that your Operator might require to perform its functions.
    • Reference all specified images by a digest (SHA) and not by a tag.
  • All dependencies of your Operator must also support running in a disconnected mode.
  • Your Operator must not require any off-cluster resources.

Prerequisites

  • An Operator project with a CSV. The following procedure uses the Memcached Operator as an example for Go-, Ansible-, and Helm-based projects.

Procedure

  1. Set an environment variable for the additional image references used by the Operator in the config/manager/manager.yaml file:

    Example 5.10. Example config/manager/manager.yaml file

    ...
    spec:
      ...
        spec:
          ...
          containers:
          - command:
            - /manager
            ...
            env:
            - name: <related_image_environment_variable> 1
              value: "<related_image_reference_with_tag>" 2
    1
    Define the environment variable, such as RELATED_IMAGE_MEMCACHED.
    2
    Set the related image reference and tag, such as docker.io/memcached:1.4.36-alpine.
  2. Replace hard-coded image references with environment variables in the relevant file for your Operator project type:

    • For Go-based Operator projects, add the environment variable to the controllers/memcached_controller.go file as shown in the following example:

      Example 5.11. Example controllers/memcached_controller.go file

        // deploymentForMemcached returns a memcached Deployment object
      
      ...
      
      	Spec: corev1.PodSpec{
              	Containers: []corev1.Container{{
      -			Image:   "memcached:1.4.36-alpine", 1
      +			Image:   os.Getenv("<related_image_environment_variable>"), 2
      			Name:    "memcached",
      			Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
      			Ports: []corev1.ContainerPort{{
      
      ...
      1
      Delete the image reference and tag.
      2
      Use the os.Getenv function to call the <related_image_environment_variable>.
      Note

      The os.Getenv function returns an empty string if a variable is not set. Set the <related_image_environment_variable> before changing the file.

    • For Ansible-based Operator projects, add the environment variable to the roles/memcached/tasks/main.yml file as shown in the following example:

      Example 5.12. Example roles/memcached/tasks/main.yml file

      spec:
        containers:
        - name: memcached
          command:
          - memcached
          - -m=64
          - -o
          - modern
          - -v
      -   image: "docker.io/memcached:1.4.36-alpine" 1
      +   image: "{{ lookup('env', '<related_image_environment_variable>') }}" 2
          ports:
            - containerPort: 11211
      
      ...
      1
      Delete the image reference and tag.
      2
      Use the lookup function to call the <related_image_environment_variable>.
    • For Helm-based Operator projects, add the overrideValues field to the watches.yaml file as shown in the following example:

      Example 5.13. Example watches.yaml file

      ...
      - group: demo.example.com
        version: v1alpha1
        kind: Memcached
        chart: helm-charts/memcached
        overrideValues: 1
          relatedImage: ${<related_image_environment_variable>} 2
      1
      Add the overrideValues field.
      2
      Define the overrideValues field by using the <related_image_environment_variable>, such as RELATED_IMAGE_MEMCACHED.
      1. Add the value of the overrideValues field to the helm-charts/memcached/values.yaml file as shown in the following example:

        Example helm-charts/memcached/values.yaml file

        ...
        relatedImage: ""

      2. Edit the chart template in the helm-charts/memcached/templates/deployment.yaml file as shown in the following example:

        Example 5.14. Example helm-charts/memcached/templates/deployment.yaml file

        containers:
          - name: {{ .Chart.Name }}
            securityContext:
              {{- toYaml .Values.securityContext | nindent 12 }}
            image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
            env: 1
              - name: related_image 2
                value: "{{ .Values.relatedImage }}" 3
        1
        Add the env field.
        2
        Name the environment variable.
        3
        Define the value of the environment variable.
  3. Add the BUNDLE_GEN_FLAGS variable definition to your Makefile with the following changes:

    Example Makefile

       BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)
    
       # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests
       # You can enable this value if you would like to use SHA Based Digests
       # To enable set flag to true
       USE_IMAGE_DIGESTS ?= false
       ifeq ($(USE_IMAGE_DIGESTS), true)
             BUNDLE_GEN_FLAGS += --use-image-digests
       endif
    
    ...
    
    -  $(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS) 1
    +  $(KUSTOMIZE) build config/manifests | operator-sdk generate bundle $(BUNDLE_GEN_FLAGS) 2
    
    ...

    1
    Delete this line in the Makefile.
    2
    Replace the line above with this line.
  4. To update your Operator image to use a digest (SHA) and not a tag, run the make bundle command and set USE_IMAGE_DIGESTS to true:

    $ make bundle USE_IMAGE_DIGESTS=true
  5. Add the disconnected annotation, which indicates that the Operator works in a disconnected environment:

    metadata:
      annotations:
        operators.openshift.io/infrastructure-features: '["disconnected"]'

    Operators can be filtered in OperatorHub by this infrastructure feature.
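
    To illustrate the related images requirement listed at the start of this section, a CSV can also declare the images that the Operator needs in the spec.relatedImages field, referencing each one by digest. The following is a hypothetical sketch; the image name and digest are placeholders:

    Example spec.relatedImages entry (sketch)

    spec:
      relatedImages:
      - name: memcached
        image: docker.io/library/memcached@sha256:<image_digest> 1
    1
    Reference each related image by its digest, not by a tag. The digest shown is a placeholder.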

5.7.5. Enabling your Operator for multiple architectures and operating systems

Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.

If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:

labels:
    operatorframework.io/arch.<arch>: supported 1
    operatorframework.io/os.<os>: supported 2
1
Set <arch> to a supported string.
2
Set <os> to a supported string.
Note

Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.

If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:

labels:
    operatorframework.io/os.linux: supported

If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:

labels:
    operatorframework.io/arch.amd64: supported

If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.

Prerequisites

  • An Operator project with a CSV.
  • To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
  • For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.

Procedure

  • Add a label in the metadata.labels of your CSV for each supported architecture and operating system that your Operator supports:

    labels:
      operatorframework.io/arch.s390x: supported
      operatorframework.io/os.zos: supported
      operatorframework.io/os.linux: supported 1
      operatorframework.io/arch.amd64: supported 2
    1 2
    After you add a new architecture or operating system, you must also now include the default os.linux and arch.amd64 variants explicitly.

Additional resources

5.7.5.1. Architecture and operating system support for Operators

The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:

Table 5.12. Architectures supported on OpenShift Container Platform
Architecture | String

AMD64

amd64

ARM64

arm64

IBM Power®

ppc64le

IBM Z®

s390x

Table 5.13. Operating systems supported on OpenShift Container Platform
Operating system | String

Linux

linux

z/OS

zos

Note

Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.

5.7.6. Setting a suggested namespace

Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.

As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.

Procedure

  • In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

    metadata:
      annotations:
        operatorframework.io/suggested-namespace: <namespace> 1
    1
    Set your suggested namespace.

5.7.7. Setting a suggested namespace with default node selector

Some Operators expect to run only on control plane nodes, which the Operator itself can enforce by setting a nodeSelector in its Pod spec.

To avoid duplicating, and potentially conflicting with, the cluster-wide default nodeSelector, you can set a default node selector on the namespace where the Operator runs. The default node selector takes precedence over the cluster default, so the cluster default is not applied to the pods in the Operator's namespace.

When adding the Operator to a cluster using OperatorHub, the web console auto-populates the suggested namespace for the cluster administrator during the installation process. The suggested namespace is created using the namespace manifest in YAML which is included in the cluster service version (CSV).

Procedure

  • In your CSV, set the operatorframework.io/suggested-namespace-template with a manifest for a Namespace object. The following sample is a manifest for an example Namespace with the namespace default node selector specified:

    metadata:
      annotations:
        operatorframework.io/suggested-namespace-template: 1
          {
            "apiVersion": "v1",
            "kind": "Namespace",
            "metadata": {
              "name": "vertical-pod-autoscaler-suggested-template",
              "annotations": {
                "openshift.io/node-selector": ""
              }
            }
          }
    1
    Set your suggested namespace.
    Note

    If both suggested-namespace and suggested-namespace-template annotations are present in the CSV, suggested-namespace-template should take precedence.

5.7.8. Enabling Operator conditions

Operator Lifecycle Manager (OLM) provides Operators with a channel to communicate complex states that influence OLM behavior while managing the Operator. By default, OLM creates an OperatorCondition custom resource definition (CRD) when it installs an Operator. Based on the conditions set in the OperatorCondition custom resource (CR), the behavior of OLM changes accordingly.

To support Operator conditions, an Operator must be able to read the OperatorCondition CR created by OLM and have the ability to complete the following tasks:

  • Get the specific condition.
  • Set the status of a specific condition.

This can be accomplished by using the operator-lib library. An Operator author can provide a controller-runtime client in their Operator for the library to access the OperatorCondition CR owned by the Operator in the cluster.

The library provides a generic Conditions interface, which has the following methods to Get and Set a conditionType in the OperatorCondition CR:

Get
To get the specific condition, the library uses the client.Get function from controller-runtime, which requires an ObjectKey of type types.NamespacedName present in conditionAccessor.
Set
To update the status of the specific condition, the library uses the client.Update function from controller-runtime. An error occurs if the conditionType is not present in the CRD.

The Operator is allowed to modify only the status subresource of the CR. Operators can either delete or update the status.conditions array to include the condition. For more details on the format and description of the fields present in the conditions, see the upstream Condition GoDocs.

Note

Operator SDK 1.31.0 supports operator-lib v0.11.0.

Prerequisites

  • An Operator project generated using the Operator SDK.

Procedure

To enable Operator conditions in your Operator project:

  1. In the go.mod file of your Operator project, add operator-framework/operator-lib as a required library:

    module github.com/example-inc/memcached-operator
    
    go 1.19
    
    require (
      k8s.io/apimachinery v0.26.0
      k8s.io/client-go v0.26.0
      sigs.k8s.io/controller-runtime v0.14.1
      github.com/operator-framework/operator-lib v0.11.0
    )
  2. Write your own constructor in your Operator logic that satisfies the following requirements:

    • Accepts a controller-runtime client.
    • Accepts a conditionType.
    • Returns a Condition interface to update or add conditions.

    Because OLM currently supports the Upgradeable condition, you can create an interface that has methods to access the Upgradeable condition. For example:

    import (
      ...
      apiv1 "github.com/operator-framework/api/pkg/operators/v1"
    )
    
    func NewUpgradeable(cl client.Client) (Condition, error) {
      return NewCondition(cl, apiv1.OperatorUpgradeable)
    }
    
    cond, err := NewUpgradeable(cl)

    In this example, the NewUpgradeable constructor is further used to create a variable cond of type Condition. The cond variable would in turn have Get and Set methods, which can be used for handling the OLM Upgradeable condition.
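
    The conditions that the library sets follow the standard Kubernetes condition format described in the upstream Condition GoDocs. The following hypothetical sketch shows how the Upgradeable condition might appear in the conditions array of the OperatorCondition CR after the Set method is called; the reason, message, and timestamp are placeholders:

    Example Upgradeable condition (sketch)

    conditions:
    - type: Upgradeable
      status: "False"
      reason: "MigrationInProgress"
      message: "The Operator is migrating data and cannot be upgraded."
      lastTransitionTime: "2024-01-01T00:00:00Z"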

Additional resources

5.7.9. Defining webhooks

Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.

The cluster service version (CSV) resource of an Operator can include a webhookdefinitions section to define the following types of webhooks:

  • Admission webhooks (validating and mutating)
  • Conversion webhooks

Procedure

  • Add a webhookdefinitions section to the spec section of the CSV of your Operator and include any webhook definitions using a type of ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook. The following example contains all three types of webhooks:

    CSV containing webhooks

      apiVersion: operators.coreos.com/v1alpha1
      kind: ClusterServiceVersion
      metadata:
        name: webhook-operator.v0.0.1
      spec:
        customresourcedefinitions:
          owned:
          - kind: WebhookTest
            name: webhooktests.webhook.operators.coreos.io 1
            version: v1
        install:
          spec:
            deployments:
            - name: webhook-operator-webhook
              ...
              ...
              ...
          strategy: deployment
        installModes:
        - supported: false
          type: OwnNamespace
        - supported: false
          type: SingleNamespace
        - supported: false
          type: MultiNamespace
        - supported: true
          type: AllNamespaces
        webhookdefinitions:
        - type: ValidatingAdmissionWebhook 2
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          failurePolicy: Fail
          generateName: vwebhooktest.kb.io
          rules:
          - apiGroups:
            - webhook.operators.coreos.io
            apiVersions:
            - v1
            operations:
            - CREATE
            - UPDATE
            resources:
            - webhooktests
          sideEffects: None
          webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest
        - type: MutatingAdmissionWebhook 3
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          failurePolicy: Fail
          generateName: mwebhooktest.kb.io
          rules:
          - apiGroups:
            - webhook.operators.coreos.io
            apiVersions:
            - v1
            operations:
            - CREATE
            - UPDATE
            resources:
            - webhooktests
          sideEffects: None
          webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest
        - type: ConversionWebhook 4
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          generateName: cwebhooktest.kb.io
          sideEffects: None
          webhookPath: /convert
          conversionCRDs:
          - webhooktests.webhook.operators.coreos.io 5
    ...

    1
    The CRDs targeted by the conversion webhook must exist here.
    2
    A validating admission webhook.
    3
    A mutating admission webhook.
    4
    A conversion webhook.
    5
    The spec.preserveUnknownFields property of each CRD must be set to false or nil.

5.7.9.1. Webhook considerations for OLM

When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:

  • The type field must be set to either ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook, or the CSV will be placed in a failed phase.
  • The CSV must contain a deployment whose name is equivalent to the value supplied in the deploymentName field of the webhookdefinition.

When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.

Certificate authority constraints

OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:

  • The TLS certificate file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.crt.
  • The TLS key file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.key.
Admission webhook rules constraints

To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:

  • Requests that target all groups
  • Requests that target the operators.coreos.com group
  • Requests that target the ValidatingWebhookConfigurations or MutatingWebhookConfigurations resources
Conversion webhook constraints

OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:

  • CSVs featuring a conversion webhook can only support the AllNamespaces install mode.
  • The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil.
  • The conversion webhook defined in the CSV must target an owned CRD.
  • There can only be one conversion webhook on the entire cluster for a given CRD.

5.7.10. Understanding your custom resource definitions (CRDs)

There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.

5.7.10.1. Owned CRDs

The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.

It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.

Table 5.14. Owned CRD fields
Field | Description | Required/optional

Name

The full name of your CRD.

Required

Version

The version of that object API.

Required

Kind

The machine readable name of your CRD.

Required

DisplayName

A human readable version of your CRD name, for example MongoDB Standalone.

Required

Description

A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD.

Required

Group

The API group that this CRD belongs to, for example database.example.com.

Optional

Resources

Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs.

There are three types of descriptors:

  • SpecDescriptors: A reference to fields in the spec block of an object.
  • StatusDescriptors: A reference to fields in the status block of an object.
  • ActionDescriptors: A reference to actions that can be performed on an object.

All descriptors accept the following fields:

  • DisplayName: A human readable name for the Spec, Status, or Action.
  • Description: A short description of the Spec, Status, or Action and how it is used by the Operator.
  • Path: A dot-delimited path of the field on the object that this descriptor describes.
  • X-Descriptors: Used to determine which "capabilities" this descriptor has and which UI component to use. See the openshift/console project for a canonical list of React UI X-Descriptors for OpenShift Container Platform.

Also see the openshift/console project for more information on Descriptors in general.

Optional

The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a secret and config map, and orchestrates services, stateful sets, pods and config maps:

Example owned CRD

      - displayName: MongoDB Standalone
        group: mongodb.com
        kind: MongoDbStandalone
        name: mongodbstandalones.mongodb.com
        resources:
          - kind: Service
            name: ''
            version: v1
          - kind: StatefulSet
            name: ''
            version: v1beta2
          - kind: Pod
            name: ''
            version: v1
          - kind: ConfigMap
            name: ''
            version: v1
        specDescriptors:
          - description: Credentials for Ops Manager or Cloud Manager.
            displayName: Credentials
            path: credentials
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
          - description: Project this deployment belongs to.
            displayName: Project
            path: project
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
          - description: MongoDB version to be installed.
            displayName: Version
            path: version
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:label'
        statusDescriptors:
          - description: The status of each of the pods for the MongoDB cluster.
            displayName: Pod Status
            path: pods
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
        version: v1
        description: >-
          MongoDB Deployment consisting of only one host. No replication of
          data.

5.7.10.2. Required CRDs

Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.

An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.

Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a service account created for each Operator to create, watch, and modify the Kubernetes resources required.

Table 5.15. Required CRD fields
Field | Description | Required/optional

Name

The full name of the CRD you require.

Required

Version

The version of that object API.

Required

Kind

The Kubernetes object kind.

Required

DisplayName

A human readable version of the CRD.

Required

Description

A summary of how the component fits in your larger architecture.

Required

Example required CRD

    required:
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster
      displayName: etcd Cluster
      description: Represents a cluster of etcd nodes.

5.7.10.3. CRD upgrades

OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:

  • All existing serving versions in the current CRD are present in the new CRD.
  • All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
5.7.10.3.1. Adding a new CRD version

Procedure

To add a new version of a CRD to your Operator:

  1. Add a new entry in the CRD resource under the versions section of your CSV.

    For example, if the current CRD has a version v1alpha1 and you want to add a new version v1beta1 and mark it as the new storage version, add a new entry for v1beta1:

    versions:
      - name: v1alpha1
        served: true
        storage: false
      - name: v1beta1 1
        served: true
        storage: true
    1
    New entry.
  2. Ensure the referencing version of the CRD in the owned section of your CSV is updated if the CSV intends to use the new version:

    customresourcedefinitions:
      owned:
      - name: cluster.example.com
        version: v1beta1 1
        kind: cluster
        displayName: Cluster
    1
    Update the version.
  3. Push the updated CRD and CSV to your bundle.
5.7.10.3.2. Deprecating or removing a CRD version

Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must be first disabled by setting the served field in the CRD to false. Then, the non-serving version can be removed on the subsequent CRD upgrade.

Procedure

To deprecate and remove a specific version of a CRD:

  1. Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:

    versions:
      - name: v1alpha1
        served: false 1
        storage: true
    1
    Set to false.
  2. Switch the storage version to a serving version if the version to be deprecated is currently the storage version. For example:

    versions:
      - name: v1alpha1
        served: false
        storage: false 1
      - name: v1beta1
        served: true
        storage: true 2
    1 2
    Update the storage fields accordingly.
    Note

    To remove a specific version that is or was the storage version from a CRD, that version must be removed from the storedVersions field in the status of the CRD. OLM attempts to do this for you if it detects that a stored version no longer exists in the new CRD.

  3. Upgrade the CRD with the above changes.
  4. In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:

    versions:
      - name: v1beta1
        served: true
        storage: true
  5. Ensure the referencing CRD version in the owned section of your CSV is updated accordingly if that version is removed from the CRD.

5.7.10.4. CRD templates

Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples. Compatible UIs will pre-fill this template for users to further customize.

The annotation consists of a list of entries, one for each kind (for example, the CRD name), together with the corresponding metadata and spec of the Kubernetes object.

The following full example provides templates for EtcdCluster, EtcdBackup and EtcdRestore:

metadata:
  annotations:
    alm-examples: >-
      [{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"<operator_namespace>"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]

5.7.10.5. Hiding internal objects

It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.

As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects annotation to the cluster service version (CSV) of your Operator.

Procedure

  1. Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec block of your CR, if applicable to your Operator.
  2. Add the operators.operatorframework.io/internal-objects annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:

    Internal object annotation

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' 1
    ...

    1
    Set any internal CRDs as an array of strings.

5.7.10.6. Initializing required custom resources

An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.

As an Operator developer, you can specify a single required custom resource by adding operatorframework.io/initialization-resource to the cluster service version (CSV) during Operator installation. Users are then prompted to create the custom resource through a template that is provided in the CSV. The annotation must include a template that contains a complete YAML definition that is required to initialize the resource during installation.

If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.

Procedure

  • Add the operatorframework.io/initialization-resource annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of a StorageCluster resource and provides a full YAML definition:

    Initialization resource annotation

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operatorframework.io/initialization-resource: |-
            {
                "apiVersion": "ocs.openshift.io/v1",
                "kind": "StorageCluster",
                "metadata": {
                    "name": "example-storagecluster"
                },
                "spec": {
                    "manageNodes": false,
                    "monPVCTemplate": {
                        "spec": {
                            "accessModes": [
                                "ReadWriteOnce"
                            ],
                            "resources": {
                                "requests": {
                                    "storage": "10Gi"
                                }
                            },
                            "storageClassName": "gp2"
                        }
                    },
                    "storageDeviceSets": [
                        {
                            "count": 3,
                            "dataPVCTemplate": {
                                "spec": {
                                    "accessModes": [
                                        "ReadWriteOnce"
                                    ],
                                    "resources": {
                                        "requests": {
                                            "storage": "1Ti"
                                        }
                                    },
                                    "storageClassName": "gp2",
                                    "volumeMode": "Block"
                                }
                            },
                            "name": "example-deviceset",
                            "placement": {},
                            "portable": true,
                            "resources": {}
                        }
                    ]
                }
            }
    ...

5.7.11. Understanding your API services

As with CRDs, there are two types of API services that your Operator may use: owned and required.

5.7.11.1. Owned API services

When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server that backs it and the group/version/kind (GVK) it provides.

An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.

Table 5.16. Owned API service fields
Field | Description | Required/optional

Group

Group that the API service provides, for example database.example.com.

Required

Version

Version of the API service, for example v1alpha1.

Required

Kind

A kind that the API service is expected to provide.

Required

Name

The plural name for the API service provided.

Required

DeploymentName

Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the InstallStrategy of your CSV for a Deployment spec with a matching name, and if not found, does not transition the CSV to the "Install Ready" phase.

Required

DisplayName

A human readable version of your API service name, for example MongoDB Standalone.

Required

Description

A short description of how this API service is used by the Operator or a description of the functionality provided by the API service.

Required

Resources

Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

Essentially the same as for owned CRDs.

Optional
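
The following hypothetical sketch shows how an owned API service might be declared under spec.apiservicedefinitions.owned in a CSV, reusing the example values from the table above; the deployment name is a placeholder:

Example owned API service (sketch)

spec:
  apiservicedefinitions:
    owned:
    - group: database.example.com
      version: v1alpha1
      kind: MongoDbStandalone
      name: mongodbstandalones
      deploymentName: mongodb-api-server
      displayName: MongoDB Standalone
      description: Standalone MongoDB instance served by an extension API server.

Required API services are declared in a similar way under spec.apiservicedefinitions.required, using the fields described in Table 5.17.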

5.7.11.1.1. API service resource creation

Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:

  • Service pod selectors are copied from the CSV deployment matching the DeploymentName field of the API service description.
  • A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.
5.7.11.1.2. API service serving certificates

OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.

The certificate is stored as a secret of type kubernetes.io/tls in the deployment namespace, and a volume named apiservice-cert is automatically appended to the volumes section of the deployment in the CSV matching the DeploymentName field of the API service description.

If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates and any existing volume mounts with the same path are replaced.
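
For illustration, after OLM processes an owned API service, the deployment matching the DeploymentName field contains a volume and volume mount similar to the following sketch. You do not author this yourself; the secret name is shown as a placeholder because OLM generates it:

spec:
  template:
    spec:
      containers:
      - name: <operator_container>
        volumeMounts:
        - name: apiservice-cert
          mountPath: /apiserver.local.config/certificates
      volumes:
      - name: apiservice-cert
        secret:
          secretName: <serving_cert_secret_name>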

5.7.11.2. Required API services

OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.

Table 5.17. Required API service fields

  • Group (Required): Group that the API service provides, for example database.example.com.
  • Version (Required): Version of the API service, for example v1alpha1.
  • Kind (Required): A kind that the API service is expected to provide.
  • DisplayName (Required): A human readable version of your API service name, for example MongoDB Standalone.
  • Description (Required): A short description of how this API service is used by the Operator or a description of the functionality provided by the API service.

5.8. Working with bundle images

You can use the Operator SDK to package, deploy, and upgrade Operators in the bundle format for use on Operator Lifecycle Manager (OLM).

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.8.1. Bundling an Operator

The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • OpenShift CLI (oc) v4.16+ installed
  • Operator project initialized by using the Operator SDK
  • If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg option for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>

5.8.2. Deploying an Operator with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.16)
  • Logged in to the cluster with oc using an account with cluster-admin permissions
  • If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform

Procedure

  • Enter the following command to run the Operator on the cluster:

    $ operator-sdk run bundle \ 1
        -n <namespace> \ 2
        <registry>/<user>/<bundle_image_name>:<tag> 3
    1
    The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
    2
    Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
    3
    If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
    Important

    As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

    This command performs the following actions:

    • Creates an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
    • Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
    • Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.

Additional resources

5.8.3. Publishing a catalog containing a bundled Operator

To install and manage Operators, Operator Lifecycle Manager (OLM) requires that Operator bundles are listed in an index image, which is referenced by a catalog on the cluster. As an Operator author, you can use the Operator SDK to create an index containing the bundle for your Operator and all of its dependencies. This is useful for testing on remote clusters and publishing to container registries.

Note

The Operator SDK uses the opm CLI to facilitate index image creation. Experience with the opm command is not required. For advanced use cases, the opm command can be used directly instead of the Operator SDK.

Prerequisites

  • Operator SDK CLI installed on a development workstation
  • Operator bundle image built and pushed to a registry
  • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.16)
  • Logged in to the cluster with oc using an account with cluster-admin permissions

Procedure

  1. Run the following make command in your Operator project directory to build an index image containing your Operator bundle:

    $ make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>

    where the CATALOG_IMG argument references a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

  2. Push the built index image to a repository:

    $ make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>
    Tip

    You can combine Operator SDK make commands if you would rather perform multiple actions in sequence. For example, if you have not yet built a bundle image for your Operator project, you can build and push both a bundle image and an index image with the following syntax:

    $ make bundle-build bundle-push catalog-build catalog-push \
        BUNDLE_IMG=<bundle_image_pull_spec> \
        CATALOG_IMG=<index_image_pull_spec>

    Alternatively, you can set the IMAGE_TAG_BASE field in your Makefile to an existing repository:

    IMAGE_TAG_BASE=quay.io/example/my-operator

    You can then use the following syntax to build and push images with automatically-generated names, such as quay.io/example/my-operator-bundle:v0.0.1 for the bundle image and quay.io/example/my-operator-catalog:v0.0.1 for the index image:

    $ make bundle-build bundle-push catalog-build catalog-push
  3. Define a CatalogSource object that references the index image you just generated, and then create the object by using the oc apply command or web console:

    Example CatalogSource YAML

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: cs-memcached
      namespace: <operator_namespace>
    spec:
      displayName: My Test
      publisher: Company
      sourceType: grpc
      grpcPodConfig:
        securityContextConfig: <security_mode> 1
      image: quay.io/example/memcached-catalog:v0.0.1 2
      updateStrategy:
        registryPoll:
          interval: 10m

    1
    Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OpenShift Container Platform release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
    2
    Set image to the image pull spec you used previously with the CATALOG_IMG argument.
  4. Check the catalog source:

    $ oc get catalogsource

    Example output

    NAME           DISPLAY     TYPE   PUBLISHER   AGE
    cs-memcached   My Test     grpc   Company     4h31m

Verification

  1. Install the Operator using your catalog:

    1. Define an OperatorGroup object and create it by using the oc apply command or web console:

      Example OperatorGroup YAML

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: my-test
        namespace: <operator_namespace>
      spec:
        targetNamespaces:
        - <operator_namespace>

    2. Define a Subscription object and create it by using the oc apply command or web console:

      Example Subscription YAML

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: catalogtest
        namespace: <catalog_namespace>
      spec:
        channel: "alpha"
        installPlanApproval: Manual
        name: catalog
        source: cs-memcached
        sourceNamespace: <operator_namespace>
        startingCSV: memcached-operator.v0.0.1

  2. Verify the installed Operator is running:

    1. Check the Operator group:

      $ oc get og

      Example output

      NAME             AGE
      my-test           4h40m

    2. Check the cluster service version (CSV):

      $ oc get csv

      Example output

      NAME                        DISPLAY   VERSION   REPLACES   PHASE
      memcached-operator.v0.0.1   Test      0.0.1                Succeeded

    3. Check the pods for the Operator:

      $ oc get pods

      Example output

      NAME                                                              READY   STATUS      RESTARTS   AGE
      9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6   0/1     Completed   0          4h33m
      catalog-controller-manager-7fd5b7b987-69s4n                       2/2     Running     0          4h32m
      cs-memcached-7622r                                                1/1     Running     0          4h33m

Additional resources

5.8.4. Testing an Operator upgrade on Operator Lifecycle Manager

You can quickly test upgrading your Operator by using Operator Lifecycle Manager (OLM) integration in the Operator SDK, without requiring you to manually manage index images and catalog sources.

The run bundle-upgrade subcommand automates triggering an installed Operator to upgrade to a later version by specifying a bundle image for the later version.

Prerequisites

  • Operator installed with OLM either by using the run bundle subcommand or with traditional OLM installation
  • A bundle image that represents a later version of the installed Operator

Procedure

  1. If your Operator has not already been installed with OLM, install the earlier version either by using the run bundle subcommand or with traditional OLM installation.

    Note

    If the earlier version of the bundle was installed traditionally using OLM, the newer bundle that you intend to upgrade to must not exist in the index image referenced by the catalog source. Otherwise, running the run bundle-upgrade subcommand will cause the registry pod to fail because the newer bundle is already referenced by the index that provides the package and cluster service version (CSV).

    For example, you can use the following run bundle subcommand for a Memcached Operator by specifying the earlier bundle image:

    $ operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1

    Example output

    INFO[0006] Creating a File-Based Catalog of the bundle "quay.io/demo/memcached-operator:v0.0.1"
    INFO[0008] Generated a valid File-Based Catalog
    INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v0-0-1
    INFO[0012] Created CatalogSource: memcached-operator-catalog
    INFO[0012] OperatorGroup "operator-sdk-og" created
    INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub
    INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub
    INFO[0015] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to reach 'Succeeded' phase
    INFO[0015] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to appear
    INFO[0026] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Pending
    INFO[0028] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Installing
    INFO[0059] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Succeeded
    INFO[0059] OLM has successfully installed "memcached-operator.v0.0.1"

  2. Upgrade the installed Operator by specifying the bundle image for the later Operator version:

    $ operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2

    Example output

    INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project
    INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project
    INFO[0008] Generated a valid Upgraded File-Based Catalog
    INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2
    INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations
    INFO[0010] Deleted previous registry pod with name "quay-io-demo-memcached-operator-v0-0-1"
    INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub
    INFO[0042] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.2" to reach 'Succeeded' phase
    INFO[0019] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Pending
    INFO[0042] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: InstallReady
    INFO[0043] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Installing
    INFO[0044] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Succeeded
    INFO[0044] Successfully upgraded to "memcached-operator.v0.0.2"

  3. Clean up the installed Operators:

    $ operator-sdk cleanup memcached-operator

5.8.5. Controlling Operator compatibility with OpenShift Container Platform versions

Important

Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If your Operator is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is upgraded to the Kubernetes version where the API has been removed.

As an Operator author, it is strongly recommended that you review the Deprecated API Migration Guide in Kubernetes documentation and keep your Operator projects up to date to avoid using deprecated and removed APIs. Ideally, you should update your Operator before the release of a future version of OpenShift Container Platform that would make the Operator incompatible.

When an API is removed from an OpenShift Container Platform version, Operators running on that cluster version that are still using removed APIs will no longer work properly. As an Operator author, you should plan to update your Operator projects to accommodate API deprecation and removal to avoid interruptions for users of your Operator.

Tip

You can check the event alerts of your Operators to find whether there are any warnings about APIs currently in use. The following alerts fire when they detect an API in use that will be removed in the next release:

APIRemovedInNextReleaseInUse
APIs that will be removed in the next OpenShift Container Platform release.
APIRemovedInNextEUSReleaseInUse
APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.

If a cluster administrator has installed your Operator, before they upgrade to the next version of OpenShift Container Platform, they must ensure a version of your Operator is installed that is compatible with that next cluster version. While it is recommended that you update your Operator projects to no longer use deprecated or removed APIs, if you still need to publish your Operator bundles with removed APIs for continued use on earlier versions of OpenShift Container Platform, ensure that the bundle is configured accordingly.

The following procedure helps prevent administrators from installing versions of your Operator on an incompatible version of OpenShift Container Platform. These steps also prevent administrators from upgrading to a newer version of OpenShift Container Platform that is incompatible with the version of your Operator that is currently installed on their cluster.

This procedure is also useful when you know that the current version of your Operator will not work well, for any reason, on a specific OpenShift Container Platform version. By defining the cluster versions where the Operator should be distributed, you ensure that the Operator does not appear in a catalog of a cluster version which is outside of the allowed range.

Important

Operators that use deprecated APIs can adversely impact critical workloads when cluster administrators upgrade to a future version of OpenShift Container Platform where the API is no longer supported. If your Operator is using deprecated APIs, you should configure the following settings in your Operator project as soon as possible.

Prerequisites

  • An existing Operator project

Procedure

  1. If you know that a specific bundle of your Operator is not supported and will not work correctly on OpenShift Container Platform later than a certain cluster version, configure the maximum version of OpenShift Container Platform that your Operator is compatible with. In your Operator project’s cluster service version (CSV), set the olm.maxOpenShiftVersion annotation to prevent administrators from upgrading their cluster before upgrading the installed Operator to a compatible version:

    Important

    Use the olm.maxOpenShiftVersion annotation only if your Operator bundle version cannot work in later cluster versions. Be aware that cluster administrators cannot upgrade their clusters while your solution is installed. If you do not provide a later Operator version and a valid upgrade path, administrators might uninstall your Operator so that they can upgrade the cluster version.

    Example CSV with olm.maxOpenShiftVersion annotation

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      annotations:
        "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]' 1

    1
    Specify the maximum cluster version of OpenShift Container Platform that your Operator is compatible with. For example, setting value to 4.9 prevents cluster upgrades to OpenShift Container Platform versions later than 4.9 when this bundle is installed on a cluster.
  2. If your bundle is intended for distribution in a Red Hat-provided Operator catalog, configure the compatible versions of OpenShift Container Platform for your Operator by setting the following properties. This configuration ensures your Operator is only included in catalogs that target compatible versions of OpenShift Container Platform:

    Note

    This step is only valid when publishing Operators in Red Hat-provided catalogs. If your bundle is only intended for distribution in a custom catalog, you can skip this step. For more details, see "Red Hat-provided Operator catalogs".

    1. Set the com.redhat.openshift.versions annotation in your project’s bundle/metadata/annotations.yaml file:

      Example bundle/metadata/annotations.yaml file with compatible versions

      com.redhat.openshift.versions: "v4.7-v4.9" 1

      1
      Set to a range or single version.
    2. To prevent your bundle from being carried on to an incompatible version of OpenShift Container Platform, ensure that the index image is generated with the proper com.redhat.openshift.versions label in your Operator’s bundle image. For example, if your project was generated using the Operator SDK, update the bundle.Dockerfile file:

      Example bundle.Dockerfile with compatible versions

      LABEL com.redhat.openshift.versions="<versions>" 1

      1
      Set to a range or single version, for example, v4.7-v4.9. This setting defines the cluster versions where the Operator should be distributed, and the Operator does not appear in a catalog of a cluster version which is outside of the range.

You can now bundle a new version of your Operator and publish the updated version to a catalog for distribution.

Additional resources

5.8.6. Additional resources

5.9. Complying with pod security admission

Pod security admission is an implementation of the Kubernetes pod security standards. Pod security admission restricts the behavior of pods. Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run.

If your Operator project does not require escalated permissions to run, you can ensure your workloads run in namespaces set to the restricted pod security level. If your Operator project requires escalated permissions to run, you must set the following security context configurations:

  • The allowed pod security admission level for the Operator’s namespace
  • The allowed security context constraints (SCC) for the workload’s service account

For more information, see Understanding and managing pod security admission.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.9.1. About pod security admission

OpenShift Container Platform includes Kubernetes pod security admission. Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run.

Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits.

You can also configure the pod security admission settings at the namespace level.

Important

Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components.

The following default projects are considered highly privileged: default, kube-public, kube-system, openshift, openshift-infra, openshift-node, and other system-created projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.

5.9.1.1. Pod security admission modes

You can configure the following pod security admission modes for a namespace:

Table 5.18. Pod security admission modes

  • enforce (label pod-security.kubernetes.io/enforce): Rejects a pod from admission if it does not comply with the set profile
  • audit (label pod-security.kubernetes.io/audit): Logs audit events if a pod does not comply with the set profile
  • warn (label pod-security.kubernetes.io/warn): Displays warnings if a pod does not comply with the set profile

5.9.1.2. Pod security admission profiles

You can set each of the pod security admission modes to one of the following profiles:

Table 5.19. Pod security admission profiles

  • privileged: Least restrictive policy; allows for known privilege escalation
  • baseline: Minimally restrictive policy; prevents known privilege escalations
  • restricted: Most restrictive policy; follows current pod hardening best practices
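
For example, a namespace that enforces the restricted profile while auditing and warning against the baseline profile can be labeled as follows. This is a generic sketch; the namespace name is a placeholder:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace_name>
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline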

5.9.1.3. Privileged namespaces

The following system namespaces are always set to the privileged pod security admission profile:

  • default
  • kube-public
  • kube-system

You cannot change the pod security profile for these privileged namespaces.

5.9.2. About pod security admission synchronization

In addition to the global pod security admission control configuration, a controller applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace.

The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created.

Namespace labeling is based only on the privileges of service accounts in the namespace.

Pods that are applied directly might use the SCC privileges of the user who runs them. However, user privileges are not considered during automatic labeling.

5.9.2.1. Pod security admission synchronization namespace exclusions

Pod security admission synchronization is permanently disabled on most system-created namespaces. Synchronization is also initially disabled on user-created openshift-* prefixed namespaces, but you can enable synchronization on them later.

Important

If a pod security admission label (pod-security.kubernetes.io/<mode>) is manually modified from the automatically labeled value on a label-synchronized namespace, synchronization is disabled for that label.

If necessary, you can enable synchronization again by using one of the following methods:

  • By removing the modified pod security admission label from the namespace
  • By setting the security.openshift.io/scc.podSecurityLabelSync label to true

    If you force synchronization by adding this label, then any modified pod security admission labels will be overwritten.
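
As a minimal sketch, setting this label on a Namespace object looks like the following; the namespace name is a placeholder:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace_name>
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"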

Permanently disabled namespaces

Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled:

  • default
  • kube-node-lease
  • kube-system
  • kube-public
  • openshift
  • All system-created namespaces that are prefixed with openshift-, except for openshift-operators

Initially disabled namespaces

By default, all namespaces that have an openshift- prefix have pod security admission synchronization disabled initially. You can enable synchronization for user-created openshift-* namespaces and for the openshift-operators namespace.

Note

You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators.

If an Operator is installed in a user-created openshift-* namespace, synchronization is enabled automatically after a cluster service version (CSV) is created in the namespace. The synchronized label is derived from the permissions of the service accounts in the namespace.

5.9.3. Ensuring Operator workloads run in namespaces set to the restricted pod security level

To ensure your Operator project can run on a wide variety of deployments and environments, configure the Operator’s workloads to run in namespaces set to the restricted pod security level.

Warning

You must leave the runAsUser field empty. If your image requires a specific user, it cannot be run under restricted security context constraints (SCC) and restricted pod security enforcement.

Procedure

  • To configure Operator workloads to run in namespaces set to the restricted pod security level, edit your Operator’s namespace definition similar to the following examples:

    Important

    It is recommended that you set the seccomp profile in your Operator’s namespace definition. However, setting the seccomp profile is not supported in OpenShift Container Platform 4.10.

    • For Operator projects that must run in only OpenShift Container Platform 4.11 and later, edit your Operator’s namespace definition similar to the following example:

      Example config/manager/manager.yaml file

      ...
      spec:
       securityContext:
         seccompProfile:
           type: RuntimeDefault 1
         runAsNonRoot: true
       containers:
         - name: <operator_workload_container>
           securityContext:
             allowPrivilegeEscalation: false
             capabilities:
               drop:
                 - ALL
      ...

      1
      By setting the seccomp profile type to RuntimeDefault, the SCC defaults to the pod security profile of the namespace.
    • For Operator projects that must also run in OpenShift Container Platform 4.10, edit your Operator’s namespace definition similar to the following example:

      Example config/manager/manager.yaml file

      ...
      spec:
       securityContext: 1
         runAsNonRoot: true
       containers:
         - name: <operator_workload_container>
           securityContext:
             allowPrivilegeEscalation: false
             capabilities:
               drop:
                 - ALL
      ...

      1
      Leaving the seccomp profile type unset ensures your Operator project can run in OpenShift Container Platform 4.10.

5.9.4. Managing pod security admission for Operator workloads that require escalated permissions

If your Operator project requires escalated permissions to run, you must edit your Operator’s cluster service version (CSV).

Procedure

  1. Set the security context configuration to the required permission level in your Operator’s CSV, similar to the following example:

    Example <operator_name>.clusterserviceversion.yaml file with network administrator privileges

    ...
    containers:
       - name: my-container
         securityContext:
           allowPrivilegeEscalation: false
           capabilities:
             add:
               - "NET_ADMIN"
    ...

  2. Set the service account privileges that allow your Operator’s workloads to use the required security context constraints (SCC), similar to the following example:

    Example <operator_name>.clusterserviceversion.yaml file

    ...
      install:
        spec:
          clusterPermissions:
          - rules:
            - apiGroups:
              - security.openshift.io
              resourceNames:
              - privileged
              resources:
              - securitycontextconstraints
              verbs:
              - use
            serviceAccountName: default
    ...

  3. Edit your Operator’s CSV description to explain why your Operator project requires escalated permissions similar to the following example:

    Example <operator_name>.clusterserviceversion.yaml file

    ...
    spec:
      apiservicedefinitions:{}
      ...
    description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.

5.9.5. Additional resources

5.10. Token authentication

5.10.1. Token authentication for Operators on cloud providers

Many cloud providers can enable authentication by using account tokens that provide short-term, limited-privilege security credentials.

OpenShift Container Platform includes the Cloud Credential Operator (CCO) to manage cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with any specific permissions required.

Previously, on clusters where the CCO is in manual mode, Operators managed by Operator Lifecycle Manager (OLM) often provided detailed instructions in the OperatorHub for how users could manually provision any required cloud credentials.

Starting in OpenShift Container Platform 4.14, the CCO can detect when it is running on clusters enabled to use short-term credentials on certain cloud providers. It can then semi-automate provisioning certain credentials, provided that the Operator author has enabled their Operator to support the updated CCO.

5.10.2. CCO-based workflow for OLM-managed Operators with AWS STS

When an OpenShift Container Platform cluster running on AWS is in Security Token Service (STS) mode, it means the cluster is utilizing features of AWS and OpenShift Container Platform to use IAM roles at an application level. STS enables applications to provide a JSON Web Token (JWT) that can assume an IAM role.

The JWT includes an Amazon Resource Name (ARN) for the sts:AssumeRoleWithWebIdentity IAM action to allow temporarily-granted permission for the service account. The JWT contains the signing keys for the ProjectedServiceAccountToken that AWS IAM can validate. The service account token itself, which is signed, is used as the JWT required for assuming the AWS role.

The Cloud Credential Operator (CCO) is a cluster Operator installed by default in OpenShift Container Platform clusters running on cloud providers. For the purposes of STS, the CCO provides the following functions:

  • Detects when it is running on an STS-enabled cluster
  • Checks for the presence of fields in the CredentialsRequest object that provide the required information for granting Operators access to AWS resources

The CCO performs this detection even when in manual mode. When properly configured, the CCO projects a Secret object with the required access information into the Operator namespace.
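
As a sketch of what this projection typically looks like, the generated Secret carries an AWS shared-credentials-style payload under a credentials key; the secret name, namespace, and role ARN below are placeholders, and the exact contents are determined by the CCO:

apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: <operator_namespace>
stringData:
  credentials: |-
    [default]
    sts_regional_endpoints = regional
    role_arn = <role_arn>
    web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token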

Starting in OpenShift Container Platform 4.14, the CCO can semi-automate this task through an expanded use of CredentialsRequest objects, which can request the creation of Secrets that contain the information required for STS workflows. Users can provide a role ARN when installing the Operator from either the web console or CLI.
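
As an illustrative sketch of that propagation, a Subscription can pass the role ARN to the Operator pod through its config.env field; the ROLEARN variable name matches the example code later in this section, and the other values are placeholders:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <operator_subscription_name>
  namespace: <operator_namespace>
spec:
  name: <operator_package_name>
  channel: <channel_name>
  source: <catalog_source_name>
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: ROLEARN
      value: "<role_arn>"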

Note

Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.

As an Operator author preparing an Operator for use alongside the updated CCO in OpenShift Container Platform 4.14 or later, you should instruct users and add code to handle the divergence from earlier CCO versions, in addition to handling STS token authentication (if your Operator is not already STS-enabled). The recommended method is to provide a CredentialsRequest object with correctly filled STS-related fields and let the CCO create the Secret for you.
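
The following YAML sketch shows what such a CredentialsRequest object can look like. It mirrors the Go structure used later in this section; the names, namespace, and role ARN are placeholders:

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <credential_request_name>
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - "s3:*"
      resource: "arn:aws:s3:*:*:*"
    stsIAMRoleARN: <role_arn>
  secretRef:
    name: <secret_name>
    namespace: <operator_namespace>
  serviceAccountNames:
  - <service_account_name>
  cloudTokenPath: /var/run/secrets/openshift/serviceaccount/token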

Important

If you plan to support OpenShift Container Platform clusters earlier than version 4.14, consider providing users with instructions on how to manually create a secret with the STS-enabling information by using the CCO utility (ccoctl). Earlier CCO versions are unaware of STS mode on the cluster and cannot create secrets for you.

Your code should check for secrets that never appear and warn users to follow the fallback instructions you have provided. For more information, see the "Alternative method" subsection.

5.10.2.1. Enabling Operators to support CCO-based workflows with AWS STS

As an Operator author designing your project to run on Operator Lifecycle Manager (OLM), you can enable your Operator to authenticate against AWS on STS-enabled OpenShift Container Platform clusters by customizing your project to support the Cloud Credential Operator (CCO).

With this method, the Operator is responsible for creating the CredentialsRequest object, which means the Operator requires RBAC permission to create these objects. Then, the Operator must be able to read the resulting Secret object.

Note

By default, pods related to the Operator deployment mount a serviceAccountToken volume so that the service account token can be referenced in the resulting Secret object.

Prerequisites

  • OpenShift Container Platform 4.14 or later
  • Cluster in STS mode
  • OLM-based Operator project

Procedure

  1. Update your Operator project’s ClusterServiceVersion (CSV) object:

    1. Ensure your Operator has RBAC permission to create CredentialsRequests objects:

      Example 5.15. Example clusterPermissions list

      # ...
      install:
        spec:
          clusterPermissions:
          - rules:
            - apiGroups:
              - "cloudcredential.openshift.io"
              resources:
              - credentialsrequests
              verbs:
              - create
              - delete
              - get
              - list
              - patch
              - update
              - watch
    2. Add the following annotation to claim support for this method of CCO-based workflow with AWS STS:

      # ...
      metadata:
       annotations:
         features.operators.openshift.io/token-auth-aws: "true"
  2. Update your Operator project code:

    1. Get the role ARN from the environment variable set on the pod by the Subscription object. For example:

      // Get ENV var
      roleARN := os.Getenv("ROLEARN")
      setupLog.Info("getting role ARN", "role ARN = ", roleARN)
      webIdentityTokenPath := "/var/run/secrets/openshift/serviceaccount/token"
    2. Ensure you have a CredentialsRequest object ready to be patched and applied. For example:

      Example 5.16. Example CredentialsRequest object creation

      import (
         minterv1 "github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1"
         corev1 "k8s.io/api/core/v1"
         metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      )
      
      var in = minterv1.AWSProviderSpec{
         StatementEntries: []minterv1.StatementEntry{
            {
               Action: []string{
                  "s3:*",
               },
               Effect:   "Allow",
               Resource: "arn:aws:s3:*:*:*",
            },
         },
          STSIAMRoleARN: "<role_arn>",
      }
      
      var codec = minterv1.Codec
      var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject())
      
      const (
         name      = "<credential_request_name>"
         namespace = "<namespace_name>"
      )
      
      var CredentialsRequestTemplate = &minterv1.CredentialsRequest{
         ObjectMeta: metav1.ObjectMeta{
             Name:      name,
             Namespace: "openshift-cloud-credential-operator",
         },
         Spec: minterv1.CredentialsRequestSpec{
            ProviderSpec: ProviderSpec,
            SecretRef: corev1.ObjectReference{
               Name:      "<secret_name>",
               Namespace: namespace,
            },
            ServiceAccountNames: []string{
               "<service_account_name>",
            },
            CloudTokenPath:   "",
         },
      }

      Alternatively, if you are starting from a CredentialsRequest object in YAML form (for example, as part of your Operator project code), you can handle it differently:

      Example 5.17. Example CredentialsRequest object creation in YAML form

      // CredentialsRequest is a struct that represents a request for credentials
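       // Note: this example assumes imports of "os" and a YAML decoding package
       // that provides yaml.NewDecoder, for example gopkg.in/yaml.v3.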
      type CredentialsRequest struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
        Metadata   struct {
           Name      string `yaml:"name"`
           Namespace string `yaml:"namespace"`
        } `yaml:"metadata"`
        Spec struct {
           SecretRef struct {
              Name      string `yaml:"name"`
              Namespace string `yaml:"namespace"`
           } `yaml:"secretRef"`
           ProviderSpec struct {
              APIVersion     string `yaml:"apiVersion"`
              Kind           string `yaml:"kind"`
              StatementEntries []struct {
                 Effect   string   `yaml:"effect"`
                 Action   []string `yaml:"action"`
                 Resource string   `yaml:"resource"`
              } `yaml:"statementEntries"`
              STSIAMRoleARN   string `yaml:"stsIAMRoleARN"`
           } `yaml:"providerSpec"`
      
           // added new field
            CloudTokenPath   string `yaml:"cloudTokenPath"`
        } `yaml:"spec"`
      }
      
      // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments
      // It unmarshals the YAML file to a CredentialsRequest object and adds the token information.
      func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) {
        // open a file containing YAML form of a CredentialsRequest
        file, err := os.Open(fileName)
        if err != nil {
           return nil, err
        }
        defer file.Close()
      
        // create a new CredentialsRequest object
        cr := &CredentialsRequest{}
      
        // decode the yaml file to the object
        decoder := yaml.NewDecoder(file)
        err = decoder.Decode(cr)
        if err != nil {
           return nil, err
        }
      
        // assign the string to the existing field in the object
        cr.Spec.CloudTokenPath = tokenPath
      
        // return the modified object
        return cr, nil
      }
      Note

      Adding a CredentialsRequest object to the Operator bundle is not currently supported.

    3. Add the role ARN and web identity token path to the credentials request and apply it during Operator initialization:

      Example 5.18. Example applying CredentialsRequest object during Operator initialization

      // apply credentialsRequest on install
      credReq := credreq.CredentialsRequestTemplate
      credReq.Spec.CloudTokenPath = webIdentityTokenPath
      
      c := mgr.GetClient()
      if err := c.Create(context.TODO(), credReq); err != nil {
         if !errors.IsAlreadyExists(err) {
            setupLog.Error(err, "unable to create CredRequest")
            os.Exit(1)
         }
      }
    4. Ensure your Operator can wait for a Secret object to show up from the CCO, as shown in the following example, which is called along with the other items you are reconciling in your Operator:

      Example 5.19. Example wait for Secret object

      // WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 "k8s.io/api/core/v1" name as arguments
      // It waits until the secret object with the given name exists in the given namespace
      // It returns the secret object or an error if the timeout is exceeded
      func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) {
        // set a timeout of 10 minutes
        timeout := time.After(10 * time.Minute) 1
      
        // set a polling interval of 10 seconds
        ticker := time.NewTicker(10 * time.Second)
      
        // loop until the timeout or the secret is found
        for {
           select {
           case <-timeout:
              // timeout is exceeded, return an error
              return nil, fmt.Errorf("timed out waiting for secret %s in namespace %s", name, namespace)
                 // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS
           case <-ticker.C:
              // polling interval is reached, try to get the secret
              secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{})
              if err != nil {
                 if errors.IsNotFound(err) {
                    // secret does not exist yet, continue waiting
                    continue
                 } else {
                    // some other error occurred, return it
                    return nil, err
                 }
              } else {
                 // secret is found, return it
                 return secret, nil
              }
           }
        }
      }
      1
      The timeout value is based on an estimate of how fast the CCO might detect an added CredentialsRequest object and generate a Secret object. You might consider lowering the time or creating custom feedback for cluster administrators that could be wondering why the Operator is not yet accessing the cloud resources.
    5. Set up the AWS configuration by reading the secret created by the CCO from the credentials request and creating the AWS config file containing the data from that secret:

      Example 5.20. Example AWS configuration creation

      func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) {
         var data []byte
         switch {
         case len(secret.Data["credentials"]) > 0:
             data = secret.Data["credentials"]
         default:
             return "", errors.New("invalid secret for aws credentials")
         }
      
      
         f, err := ioutil.TempFile("", "aws-shared-credentials")
         if err != nil {
             return "", errors.Wrap(err, "failed to create file for shared credentials")
         }
         defer f.Close()
         if _, err := f.Write(data); err != nil {
             return "", errors.Wrapf(err, "failed to write credentials to %s", f.Name())
         }
         return f.Name(), nil
      }
      Important

      The secret is assumed to exist, but your Operator code should wait and retry when using this secret to give time to the CCO to create the secret.

      Additionally, the wait period should eventually time out and warn users that the OpenShift Container Platform cluster version, and therefore the CCO, might be an earlier version that does not support the CredentialsRequest object workflow with STS detection. In such cases, instruct users that they must add a secret by using another method.

    6. Configure the AWS SDK session, for example:

      Example 5.21. Example AWS SDK session configuration

      sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret)
      if err != nil {
         // handle error
      }
      options := session.Options{
         SharedConfigState: session.SharedConfigEnable,
         SharedConfigFiles: []string{sharedCredentialsFile},
      }

5.10.2.2. Role specification

The Operator description should contain the specifics of the role required to be created before installation, ideally in the form of a script that the administrator can run. For example:

Example 5.22. Example role creation script

#!/bin/bash
set -x

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_PROVIDER=$(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e "s/^https:\/\///")
NAMESPACE=my-namespace
SERVICE_ACCOUNT_NAME="my-service-account"
POLICY_ARN_STRINGS="arn:aws:iam::aws:policy/AmazonS3FullAccess"


read -r -d '' TRUST_RELATIONSHIP <<EOF
{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Effect": "Allow",
     "Principal": {
       "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
     },
     "Action": "sts:AssumeRoleWithWebIdentity",
     "Condition": {
       "StringEquals": {
         "${OIDC_PROVIDER}:sub": "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
       }
     }
   }
 ]
}
EOF

echo "${TRUST_RELATIONSHIP}" > trust.json

aws iam create-role --role-name "$SERVICE_ACCOUNT_NAME" --assume-role-policy-document file://trust.json --description "role for demo"

while IFS= read -r POLICY_ARN; do
   echo -n "Attaching $POLICY_ARN ... "
   aws iam attach-role-policy \
       --role-name "$SERVICE_ACCOUNT_NAME" \
       --policy-arn "${POLICY_ARN}"
   echo "ok."
done <<< "$POLICY_ARN_STRINGS"

5.10.2.3. Troubleshooting

5.10.2.3.1. Authentication failure

If authentication was not successful, ensure you can assume the role with web identity by using the token provided to the Operator.

Procedure

  1. Extract the token from the pod:

    $ oc exec operator-pod -n <namespace_name> \
        -- cat /var/run/secrets/openshift/serviceaccount/token
  2. Extract the role ARN from the pod:

    $ oc exec operator-pod -n <namespace_name> \
        -- cat /<path>/<to>/<secret_name> 1
    1
    Do not use root for the path.
  3. Try assuming the role with the web identity token:

    $ aws sts assume-role-with-web-identity \
        --role-arn $ROLEARN \
        --role-session-name <session_name> \
        --web-identity-token $TOKEN

5.10.2.3.2. Secret not mounting correctly

Pods that run as non-root users cannot write to the /root directory where the AWS shared credentials file is expected to exist by default. If the secret is not mounting correctly to the AWS credentials file path, consider mounting the secret to a different location and enabling the shared credentials file option in the AWS SDK.
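
A minimal sketch of such an alternative mount in the CSV deployment spec, assuming a hypothetical volume name, mount path, and secret name:

spec:
  template:
    spec:
      containers:
      - name: <operator_container>
        volumeMounts:
        - name: aws-credentials
          mountPath: /var/run/secrets/aws
          readOnly: true
      volumes:
      - name: aws-credentials
        secret:
          secretName: <secret_name>

The mounted file path can then be passed to the AWS SDK through the shared credentials file option, as shown in the session configuration example earlier in this section.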

5.10.2.4. Alternative method

As an alternative method for Operator authors, you can indicate that the user is responsible for creating the CredentialsRequest object for the Cloud Credential Operator (CCO) before installing the Operator.

The Operator instructions must indicate the following to users:

  • Provide a YAML version of a CredentialsRequest object, either by providing the YAML inline in the instructions or pointing users to a download location
  • Instruct the user to create the CredentialsRequest object

In OpenShift Container Platform 4.14 and later, after the CredentialsRequest object appears on the cluster with the appropriate STS information added, the Operator can then read the CCO-generated Secret or mount it, having defined the mount in the cluster service version (CSV).

For earlier versions of OpenShift Container Platform, the Operator instructions must also indicate the following to users:

  • Use the CCO utility (ccoctl) to generate the Secret YAML object from the CredentialsRequest object
  • Apply the Secret object to the cluster in the appropriate namespace

The Operator still must be able to consume the resulting secret to communicate with cloud APIs. Because in this case the secret is created by the user before the Operator is installed, the Operator can do either of the following:

  • Define an explicit mount in the Deployment object within the CSV
  • Programmatically read the Secret object from the API server, as shown in the recommended "Enabling Operators to support CCO-based workflows with AWS STS" method

5.10.3. CCO-based workflow for OLM-managed Operators with Microsoft Entra Workload ID

When an OpenShift Container Platform cluster running on Azure is in Workload Identity / Federated Identity mode, it means the cluster is utilizing features of Azure and OpenShift Container Platform to apply user-assigned managed identities or app registrations in Microsoft Entra Workload ID at an application level.

The Cloud Credential Operator (CCO) is a cluster Operator installed by default in OpenShift Container Platform clusters running on cloud providers. Starting in OpenShift Container Platform 4.14.8, the CCO supports workflows for OLM-managed Operators with Workload ID.

For the purposes of Workload ID, the CCO provides the following functions:

  • Detects when it is running on a Workload ID-enabled cluster
  • Checks for the presence of fields in the CredentialsRequest object that provide the required information for granting Operators access to Azure resources

The CCO can semi-automate this process through an expanded use of CredentialsRequest objects, which can request the creation of Secrets that contain the information required for Workload ID workflows.

Note

Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.

As an Operator author preparing an Operator for use alongside the updated CCO in OpenShift Container Platform 4.14 and later, you should instruct users and add code to handle the divergence from earlier CCO versions, in addition to handling Workload ID token authentication (if your Operator is not already enabled). The recommended method is to provide a CredentialsRequest object with correctly filled Workload ID-related fields and let the CCO create the Secret object for you.

Important

If you plan to support OpenShift Container Platform clusters earlier than version 4.14, consider providing users with instructions on how to manually create a secret with the Workload ID-enabling information by using the CCO utility (ccoctl). Earlier CCO versions are unaware of Workload ID mode on the cluster and cannot create secrets for you.

Your code should check for secrets that never appear and warn users to follow the fallback instructions you have provided.

Authentication with Workload ID requires the following information:

  • azure_client_id
  • azure_tenant_id
  • azure_region
  • azure_subscription_id
  • azure_federated_token_file

The Install Operator page in the web console allows cluster administrators to provide this information at installation time. This information is then propagated to the Subscription object as environment variables on the Operator pod.

5.10.3.1. Enabling Operators to support CCO-based workflows with Microsoft Entra Workload ID

As an Operator author designing your project to run on Operator Lifecycle Manager (OLM), you can enable your Operator to authenticate against Microsoft Entra Workload ID-enabled OpenShift Container Platform clusters by customizing your project to support the Cloud Credential Operator (CCO).

With this method, the Operator is responsible for creating the CredentialsRequest object, which means the Operator requires RBAC permission to create these objects. Then, the Operator must be able to read the resulting Secret object.

Note

By default, pods related to the Operator deployment mount a serviceAccountToken volume so that the service account token can be referenced in the resulting Secret object.

Prerequisites

  • OpenShift Container Platform 4.14 or later
  • Cluster in Workload ID mode
  • OLM-based Operator project

Procedure

  1. Update your Operator project’s ClusterServiceVersion (CSV) object:

    1. Ensure your Operator has RBAC permission to create CredentialsRequests objects:

      Example 5.23. Example clusterPermissions list

      # ...
      install:
        spec:
          clusterPermissions:
          - rules:
            - apiGroups:
              - "cloudcredential.openshift.io"
              resources:
              - credentialsrequests
              verbs:
              - create
              - delete
              - get
              - list
              - patch
              - update
              - watch
    2. Add the following annotation to claim support for this method of CCO-based workflow with Workload ID:

      # ...
      metadata:
        annotations:
          features.operators.openshift.io/token-auth-azure: "true"
  2. Update your Operator project code:

    1. Get the client ID, tenant ID, and subscription ID from the environment variables set on the pod by the Subscription object. For example:

      // Get ENV var
      clientID := os.Getenv("CLIENTID")
      tenantID := os.Getenv("TENANTID")
      subscriptionID := os.Getenv("SUBSCRIPTIONID")
      azureFederatedTokenFile := "/var/run/secrets/openshift/serviceaccount/token"
    2. Ensure you have a CredentialsRequest object ready to be patched and applied.

      Note

      Adding a CredentialsRequest object to the Operator bundle is not currently supported.

    3. Add the Azure credentials information and web identity token path to the credentials request and apply it during Operator initialization:

      Example 5.24. Example applying CredentialsRequest object during Operator initialization

      // apply credentialsRequest on install
      credReqTemplate.Spec.AzureProviderSpec.AzureClientID = clientID
      credReqTemplate.Spec.AzureProviderSpec.AzureTenantID = tenantID
      credReqTemplate.Spec.AzureProviderSpec.AzureRegion = "centralus"
      credReqTemplate.Spec.AzureProviderSpec.AzureSubscriptionID = subscriptionID
      credReqTemplate.CloudTokenPath = azureFederatedTokenFile
      
      c := mgr.GetClient()
      if err := c.Create(context.TODO(), credReq); err != nil {
          if !errors.IsAlreadyExists(err) {
              setupLog.Error(err, "unable to create CredRequest")
              os.Exit(1)
          }
      }
    4. Ensure your Operator can wait for a Secret object to appear from the CCO, as shown in the following example, which is called alongside the other items you are reconciling in your Operator:

      Example 5.25. Example wait for Secret object

      // WaitForSecret takes a Kubernetes client, a namespace, and a secret name as arguments
      // It waits until the secret object with the given name exists in the given namespace
      // It returns the secret object or an error if the timeout is exceeded
      func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) {
        // set a timeout of 10 minutes
        timeout := time.After(10 * time.Minute) 1
      
        // set a polling interval of 10 seconds
        ticker := time.NewTicker(10 * time.Second)
      
        // loop until the timeout or the secret is found
        for {
           select {
           case <-timeout:
              // timeout is exceeded, return an error
              return nil, fmt.Errorf("timed out waiting for secret %s in namespace %s", name, namespace)
                 // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS
           case <-ticker.C:
              // polling interval is reached, try to get the secret
              secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{})
              if err != nil {
                 if errors.IsNotFound(err) {
                    // secret does not exist yet, continue waiting
                    continue
                 } else {
                    // some other error occurred, return it
                    return nil, err
                 }
              } else {
                 // secret is found, return it
                 return secret, nil
              }
           }
        }
      }
      1
      The timeout value is based on an estimate of how fast the CCO might detect an added CredentialsRequest object and generate a Secret object. You might consider lowering the timeout or creating custom feedback for cluster administrators who might be wondering why the Operator is not yet accessing the cloud resources.
    5. Read the secret created by the CCO from the CredentialsRequest object to authenticate with Azure and receive the necessary credentials.
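
      The following is a minimal sketch of this step. It assumes the CCO-generated secret exposes the fields listed earlier as data keys, for example azure_client_id, and uses the azidentity module from the Azure SDK for Go. The function name and the example armresources client are illustrative only:

      import (
          "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
          "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources"
          "k8s.io/client-go/kubernetes"
      )

      // newAzureClient waits for the CCO-generated secret, builds a Workload ID
      // credential from its fields, and returns an example Azure management client.
      func newAzureClient(clientset kubernetes.Interface, namespace, secretName string) (*armresources.ResourceGroupsClient, error) {
          secret, err := WaitForSecret(clientset, namespace, secretName)
          if err != nil {
              return nil, err
          }

          cred, err := azidentity.NewWorkloadIdentityCredential(&azidentity.WorkloadIdentityCredentialOptions{
              ClientID:      string(secret.Data["azure_client_id"]),
              TenantID:      string(secret.Data["azure_tenant_id"]),
              TokenFilePath: string(secret.Data["azure_federated_token_file"]),
          })
          if err != nil {
              return nil, err
          }

          // Any Azure SDK client that accepts a token credential can be constructed here.
          return armresources.NewResourceGroupsClient(string(secret.Data["azure_subscription_id"]), cred, nil)
      }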

5.11. Validating Operators using the scorecard tool

As an Operator author, you can use the scorecard tool in the Operator SDK to do the following tasks:

  • Validate that your Operator project is free of syntax errors and packaged correctly
  • Review suggestions about ways you can improve your Operator
Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.11.1. About the scorecard tool

While the Operator SDK bundle validate subcommand can validate local bundle directories and remote bundle images for content and structure, you can use the scorecard command to run tests on your Operator based on a configuration file and test images. These tests are implemented within test images that are configured and constructed to be executed by the scorecard.

The scorecard assumes it is run with access to a configured Kubernetes cluster, such as OpenShift Container Platform. The scorecard runs each test within a pod, from which pod logs are aggregated and test results are sent to the console. The scorecard has built-in basic and Operator Lifecycle Manager (OLM) tests and also provides a means to execute custom test definitions.

Scorecard workflow

  1. Create all resources required by any related custom resources (CRs) and the Operator
  2. Create a proxy container in the deployment of the Operator to record calls to the API server and run tests
  3. Examine parameters in the CRs

The scorecard tests make no assumptions as to the state of the Operator being tested. Creating Operators and CRs for an Operator is beyond the scope of the scorecard itself. Scorecard tests can, however, create whatever resources they require if the tests are designed for resource creation.

scorecard command syntax

$ operator-sdk scorecard <bundle_dir_or_image> [flags]

The scorecard requires a positional argument for either the on-disk path to your Operator bundle or the name of a bundle image.

For further information about the flags, run:

$ operator-sdk scorecard -h

5.11.2. Scorecard configuration

The scorecard tool uses a configuration that allows you to define internal plugins, as well as several global configuration options. Tests are driven by a configuration file named config.yaml, which is generated by the make bundle command and located in your bundle/ directory:

./bundle
...
└── tests
    └── scorecard
        └── config.yaml

Example scorecard configuration file

kind: Configuration
apiVersion: scorecard.operatorframework.io/v1alpha3
metadata:
  name: config
stages:
- parallel: true
  tests:
  - image: quay.io/operator-framework/scorecard-test:v1.31.0
    entrypoint:
    - scorecard-test
    - basic-check-spec
    labels:
      suite: basic
      test: basic-check-spec-test
  - image: quay.io/operator-framework/scorecard-test:v1.31.0
    entrypoint:
    - scorecard-test
    - olm-bundle-validation
    labels:
      suite: olm
      test: olm-bundle-validation-test

The configuration file defines each test that scorecard can execute. The following fields of the scorecard configuration file define each test:

Configuration field | Description

image

Test container image name that implements a test

entrypoint

Command and arguments that are invoked in the test image to execute a test

labels

Scorecard-defined or custom labels that select which tests to run

5.11.3. Built-in scorecard tests

The scorecard ships with pre-defined tests that are arranged into suites: the basic test suite and the Operator Lifecycle Manager (OLM) suite.

Table 5.20. Basic test suite
Test | Description | Short name

Spec Block Exists

This test checks the custom resource (CR) created in the cluster to make sure that all CRs have a spec block.

basic-check-spec-test

Table 5.21. OLM test suite
Test | Description | Short name

Bundle Validation

This test validates the bundle manifests found in the bundle that is passed into scorecard. If the bundle contents contain errors, then the test result output includes the validator log as well as error messages from the validation library.

olm-bundle-validation-test

Provided APIs Have Validation

This test verifies that the custom resource definitions (CRDs) for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR.

olm-crds-have-validation-test

Owned CRDs Have Resources Listed

This test makes sure that the CRDs for each CR provided via the cr-manifest option have a resources subsection in the owned CRDs section of the ClusterServiceVersion (CSV). If the test detects used resources that are not listed in the resources section, it lists them in the suggestions at the end of the test. Users are required to fill out the resources section after initial code generation for this test to pass.

olm-crds-have-resources-test

Spec Fields With Descriptors

This test verifies that every field in the spec sections of the CRs has a corresponding descriptor listed in the CSV.

olm-spec-descriptors-test

Status Fields With Descriptors

This test verifies that every field in the status sections of the CRs has a corresponding descriptor listed in the CSV.

olm-status-descriptors-test

5.11.4. Running the scorecard tool

A default set of Kustomize files is generated by the Operator SDK after you run the init command. The default bundle/tests/scorecard/config.yaml file that is generated can be used immediately to run the scorecard tool against your Operator, or you can modify this file to your test specifications.

Prerequisites

  • Operator project generated by using the Operator SDK

Procedure

  1. Generate or regenerate your bundle manifests and metadata for your Operator:

    $ make bundle

    This command automatically adds scorecard annotations to your bundle metadata, which are used by the scorecard command to run tests.

  2. Run the scorecard against the on-disk path to your Operator bundle or the name of a bundle image:

    $ operator-sdk scorecard <bundle_dir_or_image>

5.11.5. Scorecard output

The --output flag for the scorecard command specifies the scorecard results output format: either text or json.

Example 5.26. Example JSON output snippet

{
  "apiVersion": "scorecard.operatorframework.io/v1alpha3",
  "kind": "TestList",
  "items": [
    {
      "kind": "Test",
      "apiVersion": "scorecard.operatorframework.io/v1alpha3",
      "spec": {
        "image": "quay.io/operator-framework/scorecard-test:v1.31.0",
        "entrypoint": [
          "scorecard-test",
          "olm-bundle-validation"
        ],
        "labels": {
          "suite": "olm",
          "test": "olm-bundle-validation-test"
        }
      },
      "status": {
        "results": [
          {
            "name": "olm-bundle-validation",
            "log": "time=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found metadata directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Found annotations file\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test\n",
            "state": "pass"
          }
        ]
      }
    }
  ]
}

Example 5.27. Example text output snippet

--------------------------------------------------------------------------------
Image:      quay.io/operator-framework/scorecard-test:v1.31.0
Entrypoint: [scorecard-test olm-bundle-validation]
Labels:
	"suite":"olm"
	"test":"olm-bundle-validation-test"
Results:
	Name: olm-bundle-validation
	State: pass
	Log:
		time="2020-07-15T03:19:02Z" level=debug msg="Found manifests directory" name=bundle-test
		time="2020-07-15T03:19:02Z" level=debug msg="Found metadata directory" name=bundle-test
		time="2020-07-15T03:19:02Z" level=debug msg="Getting mediaType info from manifests directory" name=bundle-test
		time="2020-07-15T03:19:02Z" level=info msg="Found annotations file" name=bundle-test
		time="2020-07-15T03:19:02Z" level=info msg="Could not find optional dependencies file" name=bundle-test
Note

The output format spec matches the Test type layout.
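
Because the JSON output follows the v1alpha3 scorecard types from the operator-framework/api module, it can be consumed programmatically, for example by a CI job. The following is a minimal sketch, not part of the scorecard tooling itself, that reads the JSON output from standard input and fails if any test did not pass; the program name and invocation are arbitrary:

package main

import (
	"encoding/json"
	"log"
	"os"

	scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
)

func main() {
	// Decode scorecard JSON output piped to stdin, for example:
	//   $ operator-sdk scorecard <bundle_dir_or_image> -o json | go run .
	var list scapiv1alpha3.TestList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		log.Fatalf("failed to decode scorecard output: %v", err)
	}

	for _, item := range list.Items {
		for _, result := range item.Status.Results {
			if result.State != scapiv1alpha3.PassState {
				log.Fatalf("scorecard test %q finished with state %q", result.Name, result.State)
			}
		}
	}
	log.Println("all scorecard tests passed")
}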

5.11.6. Selecting tests

Scorecard tests are selected by setting the --selector CLI flag to a set of label strings. If a selector flag is not supplied, then all of the tests within the scorecard configuration file are run.

Tests are run serially with test results being aggregated by the scorecard and written to standard output, or stdout.

Procedure

  1. To select a single test, for example basic-check-spec-test, specify the test by using the --selector flag:

    $ operator-sdk scorecard <bundle_dir_or_image> \
        -o text \
        --selector=test=basic-check-spec-test
  2. To select a suite of tests, for example olm, specify a label that is used by all of the OLM tests:

    $ operator-sdk scorecard <bundle_dir_or_image> \
        -o text \
        --selector=suite=olm
  3. To select multiple tests, specify the test names by using the --selector flag with the following syntax:

    $ operator-sdk scorecard <bundle_dir_or_image> \
        -o text \
        --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'

5.11.7. Enabling parallel testing

As an Operator author, you can define separate stages for your tests using the scorecard configuration file. Stages run sequentially in the order they are defined in the configuration file. A stage contains a list of tests and a configurable parallel setting.

By default, or when a stage explicitly sets parallel to false, tests in a stage are run sequentially in the order they are defined in the configuration file. Running tests one at a time is helpful to guarantee that no two tests interact and conflict with each other.

However, if tests are designed to be fully isolated, they can be parallelized.

Procedure

  • To run a set of isolated tests in parallel, include them in the same stage and set parallel to true:

    apiVersion: scorecard.operatorframework.io/v1alpha3
    kind: Configuration
    metadata:
      name: config
    stages:
    - parallel: true 1
      tests:
      - entrypoint:
        - scorecard-test
        - basic-check-spec
        image: quay.io/operator-framework/scorecard-test:v1.31.0
        labels:
          suite: basic
          test: basic-check-spec-test
      - entrypoint:
        - scorecard-test
        - olm-bundle-validation
        image: quay.io/operator-framework/scorecard-test:v1.31.0
        labels:
          suite: olm
          test: olm-bundle-validation-test
    1
    Enables parallel testing

    All tests in a parallel stage are executed simultaneously, and scorecard waits for all of them to finish before proceeding to the next stage. This can make your tests run much faster.

5.11.8. Custom scorecard tests

The scorecard tool can run custom tests that follow these mandated conventions:

  • Tests are implemented within a container image
  • Tests accept an entrypoint, which includes a command and arguments
  • Tests produce v1alpha3 scorecard output in JSON format with no extraneous logging in the test output
  • Tests can obtain the bundle contents at a shared mount point of /bundle
  • Tests can access the Kubernetes API using an in-cluster client connection

Writing custom tests in other programming languages is possible if the test image follows the above guidelines.

The following example shows a custom test image written in Go:

Example 5.28. Example custom scorecard test

// Copyright 2020 The Operator-SDK Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"

	scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
	apimanifests "github.com/operator-framework/api/pkg/manifests"
)

// This is the custom scorecard test example binary
// As with the Red Hat scorecard test image, the bundle that is under
// test is expected to be mounted so that tests can inspect the
// bundle contents as part of their test implementations.
// The actual test to be run is named, and that name is passed
// as an argument to this binary.  This argument mechanism allows
// this binary to run various tests all from within a single
// test image.

const PodBundleRoot = "/bundle"

func main() {
	entrypoint := os.Args[1:]
	if len(entrypoint) == 0 {
		log.Fatal("Test name argument is required")
	}

	// Read the pod's untar'd bundle from a well-known path.
	cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot)
	if err != nil {
		log.Fatal(err.Error())
	}

	var result scapiv1alpha3.TestStatus

	// Names of the custom tests which would be passed in the
	// `operator-sdk` command.
	switch entrypoint[0] {
	case CustomTest1Name:
		result = CustomTest1(cfg)
	case CustomTest2Name:
		result = CustomTest2(cfg)
	default:
		result = printValidTests()
	}

	// Convert the scapiv1alpha3.TestStatus result to JSON.
	prettyJSON, err := json.MarshalIndent(result, "", "    ")
	if err != nil {
		log.Fatal("Failed to generate json", err)
	}
	fmt.Printf("%s\n", string(prettyJSON))

}

// printValidTests prints out the full list of test names to give a hint to the end user on what the valid tests are.
func printValidTests() scapiv1alpha3.TestStatus {
	result := scapiv1alpha3.TestResult{}
	result.State = scapiv1alpha3.FailState
	result.Errors = make([]string, 0)
	result.Suggestions = make([]string, 0)

	str := fmt.Sprintf("Valid tests for this image include: %s %s",
		CustomTest1Name,
		CustomTest2Name)
	result.Errors = append(result.Errors, str)
	return scapiv1alpha3.TestStatus{
		Results: []scapiv1alpha3.TestResult{result},
	}
}

const (
	CustomTest1Name = "customtest1"
	CustomTest2Name = "customtest2"
)

// Define any operator specific custom tests here.
// CustomTest1 and CustomTest2 are example test functions. Relevant operator specific
// test logic is to be implemented similarly.

func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
	r := scapiv1alpha3.TestResult{}
	r.Name = CustomTest1Name
	r.State = scapiv1alpha3.PassState
	r.Errors = make([]string, 0)
	r.Suggestions = make([]string, 0)
	almExamples := bundle.CSV.GetAnnotations()["alm-examples"]
	if almExamples == "" {
		fmt.Println("no alm-examples in the bundle CSV")
	}

	return wrapResult(r)
}

func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
	r := scapiv1alpha3.TestResult{}
	r.Name = CustomTest2Name
	r.State = scapiv1alpha3.PassState
	r.Errors = make([]string, 0)
	r.Suggestions = make([]string, 0)
	almExamples := bundle.CSV.GetAnnotations()["alm-examples"]
	if almExamples == "" {
		fmt.Println("no alm-examples in the bundle CSV")
	}
	return wrapResult(r)
}

func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus {
	return scapiv1alpha3.TestStatus{
		Results: []scapiv1alpha3.TestResult{r},
	}
}

5.12. Validating Operator bundles

As an Operator author, you can run the bundle validate command in the Operator SDK to validate the content and format of an Operator bundle. You can run the command on a remote Operator bundle image or a local Operator bundle directory.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.12.1. About the bundle validate command

While the Operator SDK scorecard command can run tests on your Operator based on a configuration file and test images, the bundle validate subcommand can validate local bundle directories and remote bundle images for content and structure.

bundle validate command syntax

$ operator-sdk bundle validate <bundle_dir_or_image> <flags>

Note

The bundle validate command runs automatically when you build your bundle using the make bundle command.

Bundle images are pulled from a remote registry and built locally before they are validated. Local bundle directories must contain Operator metadata and manifests. The bundle metadata and manifests must have a structure similar to the following bundle layout:

Example bundle layout

./bundle
  ├── manifests
  │   ├── cache.my.domain_memcacheds.yaml
  │   └── memcached-operator.clusterserviceversion.yaml
  └── metadata
      └── annotations.yaml

Bundle tests pass validation and finish with an exit code of 0 if no errors are detected.

Example output

INFO[0000] All validation tests have completed successfully

Tests fail validation and finish with an exit code of 1 if errors are detected.

Example output

ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD "cache.example.com/v1alpha1, Kind=Memcached" is present in bundle "" but not defined in CSV

Bundle tests that result in warnings can still pass validation with an exit code of 0 as long as no errors are detected. Tests only fail on errors.

Example output

WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found
INFO[0000] All validation tests have completed successfully

For further information about the bundle validate subcommand, run:

$ operator-sdk bundle validate -h

5.12.2. Built-in bundle validate tests

The Operator SDK ships with pre-defined validators arranged into suites. If you run the bundle validate command without specifying a validator, the default test runs. The default test verifies that a bundle adheres to the specifications defined by the Operator Framework community. For more information, see "Bundle format".

You can run optional validators to test for issues such as OperatorHub compatibility or deprecated Kubernetes APIs. Optional validators always run in addition to the default test.

bundle validate command syntax for optional test suites

$ operator-sdk bundle validate <bundle_dir_or_image>
  --select-optional <test_label>

Table 5.22. Additional bundle validate validators
Name | Description | Label

Operator Framework

This validator tests an Operator bundle against the entire suite of validators provided by the Operator Framework.

suite=operatorframework

OperatorHub

This validator tests an Operator bundle for compatibility with OperatorHub.

name=operatorhub

Good Practices

This validator tests whether an Operator bundle complies with good practices as defined by the Operator Framework. It checks for issues, such as an empty CRD description or unsupported Operator Lifecycle Manager (OLM) resources.

name=good-practices

5.12.3. Running the bundle validate command

The default validator runs a test every time you enter the bundle validate command. You can run optional validators using the --select-optional flag. Optional validators run tests in addition to the default test.

Prerequisites

  • Operator project generated by using the Operator SDK

Procedure

  1. If you want to run the default validator against a local bundle directory, enter the following command from your Operator project directory:

    $ operator-sdk bundle validate ./bundle
  2. If you want to run the default validator against a remote Operator bundle image, enter the following command:

    $ operator-sdk bundle validate \
      <bundle_registry>/<bundle_image_name>:<tag>

    where:

    <bundle_registry>
    Specifies the registry where the bundle is hosted, such as quay.io/example.
    <bundle_image_name>
    Specifies the name of the bundle image, such as memcached-operator.
    <tag>

    Specifies the tag of the bundle image, such as v1.31.0.

    Note

    If you want to validate an Operator bundle image, you must host your image in a remote registry. The Operator SDK pulls the image and builds it locally before running tests. The bundle validate command does not support testing local bundle images.

  3. If you want to run an additional validator against an Operator bundle, enter the following command:

    $ operator-sdk bundle validate \
      <bundle_dir_or_image> \
      --select-optional <test_label>

    where:

    <bundle_dir_or_image>
    Specifies the local bundle directory or remote bundle image, such as ~/projects/memcached or quay.io/example/memcached-operator:v1.31.0.
    <test_label>

    Specifies the name of the validator you want to run, such as name=good-practices.

    Example output

    ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object
    WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD "k8sevents.k8s.k8sevent.com" has an empty description

5.12.4. Validating your Operator’s multi-platform readiness

You can validate your Operator’s multi-platform readiness by running the bundle validate command. The command verifies that your Operator project meets the following conditions:

  • Your Operator’s manager image supports the platforms labeled in the cluster service version (CSV) file.
  • Your Operator’s CSV has labels for the supported platforms for Operator Lifecycle Manager (OLM) and OperatorHub.

Procedure

  • Run the following command to validate your Operator project for multiple architecture readiness:

    $ operator-sdk bundle validate ./bundle \
      --select-optional name=multiarch

    Example validation message

    INFO[0020] All validation tests have completed successfully

    Example error message for missing CSV labels in the manager image

    ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]
    ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]
    ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]
    ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]

    Example error message for missing OperatorHub flags

    WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): ["amd64" "arm64" "ppc64le" "s390x"]. Be aware that your Operator manager image ["quay.io/example-org/test-operator:v1alpha1"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures

5.13. High-availability or single-node cluster detection and support

An OpenShift Container Platform cluster can be configured in high-availability (HA) mode, which uses multiple nodes, or in non-HA mode, which uses a single node. A single-node cluster, also known as single-node OpenShift, is likely to have more conservative resource constraints. Therefore, it is important that Operators installed on a single-node cluster can adjust accordingly and still run well.

By accessing the cluster high-availability mode API provided in OpenShift Container Platform, Operator authors can use the Operator SDK to enable their Operator to detect a cluster’s infrastructure topology, either HA or non-HA mode. Custom Operator logic can be developed that uses the detected cluster topology to automatically switch the resource requirements, both for the Operator and for any Operands or workloads it manages, to a profile that best fits the topology.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.13.1. About the cluster high-availability mode API

OpenShift Container Platform provides a cluster high-availability mode API that can be used by Operators to help detect infrastructure topology. The Infrastructure API holds cluster-wide information regarding infrastructure. Operators managed by Operator Lifecycle Manager (OLM) can use the Infrastructure API if they need to configure an Operand or managed workload differently based on the high-availability mode.

In the Infrastructure API, the infrastructureTopology status expresses the expectations for infrastructure services that do not run on control plane nodes, usually indicated by a node selector for a role value other than master. The controlPlaneTopology status expresses the expectations for Operands that normally run on control plane nodes.

The default setting for either status is HighlyAvailable, which represents the behavior Operators have in multiple node clusters. The SingleReplica setting is used in single-node clusters, also known as single-node OpenShift, and indicates that Operators should not configure their Operands for high-availability operation.

The OpenShift Container Platform installer sets the controlPlaneTopology and infrastructureTopology status fields based on the replica counts for the cluster when it is created, according to the following rules:

  • When the control plane replica count is less than 3, the controlPlaneTopology status is set to SingleReplica. Otherwise, it is set to HighlyAvailable.
  • When the worker replica count is 0, the control plane nodes are also configured as workers. Therefore, the infrastructureTopology status will be the same as the controlPlaneTopology status.
  • When the worker replica count is 1, the infrastructureTopology is set to SingleReplica. Otherwise, it is set to HighlyAvailable.

5.13.2. Example API usage in Operator projects

As an Operator author, you can update your Operator project to access the Infrastructure API by using normal Kubernetes constructs and the controller-runtime library, as shown in the following examples:

controller-runtime library example

// Simple query
nn := types.NamespacedName{
	Name: "cluster",
}
infraConfig := &configv1.Infrastructure{}
err = crClient.Get(context.Background(), nn, infraConfig)
if err != nil {
	return err
}
fmt.Printf("using crclient: %v\n", infraConfig.Status.ControlPlaneTopology)
fmt.Printf("using crclient: %v\n", infraConfig.Status.InfrastructureTopology)

Kubernetes constructs example

operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second)
infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister()
infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), "cluster", metav1.GetOptions{})
if err != nil {
	return err
}
// fmt.Printf("%v\n", infraConfig)
fmt.Printf("%v\n", infraConfig.Status.ControlPlaneTopology)
fmt.Printf("%v\n", infraConfig.Status.InfrastructureTopology)
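
The detected topology can then drive how the Operator sizes itself and its Operands. The following is a minimal sketch under the assumption that the Operand is sized only by replica count; the function name and values are illustrative:

import (
	configv1 "github.com/openshift/api/config/v1"
)

// chooseReplicas returns an example replica count for an Operand based on the
// detected infrastructure topology. The values are illustrative only.
func chooseReplicas(infraConfig *configv1.Infrastructure) int32 {
	if infraConfig.Status.InfrastructureTopology == configv1.SingleReplicaTopologyMode {
		// Single-node OpenShift (non-HA): run a single replica.
		return 1
	}
	// HighlyAvailable: run multiple replicas spread across nodes.
	return 3
}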

5.14. Configuring built-in monitoring with Prometheus

This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for authors of Go-based and Ansible-based Operators.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.14.1. Prometheus Operator support

Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.

Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.

5.14.2. Exposing custom metrics for Go-based Operators

As an Operator author, you can publish custom metrics by using the global Prometheus registry from the controller-runtime/pkg/metrics library.

Prerequisites

  • Go-based Operator generated using the Operator SDK
  • Prometheus Operator, which is deployed by default on OpenShift Container Platform clusters

Procedure

  1. In your Operator SDK project, uncomment the following line in the config/default/kustomization.yaml file:

    ../prometheus
  2. Create a custom controller class to publish additional metrics from the Operator. The following example declares the widgets and widgetFailures collectors as global variables, and then registers them with the global Prometheus registry in the init() function of the controller’s package:

    Example 5.29. controllers/memcached_controller_test_metrics.go file

    package controllers
    
    import (
    	"github.com/prometheus/client_golang/prometheus"
    	"sigs.k8s.io/controller-runtime/pkg/metrics"
    )
    
    
    var (
        widgets = prometheus.NewCounter(
            prometheus.CounterOpts{
                Name: "widgets_total",
                Help: "Number of widgets processed",
            },
        )
        widgetFailures = prometheus.NewCounter(
            prometheus.CounterOpts{
                Name: "widget_failures_total",
                Help: "Number of failed widgets",
            },
        )
    )
    
    func init() {
        // Register custom metrics with the global prometheus registry
        metrics.Registry.MustRegister(widgets, widgetFailures)
    }
  3. Record to these collectors from any part of the reconcile loop in the main controller class, which determines the business logic for the metric:

    Example 5.30. controllers/memcached_controller.go file

    func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    	...
    	...
    	// Add metrics
    	widgets.Inc()
    	widgetFailures.Inc()
    
    	return ctrl.Result{}, nil
    }
  4. Build and push the Operator:

    $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
  5. Deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
  6. Create role and role binding definitions to allow the service monitor of the Operator to be scraped by the Prometheus instance of the OpenShift Container Platform cluster.

    Roles must be assigned so that service accounts have the permissions to scrape the metrics of the namespace:

    Example 5.31. config/prometheus/role.yaml role

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: prometheus-k8s-role
      namespace: memcached-operator-system
    rules:
      - apiGroups:
          - ""
        resources:
          - endpoints
          - pods
          - services
          - nodes
          - secrets
        verbs:
          - get
          - list
          - watch

    Example 5.32. config/prometheus/rolebinding.yaml role binding

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: prometheus-k8s-rolebinding
      namespace: memcached-operator-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: prometheus-k8s-role
    subjects:
      - kind: ServiceAccount
        name: prometheus-k8s
        namespace: openshift-monitoring
  7. Apply the roles and role bindings for the deployed Operator:

    $ oc apply -f config/prometheus/role.yaml
    $ oc apply -f config/prometheus/rolebinding.yaml
  8. Set the labels for the namespace that you want to scrape, which enables OpenShift cluster monitoring for that namespace:

    $ oc label namespace <operator_namespace> openshift.io/cluster-monitoring="true"

Verification

  • Query and view the metrics in the OpenShift Container Platform web console. You can use the names that were set in the custom controller class, for example widgets_total and widget_failures_total.

5.14.3. Exposing custom metrics for Ansible-based Operators

As an Operator author creating Ansible-based Operators, you can use the Operator SDK’s osdk_metric module to expose custom Operator and Operand metrics, emit events, and support logging.

Prerequisites

  • Ansible-based Operator generated using the Operator SDK
  • Prometheus Operator, which is deployed by default on OpenShift Container Platform clusters

Procedure

  1. Generate an Ansible-based Operator. This example uses a testmetrics.com domain:

    $ operator-sdk init \
        --plugins=ansible \
        --domain=testmetrics.com
  2. Create a metrics API. This example uses a kind named Testmetrics:

    $ operator-sdk create api \
        --group metrics \
        --version v1 \
        --kind Testmetrics \
        --generate-role
  3. Edit the roles/testmetrics/tasks/main.yml file and use the osdk_metrics module to create custom metrics for your Operator project:

    Example 5.33. Example roles/testmetrics/tasks/main.yml file

    ---
    # tasks file for Memcached
    - name: start k8sstatus
      k8s:
        definition:
          kind: Deployment
          apiVersion: apps/v1
          metadata:
            name: '{{ ansible_operator_meta.name }}-memcached'
            namespace: '{{ ansible_operator_meta.namespace }}'
          spec:
            replicas: "{{size}}"
            selector:
              matchLabels:
                app: memcached
            template:
              metadata:
                labels:
                  app: memcached
              spec:
                containers:
                - name: memcached
                  command:
                  - memcached
                  - -m=64
                  - -o
                  - modern
                  - -v
                  image: "docker.io/memcached:1.4.36-alpine"
                  ports:
                    - containerPort: 11211
    
    - osdk_metric:
        name: my_thing_counter
        description: This metric counts things
        counter: {}
    
    - osdk_metric:
        name: my_counter_metric
        description: Add 3.14 to the counter
        counter:
          increment: yes
    
    - osdk_metric:
        name: my_gauge_metric
        description: Create my gauge and set it to 2.
        gauge:
          set: 2
    
    - osdk_metric:
        name: my_histogram_metric
        description: Observe my histogram
        histogram:
          observe: 2
    
    - osdk_metric:
        name: my_summary_metric
        description: Observe my summary
        summary:
          observe: 2

Verification

  1. Run your Operator on a cluster. For example, to use the "run as a deployment" method:

    1. Build the Operator image and push it to a registry:

      $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
    2. Install the Operator on a cluster:

      $ make install
    3. Deploy the Operator:

      $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
  2. Create a Testmetrics custom resource (CR):

    1. Define the CR spec:

      Example 5.34. Example config/samples/metrics_v1_testmetrics.yaml file

      apiVersion: metrics.testmetrics.com/v1
      kind: Testmetrics
      metadata:
        name: testmetrics-sample
      spec:
        size: 1
    2. Create the object:

      $ oc create -f config/samples/metrics_v1_testmetrics.yaml
  3. Get the pod details:

    $ oc get pods

    Example output

    NAME                                    READY   STATUS    RESTARTS   AGE
    ansiblemetrics-controller-manager-<id>  2/2     Running   0          149m
    testmetrics-sample-memcached-<id>       1/1     Running   0          147m

  4. Get the endpoint details:

    $ oc get ep

    Example output

    NAME                                                ENDPOINTS          AGE
    ansiblemetrics-controller-manager-metrics-service   10.129.2.70:8443   150m

  5. Request a custom metrics token:

    $ token=`oc create token prometheus-k8s -n openshift-monitoring`
  6. Check the metrics values:

    1. Check the my_counter_metric value:

      $ oc exec ansiblemetrics-controller-manager-<id> -- curl -k \
          -H "Authorization: Bearer $token" \
          'https://10.129.2.70:8443/metrics' | grep my_counter

      Example output

      HELP my_counter_metric Add 3.14 to the counter
      TYPE my_counter_metric counter
      my_counter_metric 2

    2. Check the my_gauge_metric value:

      $ oc exec ansiblemetrics-controller-manager-<id> -- curl -k \
          -H "Authorization: Bearer $token" \
          'https://10.129.2.70:8443/metrics' | grep gauge

      Example output

      HELP my_gauge_metric Create my gauge and set it to 2.

    3. Check the my_histogram_metric and my_summary_metric values:

      $ oc exec ansiblemetrics-controller-manager-<id> -- curl -k \
          -H "Authorization: Bearer $token" \
          'https://10.129.2.70:8443/metrics' | grep Observe

      Example output

      HELP my_histogram_metric Observe my histogram
      HELP my_summary_metric Observe my summary

5.15. Configuring leader election

During the lifecycle of an Operator, more than one instance might be running at any given time, for example when an upgrade for the Operator is being rolled out. In such a scenario, leader election is necessary to avoid contention between multiple Operator instances. Leader election ensures that only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.

There are two different leader election implementations to choose from, each with its own trade-off:

Leader-for-life
The leader pod only gives up leadership, using garbage collection, when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders, a state also known as split brain. However, this method can be subject to a delay in electing a new leader. For example, when the leader pod is on an unresponsive or partitioned node, you can specify node.kubernetes.io/unreachable and node.kubernetes.io/not-ready tolerations on the leader pod and use the tolerationSeconds value to dictate how long it takes for the leader pod to be deleted from the node and step down. These tolerations are added to the pod by default on admission with a tolerationSeconds value of 5 minutes. See the Leader-for-life Go documentation for more.
Leader-with-lease
The leader pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.

By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.15.1. Operator leader election examples

The following examples illustrate how to use the two leader election options for an Operator, Leader-for-life and Leader-with-lease.

5.15.1.1. Leader-for-life election

With the Leader-for-life election implementation, a call to leader.Become() blocks the Operator as it retries until it can become the leader by creating the config map named memcached-operator-lock:

import (
  ...
  "github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
  ...
  err = leader.Become(context.TODO(), "memcached-operator-lock")
  if err != nil {
    log.Error(err, "Failed to retry for leader lock")
    os.Exit(1)
  }
  ...
}

If the Operator is not running inside a cluster, leader.Become() simply returns without error to skip the leader election because it cannot detect the namespace of the Operator.

5.15.1.2. Leader-with-lease election

The Leader-with-lease implementation can be enabled using the Manager Options for leader election:

import (
  ...
  "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
  ...
  opts := manager.Options{
    ...
    LeaderElection: true,
    LeaderElectionID: "memcached-operator-lock",
  }
  mgr, err := manager.New(cfg, opts)
  ...
}

When the Operator is not running in a cluster, the Manager returns an error when starting because it cannot detect the namespace of the Operator to create the config map for leader election. You can override this namespace by setting the LeaderElectionNamespace option for the Manager.
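
For example, a minimal sketch of overriding the namespace, which is mainly useful when running the Operator locally, might look like the following; the namespace value is an example only:

import (
  ...
  "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
  ...
  opts := manager.Options{
    ...
    LeaderElection:          true,
    LeaderElectionID:        "memcached-operator-lock",
    // Required when running outside the cluster, where the Operator
    // namespace cannot be detected automatically.
    LeaderElectionNamespace: "memcached-operator-system",
  }
  mgr, err := manager.New(cfg, opts)
  ...
}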

5.16. Configuring Operator projects for multi-platform support

Operator projects that support multiple architectures and operating systems, or platforms, can run on more Kubernetes and OpenShift Container Platform clusters than Operator projects that support only a single platform. Example architectures include amd64, arm64, ppc64le, and s390x. Example operating systems include Linux and Windows.

Perform the following actions to ensure your Operator project can run on multiple OpenShift Container Platform platforms:

  • Build a manifest list that specifies the platforms that your Operator supports.
  • Set your Operator’s node affinity to support multi-architecture compute machines.
Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.16.1. Building a manifest list of the platforms your Operator supports

You can use the make docker-buildx command to build a manifest list of the platforms supported by your Operator and operands. A manifest list references specific image manifests for one or more architectures. An image manifest specifies the platforms that an image supports.

For more information, see OpenContainers Image Index Spec or Image Manifest v2, Schema 2.

Important

If your Operator project deploys an application or other workload resources, the following procedure assumes the application’s multi-platform images are built during the application release process.

Prerequisites

  • An Operator project built using the Operator SDK version 1.31.0 or later
  • Docker installed

Procedure

  1. Inspect the image manifests of your Operator and operands to find which platforms your Operator project can support. Run the following command to inspect an image manifest:

    $ docker manifest inspect <image_manifest> 1
    1
    Specifies an image manifest, such as redhat/ubi9:latest.

    The platforms that your Operator and operands mutually support determine the platform compatibility of your Operator project.

    Example output

    {
        "manifests": [
            {
                "digest": "sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda",
                "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                "platform": {
                    "architecture": "amd64",
                    "os": "linux"
                },
                "size": 528
            },
    ...
            {
                "digest": "sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281",
                "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                "platform": {
                    "architecture": "arm64",
                    "os": "linux",
                    "variant": "v8"
                },
                "size": 528
            },
    ...

  2. If the previous command does not output platform information, then the specified base image might be a single image instead of an image manifest. You can find which architectures an image supports by running the following command:

    $ docker inspect <image>
  3. For Go-based Operator projects, the Operator SDK explicitly references the amd64 architecture in your project’s Dockerfile. Make the following change to your Dockerfile so that the build arguments are set to the values specified by the platform flag:

    Example Dockerfile

    FROM golang:1.19 as builder
    ARG TARGETOS
    ARG TARGETARCH
    ...
    RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager main.go 1

    1
    Change the GOARCH field from amd64 to $TARGETARCH.
  4. Your Operator project’s makefile defines the PLATFORMS environment variable. If your Operator’s images do not support all of the platforms set by default, edit the variable to specify the supported platforms. The following example defines the supported platforms as linux/arm64 and linux/amd64:

    Example makefile

    # ...
    PLATFORMS ?= linux/arm64,linux/amd64 1
    .PHONY: docker-buildx
    # ...

    1
    The following PLATFORMS values are set by default: linux/arm64, linux/amd64, linux/s390x, and linux/ppc64le.

    When you run the make docker-buildx command to generate a manifest list, the Operator SDK creates an image manifest for each of the platforms specified by the PLATFORMS variable.

  5. Run the following command from your Operator project directory to build your manager image. Running the command builds a manager image with multi-platform support and pushes the manifest list to your registry.

    $ make docker-buildx \
      IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>
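
    The docker-buildx target invoked above is scaffolded into your project’s Makefile by the Operator SDK. The following is a simplified sketch of what such a target typically contains, shown only to illustrate how the PLATFORMS and IMG variables are consumed; refer to the target scaffolded in your own project for the authoritative recipe, and note that recipe lines must be indented with tabs:

    # Simplified illustration of a cross-platform build target (your scaffolded target might differ)
    PLATFORMS ?= linux/arm64,linux/amd64,linux/s390x,linux/ppc64le
    .PHONY: docker-buildx
    docker-buildx:
    	- docker buildx create --name operator-builder    # create a builder instance if one does not already exist
    	docker buildx use operator-builder
    	docker buildx build --push --platform=$(PLATFORMS) --tag $(IMG) .
    	- docker buildx rm operator-builder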

5.16.2. About node affinity rules for multi-architecture compute machines and Operator workloads

You must set node affinity rules to ensure your Operator workloads can run on multi-architecture compute machines. Node affinity is a set of rules used by the scheduler to define a pod’s placement. Setting node affinity rules ensures your Operator’s workloads are scheduled to compute machines with compatible architectures.

If your Operator performs better on particular architectures, you can set preferred node affinity rules to schedule pods to machines with the specified architectures.

For more information, see "About clusters with multi-architecture compute machines" and "Controlling pod placement on nodes using node affinity rules".

5.16.2.1. Using required node affinity rules to support multi-architecture compute machines for Operator projects

If you want your Operator to support multi-architecture compute machines, you must define your Operator’s required node affinity rules.

Prerequisites

  • An Operator project created or maintained with Operator SDK 1.31.0 or later.
  • A manifest list defining the platforms your Operator supports.

Procedure

  1. Search your Operator project for Kubernetes manifests that define pod spec and pod template spec objects.

    Important

    Because object type names are not declared in YAML files, look for the mandatory containers field in your Kubernetes manifests. The containers field is required when specifying both pod spec and pod template spec objects.

    You must set node affinity rules in all Kubernetes manifests that define a pod spec or pod template spec, including objects such as Pod, Deployment, DaemonSet, and StatefulSet.

    Example Kubernetes manifest

    apiVersion: v1
    kind: Pod
    metadata:
      name: s1
    spec:
      containers:
        - name: <container_name>
          image: docker.io/<org>/<image_name>

  2. Set the required node affinity rules in the Kubernetes manifests that define pod spec and pod template spec objects, similar to the following example:

    Example Kubernetes manifest

    apiVersion: v1
    kind: Pod
    metadata:
      name: s1
    spec:
      containers:
        - name: <container_name>
          image: docker.io/<org>/<image_name>
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
            nodeSelectorTerms: 2
            - matchExpressions: 3
              - key: kubernetes.io/arch 4
                operator: In
                values:
                - amd64
                - arm64
                - ppc64le
                - s390x
              - key: kubernetes.io/os 5
                operator: In
                values:
                    - linux

    1
    Defines a required rule.
    2
    If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied.
    3
    If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions are satisfied.
    4
    Specifies the architectures defined in the manifest list.
    5
    Specifies the operating systems defined in the manifest list.
  3. Go-based Operator projects that use dynamically created workloads might embed pod spec and pod template spec objects in the Operator’s logic.

    If your project embeds pod spec or pod template spec objects in the Operator’s logic, edit your Operator’s logic similar to the following example. The following example shows how to update a PodSpec object by using the Go API:

    Template: corev1.PodTemplateSpec{
        ...
        Spec: corev1.PodSpec{
            Affinity: &corev1.Affinity{
                NodeAffinity: &corev1.NodeAffinity{
                    RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
                        NodeSelectorTerms: []corev1.NodeSelectorTerm{
                            {
                                MatchExpressions: []corev1.NodeSelectorRequirement{
                                    {
                                        Key:      "kubernetes.io/arch",
                                        Operator: "In",
                                        Values:   []string{"amd64","arm64","ppc64le","s390x"},
                                    },
                                    {
                                        Key:      "kubernetes.io/os",
                                        Operator: "In",
                                        Values:   []string{"linux"},
                                    },
                                },
                            },
                        },
                    },
                },
            },
            SecurityContext: &corev1.PodSecurityContext{
                ...
            },
            Containers: []corev1.Container{{
                ...
            }},
        },

    where:

    RequiredDuringSchedulingIgnoredDuringExecution
    Defines a required rule.
    NodeSelectorTerms
    If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied.
    MatchExpressions
    If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions are satisfied.
    kubernetes.io/arch
    Specifies the architectures defined in the manifest list.
    kubernetes.io/os
    Specifies the operating systems defined in the manifest list.
Warning

If you do not set node affinity rules and a container is scheduled to a compute machine with an incompatible architecture, the pod fails and triggers one of the following events:

CrashLoopBackOff
Occurs when an image manifest’s entry point fails to run and an exec format error message is printed in the logs.
ImagePullBackOff
Occurs when a manifest list does not include a manifest for the architecture where a pod is scheduled or the node affinity terms are set to the wrong values.

5.16.2.2. Using preferred node affinity rules to configure support for multi-architecture compute machines for Operator projects

If your Operator performs better on particular architectures, you can configure preferred node affinity rules to schedule pods to nodes with the specified architectures.

Prerequisites

  • An Operator project created or maintained with Operator SDK 1.31.0 or later.
  • A manifest list defining the platforms your Operator supports.
  • Required node affinity rules are set for your Operator project.

Procedure

  1. Search your Operator project for Kubernetes manifests that define pod spec and pod template spec objects.

    Example Kubernetes manifest

    apiVersion: v1
    kind: Pod
    metadata:
      name: s1
    spec:
      containers:
        - name: <container_name>
          image: docker.io/<org>/<image_name>

  2. Set your Operator’s preferred node affinity rules in the Kubernetes manifests that define pod spec and pod template spec objects, similar to the following example:

    Example Kubernetes manifest

    apiVersion: v1
    kind: Pod
    metadata:
      name: s1
    spec:
      containers:
        - name: <container_name>
          image: docker.io/<org>/<image_name>
      affinity:
          nodeAffinity:
            preferredDuringSchedulingIgnoredDuringExecution: 1
              - preference:
                matchExpressions: 2
                  - key: kubernetes.io/arch 3
                    operator: In 4
                    values:
                    - amd64
                    - arm64
                weight: 90 5

    1
    Defines a preferred rule.
    2
    If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions are satisfied.
    3
    Specifies the architectures defined in the manifest list.
    4
    Specifies an operator. The operator can be In, NotIn, Exists, or DoesNotExist. For example, use the value In to require the label to be present on the node.
    5
    Specifies a weight for the preference; valid values are 1 through 100. The node with the highest total weight is preferred.
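
If your Go-based project embeds pod spec or pod template spec objects in the Operator’s logic, a preferred rule can be expressed with the same Go API that the required-rule example uses earlier in this section. The following is a minimal sketch; the weight and architecture values are illustrative:

    Affinity: &corev1.Affinity{
        NodeAffinity: &corev1.NodeAffinity{
            PreferredDuringSchedulingIgnoredDuringExecution: []corev1.PreferredSchedulingTerm{
                {
                    // Illustrative weight; valid values are 1 through 100.
                    Weight: 90,
                    Preference: corev1.NodeSelectorTerm{
                        MatchExpressions: []corev1.NodeSelectorRequirement{
                            {
                                Key:      "kubernetes.io/arch",
                                Operator: "In",
                                Values:   []string{"amd64", "arm64"},
                            },
                        },
                    },
                },
            },
        },
    },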

5.16.3. Next steps

5.17. Object pruning utility for Go-based Operators

The operator-lib pruning utility lets Go-based Operators clean up, or prune, objects when they are no longer needed. Operator authors can also use the utility to create custom hooks and strategies.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.17.1. About the operator-lib pruning utility

Objects, such as jobs or pods, are created as a normal part of the Operator life cycle. If the cluster administrator or the Operator does not remove these objects, they can remain in the cluster and consume resources.

Previously, the following options were available for pruning unnecessary objects:

  • Operator authors had to create a unique pruning solution for their Operators.
  • Cluster administrators had to clean up objects on their own.

The operator-lib pruning utility removes objects from a Kubernetes cluster for a given namespace. The pruning utility was added in version 0.9.0 of the operator-lib library as part of the Operator Framework.

5.17.2. Pruning utility configuration

The operator-lib pruning utility is written in Go and includes common pruning strategies for Go-based Operators.

Example configuration

cfg = Config{
        log:           logf.Log.WithName("prune"),
        DryRun:        false,
        Clientset:     client,
        LabelSelector: "app=<operator_name>",
        Resources: []schema.GroupVersionKind{
                {Group: "", Version: "", Kind: PodKind},
        },
        Namespaces: []string{"<operator_namespace>"},
        Strategy: StrategyConfig{
                Mode:            MaxCountStrategy,
                MaxCountSetting: 1,
        },
        PreDeleteHook: myhook,
}

The pruning utility configuration file defines pruning actions by using the following fields:

Configuration field | Description

log

Logger used to handle library log messages.

DryRun

Boolean that determines whether resources should be removed. If set to true, the utility runs but does not remove resources.

Clientset

Client-go Kubernetes ClientSet used for Kubernetes API calls.

LabelSelector

Kubernetes label selector expression used to find resources to prune.

Resources

Kubernetes resource kinds. PodKind and JobKind are currently supported.

Namespaces

List of Kubernetes namespaces to search for resources.

Strategy

Pruning strategy to run.

Strategy.Mode

MaxCountStrategy, MaxAgeStrategy, or CustomStrategy are currently supported.

Strategy.MaxCountSetting

Integer value for MaxCountStrategy that specifies how many resources should remain after the pruning utility runs.

Strategy.MaxAgeSetting

Go time.Duration string value, such as 48h, that specifies the age of resources to prune.

Strategy.CustomSettings

Go map of values that can be passed into a custom strategy function.

PreDeleteHook

Optional: Go function to call before pruning a resource.

CustomStrategy

Optional: Go function that implements a custom pruning strategy.

Pruning execution

You can run the pruning action by calling the Execute function on the pruning configuration.

err := cfg.Execute(ctx)

You can also call a pruning action by using a cron package or by calling the pruning utility with a triggering event.
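
For example, assuming the cfg value from the configuration example above, a context ctx, a logger log, and the standard library time package, a minimal sketch that runs the pruning configuration on a fixed interval instead of a cron package might look like the following; the one-hour interval is illustrative:

    // Run the pruning configuration every hour until the context is canceled.
    go func() {
        ticker := time.NewTicker(1 * time.Hour) // illustrative interval
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
                if err := cfg.Execute(ctx); err != nil {
                    log.Error(err, "pruning run failed")
                }
            }
        }
    }()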

5.18. Migrating package manifest projects to bundle format

Support for the legacy package manifest format for Operators is removed in OpenShift Container Platform 4.8 and later. If you have an Operator project that was initially created using the package manifest format, you can use the Operator SDK to migrate the project to the bundle format. The bundle format is the preferred packaging format for Operator Lifecycle Manager (OLM) starting in OpenShift Container Platform 4.6.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

5.18.1. About packaging format migration

The Operator SDK pkgman-to-bundle command helps in migrating Operator Lifecycle Manager (OLM) package manifests to bundles. The command takes an input package manifest directory and generates bundles for each of the versions of manifests present in the input directory. You can also then build bundle images for each of the generated bundles.

For example, consider the following packagemanifests/ directory for a project in the package manifest format:

Example package manifest format layout

packagemanifests/
└── etcd
    ├── 0.0.1
    │   ├── etcdcluster.crd.yaml
    │   └── etcdoperator.clusterserviceversion.yaml
    ├── 0.0.2
    │   ├── etcdbackup.crd.yaml
    │   ├── etcdcluster.crd.yaml
    │   ├── etcdoperator.v0.0.2.clusterserviceversion.yaml
    │   └── etcdrestore.crd.yaml
    └── etcd.package.yaml

After running the migration, the following bundles are generated in the bundle/ directory:

Example bundle format layout

bundle/
├── bundle-0.0.1
│   ├── bundle.Dockerfile
│   ├── manifests
│   │   ├── etcdcluster.crd.yaml
│   │   ├── etcdoperator.clusterserviceversion.yaml
│   ├── metadata
│   │   └── annotations.yaml
│   └── tests
│       └── scorecard
│           └── config.yaml
└── bundle-0.0.2
    ├── bundle.Dockerfile
    ├── manifests
    │   ├── etcdbackup.crd.yaml
    │   ├── etcdcluster.crd.yaml
    │   ├── etcdoperator.v0.0.2.clusterserviceversion.yaml
    │   ├── etcdrestore.crd.yaml
    ├── metadata
    │   └── annotations.yaml
    └── tests
        └── scorecard
            └── config.yaml

Based on this generated layout, bundle images for both of the bundles are also built with the following names:

  • quay.io/example/etcd:0.0.1
  • quay.io/example/etcd:0.0.2

5.18.2. Migrating a package manifest project to bundle format

Operator authors can use the Operator SDK to migrate a package manifest format Operator project to a bundle format project.

Prerequisites

  • Operator SDK CLI installed
  • Operator project initially generated using the Operator SDK in package manifest format

Procedure

  • Use the Operator SDK to migrate your package manifest project to the bundle format and generate bundle images:

    $ operator-sdk pkgman-to-bundle <package_manifests_dir> \ 1
        [--output-dir <directory>] \ 2
        --image-tag-base <image_name_base> 3
    1
    Specify the location of the package manifests directory for the project, such as packagemanifests/ or manifests/.
    2
    Optional: By default, the generated bundles are written locally to disk to the bundle/ directory. You can use the --output-dir flag to specify an alternative location.
    3
    Set the --image-tag-base flag to provide the base of the image name, such as quay.io/example/etcd, that will be used for the bundles. Provide the name without a tag, because the tag for the images will be set according to the bundle version. For example, the full bundle image names are generated in the format <image_name_base>:<bundle_version>.
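
    For example, using the etcd layout shown earlier in this section, an invocation might look like the following; the image name base is illustrative:

    $ operator-sdk pkgman-to-bundle packagemanifests \
        --image-tag-base quay.io/example/etcd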

Verification

  • Verify that the generated bundle image runs successfully:

    $ operator-sdk run bundle <bundle_image_name>:<tag>

    Example output

    INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4
    INFO[0025] Created CatalogSource: etcd-catalog
    INFO[0026] OperatorGroup "operator-sdk-og" created
    INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub
    INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub
    INFO[0031] Waiting for ClusterServiceVersion "default/etcdoperator.v0.9.4" to reach 'Succeeded' phase
    INFO[0032]   Waiting for ClusterServiceVersion "default/etcdoperator.v0.9.4" to appear
    INFO[0048]   Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Pending
    INFO[0049]   Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Installing
    INFO[0064]   Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Succeeded
    INFO[0065] OLM has successfully installed "etcdoperator.v0.9.4"

5.19. Operator SDK CLI reference

The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier.

Important

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.16 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects
  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

Operator SDK CLI syntax

$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]

Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.

5.19.1. bundle

The operator-sdk bundle command manages Operator bundle metadata.

5.19.1.1. validate

The bundle validate subcommand validates an Operator bundle.

Table 5.23. bundle validate flags
Flag | Description

-h, --help

Help output for the bundle validate subcommand.

--index-builder (string)

Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker, which is the default, podman, or none.

--list-optional

List all optional validators available. When set, no validators are run.

--select-optional (string)

Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators.
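
For example, to validate the bundle generated in your project’s bundle/ directory, an invocation might look like the following; the directory name assumes the default output location:

$ operator-sdk bundle validate ./bundle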

5.19.2. cleanup

The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command.

Table 5.24. cleanup flags
Flag | Description

-h, --help

Help output for the cleanup command.

--kubeconfig (string)

Path to the kubeconfig file to use for CLI requests.

-n, --namespace (string)

If present, namespace in which to run the CLI request.

--timeout <duration>

Time to wait for the command to complete before failing. The default value is 2m0s.
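
For example, to remove an Operator that was deployed with the run command, pass the Operator’s package name; memcached-operator and the namespace placeholder are illustrative:

$ operator-sdk cleanup memcached-operator -n <namespace>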

5.19.3. completion

The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier.

Table 5.25. completion subcommands
Subcommand | Description

bash

Generate bash completions.

zsh

Generate zsh completions.

Table 5.26. completion flags
Flag | Description

-h, --help

Usage help output.

For example:

$ operator-sdk completion bash

Example output

# bash completion for operator-sdk                         -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh

5.19.4. create

The operator-sdk create command is used to create, or scaffold, a Kubernetes API.

5.19.4.1. api

The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command.

Table 5.27. create api flags
Flag | Description

-h, --help

Help output for the create api subcommand.
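
For example, in a Go-based project, an invocation might look like the following; the group, version, and kind values are illustrative:

$ operator-sdk create api \
    --group=cache \
    --version=v1alpha1 \
    --kind=Memcached \
    --resource --controller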

5.19.5. generate

The operator-sdk generate command invokes a specific generator to generate code or manifests.

5.19.5.1. bundle

The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project.

Note

Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence.

Table 5.28. generate bundle flags
Flag | Description

--channels (string)

Comma-separated list of channels to which the bundle belongs. The default value is alpha.

--crds-dir (string)

Root directory for CustomResourceDefinition manifests.

--default-channel (string)

The default channel for the bundle.

--deploy-dir (string)

Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag.

-h, --help

Help for generate bundle

--input-dir (string)

Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory.

--kustomize-dir (string)

Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests.

--manifests

Generate bundle manifests.

--metadata

Generate bundle metadata and Dockerfile.

--output-dir (string)

Directory to write the bundle to.

--overwrite

Overwrite the bundle metadata and Dockerfile if they exist. The default value is true.

--package (string)

Package name for the bundle.

-q, --quiet

Run in quiet mode.

--stdout

Write bundle manifest to standard out.

--version (string)

Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator.
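
For example, a sketch based on how the scaffolded make bundle command typically invokes this subcommand, assuming the kustomize tool is available, might look like the following; the version and channel values are illustrative:

$ kustomize build config/manifests | operator-sdk generate bundle \
    --overwrite \
    --version 0.0.1 \
    --channels alpha \
    --default-channel alpha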

Additional resources

  • See Bundling an Operator for a full procedure that includes using the make bundle command to call the generate bundle subcommand.

5.19.5.2. kustomize

The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator.

5.19.5.2.1. manifests

The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag.

Table 5.29. generate kustomize manifests flags
Flag | Description

--apis-dir (string)

Root directory for API type definitions.

-h, --help

Help for generate kustomize manifests.

--input-dir (string)

Directory containing existing Kustomize files.

--interactive

When set to false, if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata.

--output-dir (string)

Directory where to write Kustomize files.

--package (string)

Package name.

-q, --quiet

Run in quiet mode.
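
For example, to regenerate the Kustomize bases without the interactive prompt, an invocation might look like the following:

$ operator-sdk generate kustomize manifests --interactive=false -q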

5.19.6. init

The operator-sdk init command initializes an Operator project and generates, or scaffolds, a default project directory layout for the given plugin.

This command writes the following files:

  • Boilerplate license file
  • PROJECT file with the domain and repository
  • Makefile to build the project
  • go.mod file with project dependencies
  • kustomization.yaml file for customizing manifests
  • Patch file for customizing images for manager manifests
  • Patch file for enabling Prometheus metrics
  • main.go file to run
Table 5.30. init flags
Flag | Description

--help, -h

Help output for the init command.

--plugins (string)

Name and optionally version of the plugin to initialize the project with. Available plugins are ansible.sdk.operatorframework.io/v1, go.kubebuilder.io/v2, go.kubebuilder.io/v3, and helm.sdk.operatorframework.io/v1.

--project-version

Project version. Available values are 2 and 3-alpha, which is the default.
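
For example, to initialize a Go-based project, an invocation might look like the following; the domain and repository values are illustrative, and the --repo flag applies to Go-based projects:

$ operator-sdk init \
    --plugins=go.kubebuilder.io/v3 \
    --domain=example.com \
    --repo=github.com/example/memcached-operator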

5.19.7. run

The operator-sdk run command provides options that can launch the Operator in various environments.

5.19.7.1. bundle

The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM).

Table 5.31. run bundle flags
Flag | Description

--index-image (string)

Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest.

--install-mode <install_mode_value>

Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace.

--timeout <duration>

Install timeout. The default value is 2m0s.

--kubeconfig (string)

Path to the kubeconfig file to use for CLI requests.

-n, --namespace (string)

If present, namespace in which to run the CLI request.

--security-context-config <security_context>

Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy. The default value is legacy. [1]

-h, --help

Help output for the run bundle subcommand.

  1. The restricted security context is not compatible with the default namespace. To configure your Operator’s pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission".
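
For example, an invocation that installs a bundle image cluster-wide with the restricted security context might look like the following; the bundle image path and namespace are illustrative:

$ operator-sdk run bundle \
    quay.io/<organization_name>/memcached-operator-bundle:v0.0.1 \
    --install-mode AllNamespaces \
    --security-context-config restricted \
    -n <namespace>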

5.19.7.2. bundle-upgrade

The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM).

Table 5.32. run bundle-upgrade flags
Flag | Description

--timeout <duration>

Upgrade timeout. The default value is 2m0s.

--kubeconfig (string)

Path to the kubeconfig file to use for CLI requests.

-n, --namespace (string)

If present, namespace in which to run the CLI request.

--security-context-config <security_context>

Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy. The default value is legacy. [1]

-h, --help

Help output for the run bundle-upgrade subcommand.

  1. The restricted security context is not compatible with the default namespace. To configure your Operator’s pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission".
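
For example, to upgrade a previously installed Operator to a newer bundle version, an invocation might look like the following; the bundle image path is illustrative:

$ operator-sdk run bundle-upgrade \
    quay.io/<organization_name>/memcached-operator-bundle:v0.0.2 \
    -n <namespace>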

5.19.8. scorecard

The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely.

Table 5.33. scorecard flags
Flag | Description

-c, --config (string)

Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml.

-h, --help

Help output for the scorecard command.

--kubeconfig (string)

Path to kubeconfig file.

-L, --list

List which tests are available to run.

-n, --namespace (string)

Namespace in which to run the test images.

-o, --output (string)

Output format for results. Available values are text, which is the default, and json.

--pod-security <security_context>

Option to run scorecard with the specified security context. Allowed values include restricted and legacy. The default value is legacy. [1]

-l, --selector (string)

Label selector to determine which tests are run.

-s, --service-account (string)

Service account to use for tests. The default value is default.

-x, --skip-cleanup

Disable resource cleanup after tests are run.

-w, --wait-time <duration>

Seconds to wait for tests to complete, for example 35s. The default value is 30s.

  1. The restricted security context is not compatible with the default namespace. To configure your Operator’s pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission".
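
For example, to run only the basic test suite against the bundle in your project directory and print JSON results, an invocation might look like the following; the suite=basic selector assumes the default scaffolded scorecard configuration:

$ operator-sdk scorecard ./bundle \
    --selector=suite=basic \
    --output=json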