Chapter 5. Developing Operators


5.1. About the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.

Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.

The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator.

Why use the Operator SDK?

The Operator SDK simplifies the process of building Kubernetes-native applications, which can otherwise require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring.

The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features:

  • High-level APIs and abstractions to write the operational logic more intuitively
  • Tools for scaffolding and code generation to quickly bootstrap a new project
  • Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster
  • Extensions to cover common Operator use cases
  • Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed

Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.

Note

OpenShift Container Platform 4.6 supports Operator SDK v0.19.4.

5.1.1. What are Operators?

For an overview about basic Operator concepts and terminology, see Understanding Operators.

5.1.2. Development workflow

The Operator SDK provides the following workflow to develop a new Operator:

  1. Create an Operator project by using the Operator SDK command-line interface (CLI).
  2. Define new resource APIs by adding custom resource definitions (CRDs).
  3. Specify resources to watch by using the Operator SDK API.
  4. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
  5. Use the Operator SDK CLI to build and generate the Operator deployment manifests.

Figure 5.1. Operator SDK workflow


At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
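The following minimal sketch, assuming a hypothetical App custom resource and the controller-runtime types used by SDK-generated Go projects, illustrates that pattern; the Memcached example later in this chapter shows a complete implementation:

package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	examplev1 "github.com/example-inc/app-operator/api/v1" // hypothetical API package
)

// AppReconciler reconciles a hypothetical App custom resource.
type AppReconciler struct {
	client.Client
}

// Reconcile is called for every event on a watched App object and drives the
// cluster toward the state declared in the object's spec.
func (r *AppReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()

	// Fetch the object named by the request; it may already have been deleted.
	app := &examplev1.App{}
	if err := r.Get(ctx, req.NamespacedName, app); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Compare the observed cluster state with app.Spec and create, update, or
	// delete owned resources until they converge.

	return ctrl.Result{}, nil
}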

5.1.3. Additional resources

5.2. Installing the Operator SDK CLI

The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators.

OpenShift Container Platform 4.6 supports Operator SDK v0.19.4, which can be installed from upstream sources.

Note

Starting in OpenShift Container Platform 4.7, the Operator SDK is fully supported and available from official Red Hat product sources. See OpenShift Container Platform 4.7 release notes for more information.

5.2.1. Installing the Operator SDK CLI from GitHub releases

You can download and install a pre-built release binary of the Operator SDK CLI from the project on GitHub.

Prerequisites

  • Go v1.13+
  • docker v17.03+, podman v1.9.3+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Set the release version variable:

    $ RELEASE_VERSION=v0.19.4
  2. Download the release binary.

    • For Linux:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  3. Verify the downloaded release binary.

    1. Download the provided .asc file.

      • For Linux:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
    2. Place the binary and corresponding .asc file into the same directory and run the following command to verify the binary:

      • For Linux:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc

      If you do not have the public key of the maintainer on your workstation, you will get the following error:

      Example output with error

      gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
      gpg: Signature made Fri Apr  5 20:03:22 2019 CEST
      gpg:                using RSA key <key_id> 1
      gpg: Can't check signature: No public key

      1
      RSA key string.

      To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

      $ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>" 1
      1
      If you do not have a key server configured, specify one with the --keyserver option.
  4. Install the release binary in your PATH:

    • For Linux:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  5. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

5.2.2. Installing the Operator SDK CLI from Homebrew

You can install the SDK CLI using Homebrew.

Prerequisites

  • Homebrew
  • docker v17.03+, podman v1.9.3+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Install the SDK CLI using the brew command:

    $ brew install operator-sdk
  2. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

5.2.3. Compiling and installing the Operator SDK CLI from source

You can obtain the Operator SDK source code to compile and install the SDK CLI.

Prerequisites

  • Git
  • Go v1.13+
  • docker v17.03+, podman v1.9.3+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Clone the operator-sdk repository:

    $ git clone https://github.com/operator-framework/operator-sdk
  2. Change to the directory for the cloned repository:

    $ cd operator-sdk
  3. Check out the v0.19.4 release:

    $ git checkout tags/v0.19.4 -b v0.19.4
  4. Update dependencies:

    $ make tidy
  5. Compile and install the SDK CLI:

    $ make install

    This installs the CLI binary operator-sdk in the $GOPATH/bin/ directory.

  6. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

5.3. Creating Go-based Operators

Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.

Note

Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators.

5.3.1. Creating a Go-based Operator using the Operator SDK

The Operator SDK makes it easier to build Kubernetes native applications, a process that can require deep, application-specific operational knowledge. The SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code needed for many common management capabilities, such as metering or monitoring.

This procedure walks through an example of creating a simple Memcached Operator using tools and libraries provided by the SDK.

Prerequisites

  • Operator SDK v0.19.4 CLI installed on the development workstation
  • Operator Lifecycle Manager (OLM) installed on a Kubernetes-based cluster (v1.8 or above to support the apps/v1beta2 API group), for example OpenShift Container Platform 4.6
  • Access to the cluster using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.6+ installed

Procedure

  1. Create an Operator project:

    1. Create a directory for the project:

      $ mkdir -p $HOME/projects/memcached-operator
    2. Change to the directory:

      $ cd $HOME/projects/memcached-operator
    3. Activate support for Go modules:

      $ export GO111MODULE=on
    4. Run the operator-sdk init command to initialize the project:

      $ operator-sdk init \
          --domain=example.com \
          --repo=github.com/example-inc/memcached-operator
      Note

      The operator-sdk init command uses the go.kubebuilder.io/v2 plug-in by default.

  2. Update your Operator to use supported images:

    1. In the project root-level Dockerfile, change the default runner image reference from:

      FROM gcr.io/distroless/static:nonroot

      to:

      FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
    2. Depending on the Go project version, your Dockerfile might contain a USER 65532:65532 or USER nonroot:nonroot directive. In either case, remove the line, as it is not required by the supported runner image.
    3. In the config/default/manager_auth_proxy_patch.yaml file, change the image value from:

      gcr.io/kubebuilder/kube-rbac-proxy:<tag>

      to use the supported image:

      registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.6
  3. Update the test target in your Makefile to install dependencies required during later builds by replacing the following lines:

    Example 5.1. Existing test target

    test: generate fmt vet manifests
            go test ./... -coverprofile cover.out

    With the following lines:

    Example 5.2. Updated test target

    ENVTEST_ASSETS_DIR=$(shell pwd)/testbin
    test: manifests generate fmt vet ## Run tests.
    	mkdir -p ${ENVTEST_ASSETS_DIR}
    	test -f ${ENVTEST_ASSETS_DIR}/setup-envtest.sh || curl -sSLo ${ENVTEST_ASSETS_DIR}/setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.7.2/hack/setup-envtest.sh
    	source ${ENVTEST_ASSETS_DIR}/setup-envtest.sh; fetch_envtest_tools $(ENVTEST_ASSETS_DIR); setup_envtest_env $(ENVTEST_ASSETS_DIR); go test ./... -coverprofile cover.out
  4. Create a custom resource definition (CRD) API and controller:

    1. Run the following command to create an API with group cache, version v1, and kind Memcached:

      $ operator-sdk create api \
          --group=cache \
          --version=v1 \
          --kind=Memcached
    2. When prompted, enter y for creating both the resource and controller:

      Create Resource [y/n]
      y
      Create Controller [y/n]
      y

      Example output

      Writing scaffold for you to edit...
      api/v1/memcached_types.go
      controllers/memcached_controller.go
      ...

      This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go.

    3. Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status:

      // MemcachedSpec defines the desired state of Memcached
      type MemcachedSpec struct {
      	// +kubebuilder:validation:Minimum=0
      	// Size is the size of the memcached deployment
      	Size int32 `json:"size"`
      }
      
      // MemcachedStatus defines the observed state of Memcached
      type MemcachedStatus struct {
      	// Nodes are the names of the memcached pods
      	Nodes []string `json:"nodes"`
      }
    4. Add the +kubebuilder:subresource:status marker to add a status subresource to the CRD manifest:

      // Memcached is the Schema for the memcacheds API
      // +kubebuilder:subresource:status 1
      type Memcached struct {
      	metav1.TypeMeta   `json:",inline"`
      	metav1.ObjectMeta `json:"metadata,omitempty"`
      
      	Spec   MemcachedSpec   `json:"spec,omitempty"`
      	Status MemcachedStatus `json:"status,omitempty"`
      }
      1
      Add this line.

      This enables the controller to update the CR status without changing the rest of the CR object.

    5. Update the generated code for the resource type:

      $ make generate
      Tip

      After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

      The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.

  5. Generate and update CRD manifests:

    $ make manifests

    This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.

    1. Optional: Add custom validation to your CRD.

      OpenAPI v3.0 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated.

      As an Operator author, you can use annotation-like, single-line comments called Kubebuilder markers to configure custom validations for your API. These markers must always have a +kubebuilder:validation prefix. For example, adding an enum-type specification can be done by adding the following marker:

      // +kubebuilder:validation:Enum=Lion;Wolf;Dragon
      type Alias string

      Usage of markers in API code is discussed in the Kubebuilder Generating CRDs and Markers for Config/Code Generation documentation. A full list of OpenAPIv3 validation markers is also available in the Kubebuilder CRD Validation documentation.

      If you add any custom validations, run the following command to update the OpenAPI validation section for the CRD:

      $ make manifests
  6. After creating a new API and controller, you can implement the controller logic. For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation:

    Example 5.3. Example memcached_controller.go

    /*
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
    */
    
    package controllers
    
    import (
    	"context"
    	"reflect"
    
    	"github.com/go-logr/logr"
    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/runtime"
    	"k8s.io/apimachinery/pkg/types"
    	ctrl "sigs.k8s.io/controller-runtime"
    	"sigs.k8s.io/controller-runtime/pkg/client"
    
    	cachev1 "github.com/example-inc/memcached-operator/api/v1"
    )
    
    // MemcachedReconciler reconciles a Memcached object
    type MemcachedReconciler struct {
    	client.Client
    	Log    logr.Logger
    	Scheme *runtime.Scheme
    }
    
    // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
    // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
    // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
    // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
    
    func (r *MemcachedReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    	ctx := context.Background()
    	log := r.Log.WithValues("memcached", req.NamespacedName)
    
    	// Fetch the Memcached instance
    	memcached := &cachev1.Memcached{}
    	err := r.Get(ctx, req.NamespacedName, memcached)
    	if err != nil {
    		if errors.IsNotFound(err) {
    			// Request object not found, could have been deleted after reconcile request.
    			// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
    			// Return and don't requeue
    			log.Info("Memcached resource not found. Ignoring since object must be deleted")
    			return ctrl.Result{}, nil
    		}
    		// Error reading the object - requeue the request.
    		log.Error(err, "Failed to get Memcached")
    		return ctrl.Result{}, err
    	}
    
    	// Check if the deployment already exists, if not create a new one
    	found := &appsv1.Deployment{}
    	err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
    	if err != nil && errors.IsNotFound(err) {
    		// Define a new deployment
    		dep := r.deploymentForMemcached(memcached)
    		log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
    		err = r.Create(ctx, dep)
    		if err != nil {
    			log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
    			return ctrl.Result{}, err
    		}
    		// Deployment created successfully - return and requeue
    		return ctrl.Result{Requeue: true}, nil
    	} else if err != nil {
    		log.Error(err, "Failed to get Deployment")
    		return ctrl.Result{}, err
    	}
    
    	// Ensure the deployment size is the same as the spec
    	size := memcached.Spec.Size
    	if *found.Spec.Replicas != size {
    		found.Spec.Replicas = &size
    		err = r.Update(ctx, found)
    		if err != nil {
    			log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
    			return ctrl.Result{}, err
    		}
    		// Spec updated - return and requeue
    		return ctrl.Result{Requeue: true}, nil
    	}
    
    	// Update the Memcached status with the pod names
    	// List the pods for this memcached's deployment
    	podList := &corev1.PodList{}
    	listOpts := []client.ListOption{
    		client.InNamespace(memcached.Namespace),
    		client.MatchingLabels(labelsForMemcached(memcached.Name)),
    	}
    	if err = r.List(ctx, podList, listOpts...); err != nil {
    		log.Error(err, "Failed to list pods", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name)
    		return ctrl.Result{}, err
    	}
    	podNames := getPodNames(podList.Items)
    
    	// Update status.Nodes if needed
    	if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
    		memcached.Status.Nodes = podNames
    		err := r.Status().Update(ctx, memcached)
    		if err != nil {
    			log.Error(err, "Failed to update Memcached status")
    			return ctrl.Result{}, err
    		}
    	}
    
    	return ctrl.Result{}, nil
    }
    
    // deploymentForMemcached returns a memcached Deployment object
    func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment {
    	ls := labelsForMemcached(m.Name)
    	replicas := m.Spec.Size
    
    	dep := &appsv1.Deployment{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      m.Name,
    			Namespace: m.Namespace,
    		},
    		Spec: appsv1.DeploymentSpec{
    			Replicas: &replicas,
    			Selector: &metav1.LabelSelector{
    				MatchLabels: ls,
    			},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{
    					Labels: ls,
    				},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Image:   "memcached:1.4.36-alpine",
    						Name:    "memcached",
    						Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
    						Ports: []corev1.ContainerPort{{
    							ContainerPort: 11211,
    							Name:          "memcached",
    						}},
    					}},
    				},
    			},
    		},
    	}
    	// Set Memcached instance as the owner and controller
    	ctrl.SetControllerReference(m, dep, r.Scheme)
    	return dep
    }
    
    // labelsForMemcached returns the labels for selecting the resources
    // belonging to the given memcached CR name.
    func labelsForMemcached(name string) map[string]string {
    	return map[string]string{"app": "memcached", "memcached_cr": name}
    }
    
    // getPodNames returns the pod names of the array of pods passed in
    func getPodNames(pods []corev1.Pod) []string {
    	var podNames []string
    	for _, pod := range pods {
    		podNames = append(podNames, pod.Name)
    	}
    	return podNames
    }
    
    func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
    	return ctrl.NewControllerManagedBy(mgr).
    		For(&cachev1.Memcached{}).
    		Owns(&appsv1.Deployment{}).
    		Complete(r)
    }

    The example controller runs the following reconciliation logic for each Memcached CR:

    • Create a Memcached deployment if it does not exist.
    • Ensure that the deployment size is the same as specified by the Memcached CR spec.
    • Update the Memcached CR status with the names of the memcached pods.

    The next two sub-steps inspect how the controller watches resources and how the reconcile loop is triggered. You can skip these steps to go directly to building and running the Operator.

    1. Inspect the controller implementation at the controllers/memcached_controller.go file to see how the controller watches resources.

      The SetupWithManager() function specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller:

      Example 5.4. SetupWithManager() function

      import (
      	...
      	appsv1 "k8s.io/api/apps/v1"
      	...
      )
      
      func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
      	return ctrl.NewControllerManagedBy(mgr).
      		For(&cachev1.Memcached{}).
      		Owns(&appsv1.Deployment{}).
      		Complete(r)
      }

      NewControllerManagedBy() provides a controller builder that allows various controller configurations.

      For(&cachev1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Add, Update, or Delete event for a Memcached type, the reconcile loop is sent a reconcile Request argument, which consists of a namespace and name key, for that Memcached object.

      Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add, Update, or Delete event, the event handler maps each event to a reconcile request for the owner of the deployment. In this case, the owner is the Memcached object for which the deployment was created.

    2. Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a namespace and name key used to find the primary resource object, Memcached, from the cache:

      Example 5.5. Reconcile loop

      import (
      	ctrl "sigs.k8s.io/controller-runtime"
      
      	cachev1 "github.com/example-inc/memcached-operator/api/v1"
      	...
      )
      
      func (r *MemcachedReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
        ctx := context.Background()

        // Look up the Memcached instance for this reconcile request
        memcached := &cachev1.Memcached{}
        err := r.Get(ctx, req.NamespacedName, memcached)
        ...
      }

      Based on the return value of the Reconcile() function, the reconcile Request might be requeued, and the loop might be triggered again:

      Example 5.6. Requeue logic

      // Reconcile successful - don't requeue
      return reconcile.Result{}, nil
      // Reconcile failed due to error - requeue
      return reconcile.Result{}, err
      // Requeue for any reason other than error
      return reconcile.Result{Requeue: true}, nil

      You can set the Result.RequeueAfter to requeue the request after a grace period:

      Example 5.7. Requeue after grace period

      import "time"
      
      // Reconcile for any reason other than an error after 5 seconds
      return ctrl.Result{RequeueAfter: time.Second*5}, nil
      Note

      You can return Result with RequeueAfter set to periodically reconcile a CR.

      For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.

Additional resources

5.3.2. Running the Operator

There are two ways you can use the Operator SDK CLI to build and run your Operator:

  • Run locally outside the cluster as a Go program.
  • Run as a deployment on the cluster.

Prerequisites

5.3.2.1. Running locally outside the cluster

You can run your Operator project as a Go program outside of the cluster. This method is useful for development purposes to speed up deployment and testing.

Procedure

  • Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator as a Go program locally:

    $ make install run

    Example 5.8. Example output

    ...
    2021-01-10T21:09:29.016-0700	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
    2021-01-10T21:09:29.017-0700	INFO	setup	starting manager
    2021-01-10T21:09:29.017-0700	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
    2021-01-10T21:09:29.018-0700	INFO	controller-runtime.manager.controller.memcached	Starting EventSource	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "source": "kind source: /, Kind="}
    2021-01-10T21:09:29.218-0700	INFO	controller-runtime.manager.controller.memcached	Starting Controller	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached"}
    2021-01-10T21:09:29.218-0700	INFO	controller-runtime.manager.controller.memcached	Starting workers	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "worker count": 1}

5.3.2.2. Running as a deployment

After creating your Go-based Operator project, you can build and run your Operator as a deployment inside a cluster.

Procedure

  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
      Note

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

    By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

  3. Verify that the Operator is running:

    $ oc get deployment -n <project_name>-system

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    <project_name>-controller-manager       1/1     1            1           8m

5.3.3. Creating a custom resource

After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.

Prerequisites

  • Example Memcached Operator, which provides the Memcached CR, installed on a cluster

Procedure

  1. Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

    $ oc project memcached-operator-system
  2. Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:

    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    ...
    spec:
    ...
      size: 3
  3. Create the CR:

    $ oc apply -f config/samples/cache_v1_memcached.yaml
  4. Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

    $ oc get deployments

    Example output

    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    memcached-operator-controller-manager   1/1     1            1           8m
    memcached-sample                        3/3     3            3           1m

  5. Check the pods and CR status to confirm the status is updated with the Memcached pod names.

    1. Check the pods:

      $ oc get pods

      Example output

      NAME                                  READY     STATUS    RESTARTS   AGE
      memcached-sample-6fd7c98d8-7dqdr      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-g5k7v      1/1       Running   0          1m
      memcached-sample-6fd7c98d8-m7vn7      1/1       Running   0          1m

    2. Check the CR status:

      $ oc get memcached/memcached-sample -o yaml

      Example output

      apiVersion: cache.example.com/v1
      kind: Memcached
      metadata:
      ...
        name: memcached-sample
      ...
      spec:
        size: 3
      status:
        nodes:
        - memcached-sample-6fd7c98d8-7dqdr
        - memcached-sample-6fd7c98d8-g5k7v
        - memcached-sample-6fd7c98d8-m7vn7

  6. Update the deployment size.

    1. Update the Memcached CR to change the spec.size field from 3 to 5 by patching the CR directly:

      $ oc patch memcached memcached-sample \
          -p '{"spec":{"size": 5}}' \
          --type=merge
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployments

      Example output

      NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator-controller-manager   1/1     1            1           10m
      memcached-sample                        5/5     5            5           3m

5.3.4. Additional resources

5.4. Creating Ansible-based Operators

This guide outlines Ansible support in the Operator SDK and walks Operator authors through examples of building and running Ansible-based Operators that use Ansible playbooks and modules with the operator-sdk CLI tool.

5.4.1. Ansible support in the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.

One of the Operator SDK options for generating an Operator project includes leveraging existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.

5.4.1.1. Custom resource files

Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.

The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:

Table 5.1. Custom resource fields

  • apiVersion: Version of the CR to be created.
  • kind: Kind of the CR to be created.
  • metadata: Kubernetes-specific metadata to be created.
  • spec (optional): Key-value list of variables which are passed to Ansible. This field is empty by default.
  • status: Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information in the CR status.
  • annotations: Kubernetes-specific annotations to be appended to the CR.
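
For example, a minimal CR that sets these fields might look like the following; the group, kind, and spec variable are illustrative:

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
spec:
  size: 3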

The following CR annotations modify the behavior of the Operator:

Table 5.2. Ansible-based Operator annotations

  • ansible.operator-sdk/reconcile-period: Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds.

Example Ansible-based Operator annotation

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
annotations:
  ansible.operator-sdk/reconcile-period: "30s"

5.4.1.2. watches.yaml file

A group/version/kind (GVK) is a unique identifier for a Kubernetes API. The watches.yaml file contains a list of mappings from custom resources (CRs), identified by their GVK, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location at /opt/ansible/watches.yaml.

Table 5.3. watches.yaml file mappings

  • group: Group of the CR to watch.
  • version: Version of the CR to watch.
  • kind: Kind of the CR to watch.
  • role (default): Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value would be /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field.
  • playbook: Path to the Ansible playbook added to the container. This playbook is expected to be a way to call roles. This field is mutually exclusive with the role field.
  • reconcilePeriod (optional): The reconciliation interval, which determines how often the role or playbook is run for a given CR.
  • manageStatus (optional): When set to true (the default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller.

Example watches.yaml file

- version: v1alpha1 1
  group: test1.example.com
  kind: Test1
  role: /opt/ansible/roles/Test1

- version: v1alpha1 2
  group: test2.example.com
  kind: Test2
  playbook: /opt/ansible/playbook.yml

- version: v1alpha1 3
  group: test3.example.com
  kind: Test3
  playbook: /opt/ansible/test3.yml
  reconcilePeriod: 0
  manageStatus: false

1
Simple example mapping Test1 to the test1 role.
2
Simple example mapping Test2 to a playbook.
3
More complex example for the Test3 kind. Disables re-queuing and managing the CR status in the playbook.

5.4.1.2.1. Advanced options

Advanced features can be enabled by adding them to your watches.yaml file per GVK. They go below the group, version, and kind fields, alongside the playbook or role field.

Some features can be overridden per resource using an annotation on that CR. The options that can be overridden have the annotation specified below.

Table 5.4. Advanced watches.yaml file options

  • Reconcile period (reconcilePeriod): Time between reconcile runs for a particular CR. Default value: 1m. Annotation for override: ansible.operator-sdk/reconcile-period.
  • Manage status (manageStatus): Allows the Operator to manage the conditions section of each CR status section. Default value: true.
  • Watch dependent resources (watchDependentResources): Allows the Operator to dynamically watch resources that are created by Ansible. Default value: true.
  • Watch cluster-scoped resources (watchClusterScopedResources): Allows the Operator to watch cluster-scoped resources that are created by Ansible. Default value: false.
  • Max runner artifacts (maxRunnerArtifacts): Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. Default value: 20. Annotation for override: ansible.operator-sdk/max-runner-artifacts.

Example watches.yaml file with advanced options

- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False
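
An individual CR can override one of these defaults by setting the corresponding annotation. For example, the following CR (the group and kind are illustrative) lowers the number of runner artifacts kept for that resource:

apiVersion: "app.example.com/v1alpha1"
kind: "AppService"
metadata:
  name: "example"
  annotations:
    ansible.operator-sdk/max-runner-artifacts: "10"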

5.4.1.3. Extra variables sent to Ansible

Extra variables can be sent to Ansible, which are then managed by the Operator. The spec section of the custom resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook command.

The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.

For the following CR example:

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world 2"
  newParameter: "newParam"

The structure passed to Ansible as extra variables is:

{ "meta": {
        "name": "<cr_name>",
        "namespace": "<cr_namespace>",
  },
  "message": "Hello world 2",
  "new_parameter": "newParam",
  "_app_example_com_database": {
     <full_crd>
   },
}

The message and newParameter fields are set in the top level as extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:

- debug:
    msg: "name: {{ meta.name }}, {{ meta.namespace }}"

5.4.1.4. Ansible Runner directory

Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.
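
For example, for a CR of kind Test1 in group test1.example.com and version v1alpha1, named example in the default namespace, the run information would be stored under /tmp/ansible-operator/runner/test1.example.com/v1alpha1/Test1/default/example.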

Additional resources

5.4.2. Building an Ansible-based Operator using the Operator SDK

This procedure walks through an example of building a simple Memcached Operator powered by Ansible playbooks and modules using tools and libraries provided by the Operator SDK.

Prerequisites

  • Operator SDK v0.19.4 CLI installed on the development workstation
  • Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.6) using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.6+ installed
  • ansible v2.9.0+
  • ansible-runner v1.1.0+
  • ansible-runner-http v1.0.0+

Procedure

  1. Create a new Operator project. A namespace-scoped Operator watches and manages resources in a single namespace. Namespace-scoped Operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.

    To create a new Ansible-based, namespace-scoped memcached-operator project and change to the new directory, use the following commands:

    $ operator-sdk new memcached-operator \
        --api-version=cache.example.com/v1alpha1 \
        --kind=Memcached \
        --type=ansible
    $ cd memcached-operator

    This creates the memcached-operator project specifically for watching the Memcached resource with API version cache.example.com/v1alpha1 and kind Memcached.

  2. Customize the Operator logic.

    For this example, the memcached-operator executes the following reconciliation logic for each Memcached custom resource (CR):

    • Create a memcached deployment if it does not exist.
    • Ensure that the deployment size is the same as specified by the Memcached CR.

    By default, the memcached-operator watches Memcached resource events as shown in the watches.yaml file and executes the Ansible role Memcached:

    - version: v1alpha1
      group: cache.example.com
      kind: Memcached

    You can optionally customize the following logic in the watches.yaml file:

    1. Specifying a role option configures the Operator to use this specified path when launching ansible-runner with an Ansible role. By default, the operator-sdk new command fills in an absolute path to where your role should go:

      - version: v1alpha1
        group: cache.example.com
        kind: Memcached
        role: /opt/ansible/roles/memcached
    2. Specifying a playbook option in the watches.yaml file configures the Operator to use this specified path when launching ansible-runner with an Ansible playbook:

      - version: v1alpha1
        group: cache.example.com
        kind: Memcached
        playbook: /opt/ansible/playbook.yaml
  3. Build the Memcached Ansible role.

    Modify the generated Ansible role under the roles/memcached/ directory. This Ansible role controls the logic that is executed when a resource is modified.

    1. Define the Memcached spec.

      Defining the spec for an Ansible-based Operator can be done entirely in Ansible. The Ansible Operator passes all key-value pairs listed in the CR spec field along to Ansible as variables. The names of all variables in the spec field are converted to snake case (lowercase with an underscore) by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.

      Tip

      You should perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.

      If the user does not set the spec field, set a default by modifying the roles/memcached/defaults/main.yml file:

      size: 1
    2. Define the Memcached deployment.

      With the Memcached spec now defined, you can define the Ansible logic that is executed when resources change. Because this is an Ansible role, the default behavior executes the tasks in the roles/memcached/tasks/main.yml file.

      The goal is for Ansible to create a deployment if it does not exist, which runs the memcached:1.4.36-alpine image. Ansible 2.7+ supports the k8s Ansible module, which this example leverages to control the deployment definition.

      Modify the roles/memcached/tasks/main.yml file to match the following:

      - name: start memcached
        k8s:
          definition:
            kind: Deployment
            apiVersion: apps/v1
            metadata:
              name: '{{ meta.name }}-memcached'
              namespace: '{{ meta.namespace }}'
            spec:
              replicas: "{{size}}"
              selector:
                matchLabels:
                  app: memcached
              template:
                metadata:
                  labels:
                    app: memcached
                spec:
                  containers:
                  - name: memcached
                    command:
                    - memcached
                    - -m=64
                    - -o
                    - modern
                    - -v
                    image: "docker.io/memcached:1.4.36-alpine"
                    ports:
                      - containerPort: 11211
      Note

      This example uses the size variable to control the number of replicas of the Memcached deployment. It sets the default to 1, but any user can create a CR that overwrites the default.

  4. Deploy the CRD.

    Before running the Operator, Kubernetes needs to know about the new custom resource definition (CRD) that the Operator will be watching. Deploy the Memcached CRD:

    $ oc create -f deploy/crds/cache.example.com_memcacheds_crd.yaml
  5. Build and run the Operator.

    There are two ways to build and run the Operator:

    • As a pod inside a Kubernetes cluster.
    • As a Go program outside the cluster using the operator-sdk run --local command.

    Choose one of the following methods:

    1. Run as a pod inside a Kubernetes cluster. This is the preferred method for production use.

      1. Build the memcached-operator image and push it to a registry:

        $ operator-sdk build quay.io/example/memcached-operator:v0.0.1
        $ podman push quay.io/example/memcached-operator:v0.0.1
      2. Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file must be modified from the placeholder REPLACE_IMAGE to the previously built image. To do this, run:

        $ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
      3. Deploy the memcached-operator manifests:

        $ oc create -f deploy/service_account.yaml
        $ oc create -f deploy/role.yaml
        $ oc create -f deploy/role_binding.yaml
        $ oc create -f deploy/operator.yaml
      4. Verify that the memcached-operator deployment is up and running:

        $ oc get deployment
        NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
        memcached-operator       1         1         1            1           1m
    2. Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.

      Ensure that Ansible Runner and the Ansible Runner HTTP plug-in are installed; otherwise, you will see unexpected errors from Ansible Runner when a CR is created.

      It is also important that the role path referenced in the watches.yaml file exists on your machine. Because a container is normally used and the role is placed on disk inside it, the role must be manually copied to the configured Ansible roles path (for example, /etc/ansible/roles) when running locally.

      1. To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk run --local

        To run the Operator locally with a provided Kubernetes configuration file:

        $ operator-sdk run --local --kubeconfig=config
  6. Create a Memcached CR.

    1. Modify the deploy/crds/cache_v1alpha1_memcached_cr.yaml file as shown and create a Memcached CR:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml

      Example output

      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 3

      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Ensure that the memcached-operator creates the deployment for the CR:

      $ oc get deployment

      Example output

      NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator       1         1         1            1           2m
      example-memcached        3         3         3            3           1m

    3. Check the pods to confirm three replicas were created:

      $ oc get pods
      NAME                                  READY     STATUS    RESTARTS   AGE
      example-memcached-6fd7c98d8-7dqdr     1/1       Running   0          1m
      example-memcached-6fd7c98d8-g5k7v     1/1       Running   0          1m
      example-memcached-6fd7c98d8-m7vn7     1/1       Running   0          1m
      memcached-operator-7cc7cfdf86-vvjqk   1/1       Running   0          2m
  7. Update the size.

    1. Change the spec.size field in the memcached CR from 3 to 4 and apply the change:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml

      Example output

      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 4

      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployment

      Example output

      NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      example-memcached    4         4         4            4           5m

  8. Clean up the resources:

    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/service_account.yaml
    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_crd.yaml

5.4.3. Managing application lifecycle using the k8s Ansible module

To manage the lifecycle of your application on Kubernetes using Ansible, you can use the k8s Ansible module. This Ansible module allows a developer to either leverage their existing Kubernetes resource files (written in YAML) or express the lifecycle management in native Ansible.

One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
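
For example, a task similar to the following sketch (the template file name is illustrative) renders a Jinja template from the role's templates/ directory and applies the result with the k8s module:

- name: create the templated deployment
  k8s:
    state: present
    definition: "{{ lookup('template', 'deployment.yaml.j2') | from_yaml }}"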

This section goes into detail on usage of the k8s Ansible module. To get started, install the module on your local workstation and test it using a playbook before moving on to using it within an Operator.

5.4.3.1. Installing the k8s Ansible module

To install the k8s Ansible module on your local workstation:

Procedure

  1. Install Ansible 2.9+:

    $ sudo yum install ansible
  2. Install the OpenShift and Kubernetes Python client packages using pip:

    $ sudo pip install openshift
    $ sudo pip install kubernetes

5.4.3.2. Testing the k8s Ansible module locally

Sometimes, it is beneficial for a developer to run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.

Procedure

  1. Install the community.kubernetes collection:

    $ ansible-galaxy collection install community.kubernetes
  2. Initialize a new Ansible-based Operator project:

    $ operator-sdk new --type ansible \
        --kind Test1 \
        --api-version test1.example.com/v1alpha1 test1-operator

    Example output

    Create test1-operator/tmp/init/galaxy-init.sh
    Create test1-operator/tmp/build/Dockerfile
    Create test1-operator/tmp/build/test-framework/Dockerfile
    Create test1-operator/tmp/build/go-test.sh
    Rendering Ansible Galaxy role [test1-operator/roles/test1]...
    Cleaning up test1-operator/tmp/init
    Create test1-operator/watches.yaml
    Create test1-operator/deploy/rbac.yaml
    Create test1-operator/deploy/crd.yaml
    Create test1-operator/deploy/cr.yaml
    Create test1-operator/deploy/operator.yaml
    Run git init ...
    Initialized empty Git repository in /home/user/go/src/github.com/user/opsdk/test1-operator/.git/
    Run git init done

    $ cd test1-operator
  3. Modify the roles/test1/tasks/main.yml file with the Ansible logic that you want. This example creates and deletes a namespace with the switch of a variable.

    - name: set test namespace to "{{ state }}"
      community.kubernetes.k8s:
        api_version: v1
        kind: Namespace
        state: "{{ state }}"
        name: test
      ignore_errors: true 1
    1
    Setting ignore_errors: true ensures that deleting a nonexistent project does not fail.
  4. Modify the roles/test1/defaults/main.yml file to set state to present by default:

    state: present
  5. Create an Ansible playbook playbook.yml in the top-level directory, which includes the test1 role:

    - hosts: localhost
      roles:
        - test1
  6. Run the playbook:

    $ ansible-playbook playbook.yml

    Example output

     [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ***************************************************************************
    
    TASK [Gathering Facts] *********************************************************************
    ok: [localhost]
    
    TASK [test1 : set test namespace to present] ***********************************************
    changed: [localhost]
    
    PLAY RECAP *********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0

  7. Check that the namespace was created:

    $ oc get namespace

    Example output

    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
    test          Active    3s

  8. Rerun the playbook setting state to absent:

    $ ansible-playbook playbook.yml --extra-vars state=absent

    Example output

     [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ***************************************************************************
    
    TASK [Gathering Facts] *********************************************************************
    ok: [localhost]
    
    TASK [test1 : set test namespace to absent] ************************************************
    changed: [localhost]
    
    PLAY RECAP *********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0

  9. Check that the namespace was deleted:

    $ oc get namespace

    Example output

    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d

5.4.3.3. Testing the k8s Ansible module inside an Operator

After you are familiar with using the k8s Ansible module locally, you can trigger the same Ansible logic inside of an Operator when a custom resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the watches.yaml file.

5.4.3.3.1. Testing an Ansible-based Operator locally

After getting comfortable testing Ansible workflows locally, you can test the logic inside of an Ansible-based Operator running locally.

To do so, use the operator-sdk run --local command from the top-level directory of your Operator project. This command reads from the watches.yaml file and uses the ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s Ansible module does.

Procedure

  1. Because the run --local command reads from the watches.yaml file, there are options available to the Operator author. If the role field is left at its default value (/opt/ansible/roles/<name>), you must copy the role to the /opt/ansible/roles/ directory on the machine where you run the Operator.

    This is cumbersome because changes are not reflected from the current directory. Instead, change the role field to point to the current directory and comment out the existing line:

    - version: v1alpha1
      group: test1.example.com
      kind: Test1
      #  role: /opt/ansible/roles/Test1
      role: /home/user/test1-operator/Test1
  2. Create a custom resource definition (CRD) and proper role-based access control (RBAC) definitions for the custom resource (CR) Test1. The operator-sdk command autogenerates these files inside of the deploy/ directory:

    $ oc create -f deploy/crds/test1_v1alpha1_test1_crd.yaml
    $ oc create -f deploy/service_account.yaml
    $ oc create -f deploy/role.yaml
    $ oc create -f deploy/role_binding.yaml
  3. Run the run --local command:

    $ operator-sdk run --local

    Example output

    [...]
    INFO[0000] Starting to serve on 127.0.0.1:8888
    INFO[0000] Watching test1.example.com/v1alpha1, Test1, default

  4. Now that the Operator is watching the resource Test1 for events, the creation of a CR triggers your Ansible role to execute. View the deploy/cr.yaml file:

    apiVersion: "test1.example.com/v1alpha1"
    kind: "Test1"
    metadata:
      name: "example"

    Because the spec field is not set, Ansible is invoked with no extra variables. Passing extra variables from a CR to Ansible is covered in the "Extra variables sent to Ansible" section earlier in this guide. This is why it is important to set reasonable defaults for the Operator.

  5. Create a CR instance of Test1 with the default variable state set to present:

    $ oc create -f deploy/cr.yaml
  6. Check that the namespace test was created:

    $ oc get namespace

    Example output

    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
    test          Active    3s

  7. Modify the deploy/cr.yaml file to set the state field to absent:

    apiVersion: "test1.example.com/v1alpha1"
    kind: "Test1"
    metadata:
      name: "example"
    spec:
      state: "absent"
  8. Apply the changes and confirm that the namespace is deleted:

    $ oc apply -f deploy/cr.yaml
    $ oc get namespace

    Example output

    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
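
As a brief illustration of the extra variable mapping mentioned in step 4, consider a CR whose spec sets the state field (a sketch; any additional spec keys are passed through to Ansible in the same way):

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
spec:
  state: "present"

When such a CR is created or updated, the Ansible run for that event receives state=present as an extra variable, overriding the default value of state defined in the role.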

5.4.3.3.2. Testing an Ansible-based Operator on a cluster

After getting familiar with running Ansible logic inside of an Ansible-based Operator locally, you can test the Operator inside of a pod on a Kubernetes cluster, such as OpenShift Container Platform. Running as a pod on a cluster is preferred for production use.

Procedure

  1. Build the test1-operator image and push it to a registry:

    $ operator-sdk build quay.io/example/test1-operator:v0.0.1
    $ podman push quay.io/example/test1-operator:v0.0.1
  2. Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file must be modified from the placeholder REPLACE_IMAGE to the previously-built image. To do so, run the following command:

    $ sed -i 's|REPLACE_IMAGE|quay.io/example/test1-operator:v0.0.1|g' deploy/operator.yaml

    If you are performing these steps on macOS, use the following command instead:

    $ sed -i "" 's|REPLACE_IMAGE|quay.io/example/test1-operator:v0.0.1|g' deploy/operator.yaml
  3. Deploy the test1-operator:

    $ oc create -f deploy/crds/test1_v1alpha1_test1_crd.yaml 1
    $ oc create -f deploy/service_account.yaml
    $ oc create -f deploy/role.yaml
    $ oc create -f deploy/role_binding.yaml
    $ oc create -f deploy/operator.yaml
    1
    Only required if the CRD does not exist already.
  4. Verify that the test1-operator is up and running:

    $ oc get deployment

    Example output

    NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    test1-operator   1         1         1            1           1m

  5. You can now view the Ansible logs for the test1-operator:

    $ oc logs deployment/test1-operator

5.4.4. Managing custom resource status using the operator_sdk.util Ansible collection

Ansible-based Operators automatically update custom resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:

status:
  conditions:
  - ansibleResult:
      changed: 3
      completion: 2018-12-03T13:45:57.13329
      failures: 1
      ok: 6
      skipped: 0
    lastTransitionTime: 2018-12-03T13:45:57Z
    message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno
      113] No route to host>'
    reason: Failed
    status: "True"
    type: Failure
  - lastTransitionTime: 2018-12-03T13:46:13Z
    message: Running reconciliation
    reason: Running
    status: "True"
    type: Running

Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.

By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.

Procedure

  1. To track CR status manually from your application, update the watches.yaml file with a manageStatus field set to false:

    - version: v1
      group: api.example.com
      kind: Test1
      role: Test1
      manageStatus: false
  2. Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key test1 and value test2, operator_sdk.util can be used as shown:

    - operator_sdk.util.k8s_status:
        api_version: app.example.com/v1
        kind: Test1
        name: "{{ meta.name }}"
        namespace: "{{ meta.namespace }}"
        status:
          test1: test2

    Collections can also be declared in the meta/main.yml file of the role, which is included for newly scaffolded Ansible-based Operators:

    collections:
      - operator_sdk.util

    Declaring collections in the role meta allows you to invoke the k8s_status module directly:

    k8s_status:
      <snip>
      status:
        test1: test2
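
Assuming the update above succeeds, the status subresource of the Test1 CR contains the custom key, as in this sketch of the relevant fragment:

status:
  test1: test2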

5.4.5. Additional resources

5.5. Creating Helm-based Operators

This guide outlines Helm chart support in the Operator SDK and walks Operator authors through an example of building and running an Nginx Operator with the operator-sdk CLI tool that uses an existing Helm chart.

5.5.1. Helm chart support in the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.

One of the Operator SDK options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but can be sufficient for a surprising number of use cases, as shown by the proliferation of Helm charts built by the Kubernetes community.

The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the spec field of the object is a list of configuration options that are typically described in the Helm values.yaml file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml), you can express them within a custom resource (CR), which, as a native Kubernetes object, enables the benefits of RBAC applied to it and an audit trail.

Consider the following example of a simple CR called Tomcat:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2

The replicaCount value, 2 in this case, is propagated into the template of the chart where the following is used:

{{ .Values.replicaCount }}
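
As an illustration of how such a value typically flows through a chart, consider the following sketch (the file layout and template excerpt are illustrative, not taken from a specific chart):

# values.yaml: chart default, overridden by the spec of the Tomcat CR above
replicaCount: 1

# templates/deployment.yaml (excerpt): consumes the value when the release is rendered
spec:
  replicas: {{ .Values.replicaCount }}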

After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:

$ oc get Tomcats --all-namespaces

There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a custom resource definition (CRD). Because the CR obeys RBAC, you can more easily prevent production changes.
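
For orientation, the following is a minimal sketch of a CRD that registers the Tomcat kind shown above. The Operator SDK scaffolds an equivalent manifest under deploy/crds/, and the exact API version and schema it generates may differ:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: tomcats.apache.org
spec:
  group: apache.org
  names:
    kind: Tomcat
    listKind: TomcatList
    plural: tomcats
    singular: tomcat
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # The CR spec is free-form here; the chart's values are not validated by this schema.
        x-kubernetes-preserve-unknown-fields: true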

5.5.2. Building a Helm-based Operator using the Operator SDK

This procedure walks through an example of building a simple Nginx Operator powered by a Helm chart using tools and libraries provided by the Operator SDK.

Tip

It is best practice to build a new Operator for each chart. This can allow for more native-behaving Kubernetes APIs (for example, oc get Nginx) and flexibility if you ever want to write a fully-fledged Operator in Go, migrating away from a Helm-based Operator.

Prerequisites

  • Operator SDK v0.19.4 CLI installed on the development workstation
  • Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.6) using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.6+ installed

Procedure

  1. Create a new Operator project. A namespace-scoped Operator watches and manages resources in a single namespace. Namespace-scoped Operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.

    To create a new Helm-based, namespace-scoped nginx-operator project, use the following command:

    $ operator-sdk new nginx-operator \
      --api-version=example.com/v1alpha1 \
      --kind=Nginx \
      --type=helm
    $ cd nginx-operator

    This creates the nginx-operator project specifically for watching the Nginx resource with API version example.com/v1alpha1 and kind Nginx.

  2. Customize the Operator logic.

    For this example, the nginx-operator executes the following reconciliation logic for each Nginx custom resource (CR):

    • Create an Nginx deployment if it does not exist.
    • Create an Nginx service if it does not exist.
    • Create an Nginx ingress if it is enabled and does not exist.
    • Ensure that the deployment, service, and optional ingress match the desired configuration (for example, replica count, image, service type) as specified by the Nginx CR.

    By default, the nginx-operator watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:

    - version: v1alpha1
      group: example.com
      kind: Nginx
      chart: /opt/helm/helm-charts/nginx
    1. Review the Nginx Helm chart.

      When a Helm Operator project is created, the Operator SDK creates an example Helm chart that contains a set of templates for a simple Nginx release.

      For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.

      If you are not already familiar with Helm Charts, review the Helm Chart developer documentation.

    2. Understand the Nginx CR spec.

      Helm uses a concept called values to provide customizations to the defaults of a Helm chart, which are defined in the values.yaml file.

      Override these defaults by setting the desired values in the CR spec. You can use the number of replicas as an example:

      1. First, inspect the helm-charts/nginx/values.yaml file to find that the chart has a value called replicaCount and it is set to 1 by default. To have 2 Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

        Update the deploy/crds/example.com_v1alpha1_nginx_cr.yaml file to look like the following:

        apiVersion: example.com/v1alpha1
        kind: Nginx
        metadata:
          name: example-nginx
        spec:
          replicaCount: 2
      2. Similarly, the default service port is set to 80. To instead use 8080, update the deploy/crds/example.com_v1alpha1_nginx_cr.yaml file again by adding the service port override:

        apiVersion: example.com/v1alpha1
        kind: Nginx
        metadata:
          name: example-nginx
        spec:
          replicaCount: 2
          service:
            port: 8080

        The Helm Operator applies the entire spec as if it were the contents of a values file, just as the helm install -f ./overrides.yaml command does; see the sketch after this procedure.

  3. Deploy the CRD.

    Before running the Operator, Kubernetes must know about the new custom resource definition (CRD) that the Operator will be watching. Deploy the following CRD:

    $ oc create -f deploy/crds/example_v1alpha1_nginx_crd.yaml
  4. Build and run the Operator.

    There are two ways to build and run the Operator:

    • As a pod inside a Kubernetes cluster.
    • As a Go program outside the cluster using the operator-sdk run --local command.

    Choose one of the following methods:

    1. Run as a pod inside a Kubernetes cluster. This is the preferred method for production use.

      1. Build the nginx-operator image and push it to a registry:

        $ operator-sdk build quay.io/example/nginx-operator:v0.0.1
        $ podman push quay.io/example/nginx-operator:v0.0.1
      2. Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file must be modified from the placeholder REPLACE_IMAGE to the previously built image. To do so, run the following command:

        $ sed -i 's|REPLACE_IMAGE|quay.io/example/nginx-operator:v0.0.1|g' deploy/operator.yaml
      3. Deploy the nginx-operator manifests:

        $ oc create -f deploy/service_account.yaml
        $ oc create -f deploy/role.yaml
        $ oc create -f deploy/role_binding.yaml
        $ oc create -f deploy/operator.yaml
      4. Verify that the nginx-operator deployment is up and running:

        $ oc get deployment

        Example output

        NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
        nginx-operator       1         1         1            1           1m

    2. Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.

      It is important that the chart path referenced in the watches.yaml file exists on your machine. By default, the watches.yaml file is scaffolded to work with an Operator image built with the operator-sdk build command. When developing and testing your Operator with the operator-sdk run --local command, the SDK looks in your local file system for this path.

      1. Create a symlink at this location to point to the path of your Helm chart:

        $ sudo mkdir -p /opt/helm/helm-charts
        $ sudo ln -s $PWD/helm-charts/nginx /opt/helm/helm-charts/nginx
      2. To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk run --local

        To run the Operator locally with a provided Kubernetes configuration file:

        $ operator-sdk run --local --kubeconfig=<path_to_config>
  5. Deploy the Nginx CR.

    Apply the Nginx CR that you modified earlier:

    $ oc apply -f deploy/crds/example.com_v1alpha1_nginx_cr.yaml

    Ensure that the nginx-operator creates the deployment for the CR:

    $ oc get deployment

    Example output

    NAME                                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1        2         2         2            2           1m

    Check the pods to confirm two replicas were created:

    $ oc get pods

    Example output

    NAME                                                      READY     STATUS    RESTARTS   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1-f8f9c875d-fjcr9   1/1       Running   0          1m
    example-nginx-b9phnoz9spckcrua7ihrbkrt1-f8f9c875d-ljbzl   1/1       Running   0          1m

    Check that the service port is set to 8080:

    $ oc get service

    Example output

    NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1   ClusterIP   10.96.26.3   <none>        8080/TCP   1m

  6. Update the replicaCount and remove the port.

    Change the spec.replicaCount field from 2 to 3, remove the spec.service field, and apply the change:

    $ cat deploy/crds/example.com_v1alpha1_nginx_cr.yaml

    Example output

    apiVersion: "example.com/v1alpha1"
    kind: "Nginx"
    metadata:
      name: "example-nginx"
    spec:
      replicaCount: 3

    $ oc apply -f deploy/crds/example.com_v1alpha1_nginx_cr.yaml

    Confirm that the Operator changes the deployment size:

    $ oc get deployment

    Example output

    NAME                                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1        3         3         3            3           1m

    Check that the service port is set to the default 80:

    $ oc get service

    Example output

    NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)  AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1   ClusterIP   10.96.26.3   <none>        80/TCP   1m

  7. Clean up the resources:

    $ oc delete -f deploy/crds/example.com_v1alpha1_nginx_cr.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/service_account.yaml
    $ oc delete -f deploy/crds/example_v1alpha1_nginx_crd.yaml
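
For comparison, the original spec of the example-nginx CR used in this procedure (replica count 2, service port 8080) corresponds to a Helm values override file such as the following (a sketch; the overrides.yaml file name is illustrative):

# overrides.yaml: equivalent to the CR spec applied in step 5
replicaCount: 2
service:
  port: 8080

Passing such a file with helm install -f produces roughly the same customization that creating the CR does, except that the Operator, rather than the Helm CLI, manages the release lifecycle.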

5.5.3. Additional resources

5.6. Generating a cluster service version (CSV)

A cluster service version (CSV), defined by a ClusterServiceVersion object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.

The Operator SDK includes the generate csv subcommand to generate a CSV for the current Operator project customized using information contained in manually-defined YAML manifests and Operator source files.

A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.

The CSV version is the same as the Operator version, and a new CSV is generated when upgrading Operator versions. Operator authors can use the --csv-version flag to have their Operator state encapsulated in a CSV with the supplied semantic version:

$ operator-sdk generate csv --csv-version <version>

This action is idempotent and only updates the CSV file when a new version is supplied, or a YAML manifest or source file is changed. Operator authors should not have to directly modify most fields in a CSV manifest. Those that require modification are defined in this guide. For example, the CSV version must be included in metadata.name.

5.6.1. How CSV generation works

The deploy/ directory of an Operator project is the standard location for all manifests required to deploy an Operator. The Operator SDK can use data from manifests in deploy/ to write a cluster service version (CSV).

The following command:

$ operator-sdk generate csv --csv-version <version>

writes a CSV YAML file to the deploy/olm-catalog/ directory by default.

Exactly three types of manifests are required to generate a CSV:

  • operator.yaml
  • *_{crd,cr}.yaml
  • RBAC role files, for example role.yaml

Operator authors may have different versioning requirements for these files and can configure which specific files are included in the deploy/olm-catalog/csv-config.yaml file.

Workflow

Depending on whether an existing CSV is detected, and assuming all configuration defaults are used, the generate csv subcommand either:

  • Creates a new CSV, with the same location and naming convention as exists currently, using available data in YAML manifests and source files.

    1. The update mechanism checks for an existing CSV in deploy/. When one is not found, it creates a ClusterServiceVersion object, referred to here as a cache, and populates fields easily derived from Operator metadata, such as Kubernetes API ObjectMeta.
    2. The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
    3. After the search completes, every cache field populated is written back to a CSV YAML file.

or:

  • Updates an existing CSV at the currently pre-defined location, using available data in YAML manifests and source files.

    1. The update mechanism checks for an existing CSV in deploy/. When one is found, the CSV YAML file contents are marshaled into a CSV cache.
    2. The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
    3. After the search completes, every cache field populated is written back to a CSV YAML file.
Note

Individual YAML fields are overwritten and not the entire file, as descriptions and other non-generated parts of a CSV should be preserved.

5.6.2. CSV composition configuration

Operator authors can configure CSV composition by populating several fields in the deploy/olm-catalog/csv-config.yaml file:

Field | Description

operator-path (string)

The Operator resource manifest file path. Default: deploy/operator.yaml.

crd-cr-path-list (string(, string)*)

A list of CRD and CR manifest file paths. Default: [deploy/crds/*_{crd,cr}.yaml].

rbac-path-list (string(, string)*)

A list of RBAC role manifest file paths. Default: [deploy/role.yaml].
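
A csv-config.yaml file that sets all three fields might look like the following sketch, assuming YAML list syntax for the list-valued fields (the manifest file names are illustrative):

operator-path: deploy/operator.yaml
crd-cr-path-list:
- deploy/crds/app_v1alpha1_app_crd.yaml
- deploy/crds/app_v1alpha1_app_cr.yaml
rbac-path-list:
- deploy/role.yaml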

5.6.3. Manually-defined CSV fields

Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).

Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when it detects missing data in any of the required fields.

The following tables detail which manually-defined CSV fields are required and which are optional.

Table 5.5. Required
Field | Description

metadata.name

A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1.

metadata.capabilities

The capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot.

spec.displayName

A public name to identify the Operator.

spec.description

A short description of the functionality of the Operator.

spec.keywords

Keywords describing the Operator.

spec.maintainers

Human or organizational entities maintaining the Operator, with a name and email.

spec.provider

The provider of the Operator (usually an organization), with a name.

spec.labels

Key-value pairs to be used by Operator internals.

spec.version

Semantic version of the Operator, for example 0.1.1.

spec.customresourcedefinitions

Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in deploy/. However, several fields not in the CRD manifest spec require user input:

  • description: description of the CRD.
  • resources: any Kubernetes resources leveraged by the CRD, for example Pod and StatefulSet objects.
  • specDescriptors: UI hints for inputs and outputs of the Operator.
Table 5.6. Optional
Field | Description

spec.replaces

The name of the CSV being replaced by this CSV.

spec.links

URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url.

spec.selector

Selectors by which the Operator can pair resources in a cluster.

spec.icon

A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype.

spec.maturity

The level of maturity the software has achieved at this version. Options include planning, pre-alpha, alpha, beta, stable, mature, inactive, and deprecated.

Further details on what data each field above should hold are found in the CSV spec.

Note

Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
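
As a point of reference, a CSV fragment with several of the required fields from Table 5.5 populated might look like the following sketch (all values are illustrative):

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: app-operator.v0.1.1
spec:
  displayName: App Operator
  description: Manages instances of the example App workload.
  keywords:
  - app
  - operator
  maintainers:
  - name: Example Maintainer
    email: maintainer@example.com
  provider:
    name: Example Provider
  version: 0.1.1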

5.6.3.1. Operator metadata annotations

Operator developers can manually define certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub.

The following table lists Operator metadata annotations that can be manually defined using metadata.annotations fields.

Table 5.7. Annotations
Field | Description

alm-examples

Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize.

operatorframework.io/initialization-resource

Specify a single required custom resource that must be created at the time that the Operator is installed. Must include a template that contains a complete YAML definition.

operatorframework.io/suggested-namespace

Set a suggested namespace where the Operator should be deployed.

operators.openshift.io/infrastructure-features

Infrastructure features supported by the Operator. Users can view and filter by these features when discovering Operators through OperatorHub in the web console. Valid, case-sensitive values:

  • disconnected: Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require Internet access. All related images required for mirroring are listed by the Operator.
  • cnf: Operator provides a Cloud-native Network Functions (CNF) Kubernetes plug-in.
  • cni: Operator provides a Container Network Interface (CNI) Kubernetes plug-in.
  • csi: Operator provides a Container Storage Interface (CSI) Kubernetes plug-in.
  • fips: Operator accepts the FIPS mode of the underlying platform and works on nodes that are booted into FIPS mode.
Important

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

  • proxy-aware: Operator supports running on a cluster behind a proxy. Operator accepts the standard proxy environment variables HTTP_PROXY and HTTPS_PROXY, which Operator Lifecycle Manager (OLM) provides to the Operator automatically when the cluster is configured to use a proxy. Required environment variables are passed down to Operands for managed workloads.

operators.openshift.io/valid-subscription

Free-form array for listing any specific subscriptions that are required to use the Operator. For example, '["3Scale Commercial License", "Red Hat Managed Integration"]'.

operators.operatorframework.io/internal-objects

Hides CRDs in the UI that are not meant for user manipulation.

Example use cases

Operator supports disconnected and proxy-aware

operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'

Operator requires an OpenShift Container Platform license

operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'

Operator requires a 3scale license

operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'

Operator supports disconnected and proxy-aware, and requires an OpenShift Container Platform license

operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
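
These annotations are set under metadata.annotations in the CSV. For example, the last use case above corresponds to a CSV fragment such as the following (the CSV name is illustrative):

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: app-operator.v0.1.1
  annotations:
    operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
    operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'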

5.6.4. Generating a CSV

Prerequisites

  • An Operator project generated using the Operator SDK

Procedure

  1. In your Operator project, configure your CSV composition by modifying the deploy/olm-catalog/csv-config.yaml file, if desired.
  2. Generate the CSV:

    $ operator-sdk generate csv --csv-version <version>
  3. In the new CSV generated in the deploy/olm-catalog/ directory, ensure all required, manually-defined fields are set appropriately.

5.6.5. Enabling your Operator for restricted network environments

As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.

Operator requirements for supporting disconnected mode

  • In the cluster service version (CSV) of your Operator:

    • List any related images, or other container images that your Operator might require to perform its functions.
    • Reference all specified images by a digest (SHA) and not by a tag.
  • All dependencies of your Operator must also support running in a disconnected mode.
  • Your Operator must not require any off-cluster resources.

For the CSV requirements, you can make the following changes as the Operator author.

Prerequisites

  • An Operator project with a CSV.

Procedure

  1. Use SHA references to related images in two places in the CSV for your Operator:

    1. Update spec.relatedImages:

      ...
      spec:
        relatedImages: 1
          - name: etcd-operator 2
            image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 3
          - name: etcd-image
            image: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68
      ...
      1
      Create a relatedImages section and set the list of related images.
      2
      Specify a unique identifier for the image.
      3
      Specify each image by a digest (SHA), not by an image tag.
    2. Update the env section in the deployment when declaring environment variables that inject the image that the Operator should use:

      spec:
        install:
          spec:
            deployments:
            - name: etcd-operator-v3.1.1
              spec:
                replicas: 1
                selector:
                  matchLabels:
                    name: etcd-operator
                strategy:
                  type: Recreate
                template:
                  metadata:
                    labels:
                      name: etcd-operator
                  spec:
                    containers:
                    - args:
                      - /opt/etcd/bin/etcd_operator_run.sh
                      env:
                      - name: WATCH_NAMESPACE
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.annotations['olm.targetNamespaces']
                      - name: ETCD_OPERATOR_DEFAULT_ETCD_IMAGE 1
                        value: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68 2
                      - name: ETCD_LOG_LEVEL
                        value: INFO
                      image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 3
                      imagePullPolicy: IfNotPresent
                      livenessProbe:
                        httpGet:
                          path: /healthy
                          port: 8080
                        initialDelaySeconds: 10
                        periodSeconds: 30
                      name: etcd-operator
                      readinessProbe:
                        httpGet:
                          path: /ready
                          port: 8080
                        initialDelaySeconds: 10
                        periodSeconds: 30
                      resources: {}
                    serviceAccountName: etcd-operator
          strategy: deployment
      1
      Inject the images referenced by the Operator by using environment variables.
      2
      Specify each image by a digest (SHA), not by an image tag.
      3
      Also reference the Operator container image by a digest (SHA), not by an image tag.
      Note

      When configuring probes, the timeoutSeconds value must be lower than the periodSeconds value. The timeoutSeconds default value is 1. The periodSeconds default value is 10.

  2. Add the disconnected annotation, which indicates that the Operator works in a disconnected environment:

    metadata:
      annotations:
        operators.openshift.io/infrastructure-features: '["disconnected"]'

    Operators can be filtered in OperatorHub by this infrastructure feature.

5.6.6. Enabling your Operator for multiple architectures and operating systems

Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.

If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:

labels:
    operatorframework.io/arch.<arch>: supported 1
    operatorframework.io/os.<os>: supported 2
1
Set <arch> to a supported string.
2
Set <os> to a supported string.
Note

Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.

If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:

labels:
    operatorframework.io/os.linux: supported

If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:

labels:
    operatorframework.io/arch.amd64: supported

If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.

Prerequisites

  • An Operator project with a CSV.
  • To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
  • For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.

Procedure

  • Add a label in the metadata.labels of your CSV for each supported architecture and operating system that your Operator supports:

    labels:
      operatorframework.io/arch.s390x: supported
      operatorframework.io/os.zos: supported
      operatorframework.io/os.linux: supported 1
      operatorframework.io/arch.amd64: supported 2
    1 2
    After you add a new architecture or operating system, you must also now include the default os.linux and arch.amd64 variants explicitly.

5.6.6.1. Architecture and operating system support for Operators

The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:

Table 5.8. Architectures supported on OpenShift Container Platform
Architecture                   String

AMD64                          amd64
64-bit PowerPC little-endian   ppc64le
IBM Z                          s390x

Table 5.9. Operating systems supported on OpenShift Container Platform
Operating system   String

Linux              linux
z/OS               zos

Note

Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.

5.6.7. Setting a suggested namespace

Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, in order to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.

As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When the Operator is added to a cluster by using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.

Procedure

  • In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

    metadata:
      annotations:
        operatorframework.io/suggested-namespace: <namespace> 1
    1
    Set your suggested namespace.

5.6.8. Defining webhooks

Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.

The cluster service version (CSV) resource of an Operator can include a webhookdefinitions section to define the following types of webhooks:

  • Admission webhooks (validating and mutating)
  • Conversion webhooks

Procedure

  • Add a webhookdefinitions section to the spec section of the CSV of your Operator and include any webhook definitions using a type of ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook. The following example contains all three types of webhooks:

    CSV containing webhooks

      apiVersion: operators.coreos.com/v1alpha1
      kind: ClusterServiceVersion
      metadata:
        name: webhook-operator.v0.0.1
      spec:
        customresourcedefinitions:
          owned:
          - kind: WebhookTest
            name: webhooktests.webhook.operators.coreos.io 1
            version: v1
        install:
          spec:
            deployments:
            - name: webhook-operator-webhook
              ...
              ...
              ...
          strategy: deployment
        installModes:
        - supported: false
          type: OwnNamespace
        - supported: false
          type: SingleNamespace
        - supported: false
          type: MultiNamespace
        - supported: true
          type: AllNamespaces
        webhookdefinitions:
        - type: ValidatingAdmissionWebhook 2
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          failurePolicy: Fail
          generateName: vwebhooktest.kb.io
          rules:
          - apiGroups:
            - webhook.operators.coreos.io
            apiVersions:
            - v1
            operations:
            - CREATE
            - UPDATE
            resources:
            - webhooktests
          sideEffects: None
          webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest
        - type: MutatingAdmissionWebhook 3
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          failurePolicy: Fail
          generateName: mwebhooktest.kb.io
          rules:
          - apiGroups:
            - webhook.operators.coreos.io
            apiVersions:
            - v1
            operations:
            - CREATE
            - UPDATE
            resources:
            - webhooktests
          sideEffects: None
          webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest
        - type: ConversionWebhook 4
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          generateName: cwebhooktest.kb.io
          sideEffects: None
          webhookPath: /convert
          conversionCRDs:
          - webhooktests.webhook.operators.coreos.io 5
    ...

    1
    The CRDs targeted by the conversion webhook must exist here.
    2
    A validating admission webhook.
    3
    A mutating admission webhook.
    4
    A conversion webhook.
    5
    The spec.preserveUnknownFields property of each CRD must be set to false or nil.

5.6.8.1. Webhook considerations for OLM

When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:

  • The type field must be set to ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook; otherwise, the CSV is placed in a failed phase.
  • The CSV must contain a deployment whose name is equivalent to the value supplied in the deploymentName field of the webhookdefinition.

When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.

Certificate authority constraints

OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:

  • The TLS certificate file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.crt.
  • The TLS key file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.key.
Admission webhook rules constraints

To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:

  • Requests that target all groups
  • Requests that target the operators.coreos.com group
  • Requests that target the ValidatingWebhookConfigurations or MutatingWebhookConfigurations resources
Conversion webhook constraints

OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:

  • CSVs featuring a conversion webhook can only support the AllNamespaces install mode.
  • The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil.
  • The conversion webhook defined in the CSV must target an owned CRD.
  • There can only be one conversion webhook on the entire cluster for a given CRD.

5.6.9. Understanding your custom resource definitions (CRDs)

There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.

5.6.9.1. Owned CRDs

The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.

It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.

Table 5.10. Owned CRD fields
Field | Description | Required/optional

Name

The full name of your CRD.

Required

Version

The version of that object API.

Required

Kind

The machine readable name of your CRD.

Required

DisplayName

A human readable version of your CRD name, for example MongoDB Standalone.

Required

Description

A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD.

Required

Group

The API group that this CRD belongs to, for example database.example.com.

Optional

Resources

Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs.

There are three types of descriptors:

  • SpecDescriptors: A reference to fields in the spec block of an object.
  • StatusDescriptors: A reference to fields in the status block of an object.
  • ActionDescriptors: A reference to actions that can be performed on an object.

All descriptors accept the following fields:

  • DisplayName: A human readable name for the Spec, Status, or Action.
  • Description: A short description of the Spec, Status, or Action and how it is used by the Operator.
  • Path: A dot-delimited path of the field on the object that this descriptor describes.
  • X-Descriptors: Used to determine which "capabilities" this descriptor has and which UI component to use. See the openshift/console project for a canonical list of React UI X-Descriptors for OpenShift Container Platform.

Also see the openshift/console project for more information on Descriptors in general.

Optional

The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a secret and config map, and orchestrates services, stateful sets, pods and config maps:

Example owned CRD

      - displayName: MongoDB Standalone
        group: mongodb.com
        kind: MongoDbStandalone
        name: mongodbstandalones.mongodb.com
        resources:
          - kind: Service
            name: ''
            version: v1
          - kind: StatefulSet
            name: ''
            version: v1beta2
          - kind: Pod
            name: ''
            version: v1
          - kind: ConfigMap
            name: ''
            version: v1
        specDescriptors:
          - description: Credentials for Ops Manager or Cloud Manager.
            displayName: Credentials
            path: credentials
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
          - description: Project this deployment belongs to.
            displayName: Project
            path: project
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
          - description: MongoDB version to be installed.
            displayName: Version
            path: version
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:label'
        statusDescriptors:
          - description: The status of each of the pods for the MongoDB cluster.
            displayName: Pod Status
            path: pods
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
        version: v1
        description: >-
          MongoDB Deployment consisting of only one host. No replication of
          data.

5.6.9.2. Required CRDs

Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.

An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.

Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a service account created for each Operator to create, watch, and modify the Kubernetes resources required.

Table 5.11. Required CRD fields
Field | Description | Required/optional

Name

The full name of the CRD you require.

Required

Version

The version of that object API.

Required

Kind

The Kubernetes object kind.

Required

DisplayName

A human readable version of the CRD.

Required

Description

A summary of how the component fits in your larger architecture.

Required

Example required CRD

    required:
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster
      displayName: etcd Cluster
      description: Represents a cluster of etcd nodes.

5.6.9.3. CRD upgrades

OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:

  • All existing serving versions in the current CRD are present in the new CRD.
  • All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
5.6.9.3.1. Adding a new CRD version

Procedure

To add a new version of a CRD to your Operator:

  1. Add a new entry in the CRD resource under the versions section of your CSV.

    For example, if the current CRD has a version v1alpha1 and you want to add a new version v1beta1 and mark it as the new storage version, add a new entry for v1beta1:

    versions:
      - name: v1alpha1
        served: true
        storage: false
      - name: v1beta1 1
        served: true
        storage: true
    1
    New entry.
  2. Ensure the referencing version of the CRD in the owned section of your CSV is updated if the CSV intends to use the new version:

    customresourcedefinitions:
      owned:
      - name: cluster.example.com
        version: v1beta1 1
        kind: cluster
        displayName: Cluster
    1
    Update the version.
  3. Push the updated CRD and CSV to your bundle.
5.6.9.3.2. Deprecating or removing a CRD version

Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must be first disabled by setting the served field in the CRD to false. Then, the non-serving version can be removed on the subsequent CRD upgrade.

Procedure

To deprecate and remove a specific version of a CRD:

  1. Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:

    versions:
      - name: v1alpha1
        served: false 1
        storage: true
    1
    Set to false.
  2. Switch the storage version to a serving version if the version to be deprecated is currently the storage version. For example:

    versions:
      - name: v1alpha1
        served: false
        storage: false 1
      - name: v1beta1
        served: true
        storage: true 2
    1 2
    Update the storage fields accordingly.
    Note

    In order to remove a specific version that is or was the storage version from a CRD, that version must be removed from the storedVersion in the status of the CRD. OLM will attempt to do this for you if it detects a stored version no longer exists in the new CRD.

  3. Upgrade the CRD with the above changes.
  4. In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:

    versions:
      - name: v1beta1
        served: true
        storage: true
  5. Ensure the referencing CRD version in the owned section of your CSV is updated accordingly if that version is removed from the CRD.

5.6.9.4. CRD templates

Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples. Compatible UIs will pre-fill this template for users to further customize.

The annotation consists of a list of objects, each specifying the kind, for example the CRD name, and the corresponding metadata and spec of the Kubernetes object.

The following full example provides templates for EtcdCluster, EtcdBackup and EtcdRestore:

metadata:
  annotations:
    alm-examples: >-
      [{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]

5.6.9.5. Hiding internal objects

It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.

As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects annotation to the cluster service version (CSV) of your Operator.

Procedure

  1. Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec block of your CR, if applicable to your Operator.
  2. Add the operators.operatorframework.io/internal-objects annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:

    Internal object annotation

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' 1
    ...

    1
    Set any internal CRDs as an array of strings.

5.6.9.6. Initializing required custom resources

An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.

As an Operator developer, you can specify a single required custom resource that must be created at the time that the Operator is installed by adding the operatorframework.io/initialization-resource annotation to the cluster service version (CSV). The annotation must include a template that contains a complete YAML definition that is required to initialize the resource during installation.

If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.

Procedure

  • Add the operatorframework.io/initialization-resource annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of a StorageCluster resource and provides a full YAML definition:

    Initialization resource annotation

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operatorframework.io/initialization-resource: |-
            {
                "apiVersion": "ocs.openshift.io/v1",
                "kind": "StorageCluster",
                "metadata": {
                    "name": "example-storagecluster"
                },
                "spec": {
                    "manageNodes": false,
                    "monPVCTemplate": {
                        "spec": {
                            "accessModes": [
                                "ReadWriteOnce"
                            ],
                            "resources": {
                                "requests": {
                                    "storage": "10Gi"
                                }
                            },
                            "storageClassName": "gp2"
                        }
                    },
                    "storageDeviceSets": [
                        {
                            "count": 3,
                            "dataPVCTemplate": {
                                "spec": {
                                    "accessModes": [
                                        "ReadWriteOnce"
                                    ],
                                    "resources": {
                                        "requests": {
                                            "storage": "1Ti"
                                        }
                                    },
                                    "storageClassName": "gp2",
                                    "volumeMode": "Block"
                                }
                            },
                            "name": "example-deviceset",
                            "placement": {},
                            "portable": true,
                            "resources": {}
                        }
                    ]
                }
            }
    ...

5.6.10. Understanding your API services

As with CRDs, there are two types of API services that your Operator may use: owned and required.

5.6.10.1. Owned API services

When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server that backs it and the group/version/kind (GVK) it provides.

An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.
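
For example, the following is a minimal sketch of how an owned API service might be declared in the spec.apiservicedefinitions.owned section of a CSV, using the fields described in the table that follows. All values are placeholders:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
spec:
  apiservicedefinitions:
    owned:
    - group: database.example.com
      version: v1alpha1
      kind: MongoDBStandalone
      name: mongodbstandalones.database.example.com
      deploymentName: example-apiserver
      displayName: MongoDB Standalone
      description: Deploys a standalone MongoDB instance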

Table 5.12. Owned API service fields
FieldDescriptionRequired/optional

Group

Group that the API service provides, for example database.example.com.

Required

Version

Version of the API service, for example v1alpha1.

Required

Kind

A kind that the API service is expected to provide.

Required

Name

The plural name for the API service provided.

Required

DeploymentName

Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the InstallStrategy of your CSV for a Deployment spec with a matching name, and if not found, does not transition the CSV to the "Install Ready" phase.

Required

DisplayName

A human readable version of your API service name, for example MongoDB Standalone.

Required

Description

A short description of how this API service is used by the Operator or a description of the functionality provided by the API service.

Required

Resources

Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

Essentially the same as for owned CRDs.

Optional

5.6.10.1.1. API service resource creation

Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:

  • Service pod selectors are copied from the CSV deployment matching the DeploymentName field of the API service description.
  • A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.
5.6.10.1.2. API service serving certificates

OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.

The certificate is stored as a type kubernetes.io/tls secret in the deployment namespace, and a volume named apiservice-cert is automatically appended to the volumes section of the deployment in the CSV matching the DeploymentName field of the API service description.

If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates and any existing volume mounts with the same path are replaced.
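
For example, the following sketch shows how a deployment in the CSV might declare a volume mount named apiservice-cert to control where the generated serving certificate is mounted. The deployment name and mount path are placeholders:

spec:
  install:
    spec:
      deployments:
      - name: example-apiserver
        spec:
          ...
          template:
            ...
            spec:
              containers:
              - name: example-apiserver
                ...
                volumeMounts:
                - name: apiservice-cert
                  mountPath: /etc/example/certificates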

5.6.10.2. Required API services

OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.

Table 5.13. Required API service fields
FieldDescriptionRequired/optional

Group

Group that the API service provides, for example database.example.com.

Required

Version

Version of the API service, for example v1alpha1.

Required

Kind

A kind that the API service is expected to provide.

Required

DisplayName

A human readable version of your API service name, for example MongoDB Standalone.

Required

Description

A short description of how this API service is used by the Operator or a description of the functionality provided by the API service.

Required

5.7. Working with bundle images

You can use the Operator SDK to package Operators using the Bundle Format.

5.7.1. Building a bundle image

You can build, push, and validate an Operator bundle image using the Operator SDK.

Prerequisites

  • Operator SDK version 0.19.4
  • podman version 1.9.3+
  • An Operator project generated using the Operator SDK
  • Access to a registry that supports Docker v2-2

    Important

    The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.

Procedure

  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
      Note

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use --build-arg to set the value. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Update your Makefile by setting the IMG URL to your Operator image name and tag that you pushed:

    # Image URL to use all building/pushing image targets
    IMG ?= <registry>/<user>/<operator_image_name>:<tag>

    This value is used for subsequent operations.

  3. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
    • A bundle metadata directory named bundle/metadata
    • All custom resource definitions (CRDs) in a config/crd directory
    • A Dockerfile named bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  4. Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
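
Optionally, you can confirm that the pushed bundle image is well formed by validating it, as described for the bundle validate subcommand later in this chapter. For example, with placeholder values:

$ operator-sdk bundle validate <registry>/<user>/<bundle_image_name>:<tag> -b docker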

5.7.2. Additional resources

5.8. Validating Operators using the scorecard

Operator authors should validate that their Operator is packaged correctly and free of syntax errors. As an Operator author, you can use the Operator SDK scorecard tool to validate your Operator packaging and run tests.

Note

OpenShift Container Platform 4.6 supports Operator SDK v0.19.4.

5.8.1. About the scorecard tool

To validate an Operator, the scorecard tool provided by the Operator SDK begins by creating all resources required by any related custom resources (CRs) and the Operator. The scorecard then creates a proxy container in the deployment of the Operator which is used to record calls to the API server and run some of the tests. The tests performed also examine some of the parameters in the CRs.

5.8.2. Scorecard configuration

The scorecard tool uses a configuration file that allows you to configure internal plug-ins, as well as several global configuration options.

5.8.2.1. Configuration file

The default location for the scorecard tool configuration file is <project_dir>/.osdk-scorecard.*. The following is an example of a YAML-formatted configuration file:

Scorecard configuration file

scorecard:
  output: json
  plugins:
    - basic: 1
        cr-manifest:
          - "deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml"
          - "deploy/crds/cache.example.com_v1alpha1_memcachedrs_cr.yaml"
    - olm: 2
        cr-manifest:
          - "deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml"
          - "deploy/crds/cache.example.com_v1alpha1_memcachedrs_cr.yaml"
        csv-path: "deploy/olm-catalog/memcached-operator/0.0.3/memcached-operator.v0.0.3.clusterserviceversion.yaml"

1
basic tests configured to test two custom resources (CRs).
2
olm tests configured to test two CRs.

Configuration methods for global options take the following priority, highest to lowest:

  1. Command arguments (if available)
  2. Configuration file
  3. Default

The configuration file must be in YAML format. As the configuration file might be extended to allow configuration of all operator-sdk subcommands in the future, the scorecard configuration must be under a scorecard subsection.

Note

Configuration file support is provided by the viper package. For more info on how viper configuration works, see the README.

5.8.2.2. Command arguments

While most of the scorecard tool configuration is done using a configuration file, you can also use the following arguments:

Table 5.14. Scorecard tool arguments
FlagTypeDescription

--bundle, -b

string

The path to a bundle directory used for the bundle validation test.

--config

string

The path to the scorecard configuration file. The default is <project_dir>/.osdk-scorecard. The file type and extension must be .yaml. If a configuration file is not provided or found at the default location, the scorecard exits with an error.

--output, -o

string

Output format. Valid options are text and json. The default format is text, which is designed to be a human readable format. The json format uses the JSON schema output format used for plug-ins defined later.

--kubeconfig

string

The path to the kubeconfig file. It sets the kubeconfig for internal plug-ins.

--version

string

The version of scorecard to run. The default and only valid option is v1alpha2.

--selector, -l

string

The label selector to filter tests on.

--list, -L

bool

If true, only print the test names that would be run based on selector filtering.

5.8.2.3. Configuration file options

The scorecard configuration file provides the following options:

Table 5.15. Scorecard configuration file options
OptionTypeDescription

bundle

string

Equivalent of the --bundle flag. Operator Lifecycle Manager (OLM) bundle directory path, when specified, runs bundle validation.

output

string

Equivalent of the --output flag. If this option is defined by both the configuration file and the flag, the flag value takes priority.

kubeconfig

string

Equivalent of the --kubeconfig flag. If this option is defined by both the configuration file and the flag, the flag value takes priority.

plugins

array

An array of plug-in names.

5.8.2.3.1. Basic and OLM plug-ins

The scorecard supports the internal basic and olm plug-ins, which are configured by a plugins section in the configuration file.

Table 5.16. Plug-in options
OptionTypeDescription

cr-manifest

[]string

The path(s) for CRs being tested. Required if olm-deployed is unset or false.

csv-path

string

The path to the cluster service version (CSV) for the Operator. Required for OLM tests or if olm-deployed is set to true.

olm-deployed

bool

Indicates that the CSV and relevant CRDs have been deployed onto the cluster by OLM.

kubeconfig

string

The path to the kubeconfig file. If both the global kubeconfig and this field are set, this field is used for the plug-in.

namespace

string

The namespace to run the plug-ins in. If unset, the default specified by the kubeconfig file is used.

init-timeout

int

Time in seconds until a timeout during initialization of the Operator.

crds-dir

string

The path to the directory containing CRDs that must be deployed to the cluster.

namespaced-manifest

string

The manifest file with all resources that run within a namespace. By default, the scorecard combines the service_account.yaml, role.yaml, role_binding.yaml, and operator.yaml files from the deploy directory into a temporary manifest to use as the namespaced manifest.

global-manifest

string

The manifest containing required resources that run globally (not namespaced). By default, the scorecard combines all CRDs in the crds-dir directory into a temporary manifest to use as the global manifest.
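
For example, the following configuration sketch exercises several of these options for an Operator that has already been deployed by OLM. The CSV path, namespace, and timeout values are placeholders:

scorecard:
  output: text
  plugins:
    - olm:
        olm-deployed: true
        csv-path: "deploy/olm-catalog/memcached-operator/0.0.3/memcached-operator.v0.0.3.clusterserviceversion.yaml"
        namespace: "memcached"
        init-timeout: 60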

Note

Currently, using the scorecard with a CSV does not permit multiple CR manifests to be set through the CLI, configuration file, or CSV annotations. You must tear down your Operator in the cluster, re-deploy, and re-run the scorecard for each CR that is tested.

Additional resources

  • You can either set cr-manifest or your CSV metadata.annotations['alm-examples'] to provide CRs to the scorecard, but not both. See CRD templates for details.

5.8.3. Tests performed

By default, the scorecard tool has a set of internal tests that it can run, available across two internal plug-ins. If multiple CRs are specified for a plug-in, the test environment is fully cleaned up after each CR so that each CR gets a clean testing environment.

Each test has a short name that uniquely identifies the test. This is useful when selecting a specific test or tests to run. For example:

$ operator-sdk scorecard -o text --selector=test=checkspectest
$ operator-sdk scorecard -o text --selector='test in (checkspectest,checkstatustest)'

5.8.3.1. Basic plug-in

The following basic Operator tests are available from the basic plug-in:

Table 5.17. basic plug-in tests
TestDescriptionShort name

Spec Block Exists

This test checks the custom resources (CRs) created in the cluster to make sure that all CRs have a spec block. This test has a maximum score of 1.

checkspectest

Status Block Exists

This test checks the CRs created in the cluster to make sure that all CRs have a status block. This test has a maximum score of 1.

checkstatustest

Writing Into CRs Has An Effect

This test reads the scorecard proxy logs to verify that the Operator is making PUT or POST, or both, requests to the API server, indicating that it is modifying resources. This test has a maximum score of 1.

writingintocrshaseffecttest

5.8.3.2. OLM plug-in

The following Operator Lifecycle Manager (OLM) integration tests are available from the olm plug-in:

Table 5.18. olm plug-in tests
TestDescriptionShort name

OLM Bundle Validation

This test validates the OLM bundle manifests found in the bundle directory as specified by the bundle flag. If the bundle contents contain errors, then the test result output includes the validator log as well as error messages from the validation library.

bundlevalidationtest

Provided APIs Have Validation

This test verifies that the CRDs for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR. This test has a maximum score equal to the number of CRs provided by the cr-manifest option.

crdshavevalidationtest

Owned CRDs Have Resources Listed

This test makes sure that the CRDs for each CR provided by the cr-manifest option have a resources subsection in the owned CRDs section of the CSV. If the test detects used resources that are not listed in the resources section, it lists them in the suggestions at the end of the test. This test has a maximum score equal to the number of CRs provided by the cr-manifest option.

crdshaveresourcestest

Spec Fields With Descriptors

This test verifies that every field in the spec sections of custom resources has a corresponding descriptor listed in the CSV. This test has a maximum score equal to the total number of fields in the spec sections of each custom resource passed in by the cr-manifest option.

specdescriptorstest

Status Fields With Descriptors

This test verifies that every field in the status sections of custom resources has a corresponding descriptor listed in the CSV. This test has a maximum score equal to the total number of fields in the status sections of each custom resource passed in by the cr-manifest option.

statusdescriptorstest

Additional resources

5.8.4. Running the scorecard

Prerequisites

The following prerequisites for the Operator project are checked by the scorecard tool:

  • Access to a cluster running Kubernetes 1.11.3 or later.
  • If you want to use the scorecard to check the integration of your Operator project with Operator Lifecycle Manager (OLM), then a cluster service version (CSV) file is also required. This is a requirement when the olm-deployed option is used.
  • For Operators that were not generated using the Operator SDK (non-SDK Operators):

    • Resource manifests for installing and configuring the Operator and custom resources (CRs).
    • Configuration getter that supports reading from the KUBECONFIG environment variable, such as the clientcmd or controller-runtime configuration getters. This is required for the scorecard proxy to work correctly.

Procedure

  1. Define a .osdk-scorecard.yaml configuration file in your Operator project.
  2. Create the namespace defined in the RBAC files (role_binding).
  3. Run the scorecard from the root directory of your Operator project:

    $ operator-sdk scorecard

    The scorecard return code is 1 if any of the executed tests did not pass and 0 if all selected tests passed.
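
For example, a minimal sketch of using the return code in a script; the output file name is a placeholder:

$ operator-sdk scorecard -o json > scorecard-results.json

$ echo $?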

5.8.5. Running the scorecard with an OLM-managed Operator

The scorecard can be run using a cluster service version (CSV), providing a way to test cluster-ready and non-Operator SDK Operators.

Procedure

  1. The scorecard requires a proxy container in the deployment pod of the Operator to read Operator logs. A few modifications to your CSV and creation of one extra object are required to run the proxy before deploying your Operator with Operator Lifecycle Manager (OLM).

    This step can be performed manually or automated using bash functions. Choose one of the following methods.

    • Manual method:

      1. Create a proxy server secret containing a local kubeconfig file.

        1. Generate a user name using the namespaced owner reference of the scorecard proxy.

          $ echo '{"apiVersion":"","kind":"","name":"scorecard","uid":"","Namespace":"'<namespace>'"}' | base64 -w 0 1
          1
          Replace <namespace> with the namespace your Operator will deploy in.
        2. Write a Config manifest scorecard-config.yaml using the following template, replacing <username> with the base64 user name generated in the previous step:

          apiVersion: v1
          kind: Config
          clusters:
          - cluster:
              insecure-skip-tls-verify: true
              server: http://<username>@localhost:8889
            name: proxy-server
          contexts:
          - context:
              cluster: proxy-server
              user: admin/proxy-server
            name: <namespace>/proxy-server
          current-context: <namespace>/proxy-server
          preferences: {}
          users:
          - name: admin/proxy-server
            user:
              username: <username>
              password: unused
        3. Encode the Config as base64:

          $ cat scorecard-config.yaml | base64 -w 0
        4. Create a Secret manifest scorecard-secret.yaml:

          apiVersion: v1
          kind: Secret
          metadata:
            name: scorecard-kubeconfig
            namespace: <namespace> 1
          data:
            kubeconfig: <kubeconfig_base64> 2
          1
          Replace <namespace> with the namespace your Operator will deploy in.
          2
          Replace <kubeconfig_base64> with the Config encoded as base64.
        5. Apply the secret:

          $ oc apply -f scorecard-secret.yaml
        6. Insert a volume referring to the secret into the deployment for the Operator:

          spec:
            install:
              spec:
                deployments:
                - name: memcached-operator
                  spec:
                    ...
                    template:
                      ...
                      spec:
                        containers:
                        ...
                        volumes:
                        - name: scorecard-kubeconfig 1
                          secret:
                            secretName: scorecard-kubeconfig
                            items:
                            - key: kubeconfig
                              path: config
          1
          Scorecard kubeconfig volume.
      2. Insert a volume mount and KUBECONFIG environment variable into each container in the deployment of your Operator:

        spec:
          install:
            spec:
              deployments:
              - name: memcached-operator
                spec:
                  ...
                  template:
                    ...
                    spec:
                      containers:
                      - name: container1
                        ...
                        volumeMounts:
                        - name: scorecard-kubeconfig 1
                          mountPath: /scorecard-secret
                        env:
                        - name: KUBECONFIG 2
                          value: /scorecard-secret/config
                      - name: container2 3
                        ...
        1
        Scorecard kubeconfig volume mount.
        2
        Scorecard kubeconfig environment variable.
        3
        Repeat the same for this and all other containers.
      3. Insert the scorecard proxy container into the deployment of your Operator:

        spec:
          install:
            spec:
              deployments:
              - name: memcached-operator
                spec:
                  ...
                  template:
                    ...
                    spec:
                      containers:
                      ...
                      - name: scorecard-proxy 1
                        command:
                        - scorecard-proxy
                        env:
                        - name: WATCH_NAMESPACE
                          valueFrom:
                            fieldRef:
                              apiVersion: v1
                              fieldPath: metadata.namespace
                        image: quay.io/operator-framework/scorecard-proxy:master
                        imagePullPolicy: Always
                        ports:
                        - name: proxy
                          containerPort: 8889
        1
        Scorecard proxy container.
    • Automated method:

      The community-operators repository has several bash functions that can perform the previous steps in the procedure for you.

      1. Run the following curl command:

        $ curl -Lo csv-manifest-modifiers.sh \
            https://raw.githubusercontent.com/operator-framework/community-operators/master/scripts/lib/file
      2. Source the csv-manifest-modifiers.sh file:

        $ . ./csv-manifest-modifiers.sh
      3. Create the kubeconfig secret file:

        $ create_kubeconfig_secret_file scorecard-secret.yaml "<namespace>" 1
        1
        Replace <namespace> with the namespace your Operator will deploy in.
      4. Apply the secret:

        $ oc apply -f scorecard-secret.yaml
      5. Insert the kubeconfig volume:

        $ insert_kubeconfig_volume "<csv_file>" 1
        1
        Replace <csv_file> with the path to your CSV manifest.
      6. Insert the kubeconfig secret mount:

        $ insert_kubeconfig_secret_mount "<csv_file>"
      7. Insert the proxy container:

        $ insert_proxy_container "<csv_file>" "quay.io/operator-framework/scorecard-proxy:master"
  2. After inserting the proxy container, follow the steps in the Getting started with the Operator SDK guide to bundle your CSV and custom resource definitions (CRDs) and deploy your Operator on OLM.
  3. After your Operator has been deployed on OLM, define a .osdk-scorecard.yaml configuration file in your Operator project and ensure both the csv-path: <csv_manifest_path> and olm-deployed options are set.
  4. Run the scorecard with both the csv-path: <csv_manifest_path> and olm-deployed options set in your scorecard configuration file:

    $ operator-sdk scorecard

5.9. Configuring built-in monitoring with Prometheus

This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for Operator authors.

5.9.1. Prometheus Operator support

Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.

Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.

5.9.2. Metrics helper

In Go-based Operators generated using the Operator SDK, the following function exposes general metrics about the running program:

func ExposeMetricsPort(ctx context.Context, port int32) (*v1.Service, error)

These metrics are inherited from the controller-runtime library API. By default, the metrics are served on 0.0.0.0:8383/metrics.

A Service object is created with the metrics port exposed, which can then be accessed by Prometheus. The Service object is garbage collected when the leader pod’s root owner is deleted.

The following example is present in the cmd/manager/main.go file in all Operators generated using the Operator SDK:

import (
    "github.com/operator-framework/operator-sdk/pkg/metrics"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

var (
    // Change the below variables to serve metrics on a different host or port.
    metricsHost       = "0.0.0.0" 1
    metricsPort int32 = 8383 2
)
...
func main() {
    ...
    // Pass metrics address to controller-runtime manager
    mgr, err := manager.New(cfg, manager.Options{
        Namespace:          namespace,
        MetricsBindAddress: fmt.Sprintf("%s:%d", metricsHost, metricsPort),
    })

    ...
    // Create Service object to expose the metrics port.
    _, err = metrics.ExposeMetricsPort(ctx, metricsPort)
    if err != nil {
        // handle error
        log.Info(err.Error())
    }
    ...
}
1
The host that the metrics are exposed on.
2
The port that the metrics are exposed on.

5.9.2.1. Modifying the metrics port

Operator authors can modify the port that metrics are exposed on.

Prerequisites

  • Go-based Operator generated using the Operator SDK
  • Kubernetes-based cluster with the Prometheus Operator deployed

Procedure

  • In the cmd/manager/main.go file of the generated Operator, change the value of metricsPort in the following line:

    var metricsPort int32 = 8383

5.9.3. Service monitors

A ServiceMonitor is a custom resource provided by the Prometheus Operator that discovers the Endpoints in Service objects and configures Prometheus to monitor those pods.

In Go-based Operators generated using the Operator SDK, the GenerateServiceMonitor() helper function can take a Service object and generate a ServiceMonitor object based on it.

Additional resources

5.9.3.1. Creating service monitors

Operator authors can add service target discovery for created monitoring services by using the metrics.CreateServiceMonitors() helper function, which accepts the newly created services.

Prerequisites

  • Go-based Operator generated using the Operator SDK
  • Kubernetes-based cluster with the Prometheus Operator deployed

Procedure

  • Add the metrics.CreateServiceMonitors() helper function to your Operator code:

    import (
        "k8s.io/api/core/v1"
        "github.com/operator-framework/operator-sdk/pkg/metrics"
        "sigs.k8s.io/controller-runtime/pkg/client/config"
    )
    func main() {
    
        ...
        // Populate below with the Service(s) for which you want to create ServiceMonitors.
        services := []*v1.Service{}
        // Create one ServiceMonitor per application per namespace.
        // Change the below value to name of the Namespace you want the ServiceMonitor to be created in.
        ns := "default"
        // restConfig is used for talking to the Kubernetes apiserver.
        restConfig, err := config.GetConfig()
        if err != nil {
            // Handle errors here.
        }
    
        // Pass the Service(s) to the helper function, which in turn returns the array of ServiceMonitor objects.
        serviceMonitors, err := metrics.CreateServiceMonitors(restConfig, ns, services)
        if err != nil {
            // Handle errors here.
        }
        ...
    }

5.10. Configuring leader election

During the lifecycle of an Operator, more than one instance may be running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, it is necessary to avoid contention between multiple Operator instances by using leader election. This ensures that only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.

There are two different leader election implementations to choose from, each with its own trade-off:

Leader-for-life
The leader pod only gives up leadership, using garbage collection, when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders, a state also known as split brain. However, this method can be subject to a delay in electing a new leader. For example, when the leader pod is on an unresponsive or partitioned node, the pod-eviction-timeout value dictates how long it takes for the leader pod to be deleted from the node and step down, with a default of 5m. See the Leader-for-life Go documentation for more.
Leader-with-lease
The leader pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.

By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.

5.10.1. Operator leader election examples

The following examples illustrate how to use the two leader election options for an Operator, Leader-for-life and Leader-with-lease.

5.10.1.1. Leader-for-life election

With the Leader-for-life election implementation, a call to leader.Become() blocks the Operator as it retries until it can become the leader by creating the config map named memcached-operator-lock:

import (
  ...
  "github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
  ...
  err = leader.Become(context.TODO(), "memcached-operator-lock")
  if err != nil {
    log.Error(err, "Failed to retry for leader lock")
    os.Exit(1)
  }
  ...
}

If the Operator is not running inside a cluster, leader.Become() simply returns without error to skip the leader election since it cannot detect the name of the Operator.

5.10.1.2. Leader-with-lease election

The Leader-with-lease implementation can be enabled using the Manager Options for leader election:

import (
  ...
  "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
  ...
  opts := manager.Options{
    ...
    LeaderElection: true,
    LeaderElectionID: "memcached-operator-lock",
  }
  mgr, err := manager.New(cfg, opts)
  ...
}

When the Operator is not running in a cluster, the Manager returns an error when starting because it cannot detect the namespace of the Operator in order to create the config map for leader election. You can override this namespace by setting the LeaderElectionNamespace option for the Manager.
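
The following is a sketch, based on the example above, that overrides the leader election namespace in the Manager options. The namespace value is a placeholder:

import (
  ...
  "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
  ...
  opts := manager.Options{
    ...
    LeaderElection:   true,
    LeaderElectionID: "memcached-operator-lock",
    // Override the namespace in which the leader election config map is created,
    // for example when running the Operator outside a cluster during development.
    LeaderElectionNamespace: "memcached",
  }
  mgr, err := manager.New(cfg, opts)
  ...
}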

5.11. Operator SDK CLI reference

This guide documents the Operator SDK CLI commands and their syntax:

$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]

5.11.1. alpha

The operator-sdk alpha command is used to run an alpha subcommand.

5.11.1.1. scorecard

The alpha scorecard subcommand runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely.

Table 5.19. scorecard flags
FlagDescription

-c, --config (string)

Path to scorecard configuration file.

-h, --help

Help output for the scorecard command.

--kubeconfig (string)

Path to kubeconfig file.

-L, --list

List which tests are available to run.

-n, --namespace (string)

Namespace in which to run the test images. Default: default.

-o, --output (string)

Output format for results. Available values are text and json. Default: text.

-l, --selector (string)

Label selector to determine which tests are run.

-s, --service-account (string)

Service account to use for tests. Default: default.

-x, --skip-cleanup

Disable resource cleanup after tests are run.

-w, --wait-time <duration>

Seconds to wait for tests to complete, for example 35s. Default: 30s.
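
For example, to run the scorecard against a bundle directory and produce JSON output, a command of the following form can be used; the path is a placeholder:

$ operator-sdk alpha scorecard <bundle_dir_or_image> -o json --wait-time 60s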

5.11.2. build

The operator-sdk build command compiles the code and builds the executables. After the build completes, the image is built using a local container engine. It must then be pushed to a remote registry.

Table 5.20. build arguments
ArgumentDescription

<image>

The container image to be built, for example quay.io/example/operator:v0.0.1.

Table 5.21. build flags
FlagDescription

--go-build-args (string)

Extra Go build arguments.

--image-build-args (string)

Extra image build arguments as one string.

--image-builder (string)

Tool to build OCI images. Available options are: docker, podman, or buildah. Default: docker.

-h, --help

Usage help output.
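
Example usage; the image name and builder are placeholders:

$ operator-sdk build quay.io/example/operator:v0.0.1 --image-builder podman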

5.11.3. bundle

The operator-sdk bundle command manages Operator bundle metadata.

5.11.3.1. validate

The bundle validate subcommand validates an Operator bundle.

Table 5.22. bundle validate flags
FlagDescription

-h, --help

Help output for the bundle validate subcommand.

-b, --image-builder (string)

Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker, podman, or none. Default: docker.
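
Example usage, validating the on-disk bundle created by the make bundle command and a remote bundle image; values are placeholders:

$ operator-sdk bundle validate ./bundle

$ operator-sdk bundle validate <registry>/<user>/<bundle_image_name>:<tag> -b podman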

5.11.4. cleanup

The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command.

5.11.4.1. packagemanifests

The cleanup packagemanifests subcommand destroys an Operator that was deployed with OLM by using the run packagemanifests command.

Table 5.23. packagemanifests arguments
ArgumentsDescription

--include (string)

The file path to Kubernetes resource manifests, such as role and subscription objects. These supplement or override the defaults generated by run or cleanup.

--install-mode (string)

The OperatorGroup is created with the specified InstallMode. Format: InstallModeType[=ns1,ns2[, ...]]

--kubeconfig (string)

The file path to a Kubernetes configuration file. Default: the location specified by $KUBECONFIG, or the default file rules if the environment variable is not set.

--olm-namespace (string)

The namespace where the OLM is installed. Default: olm.

--operator-namespace (string)

The namespace where the Operator resources are created. The namespace must already exist in the cluster, or be defined in a manifest that is passed to --include.

--operator-version

The version of the Operator to be deployed.

--timeout <duration>

The time to wait for the command to complete before it fails. Default: 2m0s.

-h, --help

Usage help output.

5.11.5. completion

The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier.

Table 5.24. completion subcommands
SubcommandDescription

bash

Generate bash completions.

zsh

Generate zsh completions.

Table 5.25. completion flags
FlagDescription

-h, --help

Usage help output.

For example:

$ operator-sdk completion bash

Example output

# bash completion for operator-sdk                         -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh

5.11.6. create

The operator-sdk create command is used to create, or scaffold, a Kubernetes API.

5.11.6.1. api

The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command.

Table 5.26. create api flags
FlagDescription

-h, --help

Help output for the create api subcommand.

5.11.6.2. webhook

The create webhook subcommand scaffolds a webhook for an API resource. The subcommand must be run in a project that was initialized with the init command.

Table 5.27. create webhook flags
FlagDescription

-h, --help

Help output for the create webhook subcommand.

5.11.7. generate

The operator-sdk generate command invokes a specific generator to generate code as needed.

5.11.7.1. bundle

The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project.

Note

Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence.

Table 5.28. generate bundle flags
FlagDescription

--channels (string)

Comma-separated list of channels to which the bundle belongs. The default value is alpha.

--crds-dir (string)

Root directory for CustomResourceDefinition manifests.

--default-channel (string)

The default channel for the bundle.

--deploy-dir (string)

Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag.

-h, --help

Help for generate bundle.

--input-dir (string)

Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory.

--kustomize-dir (string)

Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests.

--manifests

Generate bundle manifests.

--metadata

Generate bundle metadata and Dockerfile.

--operator-name (string)

Name of the Operator of the bundle.

--output-dir (string)

Directory to write the bundle to.

--overwrite

Overwrite the bundle metadata and Dockerfile if they exist. The default value is true.

-q, --quiet

Run in quiet mode.

--stdout

Write bundle manifest to standard out.

--version (string)

Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator.
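
Example usage; the version and channel values are placeholders:

$ operator-sdk generate bundle --manifests --metadata --version 0.0.1 --channels alpha --default-channel alpha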

5.11.7.2. kustomize

The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator.

5.11.7.2.1. manifests

The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag.

Table 5.29. generate kustomize manifests flags
FlagDescription

--apis-dir (string)

Root directory for API type definitions.

-h, --help

Help for generate kustomize manifests.

--input-dir (string)

Directory containing existing Kustomize files.

--interactive

When set to false, the interactive command prompt for custom metadata is disabled. If no Kustomize base exists, the prompt is presented by default.

--operator-name (string)

Name of the Operator.

--output-dir (string)

Directory where to write Kustomize files.

-q, --quiet

Run in quiet mode.
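
Example usage; the Operator name is a placeholder:

$ operator-sdk generate kustomize manifests --operator-name memcached-operator --interactive=false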

5.11.7.3. packagemanifests

Running the generate packagemanifests subcommand is the first step to publishing your Operator to a catalog, deploying it with OLM, or both. This command generates a set of manifests in a versioned directory and a package manifest file for your Operator. You must run the generate kustomize manifests subcommand first to regenerate the Kustomize bases consumed by this command.

Table 5.30. generate packagemanifests flags
FlagDescription

--channel (string)

The channel name for the generated package.

--crds-dir (string)

The root directory for custom resource definition (CRD) manifests.

--default-channel

Use the channel passed to --channel as the default channel of package manifest file.

--deploy-dir (string)

The root directory for Operator manifests such as deployments and RBAC, for example, deploy. This directory is different from that passed to --input-dir.

--from-version (string)

The semantic version of the Operator, from which it is being upgraded.

-h, --help

Help for generate packagemanifests.

--input-dir (string)

The directory to read existing package manifests from. This directory is the parent of individual versioned package directories, and different from --deploy-dir.

--kustomize-dir (string)

The directory containing Kustomize bases and a kustomization.yaml for operator-framework manifests. Default: config/manifests.

--operator-name (string)

The name of the packaged Operator.

--output-dir (string)

The directory in which to write package manifests.

-q, --quiet

Run in quiet mode.

--stdout

Write package to stdout.

--update-crds

Update custom resource definition (CRD) manifests in this package. Default: true.

-v, --version (string)

The semantic version of the packaged Operator.
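
Example usage; the version and channel values are placeholders:

$ operator-sdk generate packagemanifests --version 0.0.1 --channel alpha --default-channel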

5.11.8. init

The operator-sdk init command initializes an Operator project and generates, or scaffolds, a default project directory layout for the given plug-in.

This command writes the following files:

  • Boilerplate license file
  • PROJECT file with the domain and repository
  • Makefile to build the project
  • go.mod file with project dependencies
  • kustomization.yaml file for customizing manifests
  • Patch file for customizing images for manager manifests
  • Patch file for enabling Prometheus metrics
  • main.go file to run
Table 5.31. init flags
FlagDescription

--help, -h

Help output for the init command.

--plugins (string)

Name and optionally version of the plug-in to initialize the project with. Available plug-ins are ansible.sdk.operatorframework.io/v1, go.kubebuilder.io/v2, go.kubebuilder.io/v3, and helm.sdk.operatorframework.io/v1.

--project-version

Project version. Available values are 2 and 3-alpha, which is the default.
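
Example usage, a hedged sketch for a Go-based project. The --domain and --repo flags are standard scaffolding flags that are not listed in the table above, and the values are placeholders:

$ operator-sdk init --plugins=go.kubebuilder.io/v2 --domain=example.com --repo=github.com/example/memcached-operator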

5.11.9. new

The operator-sdk new command creates a new Operator application and generates (or scaffolds) a default project directory layout based on the input <project_name>.

Table 5.32. new arguments
ArgumentDescription

<project_name>

Name of the new project.

Table 5.33. new flags
FlagDescription

--api-version

Kubernetes API version in the format <group_name>/<version>, for example app.example.com/v1alpha1.

--crd-version

CRD version to generate. Default: v1.

--generate-playbook

Generate an Ansible playbook skeleton. Used with ansible type.

--helm-chart <string>

Initialize Helm Operator with existing Helm chart: <url>, <repo>/<name>, or local path.

--helm-chart-repo <string>

Chart repository URL for the requested Helm chart.

--helm-chart-version <string>

Specific version of the Helm chart. Used only with the helm type. Default: latest version.

--help, -h

Usage and help output.

--kind <string>

CRD kind, for example AppService.

--skip-generation

Skip generation of deepcopy and OpenAPI code and OpenAPI CRD specs.

--type

Type of Operator to initialize: ansible or helm.

Note

Starting with Operator SDK v0.12.0, the --dep-manager flag and support for dep-based projects have been removed. Go projects are now scaffolded to use Go modules.

Example usage for Go project

$ mkdir $GOPATH/src/github.com/example.com/

$ cd $GOPATH/src/github.com/example.com/
$ operator-sdk new app-operator

Example usage for Ansible project

$ operator-sdk new app-operator \
    --type=ansible \
    --api-version=app.example.com/v1alpha1 \
    --kind=AppService

5.11.10. olm

The operator-sdk olm command manages the Operator Lifecycle Manager (OLM) installation in your cluster.

5.11.10.1. install

The olm install subcommand installs OLM in your cluster.

Table 5.34. install arguments
ArgumentDescription

--olm-namespace string

The namespace where OLM is installed. Default: olm.

--timeout <duration>

The time to wait for the command to complete before it fails. Default: 2m0s.

--version string

The version of OLM resources to be installed. Default: latest.

-h, --help

Usage help output.
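
Example usage:

$ operator-sdk olm install --timeout 5m0s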

5.11.10.2. status

The olm status subcommand gets the status of the Operator Lifecycle Manager (OLM) installation in your cluster.

Table 5.35. status arguments
ArgumentDescription

--olm-namespace string

The namespace from where OLM is installed. Default: olm.

--timeout <duration>

The time to wait for the command to complete before it fails. Default: 2m0s.

--version string

The version of the OLM that is installed on your cluster. If unset, operator-sdk attempts to auto-discover the version.

-h, --help

Usage help output.

5.11.10.3. uninstall

The olm uninstall subcommand uninstalls OLM from your cluster.

Table 5.36. uninstall arguments
ArgumentDescription

--olm-namespace (string)

The namespace from where OLM is to be uninstalled. Default: olm.

--timeout <duration>

The time to wait for the command to complete before it fails. Default: 2m0s.

--version (string)

The version of OLM resources to be uninstalled.

-h, --help

Usage help output.

5.11.11. run

The operator-sdk run command provides options that can launch the Operator in various environments.

5.11.11.1. packagemanifests

The run packagemanifests subcommand deploys an Operator’s package manifests with Operator Lifecycle Manager (OLM). The command argument must be set to a valid package manifest root directory, for example, <project_root>/packagemanifests.

Table 5.37. packagemanifests arguments
ArgumentsDescription

--include (string)

The file path to Kubernetes resource manifests, such as role and subscription objects. These supplement or override the defaults generated by run or cleanup.

--install-mode (string)

The OperatorGroup is created with the specified InstallMode. Format: InstallModeType[=ns1,ns2[, ...]].

--kubeconfig (string)

The file path to a Kubernetes configuration file. Default: the location specified by $KUBECONFIG, or the default file rules if the environment variable is not set.

--olm-namespace (string)

The namespace where OLM is installed. Default: olm.

--operator-namespace (string)

The namespace where the Operator resources are created. The namespace must already exist in the cluster, or be defined in a manifest that is passed to --include.

--operator-version (string)

The version of the Operator to deploy.

--timeout <duration>

The time to wait for the command to complete before it fails. Default: 2m0s.

-h, --help

Usage help output.
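
Example usage, a hedged sketch with placeholder values:

$ operator-sdk run packagemanifests <project_root>/packagemanifests \
    --operator-version 0.0.1 \
    --install-mode OwnNamespace=<namespace>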

5.12. Appendices

5.12.1. Operator project scaffolding layout

The operator-sdk CLI generates a number of packages for each Operator project. The following sections provide a basic rundown of each generated file and directory.

5.12.1.1. Ansible-based projects

Ansible-based Operator projects generated using the operator-sdk new --type ansible command contain the following directories and files:

File/foldersPurpose

molecule/

Contains the files that are used for testing the Ansible roles.

roles/

Contains the Ansible roles created for the project.

build/

Contains the Dockerfile and build scripts used to build the Operator.

deploy/

Contains various YAML manifests for registering CRDs, setting up RBAC, and deploying the Operator as a deployment.

requirements.yml

Contains the Ansible content that needs to be installed.

watches.yaml

Contains the group, version, kind, and role.

5.12.1.2. Helm-based projects

Helm-based Operator projects generated using the operator-sdk new --type helm command contain the following directories and files:

File/foldersPurpose

deploy/

Contains various YAML manifests for registering CRDs, setting up RBAC, and deploying the Operator as a Deployment.

helm-charts/<kind>

Contains a Helm chart initialized using the equivalent of the helm create command.

build/

Contains the Dockerfile and build scripts used to build the Operator.

watches.yaml

Contains the group, version, kind, and Helm chart location.
