Chapter 5. Developing Operators
5.1. About the Operator SDK
The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.
Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.
The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator.
Why use the Operator SDK?
The Operator SDK simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring.
The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features:
- High-level APIs and abstractions to write the operational logic more intuitively
- Tools for scaffolding and code generation to quickly bootstrap a new project
- Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster
- Extensions to cover common Operator use cases
- Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
OpenShift Container Platform 4.15 supports Operator SDK 1.31.0.
5.1.1. What are Operators?
For an overview about basic Operator concepts and terminology, see Understanding Operators.
5.1.2. Development workflow
The Operator SDK provides the following workflow to develop a new Operator:
- Create an Operator project by using the Operator SDK command-line interface (CLI).
- Define new resource APIs by adding custom resource definitions (CRDs).
- Specify resources to watch by using the Operator SDK API.
- Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
- Use the Operator SDK CLI to build and generate the Operator deployment manifests.
Figure 5.1. Operator SDK workflow
At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
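For orientation, the following minimal sketch (not SDK-generated output) shows the shape of such a handler in a Go-based project that uses the controller-runtime library; the Example kind and its reconciler are placeholders, not part of any scaffolded project:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ExampleReconciler reconciles a hypothetical Example custom resource.
type ExampleReconciler struct {
	client.Client
}

// Reconcile is called for every event on a watched resource and drives the
// actual state of the cluster toward the state declared in the custom resource.
func (r *ExampleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the object named by the request, compare it to the cluster state,
	// and create, update, or delete resources as needed.
	return ctrl.Result{}, nil
}
```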
5.1.3. Additional resources
5.2. Installing the Operator SDK CLI
The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators.
Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
OpenShift Container Platform 4.15 supports Operator SDK 1.31.0.
5.2.1. Installing the Operator SDK CLI on Linux
You can install the Operator SDK CLI tool on Linux.
Prerequisites
- Go v1.19+
- `docker` v17.03+, `podman` v1.9.3+, or `buildah` v1.7+
Procedure
- Navigate to the OpenShift mirror site.
- From the latest 4.15 directory, download the latest version of the tarball for Linux.
Unpack the archive:
$ tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz
Make the file executable:
$ chmod +x operator-sdk
Move the extracted `operator-sdk` binary to a directory that is on your `PATH`:

$ sudo mv ./operator-sdk /usr/local/bin/operator-sdk

Tip: To check your `PATH`:

$ echo $PATH
Verification
After you install the Operator SDK CLI, verify that it is available:
$ operator-sdk version
Example output
operator-sdk version: "v1.31.0-ocp", ...
5.2.2. Installing the Operator SDK CLI on macOS
You can install the Operator SDK CLI tool on macOS.
Prerequisites
- Go v1.19+
- `docker` v17.03+, `podman` v1.9.3+, or `buildah` v1.7+
Procedure
- For the amd64 and arm64 architectures, navigate to the OpenShift mirror site for the amd64 architecture and the OpenShift mirror site for the arm64 architecture, respectively.
- From the latest 4.15 directory, download the latest version of the tarball for macOS.
Unpack the Operator SDK archive for the amd64 architecture by running the following command:

$ tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz

Unpack the Operator SDK archive for the arm64 architecture by running the following command:

$ tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz
Make the file executable by running the following command:
$ chmod +x operator-sdk
Move the extracted `operator-sdk` binary to a directory that is on your `PATH` by running the following command:

$ sudo mv ./operator-sdk /usr/local/bin/operator-sdk

Tip: Check your `PATH` by running the following command:

$ echo $PATH
Verification
After you install the Operator SDK CLI, verify that it is available by running the following command:
$ operator-sdk version
Example output
operator-sdk version: "v1.31.0-ocp", ...
5.3. Go-based Operators
5.3.1. Getting started with Operator SDK for Go-based Operators
To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.3.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (`oc`) 4.15+ installed
- Go 1.21+
- Logged into an OpenShift Container Platform 4.15 cluster with `oc` with an account that has `cluster-admin` permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.3.1.2. Creating and deploying Go-based Operators
You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator
Change into the project directory:
$ cd memcached-operator
Run the `operator-sdk init` command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator
The command uses the Go plugin by default.
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --resource=true \
    --controller=true \
    --group cache \
    --version v1 \
    --kind Memcached
Build and push the Operator image.
Use the default `Makefile` targets to build and push your Operator. Set `IMG` with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
Run the Operator.
Install the CRD:
$ make install
Deploy the project to the cluster. Set `IMG` to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system
Watch for the Operator to reconcile the CR:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Delete a CR.
Delete a CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.3.1.3. Next steps
- See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator.
5.3.2. Operator SDK tutorial for Go-based Operators
Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK: the `operator-sdk` CLI tool and `controller-runtime` library API
- Operator Lifecycle Manager (OLM): installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators.
5.3.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (`oc`) 4.15+ installed
- Go 1.21+
- Logged into an OpenShift Container Platform 4.15 cluster with `oc` with an account that has `cluster-admin` permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.3.2.2. Creating a project
Use the Operator SDK CLI to create a project called `memcached-operator`.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator
Change to the directory:
$ cd $HOME/projects/memcached-operator
Activate support for Go modules:
$ export GO111MODULE=on
Run the `operator-sdk init` command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator
Note: The `operator-sdk init` command uses the Go plugin by default.

The `operator-sdk init` command generates a `go.mod` file to be used with Go modules. The `--repo` flag is required when creating a project outside of `$GOPATH/src/`, because generated files require a valid module path.
5.3.2.2.1. PROJECT file
Among the files generated by the `operator-sdk init` command is a Kubebuilder `PROJECT` file. Subsequent `operator-sdk` commands, as well as `help` output, that are run from the project root read this file and are aware that the project type is Go. For example:
domain: example.com
layout:
- go.kubebuilder.io/v3
projectName: memcached-operator
repo: github.com/example-inc/memcached-operator
version: "3"
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
5.3.2.2.2. About the Manager
The main program for the Operator is the `main.go` file, which initializes and runs the Manager. The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks.
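A trimmed, hypothetical sketch of what the scaffolded `main.go` file does is shown below; the generated file also wires in logging, metrics, health probes, and leader election, and its exact contents vary by SDK version:

```go
package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
	"github.com/example-inc/memcached-operator/controllers"
)

func main() {
	// The scheme tells the Manager about every API type it serves,
	// including the project's own CRD types.
	scheme := runtime.NewScheme()
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(cachev1.AddToScheme(scheme))

	// Create the Manager; controllers are registered with it before it starts.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		os.Exit(1)
	}

	if err := (&controllers.MemcachedReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		os.Exit(1)
	}

	// Block until the Manager exits.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```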
The Manager can restrict the namespace that all controllers watch for resources:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})
By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the `namespace` option empty:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})
You can also use the `MultiNamespacedCacheBuilder` function to watch a specific set of namespaces:

var namespaces []string
mgr, err := ctrl.NewManager(cfg, manager.Options{
    NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})
5.3.2.2.3. About multi-group APIs
Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command:
$ operator-sdk edit --multigroup=true
This command updates the `PROJECT` file, which should look like the following example:

domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...
For multi-group projects, the API Go type files are created in the `apis/<group>/<version>/` directory, and the controllers are created in the `controllers/<group>/` directory. The Dockerfile is then updated accordingly.
Additional resource
- For more details on migrating to a multi-group project, see the Kubebuilder documentation.
5.3.2.3. Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
Run the following command to create an API with group `cache`, version `v1`, and kind `Memcached`:

$ operator-sdk create api \
    --group=cache \
    --version=v1 \
    --kind=Memcached
When prompted, enter `y` to create both the resource and controller:

Create Resource [y/n]
y
Create Controller [y/n]
y
Example output
Writing scaffold for you to edit...
api/v1/memcached_types.go
controllers/memcached_controller.go
...
This process generates the `Memcached` resource API at `api/v1/memcached_types.go` and the controller at `controllers/memcached_controller.go`.
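Before you edit it, the generated `api/v1/memcached_types.go` file looks roughly like the following skeleton; the `Foo` field is the scaffolding placeholder that you replace in the next section, and the referenced `SchemeBuilder` is defined in the sibling `groupversion_info.go` file:

```go
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// Foo is an example field of Memcached. Edit memcached_types.go to remove/update
	Foo string `json:"foo,omitempty"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// Memcached is the Schema for the memcacheds API
type Memcached struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MemcachedSpec   `json:"spec,omitempty"`
	Status MemcachedStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// MemcachedList contains a list of Memcached
type MemcachedList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Memcached `json:"items"`
}

func init() {
	// Register the Go types with the scheme so clients and controllers can use them.
	SchemeBuilder.Register(&Memcached{}, &MemcachedList{})
}
```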
5.3.2.3.1. Defining the API
Define the API for the `Memcached` custom resource (CR).
Procedure
Modify the Go type definitions at `api/v1/memcached_types.go` to have the following `spec` and `status`:

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
    // +kubebuilder:validation:Minimum=0
    // Size is the size of the memcached deployment
    Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
    // Nodes are the names of the memcached pods
    Nodes []string `json:"nodes"`
}
Update the generated code for the resource type:
$ make generate
Tip: After you modify a `*_types.go` file, you must run the `make generate` command to update the generated code for that resource type.

The above Makefile target invokes the `controller-gen` utility to update the `api/v1/zz_generated.deepcopy.go` file. This ensures your API Go type definitions implement the `runtime.Object` interface that all Kind types must implement.
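For illustration only, the generated deep-copy code resembles the following sketch; the real file is produced by `controller-gen` and must not be edited by hand:

```go
package v1

import (
	runtime "k8s.io/apimachinery/pkg/runtime"
)

// DeepCopyInto copies the receiver into out, including a fresh copy of the
// status node list so the two objects share no memory.
func (in *Memcached) DeepCopyInto(out *Memcached) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	out.Spec = in.Spec
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopyInto copies the status, cloning the Nodes slice.
func (in *MemcachedStatus) DeepCopyInto(out *MemcachedStatus) {
	*out = *in
	if in.Nodes != nil {
		out.Nodes = make([]string, len(in.Nodes))
		copy(out.Nodes, in.Nodes)
	}
}

// DeepCopy returns a new, independent copy of the Memcached object.
func (in *Memcached) DeepCopy() *Memcached {
	if in == nil {
		return nil
	}
	out := new(Memcached)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject satisfies the runtime.Object interface that controller-runtime
// and client-go rely on when handling arbitrary API objects.
func (in *Memcached) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}
```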
5.3.2.3.2. Generating CRD manifests
After the API is defined with `spec` and `status` fields and custom resource definition (CRD) validation markers, you can generate CRD manifests.
Procedure
Run the following command to generate and update CRD manifests:
$ make manifests
This Makefile target invokes the `controller-gen` utility to generate the CRD manifests in the `config/crd/bases/cache.example.com_memcacheds.yaml` file.
5.3.2.3.2.1. About OpenAPI validation
OpenAPIv3 schemas are added to CRD manifests in the `spec.validation` block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated.
Markers, or annotations, are available to configure validations for your API. These markers always have a `+kubebuilder:validation` prefix.
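For example, markers such as the following on the Memcached spec constrain the `size` field in the generated schema; the `Maximum` marker here is an illustrative addition and is not part of the tutorial API:

```go
// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// Size is the size of the memcached deployment
	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=10
	Size int32 `json:"size"`
}
```

With these markers, the API server rejects a Memcached CR whose `spec.size` is negative or greater than 10 before the controller ever sees it.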
Additional resources
For more details on the usage of markers in API code, see the following Kubebuilder documentation:
- For more details about OpenAPIv3 validation schemas in CRDs, see the Kubernetes documentation.
5.3.2.4. Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
For this example, replace the generated controller file `controllers/memcached_controller.go` with the following example implementation:

Example 5.1. Example `memcached_controller.go`
/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 "k8s.io/api/apps/v1" corev1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/types" "reflect" "context" "github.com/go-logr/logr" "k8s.io/apimachinery/pkg/runtime" ctrl "sigs.k8s.io/controller-runtime" "sigs.k8s.io/controller-runtime/pkg/client" ctrllog "sigs.k8s.io/controller-runtime/pkg/log" cachev1 "github.com/example-inc/memcached-operator/api/v1" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.7.0/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues("memcached", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info("Memcached resource not found. Ignoring since object must be deleted") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, "Failed to get Memcached") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, "Failed to get Deployment") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, "Failed to list pods", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, "Failed to update Memcached status") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: "memcached:1.4.36-alpine", Name: "memcached", Command: []string{"memcached", "-m=64", "-o", "modern", "-v"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: "memcached", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{"app": "memcached", "memcached_cr": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }
The example controller runs the following reconciliation logic for each `Memcached` custom resource (CR):

- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the `Memcached` CR spec.
- Update the `Memcached` CR status with the names of the `memcached` pods.
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
5.3.2.4.1. Resources watched by the controller
The `SetupWithManager()` function in `controllers/memcached_controller.go` specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller.

import (
    ...
    appsv1 "k8s.io/api/apps/v1"
    ...
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&cachev1.Memcached{}).
        Owns(&appsv1.Deployment{}).
        Complete(r)
}
`NewControllerManagedBy()` provides a controller builder that allows various controller configurations.

`For(&cachev1.Memcached{})` specifies the `Memcached` type as the primary resource to watch. For each Add, Update, or Delete event for a `Memcached` type, the reconcile loop is sent a reconcile `Request` argument, which consists of a namespace and name key, for that `Memcached` object.

`Owns(&appsv1.Deployment{})` specifies the `Deployment` type as the secondary resource to watch. For each `Deployment` type Add, Update, or Delete event, the event handler maps each event to a reconcile request for the owner of the deployment. In this case, the owner is the `Memcached` object for which the deployment was created.
5.3.2.4.2. Controller configurations
You can initialize a controller by using many other useful configurations. For example:
Set the maximum number of concurrent reconciles for the controller by using the `MaxConcurrentReconciles` option, which defaults to `1`:

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&cachev1.Memcached{}).
        Owns(&appsv1.Deployment{}).
        WithOptions(controller.Options{
            MaxConcurrentReconciles: 2,
        }).
        Complete(r)
}
- Filter watch events using predicates.
- Choose the type of EventHandler to change how a watch event translates to reconcile requests for the reconcile loop. For Operator relationships that are more complex than primary and secondary resources, you can use the `EnqueueRequestsFromMapFunc` handler to transform a watch event into an arbitrary set of reconcile requests. A combined sketch follows this list.
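The following sketch is an illustrative variation on the tutorial's `SetupWithManager()` function, not generated code. It combines two of the options above: an event filter that drops update events which do not change an object's generation, and a higher concurrency limit:

```go
import (
	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/predicate"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		// Ignore update events that do not change an object's generation,
		// such as status-only updates, before they reach the reconciler.
		WithEventFilter(predicate.GenerationChangedPredicate{}).
		// Allow up to two Memcached objects to be reconciled in parallel.
		WithOptions(controller.Options{MaxConcurrentReconciles: 2}).
		Complete(r)
}
```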
For more details on these and other configurations, see the upstream Builder and Controller GoDocs.
5.3.2.4.3. Reconcile loop
Every controller has a reconciler object with a `Reconcile()` method that implements the reconcile loop. The reconcile loop is passed the `Request` argument, which is a namespace and name key used to find the primary resource object, `Memcached`, from the cache:

import (
    ctrl "sigs.k8s.io/controller-runtime"

    cachev1 "github.com/example-inc/memcached-operator/api/v1"
    ...
)

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Lookup the Memcached instance for this reconcile request
    memcached := &cachev1.Memcached{}
    err := r.Get(ctx, req.NamespacedName, memcached)
    ...
}
Based on the return values, result, and error, the request might be requeued and the reconcile loop might be triggered again:
// Reconcile successful - don't requeue
return ctrl.Result{}, nil

// Reconcile failed due to error - requeue
return ctrl.Result{}, err

// Requeue for any reason other than an error
return ctrl.Result{Requeue: true}, nil
You can set the `Result.RequeueAfter` field to requeue the request after a grace period as well:

import "time"

// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
You can return `Result` with `RequeueAfter` set to periodically reconcile a CR.
For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.
5.3.2.4.4. Permissions and RBAC manifests
The controller requires certain RBAC permissions to interact with the resources it manages. These are specified using RBAC markers, such as the following:
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    ...
}
The `ClusterRole` object manifest at `config/rbac/role.yaml` is generated from the previous markers by using the `controller-gen` utility whenever the `make manifests` command is run.
5.3.2.5. Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:

- `HTTP_PROXY`
- `HTTPS_PROXY`
- `NO_PROXY`

This tutorial uses `HTTP_PROXY` as an example environment variable.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Edit the `controllers/memcached_controller.go` file to include the following:

Import the `proxy` package from the `operator-lib` library:

import (
    ...
    "github.com/operator-framework/operator-lib/proxy"
)

Add the `proxy.ReadProxyVarsFromEnv` helper function to the reconcile loop and append the results to the Operand environments:

for i, container := range dep.Spec.Template.Spec.Containers {
    dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...)
}
...

Set the environment variable on the Operator deployment by adding the following to the `config/manager/manager.yaml` file:

containers:
- args:
  - --leader-elect
  - --leader-election-id=ansible-proxy-demo
  image: controller:latest
  name: manager
  env:
  - name: "HTTP_PROXY"
    value: "http_proxy_test"
5.3.2.6. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
Before running your Go-based Operator as either a deployment on OpenShift Container Platform or as a bundle that uses OLM, ensure that your project has been updated to use supported images.
5.3.2.6.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your `~/.kube/config` file and run the Operator locally:

$ make install run

Example output

...
2021-01-10T21:09:29.016-0700  INFO  controller-runtime.metrics  metrics server is starting to listen  {"addr": ":8080"}
2021-01-10T21:09:29.017-0700  INFO  setup  starting manager
2021-01-10T21:09:29.017-0700  INFO  controller-runtime.manager  starting metrics server  {"path": "/metrics"}
2021-01-10T21:09:29.018-0700  INFO  controller-runtime.manager.controller.memcached  Starting EventSource  {"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "source": "kind source: /, Kind="}
2021-01-10T21:09:29.218-0700  INFO  controller-runtime.manager.controller.memcached  Starting Controller  {"reconciler group": "cache.example.com", "reconciler kind": "Memcached"}
2021-01-10T21:09:29.218-0700  INFO  controller-runtime.manager.controller.memcached  Starting workers  {"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "worker count": 1}
5.3.2.6.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Prerequisites
- Prepared your Go-based Operator to run on OpenShift Container Platform by updating the project to use supported images
Procedure
Run the following `make` commands to build and push the Operator image. Modify the `IMG` argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references `GOARCH=amd64` for `go build`. This can be amended to `GOARCH=$TARGETARCH` for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by `--platform`. With Buildah, the `--build-arg` will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example `IMG=<registry>/<user>/<image_name>:<tag>`, in both the commands can also be set in your Makefile. Modify the `IMG ?= controller:latest` value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form `<project_name>-system` and is used for the deployment. This command also installs the RBAC manifests from `config/rbac`.
Run the following command to verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.3.2.6.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.3.2.6.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
-
OpenShift CLI (
oc
) v4.15+ installed - Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following `make` commands in your Operator project directory to build and push your Operator image. Modify the `IMG` argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references `GOARCH=amd64` for `go build`. This can be amended to `GOARCH=$TARGETARCH` for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by `--platform`. With Buildah, the `--build-arg` will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>

Create your Operator bundle manifest by running the `make bundle` command, which invokes several commands, including the Operator SDK `generate bundle` and `bundle validate` subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The `make bundle` command creates the following files and directories in your Operator project:

- A bundle manifests directory named `bundle/manifests` that contains a `ClusterServiceVersion` object
- A bundle metadata directory named `bundle/metadata`
- All custom resource definitions (CRDs) in a `config/crd` directory
- A Dockerfile `bundle.Dockerfile`

These files are then automatically validated by using `operator-sdk bundle validate` to ensure the on-disk bundle representation is correct.
BUNDLE_IMG
with the details for the registry, user namespace, and image tag where you intend to push the image:$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.3.2.6.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (`oc`) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
-
OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use
apiextensions.k8s.io/v1
CRDs, for example OpenShift Container Platform 4.15) -
Logged in to the cluster with
oc
using an account withcluster-admin
permissions - If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \1 -n <namespace> \2 <registry>/<user>/<bundle_image_name>:<tag> 3
- 1
- The
run bundle
command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM. - 2
- Optional: By default, the command installs the Operator in the currently active project in your
~/.kube/config
file. You can add the-n
flag to set a different namespace scope for the installation. - 3
- If you do not specify an image, the command uses
quay.io/operator-framework/opm:latest
as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
ImportantAs of OpenShift Container Platform 4.11, the
run bundle
command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
-
Deploy your Operator to your cluster by creating an
OperatorGroup
,Subscription
,InstallPlan
, and all other required resources, including RBAC.
5.3.2.7. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the `Memcached` CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the `make deploy` command:

$ oc project memcached-operator-system
Edit the sample `Memcached` CR manifest at `config/samples/cache_v1_memcached.yaml` to contain the following specification:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
...
spec:
...
  size: 3
Create the CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml
Ensure that the `Memcached` Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           8m
memcached-sample                        3/3     3            3           1m
Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods
Example output
NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m
Check the CR status:
$ oc get memcached/memcached-sample -o yaml
Example output
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
...
  name: memcached-sample
...
spec:
  size: 3
status:
  nodes:
  - memcached-sample-6fd7c98d8-7dqdr
  - memcached-sample-6fd7c98d8-g5k7v
  - memcached-sample-6fd7c98d8-m7vn7
Update the deployment size.

Update the `config/samples/cache_v1_memcached.yaml` file to change the `spec.size` field in the `Memcached` CR from `3` to `5`:

$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge
Confirm that the Operator changes the deployment size:
$ oc get deployments
Example output
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           10m
memcached-sample                        5/5     5            5           3m
Delete the CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml
Clean up the resources that have been created as part of this tutorial.
If you used the `make deploy` command to test the Operator, run the following command:

$ make undeploy

If you used the `operator-sdk run bundle` command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.3.2.8. Additional resources
- See Project layout for Go-based Operators to learn about the directory structures created by the Operator SDK.
- If a cluster-wide egress proxy is configured, cluster administrators can override the proxy settings or inject a custom CA certificate for specific Operators running on Operator Lifecycle Manager (OLM).
5.3.3. Project layout for Go-based Operators
The `operator-sdk` CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.3.3.1. Go-based project layout
Go-based Operator projects, the default type, generated using the `operator-sdk init` command, contain the following files and directories:
| File or directory | Purpose |
|---|---|
| `main.go` | Main program of the Operator. This instantiates a new manager that registers all custom resource definitions (CRDs) in the `apis/` directory and starts all controllers in the `controllers/` directory. |
| `api/` | Directory tree that defines the APIs of the CRDs. You must edit the `api/<version>/<kind>_types.go` files to define the API for each resource type and import these packages in your controllers to watch for these resource types. |
| `controllers/` | Controller implementations. Edit the `controller/<kind>_controller.go` files to define the reconcile logic of the controller for handling a resource type of the specified kind. |
| `config/` | Kubernetes manifests used to deploy your controller on a cluster, including CRDs, RBAC, and certificates. |
| `Makefile` | Targets used to build and deploy your controller. |
| `Dockerfile` | Instructions used by a container engine to build your Operator. |
| | Kubernetes manifests for registering CRDs, setting up RBAC, and deploying the Operator as a deployment. |
5.3.4. Updating Go-based Operator projects for newer Operator SDK versions
OpenShift Container Platform 4.15 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.3.4.1. Updating Go-based Operator projects for Operator SDK 1.31.0
The following procedure updates an existing Go-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Edit your Operator project’s makefile to update the Operator SDK version to 1.31.0, as shown in the following example:
Example makefile
# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0 1
1: Change the version from `1.28.0` to `1.31.0`.
5.3.4.2. Additional resources
5.4. Ansible-based Operators
5.4.1. Getting started with Operator SDK for Ansible-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running an Ansible-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Ansible-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.4.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (`oc`) 4.15+ installed
- Ansible 2.15.0
- Ansible Runner 2.3.3+
- Ansible Runner HTTP Event Emitter plugin 1.0.0+
- Python 3.9+
- Python Kubernetes client
- Logged into an OpenShift Container Platform 4.15 cluster with `oc` with an account that has `cluster-admin` permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.4.1.2. Creating and deploying Ansible-based Operators
You can build and deploy a simple Ansible-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator
Change into the project directory:
$ cd memcached-operator
Run the `operator-sdk init` command with the `ansible` plugin to initialize the project:

$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role 1

1: Generates an Ansible role for the API.
Build and push the Operator image.
Use the default `Makefile` targets to build and push your Operator. Set `IMG` with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
Run the Operator.
Install the CRD:
$ make install
Deploy the project to the cluster. Set `IMG` to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system
Watch for the Operator to reconcile the CR:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Example output
...
I0205 17:48:45.881666       7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator
{"level":"info","ts":1612547325.8819902,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612547325.98242,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612547325.9824686,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}
{"level":"info","ts":1612547348.8311093,"logger":"runner","msg":"Ansible-runner exited successfully","job":"4037200794235010051","name":"memcached-sample","namespace":"memcached-operator-system"}
Delete a CR.
Delete a CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.4.1.3. Next steps
- See Operator SDK tutorial for Ansible-based Operators for a more in-depth walkthrough on building an Ansible-based Operator.
5.4.2. Operator SDK tutorial for Ansible-based Operators
Operator developers can take advantage of Ansible support in the Operator SDK to build an example Ansible-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This tutorial walks through the following process:
- Create a Memcached deployment
-
Ensure that the deployment size is the same as specified by the
Memcached
custom resource (CR) spec -
Update the
Memcached
CR status using the status writer with the names of thememcached
pods
This process is accomplished by using two centerpieces of the Operator Framework:
- Operator SDK: the `operator-sdk` CLI tool and `controller-runtime` library API
- Operator Lifecycle Manager (OLM): installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Ansible-based Operators.
5.4.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (`oc`) 4.15+ installed
- Ansible 2.15.0
- Ansible Runner 2.3.3+
- Ansible Runner HTTP Event Emitter plugin 1.0.0+
- Python 3.9+
- Python Kubernetes client
- Logged into an OpenShift Container Platform 4.15 cluster with `oc` with an account that has `cluster-admin` permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.4.2.2. Creating a project
Use the Operator SDK CLI to create a project called `memcached-operator`.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator
Change to the directory:
$ cd $HOME/projects/memcached-operator
Run the `operator-sdk init` command with the `ansible` plugin to initialize the project:

$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
5.4.2.2.1. PROJECT file
Among the files generated by the `operator-sdk init` command is a Kubebuilder `PROJECT` file. Subsequent `operator-sdk` commands, as well as `help` output, that are run from the project root read this file and are aware that the project type is Ansible. For example:

domain: example.com
layout:
- ansible.sdk.operatorframework.io/v1
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
projectName: memcached-operator
version: "3"
5.4.2.3. Creating an API
Use the Operator SDK CLI to create a Memcached API.
Procedure
Run the following command to create an API with group `cache`, version `v1`, and kind `Memcached`:

$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role 1

1: Generates an Ansible role for the API.
After creating the API, your Operator project updates with the following structure:
- Memcached CRD
-
Includes a sample
Memcached
resource - Manager
Program that reconciles the state of the cluster to the desired state by using:
- A reconciler, either an Ansible role or playbook
-
A
watches.yaml
file, which connects theMemcached
resource to thememcached
Ansible role
5.4.2.4. Modifying the manager
Update your Operator project to provide the reconcile logic, in the form of an Ansible role, which runs every time a `Memcached` resource is created, updated, or deleted.
Procedure
Update the `roles/memcached/tasks/main.yml` file with the following structure:

---
- name: start memcached
  k8s:
    definition:
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: '{{ ansible_operator_meta.name }}-memcached'
        namespace: '{{ ansible_operator_meta.namespace }}'
      spec:
        replicas: "{{size}}"
        selector:
          matchLabels:
            app: memcached
        template:
          metadata:
            labels:
              app: memcached
          spec:
            containers:
            - name: memcached
              command:
              - memcached
              - -m=64
              - -o
              - modern
              - -v
              image: "docker.io/memcached:1.4.36-alpine"
              ports:
              - containerPort: 11211
This `memcached` role ensures a `memcached` deployment exists and sets the deployment size.

Set default values for variables used in your Ansible role by editing the `roles/memcached/defaults/main.yml` file:

---
# defaults file for Memcached
size: 1
Update the `Memcached` sample resource in the `config/samples/cache_v1_memcached.yaml` file with the following structure:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  labels:
    app.kubernetes.io/name: memcached
    app.kubernetes.io/instance: memcached-sample
    app.kubernetes.io/part-of: memcached-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: memcached-operator
  name: memcached-sample
spec:
  size: 3
The key-value pairs in the custom resource (CR) spec are passed to Ansible as extra variables.

The names of all variables in the `spec` field are converted to snake case, meaning lowercase with an underscore, by the Operator before running Ansible. For example, `serviceAccount` in the spec becomes `service_account` in Ansible.

You can disable this case conversion by setting the `snakeCaseParameters` option to `false` in your `watches.yaml` file. It is recommended that you perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.
5.4.2.5. Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:

- `HTTP_PROXY`
- `HTTPS_PROXY`
- `NO_PROXY`

This tutorial uses `HTTP_PROXY` as an example environment variable.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Add the environment variables to the deployment by updating the `roles/memcached/tasks/main.yml` file with the following:

...
  env:
  - name: HTTP_PROXY
    value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
  - name: http_proxy
    value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
...
Set the environment variable on the Operator deployment by adding the following to the `config/manager/manager.yaml` file:

containers:
- args:
  - --leader-elect
  - --leader-election-id=ansible-proxy-demo
  image: controller:latest
  name: manager
  env:
  - name: "HTTP_PROXY"
    value: "http_proxy_test"
5.4.2.6. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.4.2.6.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your `~/.kube/config` file and run the Operator locally:

$ make install run

Example output

...
{"level":"info","ts":1612589622.7888272,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
{"level":"info","ts":1612589622.7897573,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
{"level":"info","ts":1612589622.789971,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1612589622.7899997,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612589622.8904517,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612589622.8905244,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}
5.4.2.6.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following `make` commands to build and push the Operator image. Modify the `IMG` argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references `GOARCH=amd64` for `go build`. This can be amended to `GOARCH=$TARGETARCH` for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by `--platform`. With Buildah, the `--build-arg` will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example `IMG=<registry>/<user>/<image_name>:<tag>`, in both the commands can also be set in your Makefile. Modify the `IMG ?= controller:latest` value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form `<project_name>-system` and is used for the deployment. This command also installs the RBAC manifests from `config/rbac`.
Run the following command to verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.4.2.6.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.4.2.6.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
-
OpenShift CLI (
oc
) v4.15+ installed - Operator project initialized by using the Operator SDK
Procedure
Run the following
make
commands in your Operator project directory to build and push your Operator image. Modify theIMG
argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:
$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:
$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.4.2.6.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc
) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.15)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \ 1
    -n <namespace> \ 2
    <registry>/<user>/<bundle_image_name>:<tag> 3
1 The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
2 Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
3 If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.
This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
-
Deploy your Operator to your cluster by creating an
OperatorGroup
,Subscription
,InstallPlan
, and all other required resources, including RBAC.
5.4.2.7. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
-
Example Memcached Operator, which provides the
Memcached
CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:
$ oc project memcached-operator-system
Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
...
spec:
...
  size: 3
Create the CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml
Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:
$ oc get deployments
Example output
NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m
Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m
Check the CR status:
$ oc get memcached/memcached-sample -o yaml
Example output
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
...
  name: memcached-sample
...
spec:
  size: 3
status:
  nodes:
  - memcached-sample-6fd7c98d8-7dqdr
  - memcached-sample-6fd7c98d8-g5k7v
  - memcached-sample-6fd7c98d8-m7vn7
Update the deployment size.
Change the spec.size field in the Memcached CR from 3 to 5, either by editing the config/samples/cache_v1_memcached.yaml file and reapplying it or by patching the CR directly:
$ oc patch memcached memcached-sample \ -p '{"spec":{"size": 5}}' \ --type=merge
Confirm that the Operator changes the deployment size:
$ oc get deployments
Example output
NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m
Delete the CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml
Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:
$ make undeploy
If you used the operator-sdk run bundle command to test the Operator, run the following command:
$ operator-sdk cleanup <project_name>
5.4.2.8. Additional resources
- See Project layout for Ansible-based Operators to learn about the directory structures created by the Operator SDK.
- If a cluster-wide egress proxy is configured, cluster administrators can override the proxy settings or inject a custom CA certificate for specific Operators running on Operator Lifecycle Manager (OLM).
5.4.3. Project layout for Ansible-based Operators
The operator-sdk
CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.4.3.1. Ansible-based project layout
Ansible-based Operator projects generated using the operator-sdk init --plugins ansible
command contain the following directories and files:
File or directory | Purpose |
---|---|
Dockerfile | Dockerfile for building the container image for the Operator. |
Makefile | Targets for building, publishing, deploying the container image that wraps the Operator binary, and targets for installing and uninstalling the custom resource definition (CRD). |
PROJECT | YAML file containing metadata information for the Operator. |
config/crd | Base CRD files and the kustomization.yaml settings. |
config/default | Collects all Operator manifests for deployment. Used by the make deploy command. |
config/manager | Controller manager deployment. |
config/prometheus | ServiceMonitor resource for monitoring the Operator. |
config/rbac | Role and role binding for leader election and authentication proxy. |
config/samples | Sample resources created for the CRDs. |
config/testing | Sample configurations for testing. |
playbooks/ | A subdirectory for the playbooks to run. |
roles/ | Subdirectory for the roles tree to run. |
watches.yaml | Group/version/kind (GVK) of the resources to watch, and the Ansible invocation method. New entries are added by using the create api command. |
requirements.yml | YAML file containing the Ansible collections and role dependencies to install during a build. |
molecule/ | Molecule scenarios for end-to-end testing of your role and Operator. |
5.4.4. Updating projects for newer Operator SDK versions
OpenShift Container Platform 4.15 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.4.4.1. Updating Ansible-based Operator projects for Operator SDK 1.31.0
The following procedure updates an existing Ansible-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Make the following changes to your Operator’s Dockerfile:
Replace the ansible-operator-2.11-preview base image with the ansible-operator base image and update the version to 1.31.0, as shown in the following example:
Example Dockerfile
FROM quay.io/operator-framework/ansible-operator:v1.31.0
The update to Ansible 2.15.0 in version 1.30.0 of the Ansible Operator removed the following preinstalled Python modules:
- ipaddress
- openshift
- jmespath
- cryptography
- oauthlib
If your Operator depends on one of these removed Python modules, update your Dockerfile to install the required modules by using the pip install command.
Edit your Operator project’s makefile to update the Operator SDK version to 1.31.0, as shown in the following example:
Example makefile
# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0 1
1 Change the version from 1.28.0 to 1.31.0.
Update your requirements.yaml and requirements.go files to remove the community.kubernetes collection and update the operator_sdk.util collection to version 0.5.0, as shown in the following example:
Example requirements.yaml file
collections:
- - name: community.kubernetes 1
-   version: "2.0.1"
  - name: operator_sdk.util
-   version: "0.4.0"
+   version: "0.5.0" 2
  - name: kubernetes.core
    version: "2.4.0"
  - name: cloud.common
Remove all instances of the lint field from your molecule/kind/molecule.yml and molecule/default/molecule.yml files, as shown in the following example:
---
dependency:
  name: galaxy
driver:
  name: delegated
- lint: |
-   set -e
-   yamllint -d "{extends: relaxed, rules: {line-length: {max: 120}}}" .
platforms:
  - name: cluster
    groups:
      - k8s
provisioner:
  name: ansible
- lint: |
-   set -e
-   ansible-lint
  inventory:
    group_vars:
      all:
        namespace: ${TEST_OPERATOR_NAMESPACE:-osdk-test}
    host_vars:
      localhost:
        ansible_python_interpreter: '{{ ansible_playbook_python }}'
        config_dir: ${MOLECULE_PROJECT_DIRECTORY}/config
        samples_dir: ${MOLECULE_PROJECT_DIRECTORY}/config/samples
        operator_image: ${OPERATOR_IMAGE:-""}
        operator_pull_policy: ${OPERATOR_PULL_POLICY:-"Always"}
        kustomize: ${KUSTOMIZE_PATH:-kustomize}
  env:
    K8S_AUTH_KUBECONFIG: ${KUBECONFIG:-"~/.kube/config"}
verifier:
  name: ansible
- lint: |
-   set -e
-   ansible-lint
5.4.4.2. Additional resources
5.4.5. Ansible support in Operator SDK
5.4.5.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
Field | Description |
---|---|
apiVersion | Version of the CR to be created. |
kind | Kind of the CR to be created. |
metadata | Kubernetes-specific metadata to be created. |
spec | Key-value list of variables which are passed to Ansible. This field is empty by default. |
status | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is updated automatically with generic information about the previous Ansible run. |
annotations | Kubernetes-specific annotations to be appended to the CR. |
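For orientation, a minimal CR that uses these fields might look like the following sketch. The group, kind, and spec keys are illustrative only; the keys under spec become variables that are passed to Ansible:
apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
spec:
  message: "Hello world"   # illustrative variable passed to Ansible
  size: 1                  # illustrative variable passed to Ansible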
The following CR annotations modify the behavior of the Operator:
Annotation | Description |
---|---|
ansible.operator-sdk/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed by using the standard Golang time package duration format. |
Example Ansible-based Operator annotation
apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
  annotations:
    ansible.operator-sdk/reconcile-period: "30s"
5.4.5.2. watches.yaml file
A group/version/kind (GVK) is a unique identifier for a Kubernetes API. The watches.yaml file contains a list of mappings from custom resources (CRs), identified by their GVK, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location at /opt/ansible/watches.yaml.
Field | Description |
---|---|
group | Group of the CR to watch. |
version | Version of the CR to watch. |
kind | Kind of the CR to watch. |
role | Path to the Ansible role added to the container, for example /opt/ansible/roles/Test1. This field is mutually exclusive with the playbook field. |
playbook | Path to the Ansible playbook added to the container. This playbook is expected to be a way to call roles. This field is mutually exclusive with the role field. |
reconcilePeriod (optional) | The reconciliation interval, how often the role or playbook is run, for a given CR. |
manageStatus (optional) | When set to true (the default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, for example by your Ansible content. |
Example watches.yaml file
- version: v1alpha1 1
  group: test1.example.com
  kind: Test1
  role: /opt/ansible/roles/Test1
- version: v1alpha1 2
  group: test2.example.com
  kind: Test2
  playbook: /opt/ansible/playbook.yml
- version: v1alpha1 3
  group: test3.example.com
  kind: Test3
  playbook: /opt/ansible/test3.yml
  reconcilePeriod: 0
  manageStatus: false
5.4.5.2.1. Advanced options
Advanced features can be enabled by adding them to your watches.yaml
file per GVK. They can go below the group
, version
, kind
and playbook
or role
fields.
Some features can be overridden per resource using an annotation on that CR. The options that can be overridden have the annotation specified below.
Feature | YAML key | Description | Annotation for override | Default value |
---|---|---|---|---|
Reconcile period | reconcilePeriod | Time between reconcile runs for a particular CR. | ansible.operator-sdk/reconcile-period | 1m |
Manage status | manageStatus | Allows the Operator to manage the conditions section of each CR status section. | | true |
Watch dependent resources | watchDependentResources | Allows the Operator to dynamically watch resources that are created by Ansible. | | true |
Watch cluster-scoped resources | watchClusterScopedResources | Allows the Operator to watch cluster-scoped resources that are created by Ansible. | | false |
Max runner artifacts | maxRunnerArtifacts | Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. | ansible.operator-sdk/max-runner-artifacts | 20 |
Example watches.yml file with advanced options
- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False
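For illustration, the reconcile period can also be overridden for a single resource by using the ansible.operator-sdk/reconcile-period annotation from the table above. The following is a minimal sketch; the resource name and spec values are hypothetical:
apiVersion: app.example.com/v1alpha1
kind: AppService
metadata:
  name: appservice-sample  # hypothetical resource name
  annotations:
    ansible.operator-sdk/reconcile-period: "15s"  # overrides the reconcilePeriod from watches.yaml for this CR only
spec:
  size: 1  # hypothetical variable passed to Ansible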
5.4.5.3. Extra variables sent to Ansible
Extra variables can be sent to Ansible, which are then managed by the Operator. The spec
section of the custom resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook
command.
The Operator also passes along additional variables under the meta
field for the name of the CR and the namespace of the CR.
For the following CR example:
apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world 2"
  newParameter: "newParam"
The structure passed to Ansible as extra variables is:
{ "meta": {
        "name": "<cr_name>",
        "namespace": "<cr_namespace>",
  },
  "message": "Hello world 2",
  "new_parameter": "newParam",
  "_app_example_com_database": {
     <full_crd>
  },
}
The message
and newParameter
fields are set in the top level as extra variables, and meta
provides the relevant metadata for the CR as defined in the Operator. The meta
fields can be accessed using dot notation in Ansible, for example:
---
- debug:
    msg: "name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}"
5.4.5.4. Ansible Runner directory
Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>
.
Additional resources
-
To learn more about the
runner
directory, see the Ansible Runner documentation.
5.4.6. Kubernetes Collection for Ansible
To manage the lifecycle of your application on Kubernetes using Ansible, you can use the Kubernetes Collection for Ansible. This collection of Ansible modules allows a developer to either leverage their existing Kubernetes resource files written in YAML or express the lifecycle management in native Ansible.
One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
This section goes into detail on usage of the Kubernetes Collection. To get started, install the collection on your local workstation and test it using a playbook before moving on to using it within an Operator.
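As an illustration of that Jinja templating benefit, the following is a minimal, hypothetical Ansible task that renders an existing deployment manifest template and applies it with the community.kubernetes.k8s module. The template file name and variables are assumptions for this sketch, not files generated by the Operator SDK:
# Hypothetical sketch: apply an existing manifest, substituting Jinja variables at render time
- name: Create the Nginx deployment from a templated resource file
  community.kubernetes.k8s:
    state: present
    definition: "{{ lookup('template', 'nginx-deployment.yaml.j2') | from_yaml }}"
  vars:
    image_tag: "1.25"      # substituted into the template
    replica_count: 2       # substituted into the template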
5.4.6.1. Installing the Kubernetes Collection for Ansible
You can install the Kubernetes Collection for Ansible on your local workstation.
Procedure
Install Ansible 2.15+:
$ sudo dnf install ansible
Install the Python Kubernetes client package:
$ pip install kubernetes
Install the Kubernetes Collection using one of the following methods:
You can install the collection directly from Ansible Galaxy:
$ ansible-galaxy collection install community.kubernetes
If you have already initialized your Operator, you might have a requirements.yml file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the community.kubernetes collection as well as the operator_sdk.util collection, which provides modules and plugins for Operator-specific functions.
To install the dependent modules from the requirements.yml file:
$ ansible-galaxy collection install -r requirements.yml
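For reference, a requirements.yml file of this kind looks similar to the following sketch. The exact collections and versions depend on your Operator SDK release; the versions shown here are taken from the update example earlier in this chapter and are only illustrative:
---
collections:
  - name: community.kubernetes   # removed from newer scaffolds; see the update section above
    version: "2.0.1"
  - name: operator_sdk.util
    version: "0.5.0"
  - name: kubernetes.core
    version: "2.4.0"
  - name: cloud.common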
5.4.6.2. Testing the Kubernetes Collection locally
Operator developers can run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.
Prerequisites
- Initialize an Ansible-based Operator project and create an API that has a generated Ansible role by using the Operator SDK
- Install the Kubernetes Collection for Ansible
Procedure
In your Ansible-based Operator project directory, modify the roles/<kind>/tasks/main.yml file with the Ansible logic that you want. The roles/<kind>/ directory is created when you use the --generate-role flag while creating an API. The <kind> replaceable matches the kind that you specified for the API.
The following example creates and deletes a config map based on the value of a variable named state:
---
- name: set ConfigMap example-config to {{ state }}
  community.kubernetes.k8s:
    api_version: v1
    kind: ConfigMap
    name: example-config
    namespace: <operator_namespace> 1
    state: "{{ state }}"
  ignore_errors: true 2
Modify the roles/<kind>/defaults/main.yml file to set state to present by default:
---
state: present
Create an Ansible playbook by creating a playbook.yml file in the top level of your project directory, and include your <kind> role:
---
- hosts: localhost
  roles:
    - <kind>
Run the playbook:
$ ansible-playbook playbook.yml
Example output
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Verify that the config map was created:
$ oc get configmaps
Example output
NAME DATA AGE example-config 0 2m1s
Rerun the playbook, setting state to absent:
$ ansible-playbook playbook.yml --extra-vars state=absent
Example output
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Verify that the config map was deleted:
$ oc get configmaps
5.4.6.3. Next steps
- See Using Ansible inside an Operator for details on triggering your custom Ansible logic inside of an Operator when a custom resource (CR) changes.
5.4.7. Using Ansible inside an Operator
After you are familiar with using the Kubernetes Collection for Ansible locally, you can trigger the same Ansible logic inside of an Operator when a custom resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the watches.yaml
file.
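As a reminder, such a mapping is a single watches.yaml entry that ties a GVK to a role. The following minimal sketch assumes the Memcached example used elsewhere in this chapter and a role mounted at the conventional /opt/ansible/roles path; adjust the group, kind, and role path to match your project:
- version: v1
  group: cache.example.com
  kind: Memcached
  role: /opt/ansible/roles/memcached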
5.4.7.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
Field | Description |
---|---|
apiVersion | Version of the CR to be created. |
kind | Kind of the CR to be created. |
metadata | Kubernetes-specific metadata to be created. |
spec | Key-value list of variables which are passed to Ansible. This field is empty by default. |
status | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is updated automatically with generic information about the previous Ansible run. |
annotations | Kubernetes-specific annotations to be appended to the CR. |
The following CR annotations modify the behavior of the Operator:
Annotation | Description |
---|---|
ansible.operator-sdk/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed by using the standard Golang time package duration format. |
Example Ansible-based Operator annotation
apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
  annotations:
    ansible.operator-sdk/reconcile-period: "30s"
5.4.7.2. Testing an Ansible-based Operator locally
You can test the logic inside of an Ansible-based Operator running locally by using the make run
command from the top-level directory of your Operator project. The make run
Makefile target runs the ansible-operator
binary locally, which reads from the watches.yaml
file and uses your ~/.kube/config
file to communicate with a Kubernetes cluster just as the k8s
modules do.
You can customize the roles path by setting the environment variable ANSIBLE_ROLES_PATH or by using the --ansible-roles-path flag. If the role is not found in the ANSIBLE_ROLES_PATH value, the Operator looks for it in {{current directory}}/roles.
Prerequisites
- Ansible Runner v2.3.3+
- Ansible Runner HTTP Event Emitter plugin v1.0.0+
- Performed the previous steps for testing the Kubernetes Collection locally
Procedure
Install your custom resource definition (CRD) and proper role-based access control (RBAC) definitions for your custom resource (CR):
$ make install
Example output
/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created
Run the make run command:
$ make run
Example output
/home/user/memcached-operator/bin/ansible-operator run {"level":"info","ts":1612739145.2871568,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.10.1","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"} ... {"level":"info","ts":1612739148.347306,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"} {"level":"info","ts":1612739148.3488882,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2} {"level":"info","ts":1612739148.3490262,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false} {"level":"info","ts":1612739148.3490646,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"} {"level":"info","ts":1612739148.350217,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"} {"level":"info","ts":1612739148.3506632,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"} {"level":"info","ts":1612739148.350784,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"} {"level":"info","ts":1612739148.5511978,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"} {"level":"info","ts":1612739148.5512562,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}
With the Operator now watching your CR for events, the creation of a CR will trigger your Ansible role to run.
Note: Consider an example config/samples/<gvk>.yaml CR manifest:
apiVersion: <group>.example.com/v1alpha1
kind: <kind>
metadata:
  name: "<kind>-sample"
Because the spec field is not set, Ansible is invoked with no extra variables. Passing extra variables from a CR to Ansible is covered in another section. It is important to set reasonable defaults for the Operator.
Create an instance of your CR with the default variable state set to present:
$ oc apply -f config/samples/<gvk>.yaml
Check that the example-config config map was created:
$ oc get configmaps
Example output
NAME DATA AGE example-config 0 3s
Modify your config/samples/<gvk>.yaml file to set the state field to absent. For example:
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  state: absent
Apply the changes:
$ oc apply -f config/samples/<gvk>.yaml
Confirm that the config map is deleted:
$ oc get configmap
5.4.7.3. Testing an Ansible-based Operator on the cluster
After you have tested your custom Ansible logic locally inside of an Operator, you can test the Operator inside of a pod on an OpenShift Container Platform cluster, which is preferred for production use.
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.
Run the following command to verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m
5.4.7.4. Ansible logs
Ansible-based Operators provide logs about the Ansible run, which can be useful for debugging your Ansible tasks. The logs can also contain detailed information about the internals of the Operator and its interactions with Kubernetes.
5.4.7.4.1. Viewing Ansible logs
Prerequisites
- Ansible-based Operator running as a deployment on a cluster
Procedure
To view logs from an Ansible-based Operator, run the following command:
$ oc logs deployment/<project_name>-controller-manager \
    -c manager \ 1
    -n <namespace> 2
Example output
{"level":"info","ts":1612732105.0579333,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.10.1","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"} {"level":"info","ts":1612732105.0587437,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {"level":"info","ts":1612732107.768025,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"} {"level":"info","ts":1612732107.768796,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2} {"level":"info","ts":1612732107.7688773,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false} {"level":"info","ts":1612732107.7688901,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"} {"level":"info","ts":1612732107.770032,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator... {"level":"info","ts":1612732107.770202,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {"level":"info","ts":1612732107.7850506,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"} {"level":"info","ts":1612732107.8853772,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"} {"level":"info","ts":1612732107.8854098,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}
5.4.7.4.2. Enabling full Ansible results in logs
You can set the environment variable ANSIBLE_DEBUG_LOGS
to True
to enable checking the full Ansible result in logs, which can be helpful when debugging.
Procedure
Edit the config/manager/manager.yaml and config/default/manager_auth_proxy_patch.yaml files to include the following configuration:
containers:
- name: manager
  env:
  - name: ANSIBLE_DEBUG_LOGS
    value: "True"
5.4.7.4.3. Enabling verbose debugging in logs
While developing an Ansible-based Operator, it can be helpful to enable additional debugging in logs.
Procedure
Add the
ansible.sdk.operatorframework.io/verbosity
annotation to your custom resource to enable the verbosity level that you want. For example:
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
  annotations:
    "ansible.sdk.operatorframework.io/verbosity": "4"
spec:
  size: 4
5.4.8. Custom resource status management
5.4.8.1. About custom resource status in Ansible-based Operators
Ansible-based Operators automatically update custom resource (CR) status
subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:
status:
  conditions:
  - ansibleResult:
      changed: 3
      completion: 2018-12-03T13:45:57.13329
      failures: 1
      ok: 6
      skipped: 0
    lastTransitionTime: 2018-12-03T13:45:57Z
    message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>'
    reason: Failed
    status: "True"
    type: Failure
  - lastTransitionTime: 2018-12-03T13:46:13Z
    message: Running reconciliation
    reason: Running
    status: "True"
    type: Running
Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status
Ansible module, which is included in the operator_sdk.util
collection. This allows the author to update the status
from within Ansible with any key-value pair as desired.
By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.
5.4.8.2. Tracking custom resource status manually
You can use the operator_sdk.util
collection to modify your Ansible-based Operator to track custom resource (CR) status manually from your application.
Prerequisites
- Ansible-based Operator project created by using the Operator SDK
Procedure
Update the watches.yaml file with a manageStatus field set to false:
- version: v1
  group: api.example.com
  kind: <kind>
  role: <role>
  manageStatus: false
Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key test and value data, operator_sdk.util can be used as shown:
- operator_sdk.util.k8s_status:
    api_version: app.example.com/v1
    kind: <kind>
    name: "{{ ansible_operator_meta.name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
    status:
      test: data
You can declare collections in the
meta/main.yml
file for the role, which is included for scaffolded Ansible-based Operators:
collections:
  - operator_sdk.util
After declaring collections in the role meta, you can invoke the
k8s_status
module directly:
k8s_status:
  ...
  status:
    key1: value1
5.5. Helm-based Operators
5.5.1. Getting started with Operator SDK for Helm-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Helm charts to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running a Helm-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Helm-based Operator for Nginx and deploy it to a cluster.
5.5.1.1. Prerequisites
- Operator SDK CLI installed
-
OpenShift CLI (
oc
) 4.15+ installed -
Logged into an OpenShift Container Platform 4.15 cluster with
oc
with an account that hascluster-admin
permissions - To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.5.1.2. Creating and deploying Helm-based Operators
You can build and deploy a simple Helm-based Operator for Nginx by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir nginx-operator
Change into the project directory:
$ cd nginx-operator
Run the operator-sdk init command with the helm plugin to initialize the project:
$ operator-sdk init \ --plugins=helm
Create an API.
Create a simple Nginx API:
$ operator-sdk create api \ --group demo \ --version v1 \ --kind Nginx
This API uses the built-in Helm chart boilerplate from the helm create command.
Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:
$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
Run the Operator.
Install the CRD:
$ make install
Deploy the project to the cluster. Set IMG to the image that you pushed:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Add a security context constraint (SCC).
The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following SCC to the service account for the nginx-sample pod:
$ oc adm policy add-scc-to-user \ anyuid system:serviceaccount:nginx-operator-system:nginx-sample
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/demo_v1_nginx.yaml \ -n nginx-operator-system
Watch for the CR to reconcile the Operator:
$ oc logs deployment.apps/nginx-operator-controller-manager \ -c manager \ -n nginx-operator-system
Delete a CR.
Delete a CR by running the following command:
$ oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.5.1.3. Next steps
- See Operator SDK tutorial for Helm-based Operators for a more in-depth walkthrough on building a Helm-based Operator.
5.5.2. Operator SDK tutorial for Helm-based Operators
Operator developers can take advantage of Helm support in the Operator SDK to build an example Helm-based Operator for Nginx and manage its lifecycle. This tutorial walks through the following process:
- Create an Nginx deployment
-
Ensure that the deployment size is the same as specified by the
Nginx
custom resource (CR) spec -
Update the
Nginx
CR status using the status writer with the names of thenginx
pods
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK
-
The
operator-sdk
CLI tool andcontroller-runtime
library API - Operator Lifecycle Manager (OLM)
- Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Helm-based Operators.
5.5.2.1. Prerequisites
- Operator SDK CLI installed
-
OpenShift CLI (
oc
) 4.15+ installed -
Logged into an OpenShift Container Platform 4.15 cluster with
oc
with an account that hascluster-admin
permissions - To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.5.2.2. Creating a project
Use the Operator SDK CLI to create a project called nginx-operator
.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/nginx-operator
Change to the directory:
$ cd $HOME/projects/nginx-operator
Run the operator-sdk init command with the helm plugin to initialize the project:
$ operator-sdk init \ --plugins=helm \ --domain=example.com \ --group=demo \ --version=v1 \ --kind=Nginx
Note: By default, the helm plugin initializes a project using a boilerplate Helm chart. You can use additional flags, such as the --helm-chart flag, to initialize a project using an existing Helm chart.
The init command creates the nginx-operator project specifically for watching a resource with API version demo.example.com/v1 and kind Nginx.
- For Helm-based projects, the init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator.
5.5.2.2.1. Existing Helm charts
Instead of creating your project with a boilerplate Helm chart, you can alternatively use an existing chart, either from your local file system or a remote chart repository, by using the following flags:
-
--helm-chart
-
--helm-chart-repo
-
--helm-chart-version
If the --helm-chart
flag is specified, the --group
, --version
, and --kind
flags become optional. If left unset, the following default values are used:
Flag | Value |
---|---|
--domain | my.domain |
--group | charts |
--version | v0.1.0 |
--kind | Deduced from the specified chart |
If the --helm-chart
flag specifies a local chart archive, for example example-chart-1.2.0.tgz
, or directory, the chart is validated and unpacked or copied into the project. Otherwise, the Operator SDK attempts to fetch the chart from a remote repository.
If a custom repository URL is not specified by the --helm-chart-repo
flag, the following chart reference formats are supported:
Format | Description |
---|---|
<repo>/<name> | Fetch the Helm chart named <name> from the helm chart repository <repo>. |
<url> | Fetch the Helm chart archive at the specified URL. |
If a custom repository URL is specified by --helm-chart-repo
, the following chart reference format is supported:
Format | Description |
---|---|
<chart> | Fetch the Helm chart named <chart> in the specified repository URL. |
If the --helm-chart-version
flag is unset, the Operator SDK fetches the latest available version of the Helm chart. Otherwise, it fetches the specified version. The optional --helm-chart-version
flag is not used when the chart specified with the --helm-chart
flag refers to a specific version, for example when it is a local path or a URL.
For more details and examples, run:
$ operator-sdk init --plugins helm --help
5.5.2.2.2. PROJECT file
Among the files generated by the operator-sdk init
command is a Kubebuilder PROJECT
file. Subsequent operator-sdk
commands, as well as help
output, that are run from the project root read this file and are aware that the project type is Helm. For example:
domain: example.com
layout:
- helm.sdk.operatorframework.io/v1
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
  sdk.x-openshift.io/v1: {}
projectName: nginx-operator
resources:
- api:
    crdVersion: v1
    namespaced: true
  domain: example.com
  group: demo
  kind: Nginx
  version: v1
version: "3"
5.5.2.3. Understanding the Operator logic
For this example, the nginx-operator
project executes the following reconciliation logic for each Nginx
custom resource (CR):
- Create an Nginx deployment if it does not exist.
- Create an Nginx service if it does not exist.
- Create an Nginx ingress if it is enabled and does not exist.
-
Ensure that the deployment, service, and optional ingress match the desired configuration as specified by the
Nginx
CR, for example the replica count, image, and service type.
By default, the nginx-operator
project watches Nginx
resource events as shown in the watches.yaml
file and executes Helm releases using the specified chart:
# Use the 'create api' subcommand to add watches to this file.
- group: demo
  version: v1
  kind: Nginx
  chart: helm-charts/nginx
# +kubebuilder:scaffold:watch
5.5.2.3.1. Sample Helm chart
When a Helm Operator project is created, the Operator SDK creates a sample Helm chart that contains a set of templates for a simple Nginx release.
For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt
template, which Helm chart developers use to convey helpful information about a release.
If you are not already familiar with Helm charts, review the Helm developer documentation.
5.5.2.3.2. Modifying the custom resource spec
Helm uses a concept called values to provide customizations to the defaults of a Helm chart, which are defined in the values.yaml
file.
You can override these defaults by setting the desired values in the custom resource (CR) spec. You can use the number of replicas as an example.
Procedure
The helm-charts/nginx/values.yaml file has a value called replicaCount set to 1 by default. To have two Nginx instances in your deployment, your CR spec must contain replicaCount: 2.
Edit the config/samples/demo_v1_nginx.yaml file to set replicaCount: 2:
apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
...
spec:
...
  replicaCount: 2
Similarly, the default service port is set to 80. To use 8080, edit the config/samples/demo_v1_nginx.yaml file to set the service port to 8080, which adds the service port override:
apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 2
  service:
    port: 8080
The Helm Operator applies the entire spec as if it was the contents of a values file, just like the helm install -f ./overrides.yaml
command.
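For comparison, the CR spec shown above is equivalent to passing a Helm values file like the following sketch; the overrides.yaml file name is only illustrative:
# Values that the Helm Operator derives from the Nginx CR spec above
replicaCount: 2
service:
  port: 8080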
5.5.2.4. Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:
-
HTTP_PROXY
-
HTTPS_PROXY
-
NO_PROXY
This tutorial uses HTTP_PROXY
as an example environment variable.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Edit the watches.yaml file to include overrides based on an environment variable by adding the overrideValues field:
...
- group: demo.example.com
  version: v1alpha1
  kind: Nginx
  chart: helm-charts/nginx
  overrideValues:
    proxy.http: $HTTP_PROXY
...
Add the proxy.http value in the helm-charts/nginx/values.yaml file:
...
proxy:
  http: ""
  https: ""
  no_proxy: ""
To make sure the chart template supports using the variables, edit the chart template in the helm-charts/nginx/templates/deployment.yaml file to contain the following:
containers:
  - name: {{ .Chart.Name }}
    securityContext:
      {{- toYaml .Values.securityContext | nindent 12 }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    env:
      - name: http_proxy
        value: "{{ .Values.proxy.http }}"
Set the environment variable on the Operator deployment by adding the following to the
config/manager/manager.yaml
file:
containers:
- args:
  - --leader-elect
  - --leader-election-id=ansible-proxy-demo
  image: controller:latest
  name: manager
  env:
  - name: "HTTP_PROXY"
    value: "http_proxy_test"
5.5.2.5. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.5.2.5.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your
~/.kube/config
file and run the Operator locally:$ make install run
Example output
... {"level":"info","ts":1612652419.9289865,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"} {"level":"info","ts":1612652419.9296563,"logger":"helm.controller","msg":"Watching resource","apiVersion":"demo.example.com/v1","kind":"Nginx","namespace":"","reconcilePeriod":"1m0s"} {"level":"info","ts":1612652419.929983,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"} {"level":"info","ts":1612652419.930015,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting EventSource","source":"kind source: demo.example.com/v1, Kind=Nginx"} {"level":"info","ts":1612652420.2307851,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting Controller"} {"level":"info","ts":1612652420.2309358,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting workers","worker count":8}
5.5.2.5.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.
Run the following command to verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m
5.5.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.5.2.5.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.15+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:
$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:
$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.5.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc
) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.15)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \ 1
    -n <namespace> \ 2
    <registry>/<user>/<bundle_image_name>:<tag> 3
1 The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
2 Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
3 If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.
This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
-
Deploy your Operator to your cluster by creating an
OperatorGroup
,Subscription
,InstallPlan
, and all other required resources, including RBAC.
5.5.2.6. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
-
Example Nginx Operator, which provides the
Nginx
CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:
$ oc project nginx-operator-system
Edit the sample Nginx CR manifest at config/samples/demo_v1_nginx.yaml to contain the following specification:
apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
...
spec:
...
  replicaCount: 3
The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following security context constraint (SCC) to the service account for the nginx-sample pod:
$ oc adm policy add-scc-to-user \ anyuid system:serviceaccount:nginx-operator-system:nginx-sample
Create the CR:
$ oc apply -f config/samples/demo_v1_nginx.yaml
Ensure that the Nginx Operator creates the deployment for the sample CR with the correct size:
$ oc get deployments
Example output
NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m
Check the pods and CR status to confirm the status is updated with the Nginx pod names.
Check the pods:
$ oc get pods
Example output
NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m
Check the CR status:
$ oc get nginx/nginx-sample -o yaml
Example output
apiVersion: demo.example.com/v1
kind: Nginx
metadata:
...
  name: nginx-sample
...
spec:
  replicaCount: 3
status:
  nodes:
  - nginx-sample-6fd7c98d8-7dqdr
  - nginx-sample-6fd7c98d8-g5k7v
  - nginx-sample-6fd7c98d8-m7vn7
Update the deployment size.
Change the spec.replicaCount field in the Nginx CR from 3 to 5, either by editing the config/samples/demo_v1_nginx.yaml file and reapplying it or by patching the CR directly:
$ oc patch nginx nginx-sample \ -p '{"spec":{"replicaCount": 5}}' \ --type=merge
Confirm that the Operator changes the deployment size:
$ oc get deployments
Example output
NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m
Delete the CR by running the following command:
$ oc delete -f config/samples/demo_v1_nginx.yaml
Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.5.2.7. Additional resources
- See Project layout for Helm-based Operators to learn about the directory structures created by the Operator SDK.
- If a cluster-wide egress proxy is configured, cluster administrators can override the proxy settings or inject a custom CA certificate for specific Operators running on Operator Lifecycle Manager (OLM).
5.5.3. Project layout for Helm-based Operators
The operator-sdk
CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.5.3.1. Helm-based project layout
Helm-based Operator projects generated using the operator-sdk init --plugins helm
command contain the following directories and files:
File/folders | Purpose |
---|---|
config/ | Kustomize manifests for deploying the Operator on a Kubernetes cluster. |
helm-charts/ | Helm chart initialized with the operator-sdk create api command. |
Dockerfile | Used to build the Operator image with the make docker-build command. |
watches.yaml | Group/version/kind (GVK) and Helm chart location. |
Makefile | Targets used to manage the project. |
PROJECT | YAML file containing metadata information for the Operator. |
5.5.4. Updating Helm-based projects for newer Operator SDK versions
OpenShift Container Platform 4.15 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.5.4.1. Updating Helm-based Operator projects for Operator SDK 1.31.0
The following procedure updates an existing Helm-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Edit your Operator’s Dockerfile to update the Helm Operator version to 1.31.0, as shown in the following example:
Example Dockerfile
FROM quay.io/operator-framework/helm-operator:v1.31.0 1
- 1: Update the Helm Operator version from 1.28.0 to 1.31.0.
Edit your Operator project’s makefile to update the Operator SDK to 1.31.0, as shown in the following example:
Example makefile
# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0 1
- 1: Change the version from 1.28.0 to 1.31.0.
If you use a custom service account for deployment, define the following role to require a watch operation on your secrets resource, as shown in the following example:
Example config/rbac/role.yaml file

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <operator_name>-admin
subjects:
- kind: ServiceAccount
  name: <operator_name>
  namespace: <operator_namespace>
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
rules: 1
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - watch
- 1: Add the rules stanza to create a watch operation for your secrets resource.
5.5.4.2. Additional resources
5.5.5. Helm support in Operator SDK
5.5.5.1. Helm charts
One of the Operator SDK options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but can be sufficient for a surprising amount of use-cases as shown by the proliferation of Helm charts built by the Kubernetes community.
The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the spec
field of the object is a list of configuration options that are typically described in the Helm values.yaml
file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml
), you can express them within a custom resource (CR), which, as a native Kubernetes object, enables the benefits of RBAC applied to it and an audit trail.
The following is an example of a simple CR called Tomcat:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2
The replicaCount
value, 2
in this case, is propagated into the template of the chart where the following is used:
{{ .Values.replicaCount }}
After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc
command:
$ oc get Tomcats --all-namespaces
There is no requirement to use the Helm CLI or to install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a custom resource definition (CRD). Because the CR obeys RBAC, you can more easily control who can make production changes.
5.5.6. Operator SDK tutorial for Hybrid Helm Operators
The standard Helm-based Operator support in the Operator SDK has limited functionality compared to the Go-based and Ansible-based Operator support that has reached the Auto Pilot capability (level V) in the Operator maturity model.
The Hybrid Helm Operator enhances the existing Helm-based support’s abilities through Go APIs. With this hybrid approach of Helm and Go, the Operator SDK enables Operator authors to use the following process:
- Generate a default structure for, or scaffold, a Go API in the same project as Helm.
- Configure the Helm reconciler in the main.go file of the project, through the libraries provided by the Hybrid Helm Operator.
The Hybrid Helm Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
This tutorial walks through the following process using the Hybrid Helm Operator:
- Create a Memcached deployment through a Helm chart if it does not exist
- Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
- Create a MemcachedBackup deployment by using the Go API
5.5.6.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.15+ installed
- Logged into an OpenShift Container Platform 4.15 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.5.6.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator
.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/github.com/example/memcached-operator
Change to the directory:
$ cd $HOME/github.com/example/memcached-operator
Run the operator-sdk init command to initialize the project. This example uses a domain of my.domain so that all API groups are <group>.my.domain:

$ operator-sdk init \
    --plugins=hybrid.helm.sdk.operatorframework.io \
    --project-version="3" \
    --domain my.domain \
    --repo=github.com/example/memcached-operator
The init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the chart's default manifests. Verify that the rules generated in the config/rbac/role.yaml file meet your Operator's permission requirements.
Additional resources
- This procedure creates a project structure that is compatible with both Helm and Go APIs. To learn more about the project directory structure, see Project layout.
5.5.6.3. Creating a Helm API
Use the Operator SDK CLI to create a Helm API.
Procedure
Run the following command to create a Helm API with group cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --plugins helm.sdk.operatorframework.io/v1 \
    --group cache \
    --version v1 \
    --kind Memcached
This procedure also configures your Operator project to watch the Memcached
resource with API version v1
and scaffolds a boilerplate Helm chart. Instead of creating the project from the boilerplate Helm chart scaffolded by the Operator SDK, you can alternatively use an existing chart from your local file system or remote chart repository.
For more details and examples of creating a Helm API based on existing or new charts, run the following command:
$ operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help
Additional resources
5.5.6.3.1. Operator logic for the Helm API
By default, your scaffolded Operator project watches Memcached
resource events as shown in the watches.yaml
file and executes Helm releases using the specified chart.
Example 5.2. Example watches.yaml
file
# Use the 'create api' subcommand to add watches to this file.
- group: cache.my.domain
  version: v1
  kind: Memcached
  chart: helm-charts/memcached
#+kubebuilder:scaffold:watch
Additional resources
- For detailed documentation on customizing the Helm Operator logic through the chart, see Understanding the Operator logic.
5.5.6.3.2. Custom Helm reconciler configurations using provided library APIs
A disadvantage of existing Helm-based Operators is the inability to configure the Helm reconciler, because it is abstracted from users. For a Helm-based Operator that reuses an existing Helm chart to reach the Seamless Upgrades capability (level II and later), a hybrid between the Go and Helm Operator types adds value.
The APIs provided in the helm-operator-plugins
library allow Operator authors to make the following configurations:
- Customize value mapping based on cluster state
- Execute code in specific events by configuring the reconciler’s event recorder
- Customize the reconciler’s logger
- Set up Install, Upgrade, and Uninstall annotations to enable Helm actions to be configured based on the annotations found in custom resources watched by the reconciler
- Configure the reconciler to run with Pre and Post hooks
The above configurations to the reconciler can be done in the main.go
file:
Example main.go
file
// Operator's main.go
// With the help of helpers provided in the library, the reconciler can be
// configured here before starting the controller with this reconciler.
reconciler := reconciler.New(
	reconciler.WithChart(*chart),
	reconciler.WithGroupVersionKind(gvk),
)

if err := reconciler.SetupWithManager(mgr); err != nil {
	panic(fmt.Sprintf("unable to create reconciler: %s", err))
}
5.5.6.4. Creating a Go API
Use the Operator SDK CLI to create a Go API.
Procedure
Run the following command to create a Go API with group cache, version v1, and kind MemcachedBackup:

$ operator-sdk create api \
    --group=cache \
    --version v1 \
    --kind MemcachedBackup \
    --resource \
    --controller \
    --plugins=go/v3
When prompted, enter y for creating both resource and controller:

Create Resource [y/n]
y
Create Controller [y/n]
y
This procedure generates the MemcachedBackup
resource API at api/v1/memcachedbackup_types.go
and the controller at controllers/memcachedbackup_controller.go
.
5.5.6.4.1. Defining the API
Define the API for the MemcachedBackup
custom resource (CR).
Represent this Go API by defining the MemcachedBackup
type, which will have a MemcachedBackupSpec.Size
field to set the quantity of Memcached backup instances (CRs) to be deployed, and a MemcachedBackupStatus.Nodes
field to store a CR’s pod names.
The Nodes field is used to illustrate an example of a Status field.
Procedure
Define the API for the MemcachedBackup CR by modifying the Go type definitions in the api/v1/memcachedbackup_types.go file to have the following spec and status:

Example 5.3. Example api/v1/memcachedbackup_types.go file

// MemcachedBackupSpec defines the desired state of MemcachedBackup
type MemcachedBackupSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	//+kubebuilder:validation:Minimum=0
	// Size is the size of the memcached deployment
	Size int32 `json:"size"`
}

// MemcachedBackupStatus defines the observed state of MemcachedBackup
type MemcachedBackupStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Nodes are the names of the memcached pods
	Nodes []string `json:"nodes"`
}
Update the generated code for the resource type:
$ make generate
Tip: After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

After the API is defined with spec and status fields and CRD validation markers, generate and update the CRD manifests:

$ make manifests
This Makefile target invokes the controller-gen
utility to generate the CRD manifests in the config/crd/bases/cache.my.domain_memcachedbackups.yaml
file.
5.5.6.4.2. Controller implementation
The controller in this tutorial performs the following actions:
- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the memcached pods.
For a detailed explanation of how to configure the controller to perform these actions, see Implementing the controller in the Operator SDK tutorial for standard Go-based Operators. A minimal sketch of this reconciliation pattern follows.
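The following is a minimal, illustrative sketch of a controller-runtime reconciler that applies this pattern to the MemcachedBackup API, assuming the types and scaffolding generated earlier in this tutorial. The import path github.com/example/memcached-operator/api/v1 and the deploymentForMemcachedBackup helper are hypothetical placeholders, and the status update is omitted for brevity; it is not the exact generated implementation.

package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"

	cachev1 "github.com/example/memcached-operator/api/v1" // hypothetical module path
)

// Reconcile ensures a Deployment exists for each MemcachedBackup CR and that
// its replica count matches the value in the CR spec.
func (r *MemcachedBackupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	// Fetch the MemcachedBackup instance that triggered this reconciliation.
	backup := &cachev1.MemcachedBackup{}
	if err := r.Get(ctx, req.NamespacedName, backup); err != nil {
		// Ignore not-found errors; the CR might have been deleted.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Create the deployment if it does not exist yet.
	found := &appsv1.Deployment{}
	err := r.Get(ctx, types.NamespacedName{Name: backup.Name, Namespace: backup.Namespace}, found)
	if apierrors.IsNotFound(err) {
		dep := r.deploymentForMemcachedBackup(backup) // hypothetical helper that builds the Deployment
		if err := r.Create(ctx, dep); err != nil {
			return ctrl.Result{}, err
		}
		return ctrl.Result{Requeue: true}, nil
	} else if err != nil {
		return ctrl.Result{}, err
	}

	// Ensure the deployment size matches the CR spec.
	size := backup.Spec.Size
	if found.Spec.Replicas == nil || *found.Spec.Replicas != size {
		found.Spec.Replicas = &size
		if err := r.Update(ctx, found); err != nil {
			return ctrl.Result{}, err
		}
	}

	logger.Info("reconciled MemcachedBackup", "name", backup.Name)
	return ctrl.Result{}, nil
}

In the generated project, this logic belongs in controllers/memcachedbackup_controller.go and is registered with the manager through SetupWithManager, as shown in the main.go examples later in this section.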
5.5.6.4.3. Differences in main.go
For standard Go-based Operators and the Hybrid Helm Operator, the main.go file handles initializing and running the Manager program for the Go API. For the Hybrid Helm Operator, however, the main.go file also exposes the logic for loading the watches.yaml file and configuring the Helm reconciler.
Example 5.4. Example main.go
file
...
for _, w := range ws {
	// Register controller with the factory
	reconcilePeriod := defaultReconcilePeriod
	if w.ReconcilePeriod != nil {
		reconcilePeriod = w.ReconcilePeriod.Duration
	}

	maxConcurrentReconciles := defaultMaxConcurrentReconciles
	if w.MaxConcurrentReconciles != nil {
		maxConcurrentReconciles = *w.MaxConcurrentReconciles
	}

	r, err := reconciler.New(
		reconciler.WithChart(*w.Chart),
		reconciler.WithGroupVersionKind(w.GroupVersionKind),
		reconciler.WithOverrideValues(w.OverrideValues),
		reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources),
		reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles),
		reconciler.WithReconcilePeriod(reconcilePeriod),
		reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...),
		reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...),
		reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...),
	)
...
The manager is initialized with both Helm
and Go
reconcilers:
Example 5.5. Example Helm
and Go
reconcilers
...
// Setup manager with Go API
if err = (&controllers.MemcachedBackupReconciler{
	Client: mgr.GetClient(),
	Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "MemcachedBackup")
	os.Exit(1)
}
...
// Setup manager with Helm API
for _, w := range ws {
	...
	if err := r.SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "Helm")
		os.Exit(1)
	}
	setupLog.Info("configured watch", "gvk", w.GroupVersionKind, "chartPath", w.ChartPath, "maxConcurrentReconciles", maxConcurrentReconciles, "reconcilePeriod", reconcilePeriod)
}

// Start the manager
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
	setupLog.Error(err, "problem running manager")
	os.Exit(1)
}
5.5.6.4.4. Permissions and RBAC manifests
The controller requires certain role-based access control (RBAC) permissions to interact with the resources it manages. For the Go API, these are specified with RBAC markers, as shown in the Operator SDK tutorial for standard Go-based Operators.
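For example, RBAC markers for the MemcachedBackup controller might look like the following sketch. The exact markers are generated and maintained by the scaffolding; the resource and group names below assume the cache.my.domain group used in this tutorial and are illustrative only.

//+kubebuilder:rbac:groups=cache.my.domain,resources=memcachedbackups,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cache.my.domain,resources=memcachedbackups/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=cache.my.domain,resources=memcachedbackups/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete

Running the make manifests target typically regenerates the config/rbac/role.yaml file from these markers.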
For the Helm API, the permissions are scaffolded by default in the config/rbac/role.yaml file. Currently, however, due to a known issue, the permissions for the Helm API are overwritten when the Go API is scaffolded. As a result of this issue, ensure that the permissions defined in role.yaml match your requirements.
This known issue is being tracked in https://github.com/operator-framework/helm-operator-plugins/issues/142.
The following is an example role.yaml
for a Memcached Operator:
Example 5.6. Example role.yaml for a Memcached Operator
--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - "" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - "" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch
Additional resources
5.5.6.5. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run
5.5.6.6. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.5.6.7. Creating custom resources
After your Operator is installed, you can test it by creating custom resources (CRs) that are now provided on the cluster by the Operator.
Procedure
Change to the namespace where your Operator is installed:
$ oc project <project_name>-system
Update the sample Memcached CR manifest at the config/samples/cache_v1_memcached.yaml file by updating the replicaCount field to 3:

Example 5.7. Example config/samples/cache_v1_memcached.yaml file

apiVersion: cache.my.domain/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Default values copied from <project_dir>/helm-charts/memcached/values.yaml
  affinity: {}
  autoscaling:
    enabled: false
    maxReplicas: 100
    minReplicas: 1
    targetCPUUtilizationPercentage: 80
  fullnameOverride: ""
  image:
    pullPolicy: IfNotPresent
    repository: nginx
    tag: ""
  imagePullSecrets: []
  ingress:
    annotations: {}
    className: ""
    enabled: false
    hosts:
    - host: chart-example.local
      paths:
      - path: /
        pathType: ImplementationSpecific
    tls: []
  nameOverride: ""
  nodeSelector: {}
  podAnnotations: {}
  podSecurityContext: {}
  replicaCount: 3
  resources: {}
  securityContext: {}
  service:
    port: 80
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: ""
  tolerations: []
Create the Memcached CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml
Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:
$ oc get pods
Example output
NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          18m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          18m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          18m
Update the sample MemcachedBackup CR manifest at the config/samples/cache_v1_memcachedbackup.yaml file by updating the size to 2:

Example 5.8. Example config/samples/cache_v1_memcachedbackup.yaml file

apiVersion: cache.my.domain/v1
kind: MemcachedBackup
metadata:
  name: memcachedbackup-sample
spec:
  size: 2
Create the MemcachedBackup CR:

$ oc apply -f config/samples/cache_v1_memcachedbackup.yaml
Ensure that the count of memcachedbackup pods is the same as specified in the CR:

$ oc get pods
Example output
NAME                                      READY   STATUS    RESTARTS   AGE
memcachedbackup-sample-8649699989-4bbzg   1/1     Running   0          22m
memcachedbackup-sample-8649699989-mq6mx   1/1     Running   0          22m
You can update the spec in each of the above CRs, and then apply them again. The controller reconciles again and ensures that the size of the pods is as specified in the spec of the respective CRs.

Clean up the resources that have been created as part of this tutorial:
Delete the Memcached resource:

$ oc delete -f config/samples/cache_v1_memcached.yaml
Delete the MemcachedBackup resource:

$ oc delete -f config/samples/cache_v1_memcachedbackup.yaml
If you used the make deploy command to test the Operator, run the following command:

$ make undeploy
5.5.6.8. Project layout
The Hybrid Helm Operator scaffolding is customized to be compatible with both Helm and Go APIs.
File/folders | Purpose |
---|---|
Dockerfile | Instructions used by a container engine to build your Operator image with the make docker-build command. |
Makefile | Build file with helper targets to help you work with your project. |
PROJECT | YAML file containing metadata information for the Operator. Represents the project’s configuration and is used to track useful information for the CLI and plugins. |
bin/ | Contains useful binaries, such as the manager and kustomize utilities. |
config/ | Contains configuration files, including all Kustomize manifests, to launch your Operator project on a cluster. Plugins might use it to provide functionality. For example, for the Operator SDK to help create your Operator bundle, the CLI looks up the CRDs and CRs which are scaffolded in this directory. |
api/ | Contains the Go API definition. |
controllers/ | Contains the controllers for the Go API. |
hack/ | Contains utility files, such as the file used to scaffold the license header for your project files. |
main.go | Main program of the Operator. Instantiates a new manager that registers all custom resource definitions (CRDs) in the api/ directory and starts controllers for the Helm and Go APIs. |
helm-charts/ | Contains the Helm charts that can be specified by using the create api command with the Helm plugin. |
watches.yaml | Contains group/version/kind (GVK) and Helm chart location. Used to configure the Helm watches. |
5.5.7. Updating Hybrid Helm-based projects for newer Operator SDK versions
OpenShift Container Platform 4.15 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.5.7.1. Updating Hybrid Helm-based Operator projects for Operator SDK 1.31.0
The following procedure updates an existing Hybrid Helm-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Edit your Operator project’s makefile to update the Operator SDK version to 1.31.0, as shown in the following example:
Example makefile
# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0 1
- 1: Change the version from 1.28.0 to 1.31.0.
5.5.7.2. Additional resources
5.6. Java-based Operators
5.6.1. Getting started with Operator SDK for Java-based Operators
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To demonstrate the basics of setting up and running a Java-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Java-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.6.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.15+ installed
- Java 11+
- Maven 3.6.3+
- Logged into an OpenShift Container Platform 4.15 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.6.1.2. Creating and deploying Java-based Operators
You can build and deploy a simple Java-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator
Change into the project directory:
$ cd memcached-operator
Run the operator-sdk init command with the quarkus plugin to initialize the project:

$ operator-sdk init \
    --plugins=quarkus \
    --domain=example.com \
    --project-name=memcached-operator
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --plugins quarkus \
    --group cache \
    --version v1 \
    --kind Memcached
Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
Run the Operator.
Install the CRD:
$ make install
Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \ -n memcached-operator-system
Watch for the CR to be reconciled by the Operator:
$ oc logs deployment.apps/memcached-operator-controller-manager \ -c manager \ -n memcached-operator-system
Delete a CR.
Delete a CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.6.1.3. Next steps
- See Operator SDK tutorial for Java-based Operators for a more in-depth walkthrough on building a Java-based Operator.
5.6.2. Operator SDK tutorial for Java-based Operators
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator developers can take advantage of Java programming language support in the Operator SDK to build an example Java-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK: The operator-sdk CLI tool and java-operator-sdk library API
- Operator Lifecycle Manager (OLM): Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Java-based Operators.
5.6.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) 4.15+ installed
- Java 11+
- Maven 3.6.3+
- Logged into an OpenShift Container Platform 4.15 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
Additional resources
5.6.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator
.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator
Change to the directory:
$ cd $HOME/projects/memcached-operator
Run the operator-sdk init command with the quarkus plugin to initialize the project:

$ operator-sdk init \
    --plugins=quarkus \
    --domain=example.com \
    --project-name=memcached-operator
5.6.2.2.1. PROJECT file
Among the files generated by the operator-sdk init
command is a Kubebuilder PROJECT
file. Subsequent operator-sdk
commands, as well as help
output, that are run from the project root read this file and are aware that the project type is Java. For example:
domain: example.com
layout:
- quarkus.javaoperatorsdk.io/v1-alpha
projectName: memcached-operator
version: "3"
5.6.2.3. Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
Run the following command to create an API:
$ operator-sdk create api \
    --plugins=quarkus \
    --group=cache \
    --version=v1 \
    --kind=Memcached
Verification
Run the tree command to view the file structure:

$ tree
Example output
.
├── Makefile
├── PROJECT
├── pom.xml
└── src
    └── main
        ├── java
        │   └── com
        │       └── example
        │           ├── Memcached.java
        │           ├── MemcachedReconciler.java
        │           ├── MemcachedSpec.java
        │           └── MemcachedStatus.java
        └── resources
            └── application.properties

6 directories, 8 files
5.6.2.3.1. Defining the API
Define the API for the Memcached
custom resource (CR).
Procedure
Edit the following files that were generated as part of the create api process:

Update the following attributes in the MemcachedSpec.java file to define the desired state of the Memcached CR:

public class MemcachedSpec {

    private Integer size;

    public Integer getSize() {
        return size;
    }

    public void setSize(Integer size) {
        this.size = size;
    }
}

Update the following attributes in the MemcachedStatus.java file to define the observed state of the Memcached CR:

Note: The example below illustrates a Nodes status field. It is recommended that you use typical status properties in practice.

import java.util.ArrayList;
import java.util.List;

public class MemcachedStatus {

    // Add Status information here
    // Nodes are the names of the memcached pods
    private List<String> nodes;

    public List<String> getNodes() {
        if (nodes == null) {
            nodes = new ArrayList<>();
        }
        return nodes;
    }

    public void setNodes(List<String> nodes) {
        this.nodes = nodes;
    }
}

Update the Memcached.java file to define the schema for Memcached APIs, which brings together the MemcachedSpec.java and MemcachedStatus.java files:

@Version("v1")
@Group("cache.example.com")
public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
5.6.2.3.2. Generating CRD manifests
After the API is defined with MemcachedSpec
and MemcachedStatus
files, you can generate CRD manifests.
Procedure
Run the following command from the memcached-operator directory to generate the CRD:

$ mvn clean install
Verification
Verify the contents of the CRD in the target/kubernetes/memcacheds.cache.example.com-v1.yml file as shown in the following example:

$ cat target/kubernetes/memcacheds.cache.example.com-v1.yml
Example output
# Generated by Fabric8 CRDGenerator, manual edits might get overwritten!
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: memcacheds.cache.example.com
spec:
  group: cache.example.com
  names:
    kind: Memcached
    plural: memcacheds
    singular: memcached
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        properties:
          spec:
            properties:
              size:
                type: integer
            type: object
          status:
            properties:
              nodes:
                items:
                  type: string
                type: array
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
5.6.2.3.3. Creating a Custom Resource
After generating the CRD manifests, you can create the Custom Resource (CR).
Procedure
Create a file named memcached-sample.yaml that defines a Memcached CR:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Add spec fields here
  size: 1
5.6.2.4. Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
Append the following dependency to the pom.xml file:

<dependency>
  <groupId>commons-collections</groupId>
  <artifactId>commons-collections</artifactId>
  <version>3.2.2</version>
</dependency>
For this example, replace the generated controller file MemcachedReconciler.java with the following example implementation:

Example 5.9. Example MemcachedReconciler.java
package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put("app", "memcached"); labels.put("memcached_cr", m.getMetadata().getName()); return labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage("memcached:1.4.36-alpine") .withName("memcached") .withCommand("memcached", "-m=64", "-o", "modern", "-v") 
.withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName("memcached") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }
The example controller runs the following reconciliation logic for each Memcached custom resource (CR):
- Creates a Memcached deployment if it does not exist.
- Ensures that the deployment size matches the size specified by the Memcached CR spec.
- Updates the Memcached CR status with the names of the memcached pods.
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
5.6.2.4.1. Reconcile loop
Every controller has a reconciler object with a
Reconcile()
method that implements the reconcile loop. The reconcile loop is passed theDeployment
argument, as shown in the following example:Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();
As shown in the following example, if the
Deployment
isnull
, the deployment needs to be created. After you create theDeployment
, you can determine if reconciliation is necessary. If there is no need of reconciliation, return the value ofUpdateControl.noUpdate()
, otherwise, return the value of `UpdateControl.updateStatus(resource):if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }
After getting the
Deployment
, get the current and required replicas, as shown in the following example:int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();
If
currentReplicas
does not match therequiredReplicas
, you must update theDeployment
, as shown in the following example:if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }
The following example shows how to obtain the list of pods and their names:
List<Pod> pods = client.pods()
    .inNamespace(resource.getMetadata().getNamespace())
    .withLabels(labelsForMemcached(resource))
    .list()
    .getItems();

List<String> podNames =
    pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());
Check if resources were created and verify podnames with the Memcached resources. If a mismatch exists in either of these conditions, perform a reconciliation as shown in the following example:
if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }
5.6.2.4.2. Defining labelsForMemcached
labelsForMemcached
is a utility to return a map of the labels to attach to the resources:
private Map<String, String> labelsForMemcached(Memcached m) {
    Map<String, String> labels = new HashMap<>();
    labels.put("app", "memcached");
    labels.put("memcached_cr", m.getMetadata().getName());
    return labels;
}
5.6.2.4.3. Defining createMemcachedDeployment
The createMemcachedDeployment
method uses the fabric8 DeploymentBuilder
class:
private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage("memcached:1.4.36-alpine") .withName("memcached") .withCommand("memcached", "-m=64", "-o", "modern", "-v") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName("memcached") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }
5.6.2.5. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.6.2.5.1. Running locally outside the cluster
You can run your Operator project locally outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to compile the Operator:
$ mvn clean install
Example output
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.193 s
[INFO] Finished at: 2021-05-26T12:16:54-04:00
[INFO] ------------------------------------------------------------------------
Run the following command to install the CRD to the default namespace:
$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml
Example output
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created
Create a file called rbac.yaml as shown in the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: memcached-operator-admin
subjects:
- kind: ServiceAccount
  name: memcached-quarkus-operator-operator
  namespace: <operator_namespace>
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator service account by applying the rbac.yaml file:

$ oc apply -f rbac.yaml
Enter the following command to run the Operator:
$ java -jar target/quarkus-app/quarkus-run.jar
Note: The java command runs the Operator and remains running until you end the process. You will need another terminal to complete the rest of these commands.

Apply the memcached-sample.yaml file with the following command:

$ kubectl apply -f memcached-sample.yaml
Example output
memcached.cache.example.com/memcached-sample created
Verification
Run the following command to confirm that the pod has started:
$ oc get all
Example output
NAME                                READY   STATUS    RESTARTS   AGE
pod/memcached-sample-6c765df685-mfqnz   1/1     Running   0          18s
5.6.2.5.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to install the CRD to the default namespace:
$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml
Example output
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created
Create a file called rbac.yaml as shown in the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: memcached-operator-admin
subjects:
- kind: ServiceAccount
  name: memcached-quarkus-operator-operator
  namespace: <operator_namespace>
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
Important: The rbac.yaml file will be applied at a later step.

Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator service account by applying the rbac.yaml file created in a previous step:

$ oc apply -f rbac.yaml
Run the following command to verify that the Operator is running:
$ oc get all -n default
Example output
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm   0/1     Running   0          18s
Run the following command to apply the memcached-sample.yaml and create the memcached-sample pod:

$ oc apply -f memcached-sample.yaml
Example output
memcached.cache.example.com/memcached-sample created
Verification
Run the following command to confirm the pods have started:
$ oc get all
Example output
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt   1/1     Running   1          79s
pod/memcached-sample-6c765df685-mfqnz                       1/1     Running   0          18s
5.6.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.6.2.5.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.15+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the
make bundle
command, which invokes several commands, including the Operator SDKgenerate bundle
andbundle validate
subcommands:$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The
make bundle
command creates the following files and directories in your Operator project:-
A bundle manifests directory named
bundle/manifests
that contains aClusterServiceVersion
object -
A bundle metadata directory named
bundle/metadata
-
All custom resource definitions (CRDs) in a
config/crd
directory -
A Dockerfile
bundle.Dockerfile
These files are then automatically validated by using
operator-sdk bundle validate
to ensure the on-disk bundle representation is correct.-
A bundle manifests directory named
Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which reference one or more bundle images.
Build the bundle image. Set
BUNDLE_IMG
with the details for the registry, user namespace, and image tag where you intend to push the image:$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.6.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc
) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
-
OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use
apiextensions.k8s.io/v1
CRDs, for example OpenShift Container Platform 4.15) -
Logged in to the cluster with
oc
using an account withcluster-admin
permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \1 -n <namespace> \2 <registry>/<user>/<bundle_image_name>:<tag> 3
- 1
- The
run bundle
command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM. - 2
- Optional: By default, the command installs the Operator in the currently active project in your
~/.kube/config
file. You can add the -n
flag to set a different namespace scope for the installation. - 3
- If you do not specify an image, the command uses
quay.io/operator-framework/opm:latest
as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.
This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.6.2.6. Additional resources
- See Project layout for Java-based Operators to learn about the directory structures created by the Operator SDK.
- If a cluster-wide egress proxy is configured, cluster administrators can override the proxy settings or inject a custom CA certificate for specific Operators running on Operator Lifecycle Manager (OLM).
5.6.3. Project layout for Java-based Operators
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The operator-sdk
CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.6.3.1. Java-based project layout
Java-based Operator projects generated by the operator-sdk init
command contain the following files and directories:
File or directory | Purpose |
---|---|
pom.xml | File that contains the dependencies required to run the Operator. |
src/main/java/ | Directory that contains the files that represent the API. If the domain is example.com, the API files are generated in the com/example package. |
MemcachedReconciler.java | Java file that defines controller implementations. |
MemcachedSpec.java | Java file that defines the desired state of the Memcached CR. |
MemcachedStatus.java | Java file that defines the observed state of the Memcached CR. |
Memcached.java | Java file that defines the Schema for Memcached APIs. |
target/kubernetes/ | Directory that contains the CRD yaml files. |
5.6.4. Updating projects for newer Operator SDK versions
OpenShift Container Platform 4.15 supports Operator SDK 1.31.0. If you already have the 1.28.0 CLI installed on your workstation, you can update the CLI to 1.31.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.31.0, update steps are required for the associated breaking changes introduced since 1.28.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.28.0.
5.6.4.1. Updating Java-based Operator projects for Operator SDK 1.31.0
The following procedure updates an existing Java-based Operator project for compatibility with 1.31.0.
Prerequisites
- Operator SDK 1.31.0 installed
- An Operator project created or maintained with Operator SDK 1.28.0
Procedure
Edit your Operator project’s makefile to update the Operator SDK version to 1.31.0, as shown in the following example:
Example makefile
# Set the Operator SDK version to use. By default, what is installed on the system is used.
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.31.0 1
- 1: Change the version from 1.28.0 to 1.31.0.
5.6.4.2. Additional resources
5.7. Defining cluster service versions (CSVs)
A cluster service version (CSV), defined by a ClusterServiceVersion
object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.
The Operator SDK includes the CSV generator to generate a CSV for the current Operator project, customized using information contained in YAML manifests and Operator source files.
A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is designed so that its update system can easily be extended to handle new CSV features going forward.
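Stripped to its skeleton, a CSV combines this descriptive metadata with the technical installation information. The following sketch only illustrates the overall shape; the name and values are illustrative, and generated CSVs contain many more fields:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.0.1
spec:
  install:
    strategy: deployment
    spec:
      permissions: []        # namespace-scoped RBAC rules
      clusterPermissions: [] # cluster-scoped RBAC rules
      deployments: []        # the Operator deployment itself
  installModes:
  - type: AllNamespaces
    supported: true
  customresourcedefinitions:
    owned: []                # custom resources the Operator manages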
5.7.1. How CSV generation works
Operator bundle manifests, which include cluster service versions (CSVs), describe how to display, create, and manage an application with Operator Lifecycle Manager (OLM). The CSV generator in the Operator SDK, called by the generate bundle
subcommand, is the first step towards publishing your Operator to a catalog and deploying it with OLM. The subcommand requires certain input manifests to construct a CSV manifest; all inputs are read when the command is invoked, along with a CSV base, to idempotently generate or regenerate a CSV.
Typically, the generate kustomize manifests
subcommand would be run first to generate the input Kustomize bases that are consumed by the generate bundle
subcommand. However, the Operator SDK provides the make bundle
command, which automates several tasks, including running the following subcommands in order:
- generate kustomize manifests
- generate bundle
- bundle validate
Additional resources
- See Bundling an Operator for a full procedure that includes generating a bundle and CSV.
5.7.1.1. Generated files and resources
The make bundle
command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion (CSV) object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
The following resources are typically included in a CSV:
- Role: Defines Operator permissions within a namespace.
- ClusterRole: Defines cluster-wide Operator permissions.
- Deployment: Defines how an Operand of an Operator is run in pods.
- CustomResourceDefinition (CRD): Defines custom resources that your Operator reconciles.
- Custom resource examples: Examples of resources adhering to the spec of a particular CRD.
5.7.1.2. Version management
The --version
flag for the generate bundle
subcommand supplies a semantic version for your bundle when creating one for the first time and when upgrading an existing one.
By setting the VERSION
variable in your Makefile
, the --version
flag is automatically invoked using that value when the generate bundle
subcommand is run by the make bundle
command. The CSV version is the same as the Operator version, and a new CSV is generated when upgrading Operator versions.
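For example, when VERSION is set to 0.0.1 in the Makefile of a project named memcached-operator, the generated CSV would typically carry a matching name and version; the values below are illustrative:
metadata:
  name: memcached-operator.v0.0.1
spec:
  version: 0.0.1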
5.7.2. Manually-defined CSV fields
Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).
Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK warns you during CSV generation when data is missing from any of the required fields.
The following tables detail which manually-defined CSV fields are required and which are optional.
Field | Description |
---|---|
|
A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example |
|
The capability level according to the Operator maturity model. Options include |
| A public name to identify the Operator. |
| A short description of the functionality of the Operator. |
| Keywords describing the Operator. |
|
Human or organizational entities maintaining the Operator, with a |
|
The provider of the Operator (usually an organization), with a |
| Key-value pairs to be used by Operator internals. |
|
Semantic version of the Operator, for example |
|
Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in
|
Field | Description |
---|---|
| The name of the CSV being replaced by this CSV. |
|
URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a |
| Selectors by which the Operator can pair resources in a cluster. |
|
A base64-encoded icon unique to the Operator, set in a |
|
The level of maturity the software has achieved at this version. Options include |
Further details on what data each field above should hold are found in the CSV spec.
Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
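As a point of reference, a hand-edited metadata and spec section covering the required fields listed above might look like the following sketch; all names and values are illustrative:
metadata:
  name: example-operator.v0.0.1
  annotations:
    capabilities: Basic Install
spec:
  displayName: Example Operator
  description: Manages instances of the example application.
  keywords:
    - example
    - application
  maintainers:
    - name: Example Maintainer
      email: maintainer@example.com
  provider:
    name: Example, Inc.
  labels:
    example.com/managed-by: example-operator
  version: 0.0.1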
Additional resources
5.7.3. Operator metadata annotations
Operator developers can set certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub or the Red Hat Ecosystem Catalog. Operator metadata annotations are manually defined by setting the metadata.annotations
field in the CSV YAML file.
5.7.3.1. Infrastructure features annotations
Annotations in the features.operators.openshift.io
group detail the infrastructure features that an Operator might support, specified by setting a "true"
or "false"
value. Users can view and filter by these features when discovering Operators through OperatorHub in the web console or on the Red Hat Ecosystem Catalog. These annotations are supported in OpenShift Container Platform 4.10 and later.
The features.operators.openshift.io
infrastructure feature annotations deprecate the operators.openshift.io/infrastructure-features
annotations used in earlier versions of OpenShift Container Platform. See "Deprecated infrastructure feature annotations" for more information.
Annotation | Description | Valid values[1] |
---|---|---|
features.operators.openshift.io/disconnected | Specify whether an Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. The Operator leverages the relatedImages field in its cluster service version (CSV) so that all related images can be mirrored. | "true" or "false" |
features.operators.openshift.io/fips-compliant | Specify whether an Operator accepts the FIPS-140 configuration of the underlying platform and works on nodes that are booted into FIPS mode. In this mode, the Operator and any workloads it manages (operands) are solely calling the Red Hat Enterprise Linux (RHEL) cryptographic library submitted for FIPS-140 validation. | "true" or "false" |
features.operators.openshift.io/proxy-aware | Specify whether an Operator supports running on a cluster behind a proxy by accepting the standard HTTP_PROXY and HTTPS_PROXY proxy environment variables. | "true" or "false" |
features.operators.openshift.io/tls-profiles | Specify whether an Operator implements well-known tunables to modify the TLS cipher suite used by the Operator and, if applicable, any of the workloads it manages (operands). | "true" or "false" |
features.operators.openshift.io/token-auth-aws | Specify whether an Operator supports configuration for tokenized authentication with AWS APIs via AWS Secure Token Service (STS) by using the Cloud Credential Operator (CCO). | "true" or "false" |
features.operators.openshift.io/token-auth-azure | Specify whether an Operator supports configuration for tokenized authentication with Azure APIs via Azure Managed Identity by using the Cloud Credential Operator (CCO). | "true" or "false" |
features.operators.openshift.io/token-auth-gcp | Specify whether an Operator supports configuration for tokenized authentication with Google Cloud APIs via GCP Workload Identity Foundation (WIF) by using the Cloud Credential Operator (CCO). | "true" or "false" |
features.operators.openshift.io/cnf | Specify whether an Operator provides a Cloud-Native Network Function (CNF) Kubernetes plugin. | "true" or "false" |
features.operators.openshift.io/cni | Specify whether an Operator provides a Container Network Interface (CNI) Kubernetes plugin. | "true" or "false" |
features.operators.openshift.io/csi | Specify whether an Operator provides a Container Storage Interface (CSI) Kubernetes plugin. | "true" or "false" |
- Valid values are shown intentionally with double quotes, because Kubernetes annotations must be strings.
Example CSV with infrastructure feature annotations
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    features.operators.openshift.io/disconnected: "true"
    features.operators.openshift.io/fips-compliant: "false"
    features.operators.openshift.io/proxy-aware: "false"
    features.operators.openshift.io/tls-profiles: "false"
    features.operators.openshift.io/token-auth-aws: "false"
    features.operators.openshift.io/token-auth-azure: "false"
    features.operators.openshift.io/token-auth-gcp: "false"
Additional resources
5.7.3.2. Deprecated infrastructure feature annotations
Starting in OpenShift Container Platform 4.14, the operators.openshift.io/infrastructure-features group of annotations is deprecated in favor of the group of annotations with the features.operators.openshift.io namespace. While you are encouraged to use the newer annotations, both groups are currently accepted when used in parallel.
These annotations detail the infrastructure features that an Operator supports. Users can view and filter by these features when discovering Operators through OperatorHub in the web console or on the Red Hat Ecosystem Catalog.
Valid annotation values | Description |
---|---|
disconnected | Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. All related images required for mirroring are listed by the Operator. |
cnf | Operator provides a Cloud-native Network Functions (CNF) Kubernetes plugin. |
cni | Operator provides a Container Network Interface (CNI) Kubernetes plugin. |
csi | Operator provides a Container Storage Interface (CSI) Kubernetes plugin. |
fips | Operator accepts the FIPS mode of the underlying platform and works on nodes that are booted into FIPS mode. Important: When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. |
proxy-aware | Operator supports running on a cluster behind a proxy. Operator accepts the standard proxy environment variables HTTP_PROXY and HTTPS_PROXY. |
Example CSV with disconnected
and proxy-aware
support
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
5.7.3.3. Other optional annotations
The following Operator annotations are optional.
Annotation | Description |
---|---|
| Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize. |
|
Specify a single required custom resource by adding |
| Set a suggested namespace where the Operator should be deployed. |
|
Set a manifest for a |
|
Free-form array for listing any specific subscriptions that are required to use the Operator. For example, |
| Hides CRDs in the UI that are not meant for user manipulation. |
Example CSV with an OpenShift Container Platform license requirement
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
Example CSV with a 3scale license requirement
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'
5.7.4. Enabling your Operator for restricted network environments
As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.
Operator requirements for supporting disconnected mode
- Replace hard-coded image references with environment variables.
In the cluster service version (CSV) of your Operator:
- List any related images, or other container images, that your Operator might require to perform its functions (see the example snippet after this list).
- Reference all specified images by a digest (SHA) and not by a tag.
- All dependencies of your Operator must also support running in a disconnected mode.
- Your Operator must not require any off-cluster resources.
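For example, related images can be listed in the CSV under spec.relatedImages and referenced by digest; the name and digest below are placeholders:
spec:
  relatedImages:
  - name: memcached
    image: registry.example.com/example/memcached@sha256:<digest>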
Prerequisites
- An Operator project with a CSV. The following procedure uses the Memcached Operator as an example for Go-, Ansible-, and Helm-based projects.
Procedure
Set an environment variable for the additional image references used by the Operator in the
config/manager/manager.yaml
file:
Example 5.10. Example config/manager/manager.yaml file
...
spec:
  ...
  spec:
    ...
    containers:
    - command:
      - /manager
      ...
      env:
      - name: <related_image_environment_variable> 1
        value: "<related_image_reference_with_tag>" 2
Replace hard-coded image references with environment variables in the relevant file for your Operator project type:
For Go-based Operator projects, add the environment variable to the
controllers/memcached_controller.go
file as shown in the following example:
Example 5.11. Example controllers/memcached_controller.go file
// deploymentForMemcached returns a memcached Deployment object
...
    Spec: corev1.PodSpec{
        Containers: []corev1.Container{{
-           Image:   "memcached:1.4.36-alpine", 1
+           Image:   os.Getenv("<related_image_environment_variable>"), 2
            Name:    "memcached",
            Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
            Ports: []corev1.ContainerPort{{
...
Note: The os.Getenv function returns an empty string if a variable is not set. Set the <related_image_environment_variable> before changing the file.
For Ansible-based Operator projects, add the environment variable to the
roles/memcached/tasks/main.yml
file as shown in the following example:
Example 5.12. Example roles/memcached/tasks/main.yml file
spec:
  containers:
  - name: memcached
    command:
    - memcached
    - -m=64
    - -o
    - modern
    - -v
-   image: "docker.io/memcached:1.4.36-alpine" 1
+   image: "{{ lookup('env', '<related_image_environment_variable>') }}" 2
    ports:
    - containerPort: 11211
...
For Helm-based Operator projects, add the
overrideValues
field to the watches.yaml file as shown in the following example:
Example 5.13. Example watches.yaml file
...
- group: demo.example.com
  version: v1alpha1
  kind: Memcached
  chart: helm-charts/memcached
  overrideValues: 1
    relatedImage: ${<related_image_environment_variable>} 2
Add the value of the overrideValues field to the helm-charts/memcached/values.yaml file as shown in the following example:
Example helm-charts/memcached/values.yaml file
...
relatedImage: ""
Edit the chart template in the
helm-charts/memcached/templates/deployment.yaml
file as shown in the following example:
Example 5.14. Example helm-charts/memcached/templates/deployment.yaml file
containers:
  - name: {{ .Chart.Name }}
    securityContext:
      {{- toYaml .Values.securityContext | nindent 12 }}
    image: "{{ .Values.image.pullPolicy }}"
    env: 1
      - name: related_image 2
        value: "{{ .Values.relatedImage }}" 3
Add the
BUNDLE_GEN_FLAGS
variable definition to your Makefile with the following changes:
Example Makefile
BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)

# USE_IMAGE_DIGESTS defines if images are resolved via tags or digests
# You can enable this value if you would like to use SHA Based Digests
# To enable set flag to true
USE_IMAGE_DIGESTS ?= false
ifeq ($(USE_IMAGE_DIGESTS), true)
	BUNDLE_GEN_FLAGS += --use-image-digests
endif

...

-	$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS) 1
+	$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle $(BUNDLE_GEN_FLAGS) 2

...
To update your Operator image to use a digest (SHA) and not a tag, run the
make bundle
command and setUSE_IMAGE_DIGESTS
totrue
:$ make bundle USE_IMAGE_DIGESTS=true
Add the
disconnected
annotation, which indicates that the Operator works in a disconnected environment:
metadata:
  annotations:
    operators.openshift.io/infrastructure-features: '["disconnected"]'
Operators can be filtered in OperatorHub by this infrastructure feature.
5.7.5. Enabling your Operator for multiple architectures and operating systems
Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.
If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:
labels:
  operatorframework.io/arch.<arch>: supported 1
  operatorframework.io/os.<os>: supported 2
Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest
API.
If a CSV does not include an os
label, it is treated as if it has the following Linux support label by default:
labels:
  operatorframework.io/os.linux: supported
If a CSV does not include an arch
label, it is treated as if it has the following AMD64 support label by default:
labels:
  operatorframework.io/arch.amd64: supported
If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.
Prerequisites
- An Operator project with a CSV.
- To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
- For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.
Procedure
Add a label in the
metadata.labels
of your CSV for each supported architecture and operating system that your Operator supports:
labels:
  operatorframework.io/arch.s390x: supported
  operatorframework.io/os.zos: supported
  operatorframework.io/os.linux: supported 1
  operatorframework.io/arch.amd64: supported 2
Additional resources
- See the Image Manifest V 2, Schema 2 specification for more information on manifest lists.
5.7.5.1. Architecture and operating system support for Operators
The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:
Architecture | String |
---|---|
AMD64 | amd64 |
ARM64 | arm64 |
IBM Power® | ppc64le |
IBM Z® | s390x |
Operating system | String |
---|---|
Linux | linux |
z/OS | zos |
Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.
5.7.6. Setting a suggested namespace
Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.
As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster by using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.
Procedure
In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:
metadata:
  annotations:
    operatorframework.io/suggested-namespace: <namespace> 1
- 1
- Set your suggested namespace.
5.7.7. Setting a suggested namespace with default node selector
Some Operators expect to run only on control plane nodes, which can be done by setting a nodeSelector
in the Pod
spec by the Operator itself.
To avoid a duplicated and potentially conflicting cluster-wide default nodeSelector, you can set a default node selector on the namespace where the Operator runs. The default node selector takes precedence over the cluster default, so the cluster default is not applied to the pods in the Operator's namespace.
When adding the Operator to a cluster using OperatorHub, the web console auto-populates the suggested namespace for the cluster administrator during the installation process. The suggested namespace is created using the namespace manifest in YAML which is included in the cluster service version (CSV).
Procedure
In your CSV, set the operatorframework.io/suggested-namespace-template annotation with a manifest for a Namespace object. The following sample is a manifest for an example Namespace with the namespace default node selector specified:
metadata:
  annotations:
    operatorframework.io/suggested-namespace-template: 1
      {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
          "name": "vertical-pod-autoscaler-suggested-template",
          "annotations": {
            "openshift.io/node-selector": ""
          }
        }
      }
- 1
- Set your suggested namespace.
Note: If both suggested-namespace and suggested-namespace-template annotations are present in the CSV, suggested-namespace-template should take precedence.
5.7.8. Enabling Operator conditions
Operator Lifecycle Manager (OLM) provides Operators with a channel to communicate complex states that influence OLM behavior while managing the Operator. By default, OLM creates an OperatorCondition
custom resource definition (CRD) when it installs an Operator. Based on the conditions set in the OperatorCondition
custom resource (CR), the behavior of OLM changes accordingly.
To support Operator conditions, an Operator must be able to read the OperatorCondition
CR created by OLM and have the ability to complete the following tasks:
- Get the specific condition.
- Set the status of a specific condition.
This can be accomplished by using the operator-lib
library. An Operator author can provide a controller-runtime
client in their Operator for the library to access the OperatorCondition
CR owned by the Operator in the cluster.
The library provides a generic Conditions
interface, which has the following methods to Get
and Set
a conditionType
in the OperatorCondition
CR:
Get
-
To get the specific condition, the library uses the
client.Get
function fromcontroller-runtime
, which requires anObjectKey
of typetypes.NamespacedName
present inconditionAccessor
. Set
-
To update the status of the specific condition, the library uses the
client.Update
function fromcontroller-runtime
. An error occurs if theconditionType
is not present in the CRD.
The Operator is allowed to modify only the status
subresource of the CR. Operators can either delete or update the status.conditions
array to include the condition. For more details on the format and description of the fields present in the conditions, see the upstream Condition GoDocs.
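For illustration, a single entry in the status.conditions array of an OperatorCondition CR follows the standard Kubernetes condition format; the reason, message, and timestamp below are placeholders:
status:
  conditions:
  - type: Upgradeable
    status: "False"
    reason: MigrationInProgress
    message: The Operator is performing a data migration and cannot be upgraded right now.
    lastTransitionTime: "2024-01-01T00:00:00Z"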
Operator SDK 1.31.0 supports operator-lib
v0.11.0.
Prerequisites
- An Operator project generated using the Operator SDK.
Procedure
To enable Operator conditions in your Operator project:
In the go.mod file of your Operator project, add github.com/operator-framework/operator-lib as a required library:
module github.com/example-inc/memcached-operator

go 1.19

require (
  k8s.io/apimachinery v0.26.0
  k8s.io/client-go v0.26.0
  sigs.k8s.io/controller-runtime v0.14.1
  github.com/operator-framework/operator-lib v0.11.0
)
Write your own constructor in your Operator logic that does the following:
-
Accepts a
controller-runtime
client. -
Accepts a
conditionType
. -
Returns a
Condition
interface to update or add conditions.
Because OLM currently supports the Upgradeable condition, you can create an interface that has methods to access the Upgradeable condition. For example:
import (
    ...
    apiv1 "github.com/operator-framework/api/pkg/operators/v1"
)

func NewUpgradeable(cl client.Client) (Condition, error) {
    return NewCondition(cl, apiv1.OperatorUpgradeable)
}

cond, err := NewUpgradeable(cl)
In this example, the NewUpgradeable constructor is further used to create a variable cond of type Condition. The cond variable would in turn have Get and Set methods, which can be used for handling the OLM Upgradeable condition.
Additional resources
5.7.9. Defining webhooks
Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.
The cluster service version (CSV) resource of an Operator can include a webhookdefinitions
section to define the following types of webhooks:
- Admission webhooks (validating and mutating)
- Conversion webhooks
Procedure
Add a
webhookdefinitions
section to thespec
section of the CSV of your Operator and include any webhook definitions using atype
ofValidatingAdmissionWebhook
,MutatingAdmissionWebhook
, orConversionWebhook
. The following example contains all three types of webhooks:CSV containing webhooks
apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook ... ... ... strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5 ...
Additional resources
- Types of webhook admission plugins
Kubernetes documentation:
5.7.9.1. Webhook considerations for OLM
When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:
-
The
type
field must be set to eitherValidatingAdmissionWebhook
,MutatingAdmissionWebhook
, orConversionWebhook
, or the CSV will be placed in a failed phase. -
The CSV must contain a deployment whose name is equivalent to the value supplied in the
deploymentName
field of thewebhookdefinition
.
When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.
Certificate authority constraints
OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:
-
The TLS certificate file is mounted to the deployment at
/apiserver.local.config/certificates/apiserver.crt
. -
The TLS key file is mounted to the deployment at
/apiserver.local.config/certificates/apiserver.key
.
Admission webhook rules constraints
To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:
- Requests that target all groups
-
Requests that target the
operators.coreos.com
group -
Requests that target the
ValidatingWebhookConfigurations
orMutatingWebhookConfigurations
resources
Conversion webhook constraints
OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:
-
CSVs featuring a conversion webhook can only support the
AllNamespaces
install mode. -
The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil (see the sketch after this list).
- The conversion webhook defined in the CSV must target an owned CRD.
- There can only be one conversion webhook on the entire cluster for a given CRD.
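For example, a sketch of the CRD setting that the preserveUnknownFields constraint refers to, reusing the CRD name from the earlier webhook example:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webhooktests.webhook.operators.coreos.io
spec:
  preserveUnknownFields: false
  ...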
5.7.10. Understanding your custom resource definitions (CRDs)
There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.
5.7.10.1. Owned CRDs
The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.
It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.
Field | Description | Required/optional |
---|---|---|
| The full name of your CRD. | Required |
| The version of that object API. | Required |
| The machine readable name of your CRD. | Required |
|
A human readable version of your CRD name, for example | Required |
| A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD. | Required |
|
The API group that this CRD belongs to, for example | Optional |
|
Your CRDs own one or more types of Kubernetes objects. These are listed in the It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs. There are three types of descriptors:
All descriptors accept the following fields:
Also see the openshift/console project for more information on Descriptors in general. | Optional |
The following example depicts a MongoDB Standalone
CRD that requires some user input in the form of a secret and config map, and orchestrates services, stateful sets, pods and config maps:
Example owned CRD
- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.
5.7.10.2. Required CRDs
Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.
An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.
Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace, and a service account is created for each Operator to create, watch, and modify the required Kubernetes resources.
Field | Description | Required/optional |
---|---|---|
| The full name of the CRD you require. | Required |
| The version of that object API. | Required |
| The Kubernetes object kind. | Required |
| A human readable version of the CRD. | Required |
| A summary of how the component fits in your larger architecture. | Required |
Example required CRD
required:
- name: etcdclusters.etcd.database.coreos.com
  version: v1beta2
  kind: EtcdCluster
  displayName: etcd Cluster
  description: Represents a cluster of etcd nodes.
5.7.10.3. CRD upgrades
OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:
- All existing serving versions in the current CRD are present in the new CRD.
- All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
5.7.10.3.1. Adding a new CRD version
Procedure
To add a new version of a CRD to your Operator:
Add a new entry in the CRD resource under the versions section of your CSV.
For example, if the current CRD has a version v1alpha1 and you want to add a new version v1beta1 and mark it as the new storage version, add a new entry for v1beta1:
versions:
  - name: v1alpha1
    served: true
    storage: false
  - name: v1beta1 1
    served: true
    storage: true
- 1
- New entry.
Ensure the referencing version of the CRD in the owned section of your CSV is updated if the CSV intends to use the new version:
customresourcedefinitions:
  owned:
  - name: cluster.example.com
    version: v1beta1 1
    kind: cluster
    displayName: Cluster
- 1
- Update the version.
- Push the updated CRD and CSV to your bundle.
5.7.10.3.2. Deprecating or removing a CRD version
Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must be first disabled by setting the served
field in the CRD to false
. Then, the non-serving version can be removed on the subsequent CRD upgrade.
Procedure
To deprecate and remove a specific version of a CRD:
Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:
versions:
  - name: v1alpha1
    served: false 1
    storage: true
- 1
- Set to false.
Switch the storage version to a serving version if the version to be deprecated is currently the storage version. For example:
versions:
  - name: v1alpha1
    served: false
    storage: false 1
  - name: v1beta1
    served: true
    storage: true 2
Note: To remove a specific version that is or was the storage version from a CRD, that version must be removed from the storedVersions list in the status of the CRD. OLM will attempt to do this for you if it detects a stored version no longer exists in the new CRD.
- Upgrade the CRD with the above changes.
In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:
versions:
  - name: v1beta1
    served: true
    storage: true
-
Ensure the referencing CRD version in the
owned
section of your CSV is updated accordingly if that version is removed from the CRD.
5.7.10.4. CRD templates
Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples
. Compatible UIs will pre-fill this template for users to further customize.
The annotation consists of a list of the kind, for example, the CRD name and the corresponding metadata
and spec
of the Kubernetes object.
The following full example provides templates for EtcdCluster
, EtcdBackup
and EtcdRestore
:
metadata: annotations: alm-examples: >- [{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"<operator_namespace>"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]
5.7.10.5. Hiding internal objects
It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication
CRD that is created whenever a user creates a Database object with replication: true
.
As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects
annotation to the cluster service version (CSV) of your Operator.
Procedure
-
Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or
spec
block of your CR, if applicable to your Operator. Add the
operators.operatorframework.io/internal-objects
annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:
Internal object annotation
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
  annotations:
    operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' 1
...
- 1
- Set any internal CRDs as an array of strings.
5.7.10.6. Initializing required custom resources
An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.
As an Operator developer, you can specify a single required custom resource by adding operatorframework.io/initialization-resource
to the cluster service version (CSV) during Operator installation. You are then prompted to create the custom resource through a template that is provided in the CSV. The annotation must include a template that contains a complete YAML definition that is required to initialize the resource during installation.
If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.
Procedure
Add the
operatorframework.io/initialization-resource
annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of aStorageCluster
resource and provides a full YAML definition:Initialization resource annotation
apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { "apiVersion": "ocs.openshift.io/v1", "kind": "StorageCluster", "metadata": { "name": "example-storagecluster" }, "spec": { "manageNodes": false, "monPVCTemplate": { "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "10Gi" } }, "storageClassName": "gp2" } }, "storageDeviceSets": [ { "count": 3, "dataPVCTemplate": { "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "1Ti" } }, "storageClassName": "gp2", "volumeMode": "Block" } }, "name": "example-deviceset", "placement": {}, "portable": true, "resources": {} } ] } } ...
5.7.11. Understanding your API services
As with CRDs, there are two types of API services that your Operator may use: owned and required.
5.7.11.1. Owned API services
When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server
that backs it and the group/version/kind (GVK) it provides.
An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.
Field | Description | Required/optional |
---|---|---|
|
Group that the API service provides, for example | Required |
|
Version of the API service, for example | Required |
| A kind that the API service is expected to provide. | Required |
| The plural name for the API service provided. | Required |
|
Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the | Required |
|
A human readable version of your API service name, for example | Required |
| A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
| Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| Essentially the same as for owned CRDs. | Optional |
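The following sketch shows how these fields might be arranged under apiservicedefinitions.owned in a CSV; the group, version, kind, and deployment name are illustrative:
apiservicedefinitions:
  owned:
  - group: example.example.com
    version: v1alpha1
    kind: ExampleKind
    name: examplekinds
    deploymentName: example-apiserver
    displayName: Example Kind
    description: Kind provided by the example extension API server.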
5.7.11.1.1. API service resource creation
Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:
-
Service pod selectors are copied from the CSV deployment matching the
DeploymentName
field of the API service description. - A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.
5.7.11.1.2. API service serving certificates
OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated Service
resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.
The certificate is stored as a type kubernetes.io/tls
secret in the deployment namespace, and a volume named apiservice-cert
is automatically appended to the volumes section of the deployment in the CSV matching the DeploymentName
field of the API service description.
If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates
and any existing volume mounts with the same path are replaced.
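For reference, the generated volume and volume mount appended by OLM to the matching deployment would look roughly like the following; the container name and secret name are placeholders:
spec:
  template:
    spec:
      containers:
      - name: example-apiserver
        volumeMounts:
        - name: apiservice-cert
          mountPath: /apiserver.local.config/certificates
      volumes:
      - name: apiservice-cert
        secret:
          secretName: <serving_cert_secret_name>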
5.7.11.2. Required API services
OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.
Field | Description | Required/optional |
---|---|---|
|
Group that the API service provides, for example | Required |
|
Version of the API service, for example | Required |
| A kind that the API service is expected to provide. | Required |
|
A human readable version of your API service name, for example | Required |
| A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
5.8. Working with bundle images
You can use the Operator SDK to package, deploy, and upgrade Operators in the bundle format for use on Operator Lifecycle Manager (OLM).
5.8.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
-
OpenShift CLI (
oc
) v4.15+ installed - Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following
make
commands in your Operator project directory to build and push your Operator image. Modify theIMG
argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the
make bundle
command, which invokes several commands, including the Operator SDKgenerate bundle
andbundle validate
subcommands:$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The
make bundle
command creates the following files and directories in your Operator project:-
A bundle manifests directory named
bundle/manifests
that contains aClusterServiceVersion
object -
A bundle metadata directory named
bundle/metadata
-
All custom resource definitions (CRDs) in a
config/crd
directory -
A Dockerfile
bundle.Dockerfile
These files are then automatically validated by using
operator-sdk bundle validate
to ensure the on-disk bundle representation is correct.-
A bundle manifests directory named
Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.
Build the bundle image. Set
BUNDLE_IMG
with the details for the registry, user namespace, and image tag where you intend to push the image:$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.8.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc
) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
-
OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use
apiextensions.k8s.io/v1
CRDs, for example OpenShift Container Platform 4.15) -
Logged in to the cluster with
oc
using an account withcluster-admin
permissions - If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \ 1
    -n <namespace> \ 2
    <registry>/<user>/<bundle_image_name>:<tag> 3
- 1
- The
run bundle
command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM. - 2
- Optional: By default, the command installs the Operator in the currently active project in your
~/.kube/config
file. You can add the-n
flag to set a different namespace scope for the installation. - 3
- If you do not specify an image, the command uses
quay.io/operator-framework/opm:latest
as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
ImportantAs of OpenShift Container Platform 4.11, the
run bundle
command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
-
Deploy your Operator to your cluster by creating an
OperatorGroup
,Subscription
,InstallPlan
, and all other required resources, including RBAC.
Additional resources
- File-based catalogs in Operator Framework packaging format
- File-based catalogs in Managing custom catalogs
- Bundle format
5.8.3. Publishing a catalog containing a bundled Operator
To install and manage Operators, Operator Lifecycle Manager (OLM) requires that Operator bundles are listed in an index image, which is referenced by a catalog on the cluster. As an Operator author, you can use the Operator SDK to create an index containing the bundle for your Operator and all of its dependencies. This is useful for testing on remote clusters and publishing to container registries.
The Operator SDK uses the opm
CLI to facilitate index image creation. Experience with the opm
command is not required. For advanced use cases, the opm
command can be used directly instead of the Operator SDK.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
-
OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use
apiextensions.k8s.io/v1
CRDs, for example OpenShift Container Platform 4.15) -
Logged in to the cluster with
oc
using an account withcluster-admin
permissions
Procedure
Run the following
make
command in your Operator project directory to build an index image containing your Operator bundle:$ make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>
where the
CATALOG_IMG
argument references a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.Push the built index image to a repository:
$ make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>
TipYou can use Operator SDK
make
commands together if you would rather perform multiple actions in sequence at once. For example, if you had not yet built a bundle image for your Operator project, you can build and push both a bundle image and an index image with the following syntax:$ make bundle-build bundle-push catalog-build catalog-push \ BUNDLE_IMG=<bundle_image_pull_spec> \ CATALOG_IMG=<index_image_pull_spec>
Alternatively, you can set the
IMAGE_TAG_BASE
field in yourMakefile
to an existing repository:IMAGE_TAG_BASE=quay.io/example/my-operator
You can then use the following syntax to build and push images with automatically-generated names, such as
quay.io/example/my-operator-bundle:v0.0.1
for the bundle image andquay.io/example/my-operator-catalog:v0.0.1
for the index image:$ make bundle-build bundle-push catalog-build catalog-push
Define a
CatalogSource
object that references the index image you just generated, and then create the object by using theoc apply
command or web console:Example
CatalogSource
YAMLapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m
- 1
- Specify the value of
legacy
orrestricted
. If the field is not set, the default value islegacy
. In a future OpenShift Container Platform release, it is planned that the default value will berestricted
. If your catalog cannot run withrestricted
permissions, it is recommended that you manually set this field tolegacy
. - 2
- Set
image
to the image pull spec you used previously with theCATALOG_IMG
argument.
Check the catalog source:
$ oc get catalogsource
Example output
NAME           DISPLAY   TYPE   PUBLISHER   AGE
cs-memcached   My Test   grpc   Company     4h31m
Verification
Install the Operator using your catalog:
Define an
OperatorGroup
object and create it by using theoc apply
command or web console:Example
OperatorGroup
YAMLapiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>
Define a
Subscription
object and create it by using theoc apply
command or web console:Example
Subscription
YAMLapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: "alpha" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1
Verify the installed Operator is running:
Check the Operator group:
$ oc get og
Example output
NAME      AGE
my-test   4h40m
Check the cluster service version (CSV):
$ oc get csv
Example output
NAME                        DISPLAY   VERSION   REPLACES   PHASE
memcached-operator.v0.0.1   Test      0.0.1                Succeeded
Check the pods for the Operator:
$ oc get pods
Example output
NAME                                                              READY   STATUS      RESTARTS   AGE
9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6   0/1     Completed   0          4h33m
catalog-controller-manager-7fd5b7b987-69s4n                       2/2     Running     0          4h32m
cs-memcached-7622r                                                1/1     Running     0          4h33m
Additional resources
-
See Managing custom catalogs for details on direct usage of the
opm
CLI for more advanced use cases.
5.8.4. Testing an Operator upgrade on Operator Lifecycle Manager
You can quickly test upgrading your Operator by using Operator Lifecycle Manager (OLM) integration in the Operator SDK, without requiring you to manually manage index images and catalog sources.
The run bundle-upgrade
subcommand automates triggering an installed Operator to upgrade to a later version by specifying a bundle image for the later version.
Prerequisites
-
Operator installed with OLM either by using the
run bundle
subcommand or with traditional OLM installation - A bundle image that represents a later version of the installed Operator
Procedure
If your Operator has not already been installed with OLM, install the earlier version either by using the
run bundle
subcommand or with traditional OLM installation.NoteIf the earlier version of the bundle was installed traditionally using OLM, the newer bundle that you intend to upgrade to must not exist in the index image referenced by the catalog source. Otherwise, running the
run bundle-upgrade
subcommand will cause the registry pod to fail because the newer bundle is already referenced by the index that provides the package and cluster service version (CSV).For example, you can use the following
run bundle
subcommand for a Memcached Operator by specifying the earlier bundle image:$ operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1
Example output
INFO[0006] Creating a File-Based Catalog of the bundle "quay.io/demo/memcached-operator:v0.0.1"
INFO[0008] Generated a valid File-Based Catalog
INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1
INFO[0012] Created CatalogSource: memcached-operator-catalog
INFO[0012] OperatorGroup "operator-sdk-og" created
INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub
INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub
INFO[0015] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to reach 'Succeeded' phase
INFO[0015] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to appear
INFO[0026] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Pending
INFO[0028] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Installing
INFO[0059] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Succeeded
INFO[0059] OLM has successfully installed "memcached-operator.v0.0.1"
Upgrade the installed Operator by specifying the bundle image for the later Operator version:
$ operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2
Example output
INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project
INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project
INFO[0008] Generated a valid Upgraded File-Based Catalog
INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2
INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations
INFO[0010] Deleted previous registry pod with name "quay-io-demo-memcached-operator-v0-0-1"
INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub
INFO[0042] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.2" to reach 'Succeeded' phase
INFO[0019] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Pending
INFO[0042] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: InstallReady
INFO[0043] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Installing
INFO[0044] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Succeeded
INFO[0044] Successfully upgraded to "memcached-operator.v0.0.2"
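You can also confirm the result independently of the subcommand output. The following is a minimal sketch, assuming the my-project namespace shown in the example output:
$ oc get csv -n my-project
# The PHASE column for memcached-operator.v0.0.2 should report Succeeded.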
Clean up the installed Operators:
$ operator-sdk cleanup memcached-operator
Additional resources
5.8.5. Controlling Operator compatibility with OpenShift Container Platform versions
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If your Operator is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is upgraded to the Kubernetes version where the API has been removed.
As an Operator author, it is strongly recommended that you review the Deprecated API Migration Guide in Kubernetes documentation and keep your Operator projects up to date to avoid using deprecated and removed APIs. Ideally, you should update your Operator before the release of a future version of OpenShift Container Platform that would make the Operator incompatible.
When an API is removed from an OpenShift Container Platform version, Operators running on that cluster version that are still using removed APIs will no longer work properly. As an Operator author, you should plan to update your Operator projects to accommodate API deprecation and removal to avoid interruptions for users of your Operator.
You can check the event alerts of your Operators to find whether there are any warnings about APIs currently in use. The following alerts fire when they detect an API in use that will be removed in the next release:
APIRemovedInNextReleaseInUse
- APIs that will be removed in the next OpenShift Container Platform release.
APIRemovedInNextEUSReleaseInUse
- APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.
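In addition to watching these alerts, you can query the APIRequestCount resources to see which deprecated APIs are actually being requested on the cluster. The following is a minimal sketch; the .status.removedInRelease field name is an assumption based on current releases and might differ in your environment:
$ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'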
If a cluster administrator has installed your Operator, before they upgrade to the next version of OpenShift Container Platform, they must ensure a version of your Operator is installed that is compatible with that next cluster version. While it is recommended that you update your Operator projects to no longer use deprecated or removed APIs, if you still need to publish your Operator bundles with removed APIs for continued use on earlier versions of OpenShift Container Platform, ensure that the bundle is configured accordingly.
The following procedure helps prevent administrators from installing versions of your Operator on an incompatible version of OpenShift Container Platform. These steps also prevent administrators from upgrading to a newer version of OpenShift Container Platform that is incompatible with the version of your Operator that is currently installed on their cluster.
This procedure is also useful when you know that the current version of your Operator will not work well, for any reason, on a specific OpenShift Container Platform version. By defining the cluster versions where the Operator should be distributed, you ensure that the Operator does not appear in a catalog of a cluster version which is outside of the allowed range.
Operators that use deprecated APIs can adversely impact critical workloads when cluster administrators upgrade to a future version of OpenShift Container Platform where the API is no longer supported. If your Operator is using deprecated APIs, you should configure the following settings in your Operator project as soon as possible.
Prerequisites
- An existing Operator project
Procedure
If you know that a specific bundle of your Operator is not supported and will not work correctly on OpenShift Container Platform later than a certain cluster version, configure the maximum version of OpenShift Container Platform that your Operator is compatible with. In your Operator project's cluster service version (CSV), set the olm.maxOpenShiftVersion annotation to prevent administrators from upgrading their cluster before upgrading the installed Operator to a compatible version:
Important
Use the olm.maxOpenShiftVersion annotation only if your Operator bundle version cannot work in later versions. Be aware that cluster administrators cannot upgrade their clusters while your solution is installed. If you do not provide a later version and a valid upgrade path, administrators might uninstall your Operator so that they can upgrade the cluster version.
Example CSV with olm.maxOpenShiftVersion annotation
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]' 1
- 1
- Specify the maximum cluster version of OpenShift Container Platform that your Operator is compatible with. For example, setting value to 4.9 prevents cluster upgrades to OpenShift Container Platform versions later than 4.9 when this bundle is installed on a cluster.
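For example, a bundle that is known to break on clusters newer than 4.9, as in the callout above, might carry the annotation as follows. This is a minimal sketch showing only the relevant metadata:
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "4.9"}]'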
If your bundle is intended for distribution in a Red Hat-provided Operator catalog, configure the compatible versions of OpenShift Container Platform for your Operator by setting the following properties. This configuration ensures your Operator is only included in catalogs that target compatible versions of OpenShift Container Platform:
Note
This step is only valid when publishing Operators in Red Hat-provided catalogs. If your bundle is only intended for distribution in a custom catalog, you can skip this step. For more details, see "Red Hat-provided Operator catalogs".
Set the com.redhat.openshift.versions annotation in your project's bundle/metadata/annotations.yaml file:
Example bundle/metadata/annotations.yaml file with compatible versions
com.redhat.openshift.versions: "v4.7-v4.9" 1
- 1
- Set to a range or single version.
To prevent your bundle from being carried on to an incompatible version of OpenShift Container Platform, ensure that the index image is generated with the proper com.redhat.openshift.versions label in your Operator's bundle image. For example, if your project was generated using the Operator SDK, update the bundle.Dockerfile file:
Example bundle.Dockerfile with compatible versions
LABEL com.redhat.openshift.versions="<versions>" 1
- 1
- Set to a range or single version, for example, v4.7-v4.9. This setting defines the cluster versions where the Operator should be distributed, and the Operator does not appear in a catalog of a cluster version that is outside of the range.
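To put the label in context, a bundle.Dockerfile scaffolded by the Operator SDK typically contains the standard bundle labels and COPY instructions alongside the compatibility label. The following is a minimal sketch that uses the v4.7-v4.9 range from the callout; the package and channel values are assumptions for illustration:
FROM scratch

# Representative subset of the labels that the Operator SDK scaffolds
LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
LABEL operators.operatorframework.io.bundle.package.v1=memcached-operator
LABEL operators.operatorframework.io.bundle.channels.v1=alpha

# Compatibility label that controls which cluster versions carry this bundle
LABEL com.redhat.openshift.versions="v4.7-v4.9"

COPY bundle/manifests /manifests/
COPY bundle/metadata /metadata/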
You can now bundle a new version of your Operator and publish the updated version to a catalog for distribution.
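In a project scaffolded by the Operator SDK, rebuilding and publishing the updated bundle typically uses the generated Makefile targets. The following is a minimal sketch; the image tag and channel values are assumptions for illustration:
# Regenerate the bundle manifests and metadata for the new version
$ make bundle VERSION=0.0.2 CHANNELS=alpha
# Build and push the bundle image
$ make bundle-build bundle-push BUNDLE_IMG=<registry>/<user>/memcached-operator-bundle:v0.0.2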
Additional resources
- Managing OpenShift Versions in the Certified Operator Build Guide
- Updating installed Operators
- Red Hat-provided Operator catalogs
5.8.6. Additional resources
- See Operator Framework packaging format for details on the bundle format.
- See Managing custom catalogs for details on adding bundle images to index images by using the opm command.
- See Operator Lifecycle Manager workflow for details on how upgrades work for installed Operators.
5.9. Complying with pod security admission
Pod security admission is an implementation of the Kubernetes pod security standards. Pod security admission restricts the behavior of pods. Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run.
If your Operator project does not require escalated permissions to run, you can ensure your workloads run in namespaces set to the restricted pod security level. If your Operator project requires escalated permissions to run, you must set the following security context configurations:
- The allowed pod security admission level for the Operator’s namespace
- The allowed security context constraints (SCC) for the workload’s service account
For more information, see Understanding and managing pod security admission.
5.9.1. About pod security admission
OpenShift Container Platform includes Kubernetes pod security admission. Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run.
Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits.
You can also configure the pod security admission settings at the namespace level.
Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components.
The following default projects are considered highly privileged: default, kube-public, kube-system, openshift, openshift-infra, openshift-node, and other system-created projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
5.9.1.1. Pod security admission modes
You can configure the following pod security admission modes for a namespace:
Mode | Label | Description
---|---|---
enforce | pod-security.kubernetes.io/enforce | Rejects a pod from admission if it does not comply with the set profile
audit | pod-security.kubernetes.io/audit | Logs audit events if a pod does not comply with the set profile
warn | pod-security.kubernetes.io/warn | Displays warnings if a pod does not comply with the set profile
5.9.1.2. Pod security admission profiles
You can set each of the pod security admission modes to one of the following profiles:
Profile | Description
---|---
privileged | Least restrictive policy; allows for known privilege escalation
baseline | Minimally restrictive policy; prevents known privilege escalations
restricted | Most restrictive policy; follows current pod hardening best practices
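The modes and profiles combine as namespace labels. For example, a namespace that enforces, audits, and warns against the restricted profile could be labeled as follows. This is a minimal sketch with a hypothetical namespace name:
apiVersion: v1
kind: Namespace
metadata:
  name: my-operator-namespace            # hypothetical name
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted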
5.9.1.3. Privileged namespaces
The following system namespaces are always set to the privileged pod security admission profile:
- default
- kube-public
- kube-system
You cannot change the pod security profile for these privileged namespaces.
5.9.2. About pod security admission synchronization
In addition to the global pod security admission control configuration, a controller applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace.
The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created.
Namespace labeling is based only on namespace-local service account privileges.
A pod that is applied directly might use the SCC privileges of the user who runs it. However, user privileges are not considered during automatic labeling.
5.9.2.1. Pod security admission synchronization namespace exclusions
Pod security admission synchronization is permanently disabled on most system-created namespaces. Synchronization is also initially disabled on user-created openshift-* prefixed namespaces, but you can enable synchronization on them later.
If a pod security admission label (pod-security.kubernetes.io/<mode>) is manually modified from the automatically labeled value on a label-synchronized namespace, synchronization is disabled for that label.
If necessary, you can enable synchronization again by using one of the following methods:
- By removing the modified pod security admission label from the namespace
- By setting the security.openshift.io/scc.podSecurityLabelSync label to true, as shown in the example after this list. If you force synchronization by adding this label, any modified pod security admission labels are overwritten.
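For example, you can force synchronization by setting the label with the oc label command. This is a minimal sketch with a hypothetical namespace name:
$ oc label namespace my-operator-namespace security.openshift.io/scc.podSecurityLabelSync=true --overwrite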
Permanently disabled namespaces
Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled:
- default
- kube-node-lease
- kube-system
- kube-public
- openshift
- All system-created namespaces that are prefixed with openshift-, except for openshift-operators
Initially disabled namespaces
By default, all namespaces that have an openshift- prefix have pod security admission synchronization disabled initially. You can enable synchronization for user-created openshift-* namespaces and for the openshift-operators namespace.
You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators.
If an Operator is installed in a user-created openshift-* namespace, synchronization is enabled automatically after a cluster service version (CSV) is created in the namespace. The synchronized label is derived from the permissions of the service accounts in the namespace.
5.9.3. Ensuring Operator workloads run in namespaces set to the restricted pod security level
To ensure your Operator project can run on a wide variety of deployments and environments, configure the Operator's workloads to run in namespaces set to the restricted pod security level.
You must leave the runAsUser field empty. If your image requires a specific user, it cannot be run under restricted security context constraints (SCC) and restricted pod security enforcement.
Procedure
To configure Operator workloads to run in namespaces set to the restricted pod security level, edit your Operator's namespace definition similar to the following examples:
Important
It is recommended that you set the seccomp profile in your Operator's namespace definition. However, setting the seccomp profile is not supported in OpenShift Container Platform 4.10.
For Operator projects that must run only in OpenShift Container Platform 4.11 and later, edit your Operator's namespace definition similar to the following example:
Example config/manager/manager.yaml file
...
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault 1
    runAsNonRoot: true
  containers:
  - name: <operator_workload_container>
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
...
- 1
- By setting the seccomp profile type to RuntimeDefault, the SCC defaults to the pod security profile of the namespace.
For Operator projects that must also run in OpenShift Container Platform 4.10, edit your Operator’s namespace definition similar to the following example:
Example config/manager/manager.yaml file
...
spec:
  securityContext: 1
    runAsNonRoot: true
  containers:
  - name: <operator_workload_container>
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
...
- 1
- Leaving the seccomp profile type unset ensures your Operator project can run in OpenShift Container Platform 4.10.
Additional resources
5.9.4. Managing pod security admission for Operator workloads that require escalated permissions
If your Operator project requires escalated permissions to run, you must edit your Operator’s cluster service version (CSV).
Procedure
Set the security context configuration to the required permission level in your Operator’s CSV, similar to the following example:
Example <operator_name>.clusterserviceversion.yaml file with network administrator privileges
...
containers:
- name: my-container
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      add:
      - "NET_ADMIN"
...
Set the service account privileges that allow your Operator’s workloads to use the required security context constraints (SCC), similar to the following example:
Example <operator_name>.clusterserviceversion.yaml file
...
install:
  spec:
    clusterPermissions:
    - rules:
      - apiGroups:
        - security.openshift.io
        resourceNames:
        - privileged
        resources:
        - securitycontextconstraints
        verbs:
        - use
      serviceAccountName: default
...
Edit your Operator's CSV description to explain why your Operator project requires escalated permissions, similar to the following example:
Example <operator_name>.clusterserviceversion.yaml file
...
spec:
  apiservicedefinitions: {}
  ...
  description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.
5.9.5. Additional resources
5.10. Token authentication
5.10.1. Token authentication for Operators on cloud providers
Many cloud providers can enable authentication by using account tokens that provide short-term, limited-privilege security credentials.
OpenShift Container Platform includes the Cloud Credential Operator (CCO) to manage cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with any specific permissions required.
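For orientation, a CredentialsRequest CR for a cluster on AWS generally follows the shape below. This is a minimal sketch with hypothetical names and a placeholder permission; it is not tied to any particular Operator:
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: my-operator-credentials           # hypothetical name
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:ListBucket                      # hypothetical permission
      resource: "*"
  secretRef:
    name: my-operator-credentials          # Secret where the resulting credentials are written
    namespace: my-operator-namespace       # hypothetical namespace
  serviceAccountNames:
  - my-operator                            # hypothetical service account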
Previously, on clusters where the CCO is in manual mode, Operators managed by Operator Lifecycle Manager (OLM) often provided detailed instructions in the OperatorHub for how users could manually provision any required cloud credentials.
Starting in OpenShift Container Platform 4.14, the CCO can detect when it is running on clusters enabled to use short-term credentials on certain cloud providers. It can then semi-automate provisioning certain credentials, provided that the Operator author has enabled their Operator to support the updated CCO.
5.10.2. CCO-based workflow for OLM-managed Operators with AWS STS
When an OpenShift Container Platform cluster running on AWS is in Security Token Service (STS) mode, it means the cluster is using features of AWS and OpenShift Container Platform to use IAM roles at an application level. STS enables applications to provide a JSON Web Token (JWT) that can assume an IAM role.
The JWT includes an Amazon Resource Name (ARN) fo