Chapter 5. Developing Operators
5.1. About the Operator SDK
The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.
Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.
The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator.
Why use the Operator SDK?
The Operator SDK simplifies this process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring.
The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features:
- High-level APIs and abstractions to write the operational logic more intuitively
- Tools for scaffolding and code generation to quickly bootstrap a new project
- Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster
- Extensions to cover common Operator use cases
- Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
OpenShift Container Platform 4.11 supports Operator SDK v1.22.2.
5.1.1. What are Operators?
For an overview about basic Operator concepts and terminology, see Understanding Operators.
5.1.2. Development workflow
The Operator SDK provides the following workflow to develop a new Operator:
- Create an Operator project by using the Operator SDK command-line interface (CLI).
- Define new resource APIs by adding custom resource definitions (CRDs).
- Specify resources to watch by using the Operator SDK API.
- Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
- Use the Operator SDK CLI to build and generate the Operator deployment manifests.
Figure 5.1. Operator SDK workflow
At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
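This observe-compare-act pattern can be sketched, framework-free, as a function that compares the desired state declared in a custom resource with the observed state of the cluster and returns the converging action. The type and function names below are illustrative only, not Operator SDK or controller-runtime APIs:

```go
package main

import "fmt"

// DesiredState stands in for the spec of a custom resource; ObservedState
// stands in for what the controller sees in the cluster. Both are
// hypothetical types for illustration.
type DesiredState struct{ Replicas int }
type ObservedState struct{ Replicas int }

// reconcile returns the action needed to converge observed state toward
// the desired state.
func reconcile(desired DesiredState, observed ObservedState) string {
	switch {
	case observed.Replicas < desired.Replicas:
		return fmt.Sprintf("scale up by %d", desired.Replicas-observed.Replicas)
	case observed.Replicas > desired.Replicas:
		return fmt.Sprintf("scale down by %d", observed.Replicas-desired.Replicas)
	default:
		return "in sync"
	}
}

func main() {
	// A watched event reports 1 replica while the CR asks for 3.
	fmt.Println(reconcile(DesiredState{Replicas: 3}, ObservedState{Replicas: 1}))
}
```

The real handler generated by the Operator SDK receives such events through the controller-runtime work queue, but the core comparison logic it runs has this shape.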
5.2. Installing the Operator SDK CLI
The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators.
Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
OpenShift Container Platform 4.11 supports Operator SDK v1.22.2.
5.2.1. Installing the Operator SDK CLI
You can install the Operator SDK CLI tool on Linux.
Prerequisites
- Go v1.18+
- docker v17.03+, podman v1.9.3+, or buildah v1.7+
Procedure
- Navigate to the OpenShift mirror site.
- From the latest 4.11 directory, download the latest version of the tarball for Linux.
- Unpack the archive:

  $ tar xvf operator-sdk-v1.22.2-ocp-linux-x86_64.tar.gz
- Make the file executable:

  $ chmod +x operator-sdk
- Move the extracted operator-sdk binary to a directory that is on your PATH:

  $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk

  Tip: To check your PATH:

  $ echo $PATH
Verification
After you install the Operator SDK CLI, verify that it is available:
$ operator-sdk version

Example output
operator-sdk version: "v1.22.2-ocp", ...
5.3. Go-based Operators
5.3.1. Getting started with Operator SDK for Go-based Operators
To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.3.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Go v1.18+
- Logged in to an OpenShift Container Platform 4.11 cluster with oc, using an account that has cluster-admin permissions
ocpermissionscluster-admin - To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.3.1.2. Creating and deploying Go-based Operators
You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator

- Change into the project directory:

  $ cd memcached-operator

- Run the operator-sdk init command to initialize the project:

  $ operator-sdk init \
      --domain=example.com \
      --repo=github.com/example-inc/memcached-operator

  The operator-sdk init command uses the Go plugin by default.
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --resource=true \
    --controller=true \
    --group cache \
    --version v1 \
    --kind Memcached

- Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

- Run the Operator.

  - Install the CRD:

    $ make install

  - Deploy the project to the cluster. Set IMG to the image that you pushed:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

- Watch for the CR to be reconciled by the Operator:

  $ oc logs deployment.apps/memcached-operator-controller-manager \
      -c manager \
      -n memcached-operator-system
- Delete the CR.

  Delete a CR by running the following command:

  $ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system

- Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.3.1.3. Next steps
- See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator.
5.3.2. Operator SDK tutorial for Go-based Operators
Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:

- Operator SDK: the operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM): installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators.
5.3.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Go v1.18+
- Logged in to an OpenShift Container Platform 4.11 cluster with oc, using an account that has cluster-admin permissions
ocpermissionscluster-admin - To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.3.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
- Create a directory for the project:

  $ mkdir -p $HOME/projects/memcached-operator

- Change to the directory:

  $ cd $HOME/projects/memcached-operator

- Activate support for Go modules:

  $ export GO111MODULE=on

- Run the operator-sdk init command to initialize the project:

  $ operator-sdk init \
      --domain=example.com \
      --repo=github.com/example-inc/memcached-operator

  Note: The operator-sdk init command uses the Go plugin by default.

  The operator-sdk init command generates a go.mod file to be used with Go modules. The --repo flag is required when creating a project outside of $GOPATH/src/, because generated files require a valid module path.
5.3.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Go. For example:
domain: example.com
layout:
- go.kubebuilder.io/v3
projectName: memcached-operator
repo: github.com/example-inc/memcached-operator
version: "3"
plugins:
manifests.sdk.operatorframework.io/v2: {}
scorecard.sdk.operatorframework.io/v2: {}
sdk.x-openshift.io/v1: {}
5.3.2.2.2. About the Manager
The main program for the Operator is the main.go file, which initializes and runs the Manager.
The Manager can restrict the namespace that all controllers watch for resources:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})
By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty:

mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})
You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces:
var namespaces []string
mgr, err := ctrl.NewManager(cfg, manager.Options{
NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})
5.3.2.2.3. About multi-group APIs
Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command:
$ operator-sdk edit --multigroup=true
This command updates the PROJECT file, which should look like the following example:
domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...
For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controller files are created in the controllers/<group>/ directory.
Additional resources
- For more details on migrating to a multi-group project, see the Kubebuilder documentation.
5.3.2.3. Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
- Run the following command to create an API with group cache, version v1, and kind Memcached:

  $ operator-sdk create api \
      --group=cache \
      --version=v1 \
      --kind=Memcached

- When prompted, enter y for creating both the resource and controller:

  Create Resource [y/n]
  y
  Create Controller [y/n]
  y

  Example output

  Writing scaffold for you to edit...
  api/v1/memcached_types.go
  controllers/memcached_controller.go
  ...
This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go.
5.3.2.3.1. Defining the API
Define the API for the Memcached custom resource (CR).
Procedure
- Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status:

  // MemcachedSpec defines the desired state of Memcached
  type MemcachedSpec struct {
  	// +kubebuilder:validation:Minimum=0
  	// Size is the size of the memcached deployment
  	Size int32 `json:"size"`
  }

  // MemcachedStatus defines the observed state of Memcached
  type MemcachedStatus struct {
  	// Nodes are the names of the memcached pods
  	Nodes []string `json:"nodes"`
  }

- Update the generated code for the resource type:

  $ make generate

  Tip: After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

  The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.
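To make the deep-copy step concrete, the code that controller-gen emits into zz_generated.deepcopy.go follows the pattern hand-written below. This is an illustrative sketch of the generated shape, not the generated file itself, and it omits the runtime.Object plumbing:

```go
package main

import "fmt"

// MemcachedStatus mirrors the tutorial's status type; the Nodes slice is
// the only field that needs an explicit deep copy.
type MemcachedStatus struct {
	Nodes []string
}

// DeepCopyInto copies the receiver into out, duplicating the Nodes slice
// so that mutations on the copy do not alias the original.
func (in *MemcachedStatus) DeepCopyInto(out *MemcachedStatus) {
	*out = *in
	if in.Nodes != nil {
		out.Nodes = make([]string, len(in.Nodes))
		copy(out.Nodes, in.Nodes)
	}
}

// DeepCopy allocates a new MemcachedStatus and deep-copies into it.
func (in *MemcachedStatus) DeepCopy() *MemcachedStatus {
	out := new(MemcachedStatus)
	in.DeepCopyInto(out)
	return out
}

func main() {
	orig := MemcachedStatus{Nodes: []string{"pod-a"}}
	cp := orig.DeepCopy()
	cp.Nodes[0] = "pod-b" // does not touch orig
	fmt.Println(orig.Nodes[0], cp.Nodes[0])
}
```

Deep copies matter because controller-runtime caches shared objects; without them, editing one object in a reconcile loop could corrupt the cache for every other reader.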
5.3.2.3.2. Generating CRD manifests
After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, the CRD manifests can be generated.
Procedure
Run the following command to generate and update CRD manifests:
$ make manifests

This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.
5.3.2.3.2.1. About OpenAPI validation
OpenAPI v3.0 schemas are added to CRD manifests in the spec.validation block when the manifests are generated.

Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix.
5.3.2.4. Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation:

Example 5.1. Example memcached_controller.go

/*
Copyright 2020.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package controllers

import (
	"context"
	"reflect"

	"github.com/go-logr/logr"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	ctrllog "sigs.k8s.io/controller-runtime/pkg/log"

	cachev1 "github.com/example-inc/memcached-operator/api/v1"
)

// MemcachedReconciler reconciles a Memcached object
type MemcachedReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}

// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the Memcached object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.7.0/pkg/reconcile
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	//log := r.Log.WithValues("memcached", req.NamespacedName)
	log := ctrllog.FromContext(ctx)

	// Fetch the Memcached instance
	memcached := &cachev1.Memcached{}
	err := r.Get(ctx, req.NamespacedName, memcached)
	if err != nil {
		if errors.IsNotFound(err) {
			// Request object not found, could have been deleted after reconcile request.
			// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
			// Return and don't requeue
			log.Info("Memcached resource not found. Ignoring since object must be deleted")
			return ctrl.Result{}, nil
		}
		// Error reading the object - requeue the request.
		log.Error(err, "Failed to get Memcached")
		return ctrl.Result{}, err
	}

	// Check if the deployment already exists, if not create a new one
	found := &appsv1.Deployment{}
	err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
	if err != nil && errors.IsNotFound(err) {
		// Define a new deployment
		dep := r.deploymentForMemcached(memcached)
		log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
		err = r.Create(ctx, dep)
		if err != nil {
			log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
			return ctrl.Result{}, err
		}
		// Deployment created successfully - return and requeue
		return ctrl.Result{Requeue: true}, nil
	} else if err != nil {
		log.Error(err, "Failed to get Deployment")
		return ctrl.Result{}, err
	}

	// Ensure the deployment size is the same as the spec
	size := memcached.Spec.Size
	if *found.Spec.Replicas != size {
		found.Spec.Replicas = &size
		err = r.Update(ctx, found)
		if err != nil {
			log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
			return ctrl.Result{}, err
		}
		// Spec updated - return and requeue
		return ctrl.Result{Requeue: true}, nil
	}

	// Update the Memcached status with the pod names
	// List the pods for this memcached's deployment
	podList := &corev1.PodList{}
	listOpts := []client.ListOption{
		client.InNamespace(memcached.Namespace),
		client.MatchingLabels(labelsForMemcached(memcached.Name)),
	}
	if err = r.List(ctx, podList, listOpts...); err != nil {
		log.Error(err, "Failed to list pods", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name)
		return ctrl.Result{}, err
	}
	podNames := getPodNames(podList.Items)

	// Update status.Nodes if needed
	if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
		memcached.Status.Nodes = podNames
		err := r.Status().Update(ctx, memcached)
		if err != nil {
			log.Error(err, "Failed to update Memcached status")
			return ctrl.Result{}, err
		}
	}

	return ctrl.Result{}, nil
}

// deploymentForMemcached returns a memcached Deployment object
func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment {
	ls := labelsForMemcached(m.Name)
	replicas := m.Spec.Size

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      m.Name,
			Namespace: m.Namespace,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{
				MatchLabels: ls,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: ls,
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Image:   "memcached:1.4.36-alpine",
						Name:    "memcached",
						Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
						Ports: []corev1.ContainerPort{{
							ContainerPort: 11211,
							Name:          "memcached",
						}},
					}},
				},
			},
		},
	}
	// Set Memcached instance as the owner and controller
	ctrl.SetControllerReference(m, dep, r.Scheme)
	return dep
}

// labelsForMemcached returns the labels for selecting the resources
// belonging to the given memcached CR name.
func labelsForMemcached(name string) map[string]string {
	return map[string]string{"app": "memcached", "memcached_cr": name}
}

// getPodNames returns the pod names of the array of pods passed in
func getPodNames(pods []corev1.Pod) []string {
	var podNames []string
	for _, pod := range pods {
		podNames = append(podNames, pod.Name)
	}
	return podNames
}

// SetupWithManager sets up the controller with the Manager.
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		Complete(r)
}

The example controller runs the following reconciliation logic for each Memcached custom resource (CR):

- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the memcached pods.
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
5.3.2.4.1. Resources watched by the controller
The SetupWithManager() function in controllers/memcached_controller.go specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller:
import (
...
appsv1 "k8s.io/api/apps/v1"
...
)
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&cachev1.Memcached{}).
Owns(&appsv1.Deployment{}).
Complete(r)
}
NewControllerManagedBy() provides a controller builder that allows various controller configurations.

For(&cachev1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Add, Update, or Delete event for a Memcached type, the reconcile loop is sent a reconcile Request, which consists of a namespace and name key, for that Memcached object.

Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add, Update, or Delete event, the event handler maps each event to a reconcile Request for the owner of the deployment. In this case, the owner is the Memcached object for which the deployment was created.
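The owner-mapping behavior of Owns() can be illustrated with a toy example. The types and the mapToOwner function below are stand-ins for illustration, not controller-runtime APIs: an event for a secondary Deployment is translated into a reconcile request keyed by its owning Memcached object, so the reconcile loop always runs against the primary resource:

```go
package main

import "fmt"

// Request is a namespace/name key, standing in for ctrl.Request.
type Request struct{ Namespace, Name string }

// Deployment is a toy secondary resource carrying an owner reference
// back to the Memcached CR that created it.
type Deployment struct {
	Namespace, Name string
	OwnerName       string
}

// mapToOwner enqueues a request for the owning Memcached object rather
// than for the Deployment that actually changed.
func mapToOwner(d Deployment) Request {
	return Request{Namespace: d.Namespace, Name: d.OwnerName}
}

func main() {
	// An Update event fires for the Deployment the Operator created.
	d := Deployment{
		Namespace: "memcached-operator-system",
		Name:      "example-memcached",
		OwnerName: "example-memcached", // set by SetControllerReference
	}
	fmt.Println(mapToOwner(d))
}
```

This is why deleting an owned Deployment by hand still converges: the delete event maps back to the Memcached owner, whose reconcile re-creates the Deployment.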
5.3.2.4.2. Controller configurations
You can initialize a controller by using many other useful configurations. For example:
- Set the maximum number of concurrent reconciles for the controller by using the MaxConcurrentReconciles option, which defaults to 1:

  func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
  	return ctrl.NewControllerManagedBy(mgr).
  		For(&cachev1.Memcached{}).
  		Owns(&appsv1.Deployment{}).
  		WithOptions(controller.Options{
  			MaxConcurrentReconciles: 2,
  		}).
  		Complete(r)
  }

- Filter watch events using predicates.
- Choose the type of EventHandler to change how a watch event translates to reconcile requests for the reconcile loop. For Operator relationships that are more complex than primary and secondary resources, you can use the EnqueueRequestsFromMapFunc handler to transform a watch event into an arbitrary set of reconcile requests.
For more details on these and other configurations, see the upstream Builder and Controller GoDocs.
5.3.2.4.3. Reconcile loop
Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a namespace and name key used to look up the primary resource object, Memcached, from the cache:
import (
ctrl "sigs.k8s.io/controller-runtime"
cachev1 "github.com/example-inc/memcached-operator/api/v1"
...
)
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
// Lookup the Memcached instance for this reconcile request
memcached := &cachev1.Memcached{}
err := r.Get(ctx, req.NamespacedName, memcached)
...
}
Based on the return values, result, and error, the request might be requeued and the reconcile loop might be triggered again:
// Reconcile successful - don't requeue
return ctrl.Result{}, nil
// Reconcile failed due to error - requeue
return ctrl.Result{}, err
// Requeue for any reason other than an error
return ctrl.Result{Requeue: true}, nil
You can set the Result.RequeueAfter field to requeue the request after a grace period:
import "time"
// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
You can return Result with RequeueAfter set to periodically reconcile a CR.
For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.
5.3.2.4.4. Permissions and RBAC manifests
The controller requires certain RBAC permissions to interact with the resources it manages. These are specified using RBAC markers, such as the following:
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
...
}
The ClusterRole object manifest at config/rbac/role.yaml is generated from the above markers by using the controller-gen utility whenever the make manifests command is run.
5.3.2.5. Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
This tutorial uses HTTP_PROXY as an example.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
- Edit the controllers/memcached_controller.go file to include the following:

  - Import the proxy package from the operator-lib library:

    import (
        ...
        "github.com/operator-framework/operator-lib/proxy"
    )

  - Add the proxy.ReadProxyVarsFromEnv helper function to the reconcile loop and append the results to the Operand environments:

    for i, container := range dep.Spec.Template.Spec.Containers {
        dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...)
    }
    ...
- Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

  containers:
  - args:
    - --leader-elect
    - --leader-election-id=ansible-proxy-demo
    image: controller:latest
    name: manager
    env:
      - name: "HTTP_PROXY"
        value: "http_proxy_test"
5.3.2.6. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
Before running your Go-based Operator as either a deployment on OpenShift Container Platform or as a bundle that uses OLM, ensure that your project has been updated to use supported images.
5.3.2.6.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
- Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

  $ make install run

  Example output

  ...
  2021-01-10T21:09:29.016-0700  INFO  controller-runtime.metrics  metrics server is starting to listen  {"addr": ":8080"}
  2021-01-10T21:09:29.017-0700  INFO  setup  starting manager
  2021-01-10T21:09:29.017-0700  INFO  controller-runtime.manager  starting metrics server  {"path": "/metrics"}
  2021-01-10T21:09:29.018-0700  INFO  controller-runtime.manager.controller.memcached  Starting EventSource  {"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "source": "kind source: /, Kind="}
  2021-01-10T21:09:29.218-0700  INFO  controller-runtime.manager.controller.memcached  Starting Controller  {"reconciler group": "cache.example.com", "reconciler kind": "Memcached"}
  2021-01-10T21:09:29.218-0700  INFO  controller-runtime.manager.controller.memcached  Starting workers  {"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "worker count": 1}
5.3.2.6.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Prerequisites
- Prepared your Go-based Operator to run on OpenShift Container Platform by updating the project to use supported images
Procedure
- Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

  - Build the image:

    $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

    Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg will need to be used for the purpose. For more information, see "Multiple Architectures".

  - Push the image to a repository:

    $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

    Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
- Run the following command to deploy the Operator:

  $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

  By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

- Run the following command to verify that the Operator is running:

  $ oc get deployment -n <project_name>-system

  Example output

  NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
  <project_name>-controller-manager    1/1     1            1           8m
5.3.2.6.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.3.2.6.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.11+ installed
oc - Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
- Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

  - Build the image:

    $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

    Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg will need to be used for the purpose. For more information, see "Multiple Architectures".

  - Push the image to a repository:

    $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
- Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

  $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

  Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

  - A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
  - A bundle metadata directory named bundle/metadata
  - All custom resource definitions (CRDs) in a config/crd directory
  - A Dockerfile bundle.Dockerfile

  These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
A bundle manifests directory named
Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which reference one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.3.2.6.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.11)
- Logged in to the cluster with oc using an account with cluster-admin permissions
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Enter the following command to run the Operator on the cluster:

$ operator-sdk run bundle \  1
    -n <namespace> \  2
    <registry>/<user>/<bundle_image_name>:<tag>  3

1. The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
2. Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
3. If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.

Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:

- Creates an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.3.2.7. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

$ oc project memcached-operator-system

Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
...
spec:
...
  size: 3

Create the CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml

Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           8m
memcached-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Memcached pod names.

Check the pods:

$ oc get pods

Example output

NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:

$ oc get memcached/memcached-sample -o yaml

Example output

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
...
  name: memcached-sample
...
spec:
  size: 3
status:
  nodes:
  - memcached-sample-6fd7c98d8-7dqdr
  - memcached-sample-6fd7c98d8-g5k7v
  - memcached-sample-6fd7c98d8-m7vn7
Update the deployment size.

Change the spec.size field in the Memcached CR from 3 to 5:

$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           10m
memcached-sample                        5/5     5            5           3m

Delete the CR by running the following command:

$ oc delete -f config/samples/cache_v1_memcached.yaml

Clean up the resources that have been created as part of this tutorial.

If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.3.3. Project layout for Go-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.3.3.1. Go-based project layout
Go-based Operator projects, the default type, generated using the operator-sdk init command contain the following files and directories:

| File or directory | Purpose |
|---|---|
| main.go | Main program of the Operator. This instantiates a new manager that registers all custom resource definitions (CRDs) in the api/ directory and starts all controllers in the controllers/ directory. |
| api/ | Directory tree that defines the APIs of the CRDs. You must edit the <kind>_types.go files in this directory to define the API for each resource type. |
| controllers/ | Controller implementations. Edit the <kind>_controller.go files in this directory to define the reconcile logic for handling a resource type. |
| config/ | Kubernetes manifests used to deploy your controller on a cluster, including CRDs, RBAC, and certificates. |
| Makefile | Targets used to build and deploy your controller. |
| Dockerfile | Instructions used by a container engine to build your Operator. |
| | Kubernetes manifests for registering CRDs, setting up RBAC, and deploying the Operator as a deployment. |
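To make the table above concrete, a freshly scaffolded Go-based project tree usually resembles the following. This listing is illustrative only; the exact files and subdirectories depend on the SDK version and the APIs you created (the memcached names assume the tutorial's example project):

```
.
├── Dockerfile
├── Makefile
├── PROJECT
├── api/
│   └── v1/
│       └── memcached_types.go
├── bin/
├── config/
│   ├── crd/
│   ├── default/
│   ├── manager/
│   ├── manifests/
│   ├── prometheus/
│   ├── rbac/
│   └── samples/
├── controllers/
│   └── memcached_controller.go
├── go.mod
├── go.sum
├── hack/
└── main.go
```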
5.3.4. Updating Go-based Operator projects for newer Operator SDK versions
OpenShift Container Platform 4.11 supports Operator SDK 1.22.2. If you already have the 1.16.0 CLI installed on your workstation, you can update the CLI to 1.22.2 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.22.2, update steps are required for the associated breaking changes introduced since 1.16.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.16.0.
5.3.4.1. Updating Go-based Operator projects for Operator SDK 1.22.2
The following procedure updates an existing Go-based Operator project for compatibility with 1.22.2.
Prerequisites
- Operator SDK 1.22.2 installed.
- An Operator project created or maintained with Operator SDK 1.16.0.
Procedure
Make the following changes to the config/default/manager_auth_proxy_patch.yaml file:

...
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.11
        args:
        - "--secure-listen-address=0.0.0.0:8443"
        - "--upstream=http://127.0.0.1:8080/"
        - "--logtostderr=true"
        - "--v=0"
...
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 5m
            memory: 64Mi

Make the following changes to your Makefile:

Enable support for image digests by adding the following environment variables to your Makefile:

Old Makefile

BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)
...

New Makefile

BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)

# BUNDLE_GEN_FLAGS are the flags passed to the operator-sdk generate bundle command
BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)

# USE_IMAGE_DIGESTS defines if images are resolved via tags or digests
# You can enable this value if you would like to use SHA Based Digests
# To enable set flag to true
USE_IMAGE_DIGESTS ?= false
ifeq ($(USE_IMAGE_DIGESTS), true)
	BUNDLE_GEN_FLAGS += --use-image-digests
endif
to replace the bundle target with theMakefileenvironment variable:BUNDLE_GEN_FLAGSOld
Makefile$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)New
Makefile$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle $(BUNDLE_GEN_FLAGS)Edit your
to updateMakefileto version 1.23.0:opm.PHONY: opm OPM = ./bin/opm opm: ## Download opm locally if necessary. ifeq (,$(wildcard $(OPM))) ifeq (,$(shell which opm 2>/dev/null)) @{ \ set -e ;\ mkdir -p $(dir $(OPM)) ;\ OS=$(shell go env GOOS) && ARCH=$(shell go env GOARCH) && \ curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.23.0/$${OS}-$${ARCH}-opm ;\1 chmod +x $(OPM) ;\ } else OPM = $(shell which opm) endif endif- 1
- Replace
v1.19.1withv1.23.0.
Edit your Makefile to replace the go get targets with go install targets:

Old Makefile

CONTROLLER_GEN = $(shell pwd)/bin/controller-gen
.PHONY: controller-gen
controller-gen: ## Download controller-gen locally if necessary.
	$(call go-get-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@v0.8.0)

KUSTOMIZE = $(shell pwd)/bin/kustomize
.PHONY: kustomize
kustomize: ## Download kustomize locally if necessary.
	$(call go-get-tool,$(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v3@v3.8.7)

ENVTEST = $(shell pwd)/bin/setup-envtest
.PHONY: envtest
envtest: ## Download envtest-setup locally if necessary.
	$(call go-get-tool,$(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest@latest)

# go-get-tool will 'go get' any package $2 and install it to $1.
PROJECT_DIR := $(shell dirname $(abspath $(lastword $(MAKEFILE_LIST))))
define go-get-tool
@[ -f $(1) ] || { \
set -e ;\
TMP_DIR=$$(mktemp -d) ;\
cd $$TMP_DIR ;\
go mod init tmp ;\
echo "Downloading $(2)" ;\
GOBIN=$(PROJECT_DIR)/bin go get $(2) ;\
rm -rf $$TMP_DIR ;\
}
endef

New Makefile

##@ Build Dependencies

## Location to install dependencies to
LOCALBIN ?= $(shell pwd)/bin
$(LOCALBIN):
	mkdir -p $(LOCALBIN)

## Tool Binaries
KUSTOMIZE ?= $(LOCALBIN)/kustomize
CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen
ENVTEST ?= $(LOCALBIN)/setup-envtest

## Tool Versions
KUSTOMIZE_VERSION ?= v3.8.7
CONTROLLER_TOOLS_VERSION ?= v0.8.0

KUSTOMIZE_INSTALL_SCRIPT ?= "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"
.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
$(KUSTOMIZE): $(LOCALBIN)
	curl -s $(KUSTOMIZE_INSTALL_SCRIPT) | bash -s -- $(subst v,,$(KUSTOMIZE_VERSION)) $(LOCALBIN)

.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
	GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION)

.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
	GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
andENVTEST_K8S_VERSIONfields in yourcontroller-gento support Kubernetes 1.24:Makefile... ENVTEST_K8S_VERSION = 1.241 ... sigs.k8s.io/controller-tools/cmd/controller-gen@v0.9.02 Apply the changes to your
and rebuild your Operator by entering the following command:Makefile$ make
Make the following changes to the go.mod file to update Go and its dependencies:

go 1.18

require (
	github.com/onsi/ginkgo v1.16.5
	github.com/onsi/gomega v1.18.1
	k8s.io/api v0.24.0
	k8s.io/apimachinery v0.24.0
	k8s.io/client-go v0.24.0
	sigs.k8s.io/controller-runtime v0.12.1
)

Download and clean up the dependencies by entering the following command:

$ go mod tidy

If you use the api/webhook_suite_test.go and controllers/suite_test.go suite test files, make the following changes:

Old suite test file

cfg, err := testEnv.Start()

New suite test file

var err error

// cfg is defined in this file globally.
cfg, err = testEnv.Start()
Add the following changes below the line that begins
:COPY controllers/ controllers/# https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/blob/master/docs/addon/walkthrough/README.md#adding-a-manifest # Stage channels and make readable COPY channels/ /channels/ RUN chmod -R a+rx /channels/Add the following changes below the line that begins
:COPY --from=builder /workspace/manager .# copy channels COPY --from=builder /channels /channels
5.4. Ansible-based Operators
5.4.1. Getting started with Operator SDK for Ansible-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running an Ansible-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Ansible-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.4.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Ansible v2.9.0
- Ansible Runner v2.0.2+
- Ansible Runner HTTP Event Emitter plugin v1.0.0+
- Python 3.8.6+
- OpenShift Python client v0.12.0+
- Logged into an OpenShift Container Platform 4.11 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.4.1.2. Creating and deploying Ansible-based Operators
You can build and deploy a simple Ansible-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:

$ mkdir memcached-operator

Change into the project directory:

$ cd memcached-operator

Run the operator-sdk init command with the ansible plugin to initialize the project:

$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role  1

1. Generates an Ansible role for the API.
Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.

Install the CRD:

$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

Watch the Operator logs to confirm that it reconciles the CR:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system

Example output

...
I0205 17:48:45.881666       7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator
{"level":"info","ts":1612547325.8819902,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612547325.98242,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612547325.9824686,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}
{"level":"info","ts":1612547348.8311093,"logger":"runner","msg":"Ansible-runner exited successfully","job":"4037200794235010051","name":"memcached-sample","namespace":"memcached-operator-system"}
Delete a CR
Delete a CR by running the following command:

$ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system

Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.4.1.3. Next steps
- See Operator SDK tutorial for Ansible-based Operators for a more in-depth walkthrough on building an Ansible-based Operator.
5.4.2. Operator SDK tutorial for Ansible-based Operators
Operator developers can take advantage of Ansible support in the Operator SDK to build an example Ansible-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This tutorial walks through the following process:
- Create a Memcached deployment
- Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
- Update the Memcached CR status using the status writer with the names of the memcached pods
This process is accomplished by using two centerpieces of the Operator Framework:
- Operator SDK: The operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM): Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Ansible-based Operators.
5.4.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Ansible v2.9.0
- Ansible Runner v2.0.2+
- Ansible Runner HTTP Event Emitter plugin v1.0.0+
- Python 3.8.6+
- OpenShift Python client v0.12.0+
- Logged into an OpenShift Container Platform 4.11 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.4.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operatorChange to the directory:
$ cd $HOME/projects/memcached-operatorRun the
command with theoperator-sdk initplugin to initialize the project:ansible$ operator-sdk init \ --plugins=ansible \ --domain=example.com
5.4.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, read this file and are aware that the project type is Ansible. For example:
domain: example.com
layout:
- ansible.sdk.operatorframework.io/v1
plugins:
manifests.sdk.operatorframework.io/v2: {}
scorecard.sdk.operatorframework.io/v2: {}
sdk.x-openshift.io/v1: {}
projectName: memcached-operator
version: "3"
5.4.2.3. Creating an API
Use the Operator SDK CLI to create a Memcached API.
Procedure
Run the following command to create an API with group cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role  1

1. Generates an Ansible role for the API.
After creating the API, your Operator project updates with the following structure:
- Memcached CRD: Includes a sample Memcached resource
- Manager: Program that reconciles the state of the cluster to the desired state by using:
  - A reconciler, either an Ansible role or playbook
  - A watches.yaml file, which connects the Memcached resource to the memcached Ansible role
5.4.2.4. Modifying the manager
Update your Operator project to provide the reconcile logic, in the form of an Ansible role, which runs every time a Memcached resource is created, changed, or removed.
Procedure
Update the roles/memcached/tasks/main.yml file with the following structure:

---
- name: start memcached
  k8s:
    definition:
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: '{{ ansible_operator_meta.name }}-memcached'
        namespace: '{{ ansible_operator_meta.namespace }}'
      spec:
        replicas: "{{size}}"
        selector:
          matchLabels:
            app: memcached
        template:
          metadata:
            labels:
              app: memcached
          spec:
            containers:
            - name: memcached
              command:
              - memcached
              - -m=64
              - -o
              - modern
              - -v
              image: "docker.io/memcached:1.4.36-alpine"
              ports:
              - containerPort: 11211

This memcached role ensures a memcached deployment exists and sets the deployment size.

Set default values for variables used in your Ansible role by editing the roles/memcached/defaults/main.yml file:

---
# defaults file for Memcached
size: 1

Update the Memcached sample resource in the config/samples/cache_v1_memcached.yaml file with the following structure:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  labels:
    app.kubernetes.io/name: memcached
    app.kubernetes.io/instance: memcached-sample
    app.kubernetes.io/part-of: memcached-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: memcached-operator
  name: memcached-sample
spec:
  size: 3

The key-value pairs in the custom resource (CR) spec are passed to Ansible as extra variables.
The names of all variables in the spec field are converted to snake case, meaning lowercase with an underscore, by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible. You can disable this case conversion by setting the snakeCaseParameters option to false in your watches.yaml file.
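The naming rule described above can be sketched with a small shell function. This is only an illustration of the camelCase-to-snake-case conversion, not the Operator's actual implementation, and the to_snake helper is a hypothetical name:

```shell
#!/bin/sh
# Approximate the camelCase -> snake_case conversion applied to CR spec keys.
to_snake() {
  printf '%s\n' "$1" | sed -E 's/([a-z0-9])([A-Z])/\1_\2/g' | tr '[:upper:]' '[:lower:]'
}

to_snake "serviceAccount"   # prints: service_account
to_snake "size"             # prints: size (already lowercase, unchanged)
```

Keeping this rule in mind avoids confusion when a camelCase spec field appears under a different name inside your role.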
5.4.2.5. Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
This tutorial uses HTTP_PROXY as an example.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Add the environment variables to the deployment by updating the roles/memcached/tasks/main.yml file with the following:

...
env:
- name: HTTP_PROXY
  value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
- name: http_proxy
  value: '{{ lookup("env", "HTTP_PROXY") | default("", True) }}'
...

Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

containers:
- args:
  - --leader-elect
  - --leader-election-id=ansible-proxy-demo
  image: controller:latest
  name: manager
  env:
  - name: "HTTP_PROXY"
    value: "http_proxy_test"
5.4.2.6. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.4.2.6.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run

Example output

...
{"level":"info","ts":1612589622.7888272,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
{"level":"info","ts":1612589622.7897573,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
{"level":"info","ts":1612589622.789971,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1612589622.7899997,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612589622.8904517,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612589622.8905244,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}
5.4.2.6.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system that is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.4.2.6.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.4.2.6.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.11+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.4.2.6.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator management functions without any additional tools.

The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.11)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \1 -n <namespace> \2 <registry>/<user>/<bundle_image_name>:<tag>3 - 1
- The
run bundlecommand creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM. - 2
- Optional: By default, the command installs the Operator in the currently active project in your
~/.kube/configfile. You can add the-nflag to set a different namespace scope for the installation. - 3
- If you do not specify an image, the command uses
quay.io/operator-framework/opm:latestas the default index image. If you specify an image, the command uses the bundle image itself as the index image.
ImportantAs of OpenShift Container Platform 4.11, the
command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.run bundleThis command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
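For reference, the OLM objects that the command creates are ordinary resources you can inspect with oc get. The following sketch shows roughly what the catalog source and subscription pair look like; the names and namespace here are illustrative assumptions, not the exact values that run bundle generates:

```yaml
# Sketch only: operator-sdk run bundle generates its own names and labels.
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-operator-catalog        # hypothetical name
  namespace: default
spec:
  sourceType: grpc
  image: <registry>/<user>/<bundle_image_name>:<tag>   # the ephemeral index image
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator-sub            # hypothetical name
  namespace: default
spec:
  name: example-operator                # package name from the bundle
  source: example-operator-catalog
  sourceNamespace: default
```

Inspecting these resources after running the command is a quick way to confirm that OperatorHub can discover your Operator and that the install resolved correctly.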
5.4.2.7. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

$ oc project memcached-operator-system

Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
...
spec:
...
  size: 3

Create the CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml

Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           8m
memcached-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods

Example output

NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:
$ oc get memcached/memcached-sample -o yaml

Example output

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
...
  name: memcached-sample
...
spec:
  size: 3
status:
  nodes:
  - memcached-sample-6fd7c98d8-7dqdr
  - memcached-sample-6fd7c98d8-g5k7v
  - memcached-sample-6fd7c98d8-m7vn7
Update the deployment size.
Update the config/samples/cache_v1_memcached.yaml file to change the spec.size field in the Memcached CR from 3 to 5:

$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:
$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           10m
memcached-sample                        5/5     5            5           3m
Delete the CR by running the following command:
$ oc delete -f config/samples/cache_v1_memcached.yaml

Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.4.3. Project layout for Ansible-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.4.3.1. Ansible-based project layout
Ansible-based Operator projects generated using the operator-sdk init --plugins ansible command contain the following files and directories:
| File or directory | Purpose |
|---|---|
| Dockerfile | Dockerfile for building the container image for the Operator. |
| Makefile | Targets for building, publishing, and deploying the container image that wraps the Operator binary, and targets for installing and uninstalling the custom resource definition (CRD). |
| PROJECT | YAML file containing metadata information for the Operator. |
| config/crd | Base CRD files and the kustomization.yaml settings. |
| config/default | Collects all Operator manifests for deployment. Used by the make deploy command. |
| config/manager | Controller manager deployment. |
| config/prometheus | ServiceMonitor resource for monitoring the Operator. |
| config/rbac | Role and role binding for leader election and authentication proxy. |
| config/samples | Sample resources created for the CRDs. |
| config/testing | Sample configurations for testing. |
| playbooks/ | A subdirectory for the playbooks to run. |
| roles/ | Subdirectory for the roles tree to run. |
| watches.yaml | Group/version/kind (GVK) of the resources to watch, and the Ansible invocation method. New entries are added by using the create api command. |
| requirements.yml | YAML file containing the Ansible collections and role dependencies to install during a build. |
| molecule/ | Molecule scenarios for end-to-end testing of your role and Operator. |
5.4.4. Updating projects for newer Operator SDK versions
OpenShift Container Platform 4.11 supports Operator SDK 1.22.2. If you already have the 1.16.0 CLI installed on your workstation, you can update the CLI to 1.22.2 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.22.2, update steps are required for the associated breaking changes introduced since 1.16.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.16.0.
5.4.4.1. Updating Ansible-based Operator projects for Operator SDK 1.22.2
The following procedure updates an existing Ansible-based Operator project for compatibility with 1.22.2.
Prerequisites
- Operator SDK 1.22.2 installed.
- An Operator project created or maintained with Operator SDK 1.16.0.
Procedure
Make the following changes to the config/default/manager_auth_proxy_patch.yaml file:

...
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.11
        args:
        - "--secure-listen-address=0.0.0.0:8443"
        - "--upstream=http://127.0.0.1:8080/"
        - "--logtostderr=true"
        - "--v=0"
...
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 5m
            memory: 64Mi

Make the following changes to your Makefile:

Enable support for image digests by adding the following environment variables to your Makefile:

Old Makefile

BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)
...

New Makefile

BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)

# BUNDLE_GEN_FLAGS are the flags passed to the operator-sdk generate bundle command
BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)

# USE_IMAGE_DIGESTS defines if images are resolved via tags or digests
# You can enable this value if you would like to use SHA Based Digests
# To enable set flag to true
USE_IMAGE_DIGESTS ?= false
ifeq ($(USE_IMAGE_DIGESTS), true)
	BUNDLE_GEN_FLAGS += --use-image-digests
endif

Edit your Makefile to replace the bundle target with the BUNDLE_GEN_FLAGS environment variable:

Old Makefile

$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)

New Makefile

$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle $(BUNDLE_GEN_FLAGS)

Edit your Makefile to update opm to version 1.23.0:

.PHONY: opm
OPM = ./bin/opm
opm: ## Download opm locally if necessary.
ifeq (,$(wildcard $(OPM)))
ifeq (,$(shell which opm 2>/dev/null))
	@{ \
	set -e ;\
	mkdir -p $(dir $(OPM)) ;\
	OS=$(shell go env GOOS) && ARCH=$(shell go env GOARCH) && \
	curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.23.0/$${OS}-$${ARCH}-opm ;\
	chmod +x $(OPM) ;\
	}
else
OPM = $(shell which opm)
endif
endif

- Replace v1.19.1 with v1.23.0 in the download URL.

Apply the changes to your Makefile and rebuild your Operator by entering the following command:

$ make
Update the image tag in your Operator’s Dockerfile as shown in the following example:
Example Dockerfile
FROM registry.redhat.io/openshift4/ose-ansible-operator:v4.11

- Update the version tag to v4.11.
Update your requirements.yml file as shown in the following example:

collections:
  - name: community.kubernetes
    version: "2.0.1"
  - name: operator_sdk.util
    version: "0.4.0"
  - name: kubernetes.core
    version: "2.3.1"
  - name: cloud.common
    version: "2.1.1"

Important

As of version 2.0.0, the community.kubernetes collection was renamed to kubernetes.core. The community.kubernetes collection has been replaced by deprecated redirects to kubernetes.core. If you use fully qualified collection names (FQCNs) that begin with community.kubernetes, you must update the FQCNs to use kubernetes.core.
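To illustrate the rename, a task that previously used a community.kubernetes FQCN moves to the kubernetes.core name. The following before/after is a sketch; the task and the example-config resource are hypothetical, while the collection names are the real old and new FQCNs:

```yaml
# Before: deprecated redirect through community.kubernetes
- name: Ensure a config map exists
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config   # hypothetical resource
        namespace: default

# After: renamed collection
- name: Ensure a config map exists
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config
        namespace: default
```

Only the collection prefix changes; the module parameters are unchanged by the rename.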
5.4.5. Ansible support in Operator SDK
5.4.5.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
| Field | Description |
|---|---|
| apiVersion | Version of the CR to be created. |
| kind | Kind of the CR to be created. |
| metadata | Kubernetes-specific metadata to be created. |
| spec | Key-value list of variables which are passed to Ansible. This field is empty by default. |
| status | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information to the resource status. |
| annotations | Kubernetes-specific annotations to be appended to the CR. |
The following list of CR annotations modifies the behavior of the Operator:

| Annotation | Description |
|---|---|
| ansible.operator-sdk/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang time package. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds. |
Example Ansible-based Operator annotation
apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
name: "example"
annotations:
ansible.operator-sdk/reconcile-period: "30s"
5.4.5.2. watches.yaml file
A group/version/kind (GVK) is a unique identifier for a Kubernetes API. The watches.yaml file contains a list of mappings from custom resources, identified by GVK, to Ansible roles or playbooks. The Operator expects this mapping file in a predefined location: /opt/ansible/watches.yaml.
| Field | Description |
|---|---|
| group | Group of CR to watch. |
| version | Version of CR to watch. |
| kind | Kind of CR to watch. |
| role (default) | Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/busybox, this field would be /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field. |
| playbook (optional) | Path to the Ansible playbook added to the container. This playbook is expected to be a way to call roles. This field is mutually exclusive with the role field. |
| reconcilePeriod (optional) | The reconciliation interval, how often the role or playbook is run, for a given CR. |
| manageStatus (optional) | When set to true (default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller. |
Example watches.yaml file
- version: v1alpha1
group: test1.example.com
kind: Test1
role: /opt/ansible/roles/Test1
- version: v1alpha1
group: test2.example.com
kind: Test2
playbook: /opt/ansible/playbook.yml
- version: v1alpha1
group: test3.example.com
kind: Test3
playbook: /opt/ansible/test3.yml
reconcilePeriod: 0
manageStatus: false
5.4.5.2.1. Advanced options
Advanced features can be enabled by adding them to your watches.yaml file. They can go below the group, version, and kind fields and the playbook or role field.
Some features can be overridden per resource using an annotation on that CR. The options that can be overridden have the annotation specified below.
| Feature | YAML key | Description | Annotation for override | Default value |
|---|---|---|---|---|
| Reconcile period | reconcilePeriod | Time between reconcile runs for a particular CR. | ansible.operator-sdk/reconcile-period | 1m |
| Manage status | manageStatus | Allows the Operator to manage the conditions section of each CR status section. | | true |
| Watch dependent resources | watchDependentResources | Allows the Operator to dynamically watch resources that are created by Ansible. | | true |
| Watch cluster-scoped resources | watchClusterScopedResources | Allows the Operator to watch cluster-scoped resources that are created by Ansible. | | false |
| Max runner artifacts | maxRunnerArtifacts | Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. | ansible.operator-sdk/max-runner-artifacts | 20 |
Example watches.yaml file with advanced options
- version: v1alpha1
group: app.example.com
kind: AppService
playbook: /opt/ansible/playbook.yml
maxRunnerArtifacts: 30
reconcilePeriod: 5s
manageStatus: False
watchDependentResources: False
5.4.5.3. Extra variables sent to Ansible
Extra variables can be sent to Ansible, which are then managed by the Operator. The spec section of the custom resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook command.

The Operator also passes along additional variables under the meta field for the name and namespace of the CR.
For the following CR example:
apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
name: "example"
spec:
message: "Hello world 2"
newParameter: "newParam"
The structure passed to Ansible as extra variables is:
{
  "meta": {
"name": "<cr_name>",
"namespace": "<cr_namespace>",
},
"message": "Hello world 2",
"new_parameter": "newParam",
"_app_example_com_database": {
<full_crd>
},
}
The message and newParameter fields are set in the top level as extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:
---
- debug:
msg: "name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}"
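The extra variables from the spec section can be referenced the same way. For the example Database CR above, a task might combine them as follows; this task is a sketch for illustration, not part of any generated role:

```yaml
---
# Sketch: camelCase spec keys arrive in snake case, so newParameter
# becomes new_parameter; message is passed through unchanged.
- debug:
    msg: >-
      CR {{ ansible_operator_meta.name }} in
      {{ ansible_operator_meta.namespace }} received
      message={{ message }} and new_parameter={{ new_parameter }}
```

Because the full CR is also passed under the _app_example_com_database variable, a role can fall back to it when a spec key is absent.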
5.4.5.4. Ansible Runner directory
Ansible Runner keeps information about Ansible runs in the container. This information is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.
5.4.6. Kubernetes Collection for Ansible
To manage the lifecycle of your application on Kubernetes using Ansible, you can use the Kubernetes Collection for Ansible. This collection of Ansible modules allows a developer to either leverage their existing Kubernetes resource files written in YAML or express the lifecycle management in native Ansible.
One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
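As a sketch of that pattern, a role task can inline a templated Deployment manifest, with Jinja variables supplying the tunable fields. The size and image variables below are hypothetical examples, not part of any scaffolded role:

```yaml
# Sketch: templating a standard Deployment with role variables.
- name: Create a templated nginx deployment
  community.kubernetes.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example-nginx          # hypothetical name
        namespace: default
      spec:
        replicas: "{{ size | default(2) }}"              # hypothetical variable
        selector:
          matchLabels:
            app: example-nginx
        template:
          metadata:
            labels:
              app: example-nginx
          spec:
            containers:
            - name: nginx
              image: "{{ image | default('nginx:1.21') }}"   # hypothetical variable
```

Changing the replica count or image then only requires overriding a variable, not editing the resource file.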
This section goes into detail on usage of the Kubernetes Collection. To get started, install the collection on your local workstation and test it using a playbook before moving on to using it within an Operator.
5.4.6.1. Installing the Kubernetes Collection for Ansible
You can install the Kubernetes Collection for Ansible on your local workstation.
Procedure
Install Ansible 2.9+:
$ sudo dnf install ansible

Install the OpenShift Python client package:

$ pip3 install openshift

Install the Kubernetes Collection using one of the following methods:

You can install the collection directly from Ansible Galaxy:

$ ansible-galaxy collection install community.kubernetes

If you have already initialized your Operator, you might have a requirements.yml file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the community.kubernetes collection as well as the operator_sdk.util collection, which provides modules and plugins for Operator-specific functions.

To install the dependent modules from the requirements.yml file:

$ ansible-galaxy collection install -r requirements.yml
5.4.6.2. Testing the Kubernetes Collection locally
Operator developers can run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.
Prerequisites
- Initialize an Ansible-based Operator project and create an API that has a generated Ansible role by using the Operator SDK
- Install the Kubernetes Collection for Ansible
Procedure
In your Ansible-based Operator project directory, modify the roles/<kind>/tasks/main.yml file with the Ansible logic that you want. The roles/<kind>/ directory is created when you use the --generate-role flag while creating an API. The <kind> replaceable matches the kind that you specified for the API.

The following example creates and deletes a config map based on the value of a variable named state:

---
- name: set ConfigMap example-config to {{ state }}
  community.kubernetes.k8s:
    api_version: v1
    kind: ConfigMap
    name: example-config
    namespace: default
    state: "{{ state }}"
  ignore_errors: true

Modify the roles/<kind>/defaults/main.yml file to set state to present by default:

---
state: present

Create an Ansible playbook by creating a playbook.yml file in the top-level of your project directory, and include your <kind> role:

---
- hosts: localhost
  roles:
    - <kind>

Run the playbook:
$ ansible-playbook playbook.yml

Example output

[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ********************************************************************************

TASK [Gathering Facts] ********************************************************************************
ok: [localhost]

TASK [memcached : set ConfigMap example-config to present] ********************************************************************************
changed: [localhost]

PLAY RECAP ********************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Verify that the config map was created:
$ oc get configmaps

Example output

NAME             DATA   AGE
example-config   0      2m1s

Rerun the playbook setting state to absent:

$ ansible-playbook playbook.yml --extra-vars state=absent

Example output

[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ********************************************************************************

TASK [Gathering Facts] ********************************************************************************
ok: [localhost]

TASK [memcached : set ConfigMap example-config to absent] ********************************************************************************
changed: [localhost]

PLAY RECAP ********************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Verify that the config map was deleted:
$ oc get configmaps
5.4.6.3. Next steps
- See Using Ansible inside an Operator for details on triggering your custom Ansible logic inside of an Operator when a custom resource (CR) changes.
5.4.7. Using Ansible inside an Operator
After you are familiar with using the Kubernetes Collection for Ansible locally, you can trigger the same Ansible logic inside of an Operator when a custom resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the watches.yaml file.
5.4.7.2. Testing an Ansible-based Operator locally
You can test the logic inside of an Ansible-based Operator running locally by using the make run command. The make run command runs the ansible-operator binary locally, which reads from the watches.yaml file and uses your ~/.kube/config file to communicate with a Kubernetes cluster, just as the k8s modules do.

You can customize the roles path by setting the environment variable ANSIBLE_ROLES_PATH or by using the ansible-roles-path flag. If the role is not found in the ANSIBLE_ROLES_PATH value, the Operator looks for it in {{current directory}}/roles.
Prerequisites
- Ansible Runner v2.0.2+
- Ansible Runner HTTP Event Emitter plugin v1.0.0+
- Performed the previous steps for testing the Kubernetes Collection locally
Procedure
Install your custom resource definition (CRD) and proper role-based access control (RBAC) definitions for your custom resource (CR):
$ make install

Example output

/usr/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

Run the make run command:

$ make run

Example output
/home/user/memcached-operator/bin/ansible-operator run
{"level":"info","ts":1612739145.2871568,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.10.1","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
...
{"level":"info","ts":1612739148.347306,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1612739148.3488882,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2}
{"level":"info","ts":1612739148.3490262,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
{"level":"info","ts":1612739148.3490646,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
{"level":"info","ts":1612739148.350217,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
{"level":"info","ts":1612739148.3506632,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1612739148.350784,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612739148.5511978,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612739148.5512562,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}

With the Operator now watching your CR for events, the creation of a CR will trigger your Ansible role to run.
Note

Consider an example config/samples/<gvk>.yaml CR manifest:

apiVersion: <group>.example.com/v1alpha1
kind: <kind>
metadata:
  name: "<kind>-sample"

Because the spec field is not set, Ansible is invoked with no extra variables. Passing extra variables from a CR to Ansible is covered in another section. It is important to set reasonable defaults for the Operator.

Create an instance of your CR with the default variable state set to present:

$ oc apply -f config/samples/<gvk>.yaml

Check that the example-config config map was created:

$ oc get configmaps

Example output

NAME             STATUS   AGE
example-config   Active   3s

Modify your config/samples/<gvk>.yaml file to set the state field to absent. For example:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  state: absent

Apply the changes:
$ oc apply -f config/samples/<gvk>.yaml

Confirm that the config map is deleted:
$ oc get configmap
5.4.7.3. Testing an Ansible-based Operator on the cluster
After you have tested your custom Ansible logic locally inside of an Operator, you can test the Operator inside of a pod on an OpenShift Container Platform cluster, which is preferred for production use.
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note

The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note

The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:
$ oc get deployment -n <project_name>-system

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.4.7.4. Ansible logs
Ansible-based Operators provide logs about the Ansible run, which can be useful for debugging your Ansible tasks. The logs can also contain detailed information about the internals of the Operator and its interactions with Kubernetes.
5.4.7.4.1. Viewing Ansible logs
Prerequisites
- Ansible-based Operator running as a deployment on a cluster
Procedure
To view logs from an Ansible-based Operator, run the following command:
$ oc logs deployment/<project_name>-controller-manager \
    -c manager \
    -n <namespace>

Example output
{"level":"info","ts":1612732105.0579333,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.10.1","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
{"level":"info","ts":1612732105.0587437,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
I0207 21:08:26.110949       7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s
{"level":"info","ts":1612732107.768025,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1612732107.768796,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2}
{"level":"info","ts":1612732107.7688773,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
{"level":"info","ts":1612732107.7688901,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
{"level":"info","ts":1612732107.770032,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
I0207 21:08:27.770185       7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator...
{"level":"info","ts":1612732107.770202,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0207 21:08:27.784854       7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator
{"level":"info","ts":1612732107.7850506,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612732107.8853772,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612732107.8854098,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}
5.4.7.4.2. Enabling full Ansible results in logs
You can set the environment variable ANSIBLE_DEBUG_LOGS to True to enable checking the full Ansible result in logs, which can be helpful when debugging.
Procedure
Edit the config/manager/manager.yaml and config/default/manager_auth_proxy_patch.yaml files to include the following configuration:

containers:
- name: manager
  env:
  - name: ANSIBLE_DEBUG_LOGS
    value: "True"
5.4.7.4.3. Enabling verbose debugging in logs
While developing an Ansible-based Operator, it can be helpful to enable additional debugging in logs.
Procedure
Add the ansible.sdk.operatorframework.io/verbosity annotation to your custom resource to enable the verbosity level that you want. For example:

apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
  annotations:
    "ansible.sdk.operatorframework.io/verbosity": "4"
spec:
  size: 4
5.4.8. Custom resource status management
5.4.8.1. About custom resource status in Ansible-based Operators
Ansible-based Operators automatically update custom resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:
status:
conditions:
- ansibleResult:
changed: 3
completion: 2018-12-03T13:45:57.13329
failures: 1
ok: 6
skipped: 0
lastTransitionTime: 2018-12-03T13:45:57Z
message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno
113] No route to host>'
reason: Failed
status: "True"
type: Failure
- lastTransitionTime: 2018-12-03T13:46:13Z
message: Running reconciliation
reason: Running
status: "True"
type: Running
Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.
By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.
5.4.8.2. Tracking custom resource status manually
You can use the operator_sdk.util collection in your Ansible-based Operator to manage your custom resource (CR) status manually.
Prerequisites
- Ansible-based Operator project created by using the Operator SDK
Procedure
Update the watches.yaml file with a manageStatus field set to false:

- version: v1
  group: api.example.com
  kind: <kind>
  role: <role>
  manageStatus: false

Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key test and value data, operator_sdk.util can be used as shown:

- operator_sdk.util.k8s_status:
    api_version: app.example.com/v1
    kind: <kind>
    name: "{{ ansible_operator_meta.name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
    status:
      test: data

You can declare collections in the meta/main.yml file for the role, which is included for scaffolded Ansible-based Operators:

collections:
  - operator_sdk.util

After declaring collections in the role meta, you can invoke the k8s_status module directly:

k8s_status:
  ...
  status:
    key1: value1
5.5. Helm-based Operators
5.5.1. Getting started with Operator SDK for Helm-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Helm charts to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running a Helm-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Helm-based Operator for Nginx and deploy it to a cluster.
5.5.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Logged in to an OpenShift Container Platform 4.11 cluster with oc using an account that has cluster-admin permissions
ocpermissionscluster-admin - To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.1.2. Creating and deploying Helm-based Operators
You can build and deploy a simple Helm-based Operator for Nginx by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir nginx-operatorChange into the project directory:
$ cd nginx-operator

Run the operator-sdk init command with the helm plugin to initialize the project:

$ operator-sdk init \
    --plugins=helm
Create an API.
Create a simple Nginx API:
$ operator-sdk create api \
    --group demo \
    --version v1 \
    --kind Nginx

This API uses the built-in Helm chart boilerplate from the helm create command.

Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.
Install the CRD:
$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Add a security context constraint (SCC).
The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following SCC to the service account for the nginx-sample pod:

$ oc adm policy add-scc-to-user \
    anyuid system:serviceaccount:nginx-operator-system:nginx-sample

Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/demo_v1_nginx.yaml \
    -n nginx-operator-system

Watch for the Operator to reconcile the CR:
$ oc logs deployment.apps/nginx-operator-controller-manager \
    -c manager \
    -n nginx-operator-system
Delete a CR.

Delete the CR by running the following command:

$ oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system

Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.5.1.3. Next steps
- See Operator SDK tutorial for Helm-based Operators for a more in-depth walkthrough on building a Helm-based Operator.
5.5.2. Operator SDK tutorial for Helm-based Operators
Operator developers can take advantage of Helm support in the Operator SDK to build an example Helm-based Operator for Nginx and manage its lifecycle. This tutorial walks through the following process:
- Create an Nginx deployment
- Ensure that the deployment size is the same as specified by the Nginx custom resource (CR) spec
- Update the Nginx CR status using the status writer with the names of the nginx pods
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK: the operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM): installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Helm-based Operators.
5.5.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Logged into an OpenShift Container Platform 4.11 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.2.2. Creating a project
Use the Operator SDK CLI to create a project called nginx-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/nginx-operator

Change to the directory:

$ cd $HOME/projects/nginx-operator

Run the operator-sdk init command with the helm plugin to initialize the project:

$ operator-sdk init \
    --plugins=helm \
    --domain=example.com \
    --group=demo \
    --version=v1 \
    --kind=Nginx

Note: By default, the helm plugin initializes a project using a boilerplate Helm chart. You can use additional flags, such as the --helm-chart flag, to initialize a project using an existing Helm chart.

The init command creates the nginx-operator project specifically for watching a resource with API version example.com/v1 and kind Nginx.

For Helm-based projects, the init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator.
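For orientation, the scaffolded rules for the boilerplate Nginx chart typically grant the Operator full control over the resource kinds the chart deploys. The following fragment is a representative sketch, not the exact generated file; verify it against your own config/rbac/role.yaml:

```yaml
# Representative sketch of scaffolded config/rbac/role.yaml rules;
# the exact list depends on the chart's default manifests.
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
```

If you later modify the chart, regenerate or hand-edit these rules so that the Operator retains exactly the permissions it needs.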
5.5.2.2.1. Existing Helm charts
Instead of creating your project with a boilerplate Helm chart, you can alternatively use an existing chart, either from your local file system or a remote chart repository, by using the following flags:
- --helm-chart
- --helm-chart-repo
- --helm-chart-version
If the --helm-chart flag is specified, the --group, --version, and --kind flags become optional. If left unset, default values are used; for example, the kind is deduced from the specified chart. For the full list of defaults, run operator-sdk init --plugins helm --help.
If the --helm-chart flag value is a local chart archive, for example example-chart-1.2.0.tgz, or a local directory, the chart is validated and unpacked or copied into the project. Otherwise, the Operator SDK attempts to fetch the chart from a remote repository.

If a custom repository URL is not specified by the --helm-chart-repo flag, the following chart reference formats are supported:
| Format | Description |
|---|---|
| <repo_name>/<chart_name> | Fetch the Helm chart named <chart_name> from the Helm chart repository named <repo_name>, as specified in the $HELM_HOME configuration. |
| <url> | Fetch the Helm chart archive at the specified URL. |
If a custom repository URL is specified by the --helm-chart-repo flag, the following chart reference format is supported:

| Format | Description |
|---|---|
| <chart_name> | Fetch the Helm chart named <chart_name> in the Helm chart repository specified by the --helm-chart-repo URL value. |
If the --helm-chart-version flag is unset, the Operator SDK fetches the latest available version of the Helm chart. Otherwise, it fetches the specified version. The --helm-chart-version flag is not used when the chart specified by the --helm-chart flag refers to a specific version, for example when it is a local path or a URL.
For more details and examples, run:
$ operator-sdk init --plugins helm --help
5.5.2.2.2. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Helm. For example:
domain: example.com
layout:
- helm.sdk.operatorframework.io/v1
plugins:
manifests.sdk.operatorframework.io/v2: {}
scorecard.sdk.operatorframework.io/v2: {}
sdk.x-openshift.io/v1: {}
projectName: nginx-operator
resources:
- api:
crdVersion: v1
namespaced: true
domain: example.com
group: demo
kind: Nginx
version: v1
version: "3"
5.5.2.3. Understanding the Operator logic
For this example, the nginx-operator project executes the following reconciliation logic for each Nginx custom resource (CR):
- Create an Nginx deployment if it does not exist.
- Create an Nginx service if it does not exist.
- Create an Nginx ingress if it is enabled and does not exist.
- Ensure that the deployment, service, and optional ingress match the desired configuration as specified by the Nginx CR, for example the replica count, image, and service type.
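As a concrete illustration of such overrides, a CR like the following sketch adjusts the replica count, image, and service type in one place. The field names assume the default helm create chart values layout (replicaCount, image.*, service.*); adjust them to match your chart's values.yaml:

```yaml
# Sketch of an Nginx CR overriding chart defaults; field names assume
# the boilerplate `helm create` values layout.
apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 2
  image:
    repository: nginx
    tag: "1.23"
  service:
    type: ClusterIP
    port: 8080
```

On each reconciliation, the Operator compares the deployed release against these values and upgrades the release if they differ.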
By default, the nginx-operator project watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:
# Use the 'create api' subcommand to add watches to this file.
- group: demo
version: v1
kind: Nginx
chart: helm-charts/nginx
# +kubebuilder:scaffold:watch
5.5.2.3.1. Sample Helm chart
When a Helm Operator project is created, the Operator SDK creates a sample Helm chart that contains a set of templates for a simple Nginx release.
For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.
If you are not already familiar with Helm charts, review the Helm developer documentation.
5.5.2.3.2. Modifying the custom resource spec
Helm uses a concept called values to provide customizations to the defaults of a Helm chart, which are defined in the values.yaml file of the chart.
You can override these defaults by setting the desired values in the custom resource (CR) spec. You can use the number of replicas as an example.
Procedure
The helm-charts/nginx/values.yaml file has a value called replicaCount set to 1 by default. To have two Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

Edit the config/samples/demo_v1_nginx.yaml file to set replicaCount: 2:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
...
spec:
...
  replicaCount: 2

Similarly, the default service port is set to 80. To use 8080, edit the config/samples/demo_v1_nginx.yaml file to add the service port override:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 2
  service:
    port: 8080
The Helm Operator applies the entire spec as if it was the contents of a values file, just like the helm install -f ./overrides.yaml command.
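To make that analogy concrete, the following sketch shows a CR spec and the overrides file that would produce the same Helm values. Both documents are illustrative:

```yaml
# Nginx CR: the contents of `spec` become the Helm values.
apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 2
  service:
    port: 8080
---
# Equivalent overrides.yaml, as passed to `helm install -f ./overrides.yaml`:
replicaCount: 2
service:
  port: 8080
```

Because the spec is applied wholesale, any key that is valid in the chart's values.yaml can be set from the CR.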
5.5.2.4. Enabling proxy support
Operator authors can develop Operators that support network proxies. Cluster administrators configure proxy support for the environment variables that are handled by Operator Lifecycle Manager (OLM). To support proxied clusters, your Operator must inspect the environment for the following standard proxy variables and pass the values to Operands:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
This tutorial uses HTTP_PROXY as an example.
Prerequisites
- A cluster with cluster-wide egress proxy enabled.
Procedure
Edit the watches.yaml file to include overrides based on an environment variable by adding the overrideValues field:

...
- group: demo
  version: v1
  kind: Nginx
  chart: helm-charts/nginx
  overrideValues:
    proxy.http: $HTTP_PROXY
...

Add the proxy.http value in the helm-charts/nginx/values.yaml file:

...
proxy:
  http: ""
  https: ""
  no_proxy: ""

To make sure the chart template supports using the variables, edit the chart template in the helm-charts/nginx/templates/deployment.yaml file to contain the following:

containers:
  - name: {{ .Chart.Name }}
    securityContext:
      {{- toYaml .Values.securityContext | nindent 12 }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    env:
      - name: http_proxy
        value: "{{ .Values.proxy.http }}"

Set the environment variable on the Operator deployment by adding the following to the config/manager/manager.yaml file:

containers:
  - args:
    - --leader-elect
    - --leader-election-id=nginx-operator
    image: controller:latest
    name: manager
    env:
      - name: "HTTP_PROXY"
        value: "http_proxy_test"
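Where the Operand must also honor HTTPS_PROXY and NO_PROXY, the same overrideValues pattern extends naturally. This sketch assumes proxy.https and proxy.no_proxy keys exist in the chart's values.yaml alongside proxy.http:

```yaml
# Sketch: watches.yaml overrideValues mapping all three standard proxy
# variables onto chart values (key paths are assumptions based on the
# values.yaml layout shown in this procedure).
overrideValues:
  proxy.http: $HTTP_PROXY
  proxy.https: $HTTPS_PROXY
  proxy.no_proxy: $NO_PROXY
```

Each mapped value is resolved from the Operator's environment at reconcile time, so the corresponding env entries must also be present on the manager deployment.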
5.5.2.5. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.5.2.5.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run
...
{"level":"info","ts":1612652419.9289865,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1612652419.9296563,"logger":"helm.controller","msg":"Watching resource","apiVersion":"demo.example.com/v1","kind":"Nginx","namespace":"","reconcilePeriod":"1m0s"}
{"level":"info","ts":1612652419.929983,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1612652419.930015,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting EventSource","source":"kind source: demo.example.com/v1, Kind=Nginx"}
{"level":"info","ts":1612652420.2307851,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting Controller"}
{"level":"info","ts":1612652420.2309358,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting workers","worker count":8}
5.5.2.5.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:
$ oc get deployment -n <project_name>-systemExample output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.5.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.5.2.5.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.11+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.5.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.11)
- Logged in to the cluster with oc using an account that has cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    -n <namespace> \
    <registry>/<user>/<bundle_image_name>:<tag>

where:

- The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.
- Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
- If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
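For orientation, these are ordinary OLM objects. A Subscription comparable to the one generated might look like the following sketch; every name here is an illustrative placeholder, since the command generates the actual names and catalog source for you:

```yaml
# Illustrative sketch only: `operator-sdk run bundle` creates an
# equivalent Subscription automatically; all names are placeholders.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <operator_name>-sub
  namespace: <namespace>
spec:
  name: <operator_name>
  channel: <channel>
  source: <generated_catalog_source>
  sourceNamespace: <namespace>
```

Inspecting the generated Subscription and InstallPlan with oc get is a useful way to debug a failed installation.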
5.5.2.6. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Nginx Operator, which provides the Nginx CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

$ oc project nginx-operator-system

Edit the sample Nginx CR manifest at config/samples/demo_v1_nginx.yaml to contain the following specification:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
...
spec:
...
  replicaCount: 3

The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following security context constraint (SCC) to the service account for the nginx-sample pod:

$ oc adm policy add-scc-to-user \
    anyuid system:serviceaccount:nginx-operator-system:nginx-sample

Create the CR:

$ oc apply -f config/samples/demo_v1_nginx.yaml

Ensure that the Nginx Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           8m
nginx-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Nginx pod names.

Check the pods:

$ oc get pods

Example output

NAME                           READY   STATUS    RESTARTS   AGE
nginx-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
nginx-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
nginx-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:

$ oc get nginx/nginx-sample -o yaml

Example output

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
...
  name: nginx-sample
...
spec:
  replicaCount: 3
status:
  nodes:
  - nginx-sample-6fd7c98d8-7dqdr
  - nginx-sample-6fd7c98d8-g5k7v
  - nginx-sample-6fd7c98d8-m7vn7
Update the deployment size.
Update the config/samples/demo_v1_nginx.yaml file to change the spec.replicaCount field in the Nginx CR from 3 to 5:

$ oc patch nginx nginx-sample \
    -p '{"spec":{"replicaCount": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:

$ oc get deployments

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           10m
nginx-sample                        5/5     5            5           3m
Delete the CR by running the following command:
$ oc delete -f config/samples/demo_v1_nginx.yaml

Clean up the resources that have been created as part of this tutorial.

If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.5.3. Project layout for Helm-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.5.3.1. Helm-based project layout
Helm-based Operator projects generated using the operator-sdk init --plugins helm command contain the following directories and files:
| File/folders | Purpose |
|---|---|
| config/ | Kustomize manifests for deploying the Operator on a Kubernetes cluster. |
| helm-charts/ | Helm chart initialized with the operator-sdk create api command. |
| Dockerfile | Used to build the Operator image with the make docker-build command. |
| watches.yaml | Group/version/kind (GVK) and Helm chart location. |
| Makefile | Targets used to manage the project. |
| PROJECT | YAML file containing metadata information for the Operator. |
5.5.4. Updating Helm-based projects for newer Operator SDK versions
OpenShift Container Platform 4.11 supports Operator SDK 1.22.2. If you already have the 1.16.0 CLI installed on your workstation, you can update the CLI to 1.22.2 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.22.2, update steps are required for the associated breaking changes introduced since 1.16.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.16.0.
5.5.4.1. Updating Helm-based Operator projects for Operator SDK 1.22.2
The following procedure updates an existing Helm-based Operator project for compatibility with 1.22.2.
Prerequisites
- Operator SDK 1.22.2 installed.
- An Operator project created or maintained with Operator SDK 1.16.0.
Procedure
Make the following changes to the config/default/manager_auth_proxy_patch.yaml file:

...
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.11
        args:
        - "--secure-listen-address=0.0.0.0:8443"
        - "--upstream=http://127.0.0.1:8080/"
        - "--logtostderr=true"
        - "--v=0"
...
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 5m
            memory: 64Mi

Make the following changes to your Makefile:

Enable support for image digests by adding the following environment variables to your Makefile:

Old Makefile

BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)
...

New Makefile

BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)

# BUNDLE_GEN_FLAGS are the flags passed to the operator-sdk generate bundle command
BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)

# USE_IMAGE_DIGESTS defines if images are resolved via tags or digests
# You can enable this value if you would like to use SHA Based Digests
# To enable set flag to true
USE_IMAGE_DIGESTS ?= false
ifeq ($(USE_IMAGE_DIGESTS), true)
	BUNDLE_GEN_FLAGS += --use-image-digests
endif
to replace the bundle target with theMakefileenvironment variable:BUNDLE_GEN_FLAGSOld
Makefile$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)New
Makefile$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle $(BUNDLE_GEN_FLAGS)Edit your
to updateMakefileto version 1.23.0:opm.PHONY: opm OPM = ./bin/opm opm: ## Download opm locally if necessary. ifeq (,$(wildcard $(OPM))) ifeq (,$(shell which opm 2>/dev/null)) @{ \ set -e ;\ mkdir -p $(dir $(OPM)) ;\ OS=$(shell go env GOOS) && ARCH=$(shell go env GOARCH) && \ curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.23.0/$${OS}-$${ARCH}-opm ;\1 chmod +x $(OPM) ;\ } else OPM = $(shell which opm) endif endif- 1
- Replace
v1.19.1withv1.23.0.
Apply the changes to your Makefile and rebuild your Operator by entering the following command:

$ make
Update the image tag in your Operator’s Dockerfile as shown in the following example:
Example Dockerfile
FROM registry.redhat.io/openshift4/ose-helm-operator:v4.11

Update the version tag to v4.11.
5.5.5. Helm support in Operator SDK
5.5.5.1. Helm charts
One of the Operator SDK options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but can be sufficient for a surprising amount of use-cases as shown by the proliferation of Helm charts built by the Kubernetes community.
The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the spec field of the object is a list of configuration options that are typically described in the Helm values.yaml file. Instead of setting these values with flags using the Helm CLI, for example helm install -f values.yaml, you can express them within a CR, which, as a native Kubernetes object, enables the benefits of RBAC applied to it and an audit trail.
Consider the following example of a simple CR called Tomcat:
apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
name: example-app
spec:
replicaCount: 2
The replicaCount value, 2 in this case, is propagated into the template of the chart where the following is used:

{{ .Values.replicaCount }}
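For instance, a deployment template in the chart might consume that value as follows. This is a sketch of a typical template, not the exact chart behind the Tomcat example:

```yaml
# Sketch of a chart template consuming .Values.replicaCount; the
# Tomcat CR above sets this value to 2.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-tomcat
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:9
```

When the CR changes, the Operator re-renders the template with the new values and upgrades the release accordingly.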
After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:
$ oc get Tomcats --all-namespaces
There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a custom resource definition (CRD). Because it obeys RBAC, you can more easily prevent production changes.
5.5.6. Operator SDK tutorial for Hybrid Helm Operators
The standard Helm-based Operator support in the Operator SDK has limited functionality compared to the Go-based and Ansible-based Operator support that has reached the Auto Pilot capability (level V) in the Operator maturity model.
The Hybrid Helm Operator enhances the existing Helm-based support’s abilities through Go APIs. With this hybrid approach of Helm and Go, the Operator SDK enables Operator authors to use the following process:
- Generate a default structure for, or scaffold, a Go API in the same project as Helm.
- Configure the Helm reconciler in the main.go file of the project, through the libraries provided by the Hybrid Helm Operator.
The Hybrid Helm Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
This tutorial walks through the following process using the Hybrid Helm Operator:
- Create a Memcached deployment through a Helm chart if it does not exist
- Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
- Create a MemcachedBackup deployment by using the Go API
5.5.6.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Logged into an OpenShift Container Platform 4.11 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.6.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/github.com/example/memcached-operator

Change to the directory:

$ cd $HOME/github.com/example/memcached-operator

Run the operator-sdk init command to initialize the project. Use a domain of example.com so that all API groups are <group>.example.com:

$ operator-sdk init \
    --plugins=hybrid.helm.sdk.operatorframework.io \
    --project-version="3" \
    --domain example.com \
    --repo=github.com/example/memcached-operator

The init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the chart's default manifests. Verify that the rules generated in the config/rbac/role.yaml file meet your Operator's permission requirements.
Additional resources
- This procedure creates a project structure that is compatible with both Helm and Go APIs. To learn more about the project directory structure, see Project layout.
5.5.6.3. Creating a Helm API
Use the Operator SDK CLI to create a Helm API.
Procedure
Run the following command to create a Helm API with group cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --plugins helm.sdk.operatorframework.io/v1 \
    --group cache \
    --version v1 \
    --kind Memcached
This procedure also configures your Operator project to watch the Memcached resource with API version v1 and scaffolds a boilerplate Helm chart.
For more details and examples for creating a Helm API based on existing or new charts, run the following command:
$ operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help
Additional resources
5.5.6.3.1. Operator logic for the Helm API
By default, your scaffolded Operator project watches Memcached resource events as shown in the watches.yaml file and executes Helm releases using the specified chart.
Example 5.2. Example watches.yaml file
# Use the 'create api' subcommand to add watches to this file.
- group: cache.my.domain
version: v1
kind: Memcached
chart: helm-charts/memcached
#+kubebuilder:scaffold:watch
Additional resources
- For detailed documentation on customizing the Helm Operator logic through the chart, see Understanding the Operator logic.
5.5.6.3.2. Custom Helm reconciler configurations using provided library APIs
A disadvantage of existing Helm-based Operators is the inability to configure the Helm reconciler, because it is abstracted from users. For a Helm-based Operator to reach the Seamless Upgrades capability (level II and later) that reuses an already existing Helm chart, a hybrid between the Go and Helm Operator types adds value.
The APIs provided in the helm-operator-plugins library allow Operator authors to make the following configurations:
- Customize value mapping based on cluster state
- Execute code in specific events by configuring the reconciler’s event recorder
- Customize the reconciler’s logger
- Set up Install, Upgrade, and Uninstall annotations to enable Helm's actions to be configured based on the annotations found in custom resources watched by the reconciler
- Configure the reconciler to run with Pre and Post hooks
The above configurations to the reconciler can be done in the main.go file:
Example main.go file
// Operator's main.go
// With the help of helpers provided in the library, the reconciler can be
// configured here before starting the controller with this reconciler.
reconciler := reconciler.New(
	reconciler.WithChart(*chart),
	reconciler.WithGroupVersionKind(gvk),
)
if err := reconciler.SetupWithManager(mgr); err != nil {
panic(fmt.Sprintf("unable to create reconciler: %s", err))
}
5.5.6.4. Creating a Go API
Use the Operator SDK CLI to create a Go API.
Procedure
Run the following command to create a Go API with group cache, version v1, and kind MemcachedBackup:

$ operator-sdk create api \
    --group=cache \
    --version v1 \
    --kind MemcachedBackup \
    --resource \
    --controller \
    --plugins=go/v3

When prompted, enter y to create both the resource and the controller:

Create Resource [y/n]
y
Create Controller [y/n]
y

This procedure generates the MemcachedBackup resource API at api/v1/memcachedbackup_types.go and the controller at controllers/memcachedbackup_controller.go.
5.5.6.4.1. Defining the API
Define the API for the MemcachedBackup custom resource (CR).

Represent this Go API by defining the MemcachedBackup type, which has a MemcachedBackupSpec.Size field to set the quantity of Memcached backup instances (CRs) to be deployed, and a MemcachedBackupStatus.Nodes field to store a CR's pod names.

Note: The Nodes field is used to illustrate an example of a Status field.
Procedure
Define the API for the MemcachedBackup CR by modifying the Go type definitions in the api/v1/memcachedbackup_types.go file to have the following spec and status:

Example 5.3. Example api/v1/memcachedbackup_types.go file

// MemcachedBackupSpec defines the desired state of MemcachedBackup
type MemcachedBackupSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	//+kubebuilder:validation:Minimum=0
	// Size is the size of the memcached deployment
	Size int32 `json:"size"`
}

// MemcachedBackupStatus defines the observed state of MemcachedBackup
type MemcachedBackupStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
	// Nodes are the names of the memcached pods
	Nodes []string `json:"nodes"`
}

Update the generated code for the resource type:

$ make generate

Tip: After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

After the API is defined with spec and status fields and CRD validation markers, generate and update the CRD manifests:

$ make manifests

This Makefile target invokes the controller-gen utility to generate the CRD manifests at config/crd/bases/cache.my.domain_memcachedbackups.yaml.
5.5.6.4.2. Controller implementation
The controller in this tutorial performs the following actions:
- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the memcached pods.
For a detailed explanation on how to configure the controller to perform the above mentioned actions, see Implementing the controller in the Operator SDK tutorial for standard Go-based Operators.
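The decision order behind these three actions can be sketched independently of the Kubernetes client libraries. In this hypothetical model, the deployment is reduced to a replica count, with a nil pointer meaning the deployment does not exist yet; the function name and Action type are illustrative, not part of the Operator SDK:

```go
package main

import "fmt"

// Action names the reconcile step the controller would take next.
type Action string

const (
	CreateDeployment Action = "create deployment"
	ResizeDeployment Action = "resize deployment"
	UpdateStatus     Action = "update status"
)

// nextAction mirrors the controller's decision order: create the
// deployment if it is missing, fix its size if it drifted from the
// CR spec, and otherwise refresh the CR status from observed state.
func nextAction(currentReplicas *int32, desiredReplicas int32) Action {
	if currentReplicas == nil {
		return CreateDeployment
	}
	if *currentReplicas != desiredReplicas {
		return ResizeDeployment
	}
	return UpdateStatus
}

func main() {
	var three int32 = 3
	fmt.Println(nextAction(nil, 3))    // deployment missing
	fmt.Println(nextAction(&three, 5)) // size drifted
	fmt.Println(nextAction(&three, 3)) // steady state
}
```

Each reconcile pass re-derives the next action from observed state rather than remembering what it did last time, which is the level-triggered style the controller in this tutorial follows.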
5.5.6.4.3. Differences in main.go
For standard Go-based Operators and the Hybrid Helm Operator, the main.go file handles the scaffolding of the initialization and running of the Manager program for the Go API. For the Hybrid Helm Operator, however, the main.go file also exposes the logic for loading the watches.yaml file and configuring the Helm reconciler:
Example 5.4. Example main.go file
...
for _, w := range ws {
	// Register controller with the factory
	reconcilePeriod := defaultReconcilePeriod
	if w.ReconcilePeriod != nil {
		reconcilePeriod = w.ReconcilePeriod.Duration
	}

	maxConcurrentReconciles := defaultMaxConcurrentReconciles
	if w.MaxConcurrentReconciles != nil {
		maxConcurrentReconciles = *w.MaxConcurrentReconciles
	}

	r, err := reconciler.New(
		reconciler.WithChart(*w.Chart),
		reconciler.WithGroupVersionKind(w.GroupVersionKind),
		reconciler.WithOverrideValues(w.OverrideValues),
		reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources),
		reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles),
		reconciler.WithReconcilePeriod(reconcilePeriod),
		reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...),
		reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...),
		reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...),
	)
...
The manager is initialized with both Helm and Go reconcilers:
Example 5.5. Example Helm and Go reconcilers
...
// Setup manager with Go API
if err = (&controllers.MemcachedBackupReconciler{
	Client: mgr.GetClient(),
	Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "MemcachedBackup")
	os.Exit(1)
}
...
// Setup manager with Helm API
for _, w := range ws {
	...
	if err := r.SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "Helm")
		os.Exit(1)
	}
	setupLog.Info("configured watch", "gvk", w.GroupVersionKind, "chartPath", w.ChartPath, "maxConcurrentReconciles", maxConcurrentReconciles, "reconcilePeriod", reconcilePeriod)
}

// Start the manager
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
	setupLog.Error(err, "problem running manager")
	os.Exit(1)
}
5.5.6.4.4. Permissions and RBAC manifests
The controller requires certain role-based access control (RBAC) permissions to interact with the resources it manages. For the Go API, these are specified with RBAC markers, as shown in the Operator SDK tutorial for standard Go-based Operators.
For the Helm API, the permissions are scaffolded by default in the roles.yaml file. Currently, however, these permissions are not scaffolded accurately when the Go API is also present, and you must verify and update the roles.yaml file manually. This known issue is being tracked in https://github.com/operator-framework/helm-operator-plugins/issues/142.

The following is an example role.yaml file for a hybrid Operator:
Example 5.6. Example role.yaml file
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cache.my.domain
  resources:
  - memcachedbackups
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cache.my.domain
  resources:
  - memcachedbackups/finalizers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - services/finalizers
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - serviceaccounts
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cache.my.domain
  resources:
  - memcachedbackups/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - policy
  resources:
  - events
  - poddisruptionbudgets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cache.my.domain
  resources:
  - memcacheds
  - memcacheds/status
  - memcacheds/finalizers
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
5.5.6.5. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run
5.5.6.6. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg option will need to be used for the same purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Run the following command to verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager    1/1     1            1           8m
5.5.6.7. Creating custom resources
After your Operator is installed, you can test it by creating custom resources (CRs) that are now provided on the cluster by the Operator.
Procedure
Change to the namespace where your Operator is installed:

$ oc project <project_name>-system

Update the sample Memcached CR manifest at the config/samples/cache_v1_memcached.yaml file by updating the replicaCount field to 3:

Example 5.7. Example config/samples/cache_v1_memcached.yaml file

apiVersion: cache.my.domain/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Default values copied from <project_dir>/helm-charts/memcached/values.yaml
  affinity: {}
  autoscaling:
    enabled: false
    maxReplicas: 100
    minReplicas: 1
    targetCPUUtilizationPercentage: 80
  fullnameOverride: ""
  image:
    pullPolicy: IfNotPresent
    repository: nginx
    tag: ""
  imagePullSecrets: []
  ingress:
    annotations: {}
    className: ""
    enabled: false
    hosts:
    - host: chart-example.local
      paths:
      - path: /
        pathType: ImplementationSpecific
    tls: []
  nameOverride: ""
  nodeSelector: {}
  podAnnotations: {}
  podSecurityContext: {}
  replicaCount: 3
  resources: {}
  securityContext: {}
  service:
    port: 80
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: ""
  tolerations: []

Create the Memcached CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml

Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get pods

Example output

NAME                               READY     STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1       Running   0          18m
memcached-sample-6fd7c98d8-g5k7v   1/1       Running   0          18m
memcached-sample-6fd7c98d8-m7vn7   1/1       Running   0          18m

Update the sample MemcachedBackup CR manifest at the config/samples/cache_v1_memcachedbackup.yaml file by updating the size field to 2:

Example 5.8. Example config/samples/cache_v1_memcachedbackup.yaml file

apiVersion: cache.my.domain/v1
kind: MemcachedBackup
metadata:
  name: memcachedbackup-sample
spec:
  size: 2

Create the MemcachedBackup CR:

$ oc apply -f config/samples/cache_v1_memcachedbackup.yaml

Ensure that the count of memcachedbackup pods is the same as specified in the CR:

$ oc get pods

Example output

NAME                                      READY     STATUS    RESTARTS   AGE
memcachedbackup-sample-8649699989-4bbzg   1/1       Running   0          22m
memcachedbackup-sample-8649699989-mq6mx   1/1       Running   0          22m

You can update the spec in each of the above CRs, and then apply them again. The controller reconciles again and ensures that the size of the pods is as specified in the spec of the respective CRs.

Clean up the resources that have been created as part of this tutorial:

Delete the Memcached resource:

$ oc delete -f config/samples/cache_v1_memcached.yaml

Delete the MemcachedBackup resource:

$ oc delete -f config/samples/cache_v1_memcachedbackup.yaml

If you used the make deploy command to test the Operator, run the following command:

$ make undeploy
5.5.6.8. Project layout
The Hybrid Helm Operator scaffolding is customized to be compatible with both Helm and Go APIs.
| File/folders | Purpose |
|---|---|
| Dockerfile | Instructions used by a container engine to build your Operator image with the make docker-build command. |
| Makefile | Build file with helper targets to help you work with your project. |
| PROJECT | YAML file containing metadata information for the Operator. Represents the project’s configuration and is used to track useful information for the CLI and plugins. |
| bin/ | Contains useful binaries, such as the manager binary used to run your project locally and the kustomize utility used for the project configuration. |
| config/ | Contains configuration files, including all Kustomize manifests, to launch your Operator project on a cluster. Plugins might use it to provide functionality. For example, for the Operator SDK to help create your Operator bundle, the CLI looks up the CRDs and CRs which are scaffolded in this directory. |
| api/ | Contains the Go API definition. |
| controllers/ | Contains the controllers for the Go API. |
| hack/ | Contains utility files, such as the file used to scaffold the license header for your project files. |
| main.go | Main program of the Operator. Instantiates a new manager that registers all custom resource definitions (CRDs) in the api/ directory and starts all controllers. |
| helm-charts/ | Contains the Helm charts which can be specified using the create api command with the Helm plugin. |
| watches.yaml | Contains group/version/kind (GVK) and Helm chart location. Used to configure the Helm watches. |
5.5.7. Updating Hybrid Helm-based projects for newer Operator SDK versions
OpenShift Container Platform 4.11 supports Operator SDK 1.22.2. If you already have the 1.16.0 CLI installed on your workstation, you can update the CLI to 1.22.2 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK 1.22.2, update steps are required for the associated breaking changes introduced since 1.16.0. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with 1.16.0.
5.5.7.1. Updating Hybrid Helm-based Operator projects for Operator SDK 1.22.2
The following procedure updates an existing Hybrid Helm-based Operator project for compatibility with 1.22.2.
Prerequisites
- Operator SDK 1.22.2 installed.
- An Operator project created or maintained with Operator SDK 1.16.0.
Procedure
Make the following changes to the config/default/manager_auth_proxy_patch.yaml file:

...
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.11
        args:
        - "--secure-listen-address=0.0.0.0:8443"
        - "--upstream=http://127.0.0.1:8080/"
        - "--logtostderr=true"
        - "--v=0"
...
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 5m
            memory: 64Mi
Make the following changes to your Makefile:

Enable support for image digests by adding the following environment variables to your Makefile:

Old Makefile

BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)
...

New Makefile

BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)

# BUNDLE_GEN_FLAGS are the flags passed to the operator-sdk generate bundle command
BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)

# USE_IMAGE_DIGESTS defines if images are resolved via tags or digests
# You can enable this value if you would like to use SHA Based Digests
# To enable set flag to true
USE_IMAGE_DIGESTS ?= false
ifeq ($(USE_IMAGE_DIGESTS), true)
	BUNDLE_GEN_FLAGS += --use-image-digests
endif

Edit your Makefile to replace the bundle target with the BUNDLE_GEN_FLAGS environment variable:

Old Makefile

$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)

New Makefile

$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle $(BUNDLE_GEN_FLAGS)

Edit your Makefile to update opm to version 1.23.0, replacing v1.19.1 with v1.23.0 in the download URL:

.PHONY: opm
OPM = ./bin/opm
opm: ## Download opm locally if necessary.
ifeq (,$(wildcard $(OPM)))
ifeq (,$(shell which opm 2>/dev/null))
	@{ \
	set -e ;\
	mkdir -p $(dir $(OPM)) ;\
	OS=$(shell go env GOOS) && ARCH=$(shell go env GOARCH) && \
	curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.23.0/$${OS}-$${ARCH}-opm ;\
	chmod +x $(OPM) ;\
	}
else
OPM = $(shell which opm)
endif
endif

Update the ENVTEST_K8S_VERSION and controller-gen fields in your Makefile to support Kubernetes 1.24:

...
ENVTEST_K8S_VERSION = 1.24
...
sigs.k8s.io/controller-tools/cmd/controller-gen@v0.9.0

Apply the changes to your Makefile and rebuild your Operator by entering the following command:

$ make
Make the following changes to the go.mod file to update Go and its dependencies:

go 1.18

require (
	github.com/onsi/ginkgo v1.16.5
	github.com/onsi/gomega v1.18.1
	k8s.io/api v0.24.0
	k8s.io/apimachinery v0.24.0
	k8s.io/client-go v0.24.0
	sigs.k8s.io/controller-runtime v0.12.1
)

Edit your go.mod file to update the Helm Operator plugins, replacing version v0.0.8 with v0.0.11:

github.com/operator-framework/helm-operator-plugins v0.0.11

Make the following changes to your Dockerfile to update Go to version 1.18:

Old dockerfile.go file

const dockerfileTemplate = `# Build the manager binary
FROM golang:1.17 as builder

New dockerfile.go file

const dockerfileTemplate = `# Build the manager binary
FROM golang:1.18 as builder

Download and clean up the dependencies by entering the following command:

$ go mod tidy
5.6. Java-based Operators
5.6.1. Getting started with Operator SDK for Java-based Operators
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To demonstrate the basics of setting up and running a Java-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Java-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.6.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Java v11+
- Maven v3.6.3+
- Logged in to an OpenShift Container Platform 4.11 cluster with oc using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.6.1.2. Creating and deploying Java-based Operators
You can build and deploy a simple Java-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator

Change into the project directory:

$ cd memcached-operator

Run the operator-sdk init command with the quarkus plugin to initialize the project:

$ operator-sdk init \
    --plugins=quarkus \
    --domain=example.com \
    --project-name=memcached-operator
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --plugins quarkus \
    --group cache \
    --version v1 \
    --kind Memcached

Build and push the Operator image.

Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.

Install the CRD:

$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

Watch for the Operator to reconcile the CR by checking the Operator logs:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Delete a CR.

Delete a CR by running the following command:

$ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system

Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.6.1.3. Next steps
- See Operator SDK tutorial for Java-based Operators for a more in-depth walkthrough on building a Java-based Operator.
5.6.2. Operator SDK tutorial for Java-based Operators
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator developers can take advantage of Java programming language support in the Operator SDK to build an example Java-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK
- The operator-sdk CLI tool and java-operator-sdk library API
- Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Java-based Operators.
5.6.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.11+ installed
- Java v11+
- Maven v3.6.3+
- Logged in to an OpenShift Container Platform 4.11 cluster with oc using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.6.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator

Change to the directory:

$ cd $HOME/projects/memcached-operator

Run the operator-sdk init command with the quarkus plugin to initialize the project:

$ operator-sdk init \
    --plugins=quarkus \
    --domain=example.com \
    --project-name=memcached-operator
5.6.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Java-based. For example:
domain: example.com
layout:
- quarkus.javaoperatorsdk.io/v1-alpha
projectName: memcached-operator
version: "3"
5.6.2.3. Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
Run the following command to create an API:
$ operator-sdk create api \
    --plugins=quarkus \
    --group=cache \
    --version=v1 \
    --kind=Memcached
Verification
Run the tree command to view the file structure:

$ tree

Example output
. ├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files
5.6.2.3.1. Defining the API
Define the API for the Memcached custom resource (CR).
Procedure
Edit the following files that were generated as part of the create api process:

Update the following attributes in the MemcachedSpec.java file to define the desired state of the Memcached CR:

public class MemcachedSpec {

    private Integer size;

    public Integer getSize() {
        return size;
    }

    public void setSize(Integer size) {
        this.size = size;
    }
}

Update the following attributes in the MemcachedStatus.java file to define the observed state of the Memcached CR:

Note: The example below illustrates a Nodes status field. It is recommended that you use typical status properties in practice.

import java.util.ArrayList;
import java.util.List;

public class MemcachedStatus {

    // Add Status information here
    // Nodes are the names of the memcached pods
    private List<String> nodes;

    public List<String> getNodes() {
        if (nodes == null) {
            nodes = new ArrayList<>();
        }
        return nodes;
    }

    public void setNodes(List<String> nodes) {
        this.nodes = nodes;
    }
}

Update the Memcached.java file to define the Schema for Memcached APIs that extends both the MemcachedSpec.java and MemcachedStatus.java files:

@Version("v1")
@Group("cache.example.com")
public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
5.6.2.3.2. Generating CRD manifests
After the API is defined with the MemcachedSpec and MemcachedStatus files, you can generate the CRD manifests.
Procedure
Run the following command from the memcached-operator directory to generate the CRD:

$ mvn clean install
Verification
Verify the contents of the CRD in the target/kubernetes/memcacheds.cache.example.com-v1.yml file as shown in the following example:

$ cat target/kubernetes/memcacheds.cache.example.com-v1.yml

Example output
# Generated by Fabric8 CRDGenerator, manual edits might get overwritten! apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}
5.6.2.3.3. Creating a Custom Resource
After generating the CRD manifests, you can create the Custom Resource (CR).
Procedure
Create a Memcached CR called memcached-sample.yaml:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Add spec fields here
  size: 1
5.6.2.4. Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
Append the following dependency to the pom.xml file:

<dependency>
  <groupId>commons-collections</groupId>
  <artifactId>commons-collections</artifactId>
  <version>3.2.2</version>
</dependency>

For this example, replace the generated controller file MemcachedReconciler.java with the following example implementation:

Example 5.9. Example MemcachedReconciler.java

package com.example;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.javaoperatorsdk.operator.api.reconciler.Context;
import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;
import io.fabric8.kubernetes.api.model.ContainerBuilder;
import io.fabric8.kubernetes.api.model.ContainerPortBuilder;
import io.fabric8.kubernetes.api.model.LabelSelectorBuilder;
import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodSpecBuilder;
import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder;
import org.apache.commons.collections.CollectionUtils;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MemcachedReconciler implements Reconciler<Memcached> {
    private final KubernetesClient client;

    public MemcachedReconciler(KubernetesClient client) {
        this.client = client;
    }

    @Override
    public UpdateControl<Memcached> reconcile(
            Memcached resource, Context context) {
        Deployment deployment = client.apps()
                .deployments()
                .inNamespace(resource.getMetadata().getNamespace())
                .withName(resource.getMetadata().getName())
                .get();

        if (deployment == null) {
            Deployment newDeployment = createMemcachedDeployment(resource);
            client.apps().deployments().create(newDeployment);
            return UpdateControl.noUpdate();
        }

        int currentReplicas = deployment.getSpec().getReplicas();
        int requiredReplicas = resource.getSpec().getSize();

        if (currentReplicas != requiredReplicas) {
            deployment.getSpec().setReplicas(requiredReplicas);
            client.apps().deployments().createOrReplace(deployment);
            return UpdateControl.noUpdate();
        }

        List<Pod> pods = client.pods()
                .inNamespace(resource.getMetadata().getNamespace())
                .withLabels(labelsForMemcached(resource))
                .list()
                .getItems();

        List<String> podNames =
                pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());

        if (resource.getStatus() == null
                || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
            if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
            resource.getStatus().setNodes(podNames);
            return UpdateControl.updateResource(resource);
        }

        return UpdateControl.noUpdate();
    }

    private Map<String, String> labelsForMemcached(Memcached m) {
        Map<String, String> labels = new HashMap<>();
        labels.put("app", "memcached");
        labels.put("memcached_cr", m.getMetadata().getName());
        return labels;
    }

    private Deployment createMemcachedDeployment(Memcached m) {
        Deployment deployment = new DeploymentBuilder()
            .withMetadata(
                new ObjectMetaBuilder()
                    .withName(m.getMetadata().getName())
                    .withNamespace(m.getMetadata().getNamespace())
                    .build())
            .withSpec(
                new DeploymentSpecBuilder()
                    .withReplicas(m.getSpec().getSize())
                    .withSelector(
                        new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
                    .withTemplate(
                        new PodTemplateSpecBuilder()
                            .withMetadata(
                                new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
                            .withSpec(
                                new PodSpecBuilder()
                                    .withContainers(
                                        new ContainerBuilder()
                                            .withImage("memcached:1.4.36-alpine")
                                            .withName("memcached")
                                            .withCommand("memcached", "-m=64", "-o", "modern", "-v")
                                            .withPorts(
                                                new ContainerPortBuilder()
                                                    .withContainerPort(11211)
                                                    .withName("memcached")
                                                    .build())
                                            .build())
                                    .build())
                            .build())
                    .build())
            .build();
        deployment.addOwnerReference(m);
        return deployment;
    }
}

The example controller runs the following reconciliation logic for each Memcached custom resource (CR):

- Creates a Memcached deployment if it does not exist.
- Ensures that the deployment size matches the size specified by the Memcached CR spec.
- Updates the Memcached CR status with the names of the memcached pods.
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
5.6.2.4.1. Reconcile loop
Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Deployment argument, as shown in the following example:

Deployment deployment = client.apps()
    .deployments()
    .inNamespace(resource.getMetadata().getNamespace())
    .withName(resource.getMetadata().getName())
    .get();

As shown in the following example, if the Deployment is null, the deployment needs to be created. After you create the Deployment, you can determine if reconciliation is necessary. If there is no need of reconciliation, return the value of UpdateControl.noUpdate(), otherwise, return the value of UpdateControl.updateStatus(resource):

if (deployment == null) {
    Deployment newDeployment = createMemcachedDeployment(resource);
    client.apps().deployments().create(newDeployment);
    return UpdateControl.noUpdate();
}

After getting the Deployment, get the current and required replicas, as shown in the following example:

int currentReplicas = deployment.getSpec().getReplicas();
int requiredReplicas = resource.getSpec().getSize();

If currentReplicas does not match requiredReplicas, you must update the Deployment, as shown in the following example:

if (currentReplicas != requiredReplicas) {
    deployment.getSpec().setReplicas(requiredReplicas);
    client.apps().deployments().createOrReplace(deployment);
    return UpdateControl.noUpdate();
}

The following example shows how to obtain the list of pods and their names:

List<Pod> pods = client.pods()
    .inNamespace(resource.getMetadata().getNamespace())
    .withLabels(labelsForMemcached(resource))
    .list()
    .getItems();

List<String> podNames =
    pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());

Check if resources were created and verify pod names with the Memcached resources. If a mismatch exists in either of these conditions, perform a reconciliation as shown in the following example:

if (resource.getStatus() == null
    || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
  if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
  resource.getStatus().setNodes(podNames);
  return UpdateControl.updateResource(resource);
}
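The branching in the reconcile loop can be summarized independently of the Operator SDK types. The following self-contained sketch is illustrative only: the deployment is reduced to a nullable replica count, and pod names are compared as unordered collections, mirroring what CollectionUtils.isEqualCollection does in the real controller.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified sketch of the reconcile decision flow described above.
// Not the Java Operator SDK API; names and types are illustrative.
public class ReconcileSketch {

    // Order-insensitive comparison, as the real controller performs
    // with CollectionUtils.isEqualCollection.
    static boolean sameNames(List<String> a, List<String> b) {
        List<String> x = new ArrayList<>(a);
        List<String> y = new ArrayList<>(b);
        Collections.sort(x);
        Collections.sort(y);
        return x.equals(y);
    }

    // Returns which action the reconciler would take.
    static String decide(Integer currentReplicas, int requiredReplicas,
                         List<String> statusNodes, List<String> podNames) {
        if (currentReplicas == null) {
            // Deployment does not exist yet: create it, no status update.
            return "create-deployment";
        }
        if (currentReplicas != requiredReplicas) {
            // Size drifted from the CR spec: patch the deployment.
            return "update-deployment";
        }
        if (!sameNames(statusNodes, podNames)) {
            // Pod names changed: record them in the CR status.
            return "update-status";
        }
        return "no-update";
    }

    public static void main(String[] args) {
        System.out.println(decide(null, 3, List.of(), List.of()));          // create-deployment
        System.out.println(decide(2, 3, List.of(), List.of()));             // update-deployment
        System.out.println(decide(3, 3, List.of("a"), List.of("a", "b"))); // update-status
        System.out.println(decide(3, 3, List.of("b", "a"), List.of("a", "b"))); // no-update
    }
}
```

Each reconcile pass takes at most one action and returns, relying on the next invocation of the loop to converge the remaining state.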
5.6.2.4.2. Defining labelsForMemcached
The labelsForMemcached method is a utility that returns the labels to attach to the resources of the Memcached CR:
private Map<String, String> labelsForMemcached(Memcached m) {
Map<String, String> labels = new HashMap<>();
labels.put("app", "memcached");
labels.put("memcached_cr", m.getMetadata().getName());
return labels;
}
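The same label map must be used for the deployment selector, the pod template, and the pod list query, because label-selector matching requires every selector pair to be present on the pod. A minimal illustrative sketch of that matching rule (not the Kubernetes implementation):

```java
import java.util.Map;

// Sketch of label-selector matching: a selector matches a pod only if
// every selector key/value pair appears in the pod's labels. This is
// why labelsForMemcached must produce the same map everywhere it is used.
public class LabelMatch {
    static boolean matches(Map<String, String> selector, Map<String, String> podLabels) {
        return selector.entrySet().stream()
            .allMatch(e -> e.getValue().equals(podLabels.get(e.getKey())));
    }

    public static void main(String[] args) {
        Map<String, String> selector =
            Map.of("app", "memcached", "memcached_cr", "memcached-sample");
        // Extra pod labels (such as a pod-template hash) do not prevent a match.
        Map<String, String> pod =
            Map.of("app", "memcached", "memcached_cr", "memcached-sample", "hash", "abc");
        System.out.println(matches(selector, pod)); // true
    }
}
```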
5.6.2.4.3. Define the createMemcachedDeployment
The createMemcachedDeployment method uses the DeploymentBuilder class to programmatically create the deployment:
private Deployment createMemcachedDeployment(Memcached m) {
Deployment deployment = new DeploymentBuilder()
.withMetadata(
new ObjectMetaBuilder()
.withName(m.getMetadata().getName())
.withNamespace(m.getMetadata().getNamespace())
.build())
.withSpec(
new DeploymentSpecBuilder()
.withReplicas(m.getSpec().getSize())
.withSelector(
new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
.withTemplate(
new PodTemplateSpecBuilder()
.withMetadata(
new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
.withSpec(
new PodSpecBuilder()
.withContainers(
new ContainerBuilder()
.withImage("memcached:1.4.36-alpine")
.withName("memcached")
.withCommand("memcached", "-m=64", "-o", "modern", "-v")
.withPorts(
new ContainerPortBuilder()
.withContainerPort(11211)
.withName("memcached")
.build())
.build())
.build())
.build())
.build())
.build();
deployment.addOwnerReference(m);
return deployment;
}
5.6.2.5. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Java program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.6.2.5.1. Running locally outside the cluster
You can run your Operator project as a Java program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to compile the Operator:
$ mvn clean install

Example output

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.193 s
[INFO] Finished at: 2021-05-26T12:16:54-04:00
[INFO] ------------------------------------------------------------------------

Run the following command to install the CRD to the default namespace:

$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml

Example output

customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

Create a file called rbac.yaml as shown in the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: memcached-operator-admin
subjects:
- kind: ServiceAccount
  name: memcached-quarkus-operator-operator
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator by applying the rbac.yaml file:

$ oc apply -f rbac.yaml

Enter the following command to run the Operator:

$ java -jar target/quarkus-app/quarkus-run.jar

Note: The java command runs the Operator and remains running until you end the process. You need another terminal to complete the rest of these commands.

Apply the memcached-sample.yaml file with the following command:

$ kubectl apply -f memcached-sample.yaml

Example output
memcached.cache.example.com/memcached-sample created
Verification
Run the following command to confirm that the pod has started:
$ oc get all

Example output

NAME                                    READY   STATUS    RESTARTS   AGE
pod/memcached-sample-6c765df685-mfqnz   1/1     Running   0          18s
5.6.2.5.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to install the CRD to the default namespace:
$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml

Example output

customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

Create a file called rbac.yaml as shown in the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: memcached-operator-admin
subjects:
- kind: ServiceAccount
  name: memcached-quarkus-operator-operator
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Important: The rbac.yaml file will be applied at a later step.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator by applying the rbac.yaml file created in a previous step:

$ oc apply -f rbac.yaml

Run the following command to verify that the Operator is running:

$ oc get all -n default

Example output

NAME                                                       READY   STATUS    RESTARTS   AGE
pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm   0/1     Running   0          18s

Run the following command to apply the memcached-sample.yaml and create the memcached-sample pod:

$ oc apply -f memcached-sample.yaml

Example output
memcached.cache.example.com/memcached-sample created
Verification
Run the following command to confirm the pods have started:
$ oc get all

Example output

NAME                                                       READY   STATUS    RESTARTS   AGE
pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt   1/1     Running   1          79s
pod/memcached-sample-6c765df685-mfqnz                      1/1     Running   0          18s
5.6.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.6.2.5.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.11+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.6.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.11)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    -n <namespace> \
    <registry>/<user>/<bundle_image_name>:<tag>

The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.

Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.

If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.6.3. Project layout for Java-based Operators
Java-based Operator SDK is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The operator-sdk CLI can generate or scaffold a number of packages and files for each Operator project.
5.6.3.1. Java-based project layout
Java-based Operator projects generated by the operator-sdk init command contain the following files and directories:
| File or directory | Purpose |
|---|---|
| pom.xml | File that contains the dependencies required to run the Operator. |
| src/main/java | Directory that contains the files that represent the API. If the domain is … |
| MemcachedReconciler.java | Java file that defines controller implementations. |
| MemcachedSpec.java | Java file that defines the desired state of the Memcached CR. |
| MemcachedStatus.java | Java file that defines the observed state of the Memcached CR. |
| Memcached.java | Java file that defines the Schema for Memcached APIs. |
| target/kubernetes | Directory that contains the CRD yaml files. |
5.7. Defining cluster service versions (CSVs)
A cluster service version (CSV), defined by a ClusterServiceVersion object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster.
The Operator SDK includes the CSV generator to generate a CSV for the current Operator project, customized using information contained in YAML manifests and Operator source files.
A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.
5.7.1. How CSV generation works
Operator bundle manifests, which include cluster service versions (CSVs), describe how to display, create, and manage an application with Operator Lifecycle Manager (OLM). The CSV generator in the Operator SDK, called by the generate bundle subcommand, is the first step towards publishing your Operator in a catalog and deploying it with OLM.
Typically, the generate kustomize manifests subcommand is run first to generate the input Kustomize bases that are consumed by the generate bundle subcommand. The make bundle command automatically runs the following subcommands in order:

- generate kustomize manifests
- generate bundle
- bundle validate
5.7.1.1. Generated files and resources
The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion (CSV) object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
The following resources are typically included in a CSV:
- Role
- Defines Operator permissions within a namespace.
- ClusterRole
- Defines cluster-wide Operator permissions.
- Deployment
- Defines how an Operand of an Operator is run in pods.
- CustomResourceDefinition (CRD)
- Defines custom resources that your Operator reconciles.
- Custom resource examples
- Examples of resources adhering to the spec of a particular CRD.
5.7.1.2. Version management
The --version flag for the generate bundle subcommand supplies a semantic version for your bundle when creating one and when upgrading an existing one.

By setting the VERSION variable in your Makefile, the --version flag is automatically invoked using that value when the generate bundle subcommand is run by the make bundle command.
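Because the value of VERSION flows directly into --version, a quick pre-flight check that it parses as a semantic version can catch malformed values before the bundle is generated. The following sketch is illustrative only; the class name is an assumption, and the regex is a simplification of the full SemVer 2.0.0 grammar:

```java
import java.util.regex.Pattern;

// Minimal check that a bundle VERSION value looks like a semantic version:
// MAJOR.MINOR.PATCH with optional pre-release and build-metadata suffixes.
// Simplified pattern, not the complete SemVer 2.0.0 grammar.
public class VersionCheck {
    private static final Pattern SEMVER = Pattern.compile(
        "\\d+\\.\\d+\\.\\d+(-[0-9A-Za-z.-]+)?(\\+[0-9A-Za-z.-]+)?");

    static boolean isSemver(String v) {
        return SEMVER.matcher(v).matches();
    }

    public static void main(String[] args) {
        System.out.println(isSemver("0.1.1"));        // true
        System.out.println(isSemver("1.2.3-beta.1")); // true
        System.out.println(isSemver("v1.2"));         // false: leading v, no patch
    }
}
```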
5.7.2. Manually-defined CSV fields
Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).
Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when a lack of data in any of the required fields is detected.
The following tables detail which manually-defined CSV fields are required and which are optional.
| Field | Description |
|---|---|
| metadata.name | A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1. |
| metadata.capabilities | The capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot. |
| spec.displayName | A public name to identify the Operator. |
| spec.description | A short description of the functionality of the Operator. |
| spec.keywords | Keywords describing the Operator. |
| spec.maintainers | Human or organizational entities maintaining the Operator, with a name and email. |
| spec.provider | The provider of the Operator (usually an organization), with a name. |
| spec.labels | Key-value pairs to be used by Operator internals. |
| spec.version | Semantic version of the Operator, for example 0.1.1. |
| spec.customresourcedefinitions | Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in … |
| Field | Description |
|---|---|
| spec.replaces | The name of the CSV being replaced by this CSV. |
| spec.links | URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url. |
| spec.selector | Selectors by which the Operator can pair resources in a cluster. |
| spec.icon | A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype. |
| spec.maturity | The level of maturity the software has achieved at this version. Options include planning, pre-alpha, alpha, beta, stable, mature, inactive, and deprecated. |
Further details on what data each field above should hold are found in the CSV spec.
Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
5.7.2.1. Operator metadata annotations
Operator developers can manually define certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub.
The following table lists Operator metadata annotations that can be manually defined using metadata.annotations fields.
| Field | Description |
|---|---|
| alm-examples | Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize. |
| operatorframework.io/initialization-resource | Specify a single required custom resource by adding … |
| operatorframework.io/suggested-namespace | Set a suggested namespace where the Operator should be deployed. |
| operators.openshift.io/infrastructure-features | Infrastructure features supported by the Operator. Users can view and filter by these features when discovering Operators through OperatorHub in the web console. Valid, case-sensitive values: … Important: The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the … |
| operators.openshift.io/valid-subscription | Free-form array for listing any specific subscriptions that are required to use the Operator. For example, … |
| operators.operatorframework.io/internal-objects | Hides CRDs in the UI that are not meant for user manipulation. |
Example use cases
Operator supports disconnected and proxy-aware
operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
Operator requires an OpenShift Container Platform license
operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
Operator requires a 3scale license
operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'
Operator supports disconnected and proxy-aware, and requires an OpenShift Container Platform license
operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
5.7.3. Enabling your Operator for restricted network environments
As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.
Operator requirements for supporting disconnected mode
- Replace hard-coded image references with environment variables.
In the cluster service version (CSV) of your Operator:
- List any related images, or other container images that your Operator might require to perform its functions.
- Reference all specified images by a digest (SHA) and not by a tag.
- All dependencies of your Operator must also support running in a disconnected mode.
- Your Operator must not require any off-cluster resources.
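The digest requirement above can be checked mechanically. The following sketch is an illustrative helper, not part of the Operator SDK; it tests whether an image reference is pinned by a sha256 digest rather than a mutable tag:

```java
import java.util.regex.Pattern;

// Sketch of a check that an image reference is pinned by digest
// (name@sha256:<64 hex chars>) rather than by a tag, as required
// for disconnected environments. Illustrative only.
public class DigestCheck {
    private static final Pattern DIGEST =
        Pattern.compile(".+@sha256:[0-9a-f]{64}");

    static boolean isPinnedByDigest(String imageRef) {
        return DIGEST.matcher(imageRef).matches();
    }

    public static void main(String[] args) {
        // Pinned by digest: acceptable for disconnected mode.
        System.out.println(isPinnedByDigest(
            "registry.example.com/memcached@sha256:" + "a".repeat(64))); // true
        // Pinned by tag: the tag can move, so this fails the check.
        System.out.println(isPinnedByDigest("docker.io/memcached:1.4.36-alpine")); // false
    }
}
```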
Prerequisites
- An Operator project with a CSV. The following procedure uses the Memcached Operator as an example for Go-, Ansible-, and Helm-based projects.
Procedure
Set an environment variable for the additional image references used by the Operator in the config/manager/manager.yaml file:

Example 5.10. Example config/manager/manager.yaml file

...
spec:
  ...
  spec:
    ...
    containers:
    - command:
      - /manager
      ...
      env:
      - name: <related_image_environment_variable>
        value: "<related_image_reference_with_tag>"

Replace hard-coded image references with environment variables in the relevant file for your Operator project type:
For Go-based Operator projects, add the environment variable to the controllers/memcached_controller.go file as shown in the following example:

Example 5.11. Example controllers/memcached_controller.go file

// deploymentForMemcached returns a memcached Deployment object
...
  Spec: corev1.PodSpec{
    Containers: []corev1.Container{{
-     Image:   "memcached:1.4.36-alpine",
+     Image:   os.Getenv("<related_image_environment_variable>"),
      Name:    "memcached",
      Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
      Ports: []corev1.ContainerPort{{
...

Note: The os.Getenv function returns an empty string if a variable is not set. Set the <related_image_environment_variable> before changing the file.
file as shown in the following example:roles/memcached/tasks/main.ymlExample 5.12. Example
roles/memcached/tasks/main.ymlfilespec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: "docker.io/memcached:1.4.36-alpine"1 + image: "{{ lookup('env', '<related_image_environment_variable>') }}"2 ports: - containerPort: 11211 ...For Helm-based Operator projects, add the
field to theoverrideValuesfile as shown in the following example:watches.yamlExample 5.13. Example
watches.yamlfile... - group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues:1 relatedImage: ${<related_image_environment_variable>}2 Add the value of the
field to theoverrideValuesfile as shown in the following example:helm-charts/memchached/values.yamlExample
helm-charts/memchached/values.yamlfile... relatedImage: ""Edit the chart template in the
file as shown in the following example:helm-charts/memcached/templates/deployment.yamlExample 5.14. Example
helm-charts/memcached/templates/deployment.yamlfilecontainers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: "{{ .Values.image.pullPolicy }} env:1 - name: related_image2 value: "{{ .Values.relatedImage }}"3
Add the BUNDLE_GEN_FLAGS variable definition to your Makefile with the following changes:

Example Makefile

BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)

# USE_IMAGE_DIGESTS defines if images are resolved via tags or digests
# You can enable this value if you would like to use SHA Based Digests
# To enable set flag to true
USE_IMAGE_DIGESTS ?= false
ifeq ($(USE_IMAGE_DIGESTS), true)
  BUNDLE_GEN_FLAGS += --use-image-digests
endif

...
- $(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)
+ $(KUSTOMIZE) build config/manifests | operator-sdk generate bundle $(BUNDLE_GEN_FLAGS)
...
command and setmake bundletoUSE_IMAGE_DIGESTS:true$ make bundle USE_IMAGE_DIGESTS=trueAdd the
annotation, which indicates that the Operator works in a disconnected environment:disconnectedmetadata: annotations: operators.openshift.io/infrastructure-features: '["disconnected"]'Operators can be filtered in OperatorHub by this infrastructure feature.
5.7.4. Enabling your Operator for multiple architectures and operating systems
Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.
If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:
labels:
operatorframework.io/arch.<arch>: supported
operatorframework.io/os.<os>: supported
Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.
If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:
labels:
operatorframework.io/os.linux: supported
If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:
labels:
operatorframework.io/arch.amd64: supported
If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.
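The defaulting rules above can be summarized in code. This sketch is illustrative, not OLM source; it computes the effective set of supported architectures or operating systems from a CSV label map, applying the amd64/linux defaults only when no label of that kind is present:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sketch of OLM's label-defaulting rules: a CSV with no
// operatorframework.io/arch.* label is treated as amd64-only, and one
// with no operatorframework.io/os.* label as linux-only.
public class SupportedVariants {
    static Set<String> supported(Map<String, String> labels, String prefix, String deflt) {
        Set<String> out = new TreeSet<>();
        for (Map.Entry<String, String> e : labels.entrySet()) {
            if (e.getKey().startsWith(prefix) && "supported".equals(e.getValue())) {
                out.add(e.getKey().substring(prefix.length()));
            }
        }
        if (out.isEmpty()) {
            out.add(deflt); // default applies only when no label of this kind is set
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> labels = Map.of(
            "operatorframework.io/arch.s390x", "supported",
            "operatorframework.io/os.linux", "supported");
        // An explicit arch label suppresses the amd64 default.
        System.out.println(supported(labels, "operatorframework.io/arch.", "amd64")); // [s390x]
        System.out.println(supported(labels, "operatorframework.io/os.", "linux"));   // [linux]
        // No labels at all: defaults apply.
        System.out.println(supported(Map.of(), "operatorframework.io/arch.", "amd64")); // [amd64]
    }
}
```

Note the consequence spelled out in the procedure below: once you add any arch or os label, the defaults no longer apply, so amd64 and linux must be listed explicitly if they are still supported.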
Prerequisites
- An Operator project with a CSV.
- To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
- For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.
Procedure
Add a label in the metadata.labels of your CSV for each supported architecture and operating system that your Operator supports:

labels:
  operatorframework.io/arch.s390x: supported
  operatorframework.io/os.zos: supported
  operatorframework.io/os.linux: supported
  operatorframework.io/arch.amd64: supported
5.7.4.1. Architecture and operating system support for Operators
The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:
| Architecture | String |
|---|---|
| AMD64 | amd64 |
| 64-bit PowerPC little-endian | ppc64le |
| IBM Z | s390x |
| Operating system | String |
|---|---|
| Linux | linux |
| z/OS | zos |
Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.
5.7.5. Setting a suggested namespace
Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.
As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.
Procedure
In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

metadata:
  annotations:
    operatorframework.io/suggested-namespace: <namespace>

Replace <namespace> with your suggested namespace.
5.7.6. Enabling Operator conditions
Operator Lifecycle Manager (OLM) provides Operators with a channel to communicate complex states that influence OLM behavior while managing the Operator. By default, OLM creates an OperatorCondition custom resource (CR) when it installs an Operator. Based on the conditions set in the OperatorCondition CR, the behavior of OLM changes accordingly.
To support Operator conditions, an Operator must be able to read the OperatorCondition CR created by OLM and complete the following tasks:
- Get the specific condition.
- Set the status of a specific condition.
This can be accomplished by using the operator-lib library. An Operator author can provide a controller-runtime client in their Operator for the library to access the OperatorCondition CR owned by the Operator in the cluster.
The library provides a generic Conditions interface, which has the following methods to Get and Set a conditionType in the OperatorCondition CR:

Get
  To get the specific condition, the library uses the client.Get function from controller-runtime, which requires an ObjectKey of type types.NamespacedName present in conditionAccessor.

Set
  To update the status of the specific condition, the library uses the client.Update function from controller-runtime. An error occurs if the conditionType is not present in the CRD.
The Operator is allowed to modify only the status subresource of the OperatorCondition CR. Operators can write only to the status.conditions array to include conditions.
Operator SDK v1.10.1 supports operator-lib v0.3.0.
Prerequisites
- An Operator project generated using the Operator SDK.
Procedure
To enable Operator conditions in your Operator project:
In the go.mod file of your Operator project, add github.com/operator-framework/operator-lib as a required library:

module github.com/example-inc/memcached-operator

go 1.15

require (
  k8s.io/apimachinery v0.19.2
  k8s.io/client-go v0.19.2
  sigs.k8s.io/controller-runtime v0.7.0
  github.com/operator-framework/operator-lib v0.3.0
)
-
Accepts a client.
controller-runtime -
Accepts a .
conditionType -
Returns a interface to update or add conditions.
Condition
Because OLM currently supports the
condition, you can create an interface that has methods to access theUpgradeablecondition. For example:Upgradeableimport ( ... apiv1 "github.com/operator-framework/api/pkg/operators/v1" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, "apiv1.OperatorUpgradeable") } cond, err := NewUpgradeable(cl);In this example, the
constructor is further used to create a variableNewUpgradeableof typecond. TheConditionvariable would in turn havecondandGetmethods, which can be used for handling the OLMSetcondition.Upgradeable-
Accepts a
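To make the Get/Set semantics concrete, the following self-contained Java sketch mimics the documented behavior in memory. It is not the operator-lib API, which talks to the OperatorCondition resource through a controller-runtime client; all names here are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// In-memory sketch of the Get/Set semantics described above: Get reads
// a condition status by type, and Set fails if the condition type is
// not declared, mirroring the documented error case.
public class ConditionsSketch {
    private final Map<String, String> statusByType = new HashMap<>();

    ConditionsSketch(String... declaredTypes) {
        for (String t : declaredTypes) {
            statusByType.put(t, "Unknown"); // declared but not yet reported
        }
    }

    String get(String conditionType) {
        String status = statusByType.get(conditionType);
        if (status == null) {
            throw new NoSuchElementException("condition not declared: " + conditionType);
        }
        return status;
    }

    void set(String conditionType, String status) {
        if (!statusByType.containsKey(conditionType)) {
            throw new IllegalArgumentException("condition not declared: " + conditionType);
        }
        statusByType.put(conditionType, status);
    }

    public static void main(String[] args) {
        ConditionsSketch conds = new ConditionsSketch("Upgradeable");
        conds.set("Upgradeable", "False"); // e.g. block upgrades during a migration
        System.out.println(conds.get("Upgradeable")); // False
    }
}
```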
5.7.7. Defining webhooks
Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.
The cluster service version (CSV) resource of an Operator can include a webhookdefinitions section to define the following types of webhooks:
- Admission webhooks (validating and mutating)
- Conversion webhooks
Procedure
Add a
section to thewebhookdefinitionssection of the CSV of your Operator and include any webhook definitions using aspecoftype,ValidatingAdmissionWebhook, orMutatingAdmissionWebhook. The following example contains all three types of webhooks:ConversionWebhookCSV containing webhooks
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: webhook-operator.v0.0.1
spec:
  customresourcedefinitions:
    owned:
    - kind: WebhookTest
      name: webhooktests.webhook.operators.coreos.io
      version: v1
  install:
    spec:
      deployments:
      - name: webhook-operator-webhook
        ...
    strategy: deployment
  installModes:
  - supported: false
    type: OwnNamespace
  - supported: false
    type: SingleNamespace
  - supported: false
    type: MultiNamespace
  - supported: true
    type: AllNamespaces
  webhookdefinitions:
  - type: ValidatingAdmissionWebhook
    admissionReviewVersions:
    - v1beta1
    - v1
    containerPort: 443
    targetPort: 4343
    deploymentName: webhook-operator-webhook
    failurePolicy: Fail
    generateName: vwebhooktest.kb.io
    rules:
    - apiGroups:
      - webhook.operators.coreos.io
      apiVersions:
      - v1
      operations:
      - CREATE
      - UPDATE
      resources:
      - webhooktests
    sideEffects: None
    webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest
  - type: MutatingAdmissionWebhook
    admissionReviewVersions:
    - v1beta1
    - v1
    containerPort: 443
    targetPort: 4343
    deploymentName: webhook-operator-webhook
    failurePolicy: Fail
    generateName: mwebhooktest.kb.io
    rules:
    - apiGroups:
      - webhook.operators.coreos.io
      apiVersions:
      - v1
      operations:
      - CREATE
      - UPDATE
      resources:
      - webhooktests
    sideEffects: None
    webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest
  - type: ConversionWebhook
    admissionReviewVersions:
    - v1beta1
    - v1
    containerPort: 443
    targetPort: 4343
    deploymentName: webhook-operator-webhook
    generateName: cwebhooktest.kb.io
    sideEffects: None
    webhookPath: /convert
    conversionCRDs:
    - webhooktests.webhook.operators.coreos.io
...
5.7.7.1. Webhook considerations for OLM
When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:
- The type field must be set to either ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook, or the CSV will be placed in a failed phase.
- The CSV must contain a deployment whose name is equivalent to the value supplied in the deploymentName field of the webhookdefinition.
When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.
Certificate authority constraints
OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:
- The TLS certificate file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.crt.
- The TLS key file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.key.
Admission webhook rules constraints
To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:
- Requests that target all groups
- Requests that target the operators.coreos.com group
- Requests that target the ValidatingWebhookConfigurations or MutatingWebhookConfigurations resources
Conversion webhook constraints
OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:
- CSVs featuring a conversion webhook can only support the AllNamespaces install mode.
- The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil.
- The conversion webhook defined in the CSV must target an owned CRD.
- There can only be one conversion webhook on the entire cluster for a given CRD.
5.7.8. Understanding your custom resource definitions (CRDs)
There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.
5.7.8.1. Owned CRDs
The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.
It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.
| Field | Description | Required/optional |
|---|---|---|
| name | The full name of your CRD. | Required |
| version | The version of that object API. | Required |
| kind | The machine readable name of your CRD. | Required |
| displayName | A human readable version of your CRD name, for example MongoDB Standalone. | Required |
| description | A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD. | Required |
| group | The API group that this CRD belongs to, for example mongodb.com. | Optional |
| resources | Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| descriptors | These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs. There are three types of descriptors: spec, status, and action descriptors. All descriptors accept the following fields: displayName, description, path, and x-descriptors. Also see the openshift/console project for more information on Descriptors in general. | Optional |
The following example depicts a MongoDB Standalone CRD:
Example owned CRD
- displayName: MongoDB Standalone
group: mongodb.com
kind: MongoDbStandalone
name: mongodbstandalones.mongodb.com
resources:
- kind: Service
name: ''
version: v1
- kind: StatefulSet
name: ''
version: v1beta2
- kind: Pod
name: ''
version: v1
- kind: ConfigMap
name: ''
version: v1
specDescriptors:
- description: Credentials for Ops Manager or Cloud Manager.
displayName: Credentials
path: credentials
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
- description: Project this deployment belongs to.
displayName: Project
path: project
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
- description: MongoDB version to be installed.
displayName: Version
path: version
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:label'
statusDescriptors:
- description: The status of each of the pods for the MongoDB cluster.
displayName: Pod Status
path: pods
x-descriptors:
- 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
version: v1
description: >-
MongoDB Deployment consisting of only one host. No replication of
data.
5.7.8.2. Required CRDs
Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.
An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.
Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace, and a service account is created for each Operator to create, watch, and modify the Kubernetes resources required.
| Field | Description | Required/optional |
|---|---|---|
| name | The full name of the CRD you require. | Required |
| version | The version of that object API. | Required |
| kind | The Kubernetes object kind. | Required |
| displayName | A human readable version of the CRD. | Required |
| description | A summary of how the component fits in your larger architecture. | Required |
Example required CRD
required:
- name: etcdclusters.etcd.database.coreos.com
version: v1beta2
kind: EtcdCluster
displayName: etcd Cluster
description: Represents a cluster of etcd nodes.
5.7.8.3. CRD upgrades
OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:
- All existing serving versions in the current CRD are present in the new CRD.
- All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
5.7.8.3.1. Adding a new CRD version
Procedure
To add a new version of a CRD to your Operator:
Add a new entry in the CRD resource under the versions section of your CSV.

For example, if the current CRD has a version v1alpha1 and you want to add a new version v1beta1 and mark it as the new storage version, add a new entry for v1beta1:

versions:
- name: v1alpha1
  served: true
  storage: false
- name: v1beta1 # New entry
  served: true
  storage: true
Ensure the referencing version of the CRD in the owned section of your CSV is updated if the CSV intends to use the new version:

customresourcedefinitions:
  owned:
  - name: cluster.example.com
    version: v1beta1 # Update the version
    kind: cluster
    displayName: Cluster
- Push the updated CRD and CSV to your bundle.
5.7.8.3.2. Deprecating or removing a CRD version
Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must first be disabled by setting the served field in the CRD to false.
Procedure
To deprecate and remove a specific version of a CRD:
Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:
versions:
- name: v1alpha1
  served: false # Set served to false
  storage: true
Switch the storage version to a serving version if the version to be deprecated is currently the storage version. For example:

versions:
- name: v1alpha1
  served: false
  storage: false
- name: v1beta1
  served: true
  storage: true

Note: To remove a specific version that is or was the storage version from a CRD, that version must be removed from the storedVersions in the status of the CRD. OLM will attempt to do this for you if it detects a stored version no longer exists in the new CRD.
version from a CRD, that version must be removed from thestoragein the status of the CRD. OLM will attempt to do this for you if it detects a stored version no longer exists in the new CRD.storedVersion- Upgrade the CRD with the above changes.
In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:
versions:
- name: v1beta1
  served: true
  storage: true
Ensure the referencing CRD version in the section of your CSV is updated accordingly if that version is removed from the CRD.
owned
5.7.8.4. CRD templates
Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples.

The annotation consists of a list of the kind, for example, the CRD name and the corresponding metadata and spec of the Kubernetes object.
The following full example provides templates for EtcdCluster, EtcdBackup, and EtcdRestore:
metadata:
annotations:
alm-examples: >-
[{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]
5.7.8.5. Hiding internal objects
It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.
As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects annotation to the cluster service version (CSV) of your Operator.
Procedure
- Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec block of your CR, if applicable to your Operator.
- Add the operators.operatorframework.io/internal-objects annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:

Internal object annotation
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
  annotations:
    operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' # Set any internal CRDs as an array of strings
...
5.7.8.6. Initializing required custom resources
An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.
As an Operator developer, you can specify a single required custom resource by adding the operatorframework.io/initialization-resource annotation to the cluster service version (CSV) of your Operator, along with a complete YAML definition of the required resource.
If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.
Procedure
Add the operatorframework.io/initialization-resource annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of a StorageCluster resource and provides a full YAML definition:

Initialization resource annotation
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
  annotations:
    operatorframework.io/initialization-resource: |-
      {
        "apiVersion": "ocs.openshift.io/v1",
        "kind": "StorageCluster",
        "metadata": {
          "name": "example-storagecluster"
        },
        "spec": {
          "manageNodes": false,
          "monPVCTemplate": {
            "spec": {
              "accessModes": [
                "ReadWriteOnce"
              ],
              "resources": {
                "requests": {
                  "storage": "10Gi"
                }
              },
              "storageClassName": "gp2"
            }
          },
          "storageDeviceSets": [
            {
              "count": 3,
              "dataPVCTemplate": {
                "spec": {
                  "accessModes": [
                    "ReadWriteOnce"
                  ],
                  "resources": {
                    "requests": {
                      "storage": "1Ti"
                    }
                  },
                  "storageClassName": "gp2",
                  "volumeMode": "Block"
                }
              },
              "name": "example-deviceset",
              "placement": {},
              "portable": true,
              "resources": {}
            }
          ]
        }
      }
...
5.7.9. Understanding your API services
As with CRDs, there are two types of API services that your Operator may use: owned and required.
5.7.9.1. Owned API services
When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server that backs it and the group/version/kind (GVK) it provides.
An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.
| Field | Description | Required/optional |
|---|---|---|
| group | Group that the API service provides. | Required |
| version | Version of the API service. | Required |
| kind | A kind that the API service is expected to provide. | Required |
| name | The plural name for the API service provided. | Required |
| deploymentName | Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the install strategy of the CSV for a deployment spec with a matching name and, if one is not found, does not allow the CSV to transition to the install ready phase. | Required |
| displayName | A human readable version of your API service name. | Required |
| description | A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
| resources | Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| descriptors | Essentially the same as for owned CRDs. | Optional |
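Unlike the CRD sections, no example is given above for an owned API service entry, so the following sketch illustrates the shape these fields take in a CSV. All values here (the metrics.example.com group, MetricsReport kind, and example-apiserver deployment name) are hypothetical:

```yaml
apiservicedefinitions:
  owned:
  - group: metrics.example.com        # hypothetical API group
    version: v1alpha1
    kind: MetricsReport
    name: metricsreports
    deploymentName: example-apiserver # must match a deployment name in the CSV install strategy
    displayName: Metrics Report
    description: Aggregated metrics served by the example extension API server.
```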
5.7.9.1.1. API service resource creation
Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:
- Service pod selectors are copied from the CSV deployment matching the deploymentName field of the API service description.
DeploymentName - A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.
5.7.9.1.2. API service serving certificates
OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.

The certificate is stored as a type kubernetes.io/tls secret in the deployment namespace, and a volume named apiservice-cert is automatically appended to the volumes section of the deployment in your CSV matching the deploymentName field of the API service description.

If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates, and any existing volume mounts with the same path are replaced.
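As a sketch of the custom-path case described above, a deployment in your CSV could pre-declare the apiservice-cert volume mount at a non-default location. The container name and mount path here are hypothetical:

```yaml
spec:
  template:
    spec:
      containers:
      - name: example-apiserver       # hypothetical container name
        volumeMounts:
        - name: apiservice-cert       # matches the volume name OLM appends
          mountPath: /my/custom/certs # hypothetical custom path
```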
5.7.9.2. Required API services
OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.
| Field | Description | Required/optional |
|---|---|---|
| group | Group that the API service provides. | Required |
| version | Version of the API service. | Required |
| kind | A kind that the API service is expected to provide. | Required |
| displayName | A human readable version of your API service name. | Required |
| description | A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
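Analogous to the required CRD example earlier, a required API service entry might look like the following sketch. The group, kind, and description are hypothetical:

```yaml
apiservicedefinitions:
  required:
  - group: metrics.example.com   # hypothetical API group provided by another Operator
    version: v1alpha1
    kind: MetricsReport
    displayName: Metrics Report
    description: Aggregated metrics that this Operator consumes.
```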
5.8. Working with bundle images
You can use the Operator SDK to package, deploy, and upgrade Operators in the bundle format for use on Operator Lifecycle Manager (OLM).
5.8.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.11+ installed
oc - Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
command creates the following files and directories in your Operator project:make bundle-
A bundle manifests directory named that contains a
bundle/manifestsobjectClusterServiceVersion -
A bundle metadata directory named
bundle/metadata -
All custom resource definitions (CRDs) in a directory
config/crd -
A Dockerfile
bundle.Dockerfile
These files are then automatically validated by using
to ensure the on-disk bundle representation is correct.operator-sdk bundle validate-
A bundle manifests directory named
Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.8.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.11)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    -n <namespace> \
    <registry>/<user>/<bundle_image_name>:<tag>

The run bundle command creates a valid file-based catalog and installs the Operator bundle on your cluster using OLM.

Optional: By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.

If you do not specify an image, the command uses quay.io/operator-framework/opm:latest as the default index image. If you specify an image, the command uses the bundle image itself as the index image.
Important: As of OpenShift Container Platform 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.
5.8.3. Publishing a catalog containing a bundled Operator
To install and manage Operators, Operator Lifecycle Manager (OLM) requires that Operator bundles are listed in an index image, which is referenced by a catalog on the cluster. As an Operator author, you can use the Operator SDK to create an index containing the bundle for your Operator and all of its dependencies. This is useful for testing on remote clusters and publishing to container registries.
The Operator SDK uses the opm CLI to facilitate index image creation. Experience with the opm command is not required. For more advanced use cases, the opm command can be used directly or through the Operator SDK.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.11)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Run the following make command in your Operator project directory to build an index image containing your Operator bundle:

$ make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>

where the CATALOG_IMG argument references a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Push the built index image to a repository:
$ make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>

Tip: You can use Operator SDK make commands together if you would rather perform multiple actions in sequence at once. For example, if you had not yet built a bundle image for your Operator project, you can build and push both a bundle image and an index image with the following syntax:

$ make bundle-build bundle-push catalog-build catalog-push \
    BUNDLE_IMG=<bundle_image_pull_spec> \
    CATALOG_IMG=<index_image_pull_spec>

Alternatively, you can set the IMAGE_TAG_BASE field in your Makefile to an existing repository:

IMAGE_TAG_BASE=quay.io/example/my-operator

You can then use the following syntax to build and push images with automatically-generated names, such as quay.io/example/my-operator-bundle:v0.0.1 for the bundle image and quay.io/example/my-operator-catalog:v0.0.1 for the index image:

$ make bundle-build bundle-push catalog-build catalog-push

Define a CatalogSource object that references the index image you just generated, and then create the object by using the oc apply command or web console:

Example CatalogSource YAML
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-memcached
  namespace: default
spec:
  displayName: My Test
  publisher: Company
  sourceType: grpc
  image: quay.io/example/memcached-catalog:v0.0.1 # Set image to the image pull spec you used previously with the CATALOG_IMG argument
  updateStrategy:
    registryPoll:
      interval: 10m
Check the catalog source:
$ oc get catalogsource

Example output

NAME           DISPLAY   TYPE   PUBLISHER   AGE
cs-memcached   My Test   grpc   Company     4h31m
Verification
Install the Operator using your catalog:
Define an OperatorGroup object and create it by using the oc apply command or web console:

Example OperatorGroup YAML

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-test
  namespace: default
spec:
  targetNamespaces:
  - default

Define a Subscription object and create it by using the oc apply command or web console:

Example Subscription YAML

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: catalogtest
  namespace: default
spec:
  channel: "alpha"
  installPlanApproval: Manual
  name: catalog
  source: cs-memcached
  sourceNamespace: default
  startingCSV: memcached-operator.v0.0.1
Verify the installed Operator is running:
Check the Operator group:
$ oc get og

Example output

NAME      AGE
my-test   4h40m

Check the cluster service version (CSV):
$ oc get csv

Example output

NAME                        DISPLAY   VERSION   REPLACES   PHASE
memcached-operator.v0.0.1   Test      0.0.1                Succeeded

Check the pods for the Operator:
$ oc get pods

Example output

NAME                                                              READY   STATUS      RESTARTS   AGE
9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6   0/1     Completed   0          4h33m
catalog-controller-manager-7fd5b7b987-69s4n                       2/2     Running     0          4h32m
cs-memcached-7622r                                                1/1     Running     0          4h33m
5.8.4. Testing an Operator upgrade on Operator Lifecycle Manager
You can quickly test upgrading your Operator by using Operator Lifecycle Manager (OLM) integration in the Operator SDK, without requiring you to manually manage index images and catalog sources.
The run bundle-upgrade subcommand automates the process of upgrading an installed Operator to a later version.
Prerequisites
- Operator installed with OLM either by using the run bundle subcommand or with traditional OLM installation
run bundle - A bundle image that represents a later version of the installed Operator
Procedure
If your Operator has not already been installed with OLM, install the earlier version either by using the run bundle subcommand or with traditional OLM installation.

Note: If the earlier version of the bundle was installed traditionally using OLM, the newer bundle that you intend to upgrade to must not exist in the index image referenced by the catalog source. Otherwise, running the run bundle-upgrade subcommand will cause the registry pod to fail because the newer bundle is already referenced by the index that provides the package and cluster service version (CSV).

For example, you can use the following run bundle subcommand for a Memcached Operator by specifying the earlier bundle image:

$ operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1
INFO[0006] Creating a File-Based Catalog of the bundle "quay.io/demo/memcached-operator:v0.0.1"
INFO[0008] Generated a valid File-Based Catalog
INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v0-0-1
INFO[0012] Created CatalogSource: memcached-operator-catalog
INFO[0012] OperatorGroup "operator-sdk-og" created
INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub
INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub
INFO[0015] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to reach 'Succeeded' phase
INFO[0015] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to appear
INFO[0026] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Pending
INFO[0028] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Installing
INFO[0059] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Succeeded
INFO[0059] OLM has successfully installed "memcached-operator.v0.0.1"

Upgrade the installed Operator by specifying the bundle image for the later Operator version:
$ operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2Example output
INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project
INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project
INFO[0008] Generated a valid Upgraded File-Based Catalog
INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2
INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations
INFO[0010] Deleted previous registry pod with name "quay-io-demo-memcached-operator-v0-0-1"
INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub
INFO[0042] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.2" to reach 'Succeeded' phase
INFO[0019] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Pending
INFO[0042] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: InstallReady
INFO[0043] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Installing
INFO[0044] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Succeeded
INFO[0044] Successfully upgraded to "memcached-operator.v0.0.2"

Clean up the installed Operators:
$ operator-sdk cleanup memcached-operator
5.8.5. Controlling Operator compatibility with OpenShift Container Platform versions
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If your Operator is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is upgraded to the Kubernetes version where the API has been removed.
As an Operator author, it is strongly recommended that you review the Deprecated API Migration Guide in Kubernetes documentation and keep your Operator projects up to date to avoid using deprecated and removed APIs. Ideally, you should update your Operator before the release of a future version of OpenShift Container Platform that would make the Operator incompatible.
When an API is removed from an OpenShift Container Platform version, Operators running on that cluster version that are still using removed APIs will no longer work properly. As an Operator author, you should plan to update your Operator projects to accommodate API deprecation and removal to avoid interruptions for users of your Operator.
You can check the event alerts of your Operators to find whether there are any warnings about APIs currently in use. The following alerts fire when they detect an API in use that will be removed in the next release:
- APIRemovedInNextReleaseInUse - APIs that will be removed in the next OpenShift Container Platform release.
- APIRemovedInNextEUSReleaseInUse - APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.
If a cluster administrator has installed your Operator, before they upgrade to the next version of OpenShift Container Platform, they must ensure a version of your Operator is installed that is compatible with that next cluster version. While it is recommended that you update your Operator projects to no longer use deprecated or removed APIs, if you still need to publish your Operator bundles with removed APIs for continued use on earlier versions of OpenShift Container Platform, ensure that the bundle is configured accordingly.
The following procedure helps prevent administrators from installing versions of your Operator on an incompatible version of OpenShift Container Platform. These steps also prevent administrators from upgrading to a newer version of OpenShift Container Platform that is incompatible with the version of your Operator that is currently installed on their cluster.
This procedure is also useful when you know that the current version of your Operator will not work well, for any reason, on a specific OpenShift Container Platform version. By defining the cluster versions where the Operator should be distributed, you ensure that the Operator does not appear in a catalog of a cluster version which is outside of the allowed range.
Operators that use deprecated APIs can adversely impact critical workloads when cluster administrators upgrade to a future version of OpenShift Container Platform where the API is no longer supported. If your Operator is using deprecated APIs, you should configure the following settings in your Operator project as soon as possible.
Prerequisites
- An existing Operator project
Procedure
If you know that a specific bundle of your Operator is not supported and will not work correctly on OpenShift Container Platform later than a certain cluster version, configure the maximum version of OpenShift Container Platform that your Operator is compatible with. In your Operator project’s cluster service version (CSV), set the olm.maxOpenShiftVersion annotation to prevent administrators from upgrading their cluster before upgrading the installed Operator to a compatible version.

Important: Use the olm.maxOpenShiftVersion annotation only if your Operator bundle version cannot work in later versions. Be aware that cluster administrators cannot upgrade their clusters while your solution is installed. If you do not provide a later version and a valid upgrade path, cluster administrators may uninstall your Operator and then upgrade the cluster version.

Example CSV with olm.maxOpenShiftVersion annotation

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]' 1

1 Specify the maximum cluster version of OpenShift Container Platform that your Operator is compatible with. For example, setting value to 4.9 prevents cluster upgrades to OpenShift Container Platform versions later than 4.9 when this bundle is installed on a cluster.
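The value of the olm.properties annotation is a JSON array serialized into a string, which is easy to get wrong when editing by hand. The following Go sketch is illustrative only: the Property type and helper function are assumptions for this example and are not part of the Operator SDK or OLM APIs.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Property mirrors the shape of one entry in the "olm.properties"
// annotation: a property type name and a string value.
type Property struct {
	Type  string `json:"type"`
	Value string `json:"value"`
}

// maxOpenShiftVersionAnnotation renders the JSON array expected as the
// value of the "olm.properties" CSV annotation.
func maxOpenShiftVersionAnnotation(clusterVersion string) (string, error) {
	props := []Property{{Type: "olm.maxOpenShiftVersion", Value: clusterVersion}}
	b, err := json.Marshal(props)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	v, _ := maxOpenShiftVersionAnnotation("4.9")
	fmt.Println(v) // [{"type":"olm.maxOpenShiftVersion","value":"4.9"}]
}
```

Generating the string this way avoids quoting mistakes in the nested JSON when templating CSV manifests.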
If your bundle is intended for distribution in a Red Hat-provided Operator catalog, configure the compatible versions of OpenShift Container Platform for your Operator by setting the following properties. This configuration ensures your Operator is only included in catalogs that target compatible versions of OpenShift Container Platform:
Note: This step is only valid when publishing Operators in Red Hat-provided catalogs. If your bundle is only intended for distribution in a custom catalog, you can skip this step. For more details, see "Red Hat-provided Operator catalogs".
Set the com.redhat.openshift.versions annotation in your project’s bundle/metadata/annotations.yaml file:

Example bundle/metadata/annotations.yaml file with compatible versions

com.redhat.openshift.versions: "v4.7-v4.9" 1

1 Set to a range or single version.
To prevent your bundle from being carried on to an incompatible version of OpenShift Container Platform, ensure that the index image is generated with the proper com.redhat.openshift.versions label in your Operator’s bundle image. For example, if your project was generated using the Operator SDK, update the bundle.Dockerfile file:

Example bundle.Dockerfile with compatible versions

LABEL com.redhat.openshift.versions="<versions>" 1

1 Set to a range or single version, for example, v4.7-v4.9. This setting defines the cluster versions where the Operator should be distributed, and the Operator does not appear in a catalog of a cluster version outside of that range.
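The range semantics of the com.redhat.openshift.versions label can be modeled with a small helper. The following Go sketch is an illustrative assumption, not Operator Framework code: it assumes major version 4, treats a single version as an exact match (the real label grammar is richer, including an =vX.Y pinning form), and exists only to make the range behavior concrete.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMinor extracts the minor version from a label such as "v4.9".
// Simplification: the major version is assumed to be 4 and is ignored.
func parseMinor(v string) (int, error) {
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 2)
	if len(parts) != 2 {
		return 0, fmt.Errorf("malformed version %q", v)
	}
	return strconv.Atoi(parts[1])
}

// inRange reports whether clusterVersion (e.g. "v4.8") falls inside a
// label that is either a single version ("v4.8", treated here as an
// exact match for simplicity) or a range ("v4.7-v4.9").
func inRange(label, clusterVersion string) (bool, error) {
	cv, err := parseMinor(clusterVersion)
	if err != nil {
		return false, err
	}
	bounds := strings.SplitN(label, "-", 2)
	lo, err := parseMinor(bounds[0])
	if err != nil {
		return false, err
	}
	hi := lo
	if len(bounds) == 2 {
		if hi, err = parseMinor(bounds[1]); err != nil {
			return false, err
		}
	}
	return cv >= lo && cv <= hi, nil
}

func main() {
	ok, _ := inRange("v4.7-v4.9", "v4.8")
	fmt.Println(ok) // true: v4.8 falls inside v4.7-v4.9
}
```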
You can now bundle a new version of your Operator and publish the updated version to a catalog for distribution.
5.9. Complying with pod security admission
Pod security admission is an implementation of the Kubernetes pod security standards. Pod security admission restricts the behavior of pods. Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run.
If your Operator project does not require escalated permissions to run, you can ensure your workloads run in namespaces set to the restricted pod security level. If your Operator project requires escalated permissions to run, you must set the following security context configurations:

- The allowed pod security admission level for the Operator’s namespace
- The allowed security context constraints (SCC) for the workload’s service account
For more information, see Understanding and managing pod security admission.
5.9.1. Security context constraint synchronization with pod security standards
OpenShift Container Platform includes Kubernetes pod security admission. Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits.

In addition to the global pod security admission control configuration, a controller exists that applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts in a given namespace.

Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. You can enable pod security admission synchronization on other namespaces as necessary.

The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Pod security admission warn and audit labels are set to the most privileged pod security profile available in the namespace to prevent displaying warnings and logging audit events when pods are created.

Namespace labeling is based on consideration of namespace-local service account privileges.

Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling.
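The labeling behavior described above amounts to selecting the most privileged profile that any service account in the namespace can use. The following Go sketch is an illustrative model of that selection logic, not the actual synchronization controller code:

```go
package main

import "fmt"

// rank orders the pod security profiles from least to most privileged.
var rank = map[string]int{"restricted": 0, "baseline": 1, "privileged": 2}

// mostPrivileged returns the most privileged profile among those that
// service accounts in a namespace are entitled to, which models the
// value the controller would use for the warn and audit labels.
func mostPrivileged(profiles []string) string {
	result := "restricted"
	for _, p := range profiles {
		if rank[p] > rank[result] {
			result = p
		}
	}
	return result
}

func main() {
	fmt.Println(mostPrivileged([]string{"restricted", "baseline"})) // baseline
}
```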
5.9.2. Ensuring Operator workloads run in namespaces set to the restricted pod security level
To ensure your Operator project can run on a wide variety of deployments and environments, configure the Operator’s workloads to run in namespaces set to the restricted pod security level.

You must leave the runAsUser field empty in your workloads. If your image requires a specific user, it cannot be run under restricted security context constraints (SCC) and restricted pod security enforcement.
Procedure
To configure Operator workloads to run in namespaces set to the restricted pod security level, edit your Operator’s namespace definition similar to the following examples:

Important: It is recommended that you set the seccomp profile in your Operator’s namespace definition. However, setting the seccomp profile is not supported in OpenShift Container Platform 4.10.
For Operator projects that must run in only OpenShift Container Platform 4.11 and later, edit your Operator’s namespace definition similar to the following example:
Example config/manager/manager.yaml file

...
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault 1
    runAsNonRoot: true
  containers:
  - name: <operator_workload_container>
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
...

1 By setting the seccomp profile type to RuntimeDefault, the SCC defaults to the pod security profile of the namespace.
For Operator projects that must also run in OpenShift Container Platform 4.10, edit your Operator’s namespace definition similar to the following example:
Example config/manager/manager.yaml file

...
spec:
  securityContext: 1
    runAsNonRoot: true
  containers:
  - name: <operator_workload_container>
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
...

1 Leaving the seccomp profile type unset ensures your Operator project can run in OpenShift Container Platform 4.10.
5.9.3. Managing pod security admission for Operator workloads that require escalated permissions
If your Operator project requires escalated permissions to run, you must edit your Operator’s cluster service version (CSV).
Procedure
Set the security context configuration to the required permission level in your Operator’s CSV, similar to the following example:
Example <operator_name>.clusterserviceversion.yaml file with network administrator privileges

...
containers:
- name: my-container
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      add:
      - "NET_ADMIN"
...

Set the service account privileges that allow your Operator’s workloads to use the required security context constraints (SCC), similar to the following example:
Example <operator_name>.clusterserviceversion.yaml file

...
install:
  spec:
    clusterPermissions:
    - rules:
      - apiGroups:
        - security.openshift.io
        resourceNames:
        - privileged
        resources:
        - securitycontextconstraints
        verbs:
        - use
      serviceAccountName: default
...

Edit your Operator’s CSV description to explain why your Operator project requires escalated permissions, similar to the following example:
Example <operator_name>.clusterserviceversion.yaml file

...
spec:
  apiservicedefinitions:{}
  ...
  description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.
5.10. Validating Operators using the scorecard tool
As an Operator author, you can use the scorecard tool in the Operator SDK to do the following tasks:
- Validate that your Operator project is free of syntax errors and packaged correctly
- Review suggestions about ways you can improve your Operator
5.10.1. About the scorecard tool
While the Operator SDK bundle validate command can validate local bundle directories and remote bundle images for content and structure, the scorecard command can run tests on your Operator based on a configuration file and test images.
The scorecard assumes it is run with access to a configured Kubernetes cluster, such as OpenShift Container Platform. The scorecard runs each test within a pod, from which pod logs are aggregated and test results are sent to the console. The scorecard has built-in basic and Operator Lifecycle Manager (OLM) tests and also provides a means to execute custom test definitions.
Scorecard workflow
- Create all resources required by any related custom resources (CRs) and the Operator
- Create a proxy container in the deployment of the Operator to record calls to the API server and run tests
- Examine parameters in the CRs
The scorecard tests make no assumptions as to the state of the Operator being tested. Creating Operators and CRs for an Operator is beyond the scope of the scorecard itself. Scorecard tests can, however, create whatever resources they require if the tests are designed for resource creation.
scorecard command syntax
$ operator-sdk scorecard <bundle_dir_or_image> [flags]
The scorecard requires a positional argument for either the on-disk path to your Operator bundle or the name of a bundle image.
For further information about the flags, run:
$ operator-sdk scorecard -h
5.10.2. Scorecard configuration
The scorecard tool uses a configuration that allows you to configure internal plugins, as well as several global configuration options. Tests are driven by a configuration file named config.yaml, which is generated by the make bundle command and placed in your bundle/ directory:
./bundle
...
└── tests
└── scorecard
└── config.yaml
Example scorecard configuration file
kind: Configuration
apiVersion: scorecard.operatorframework.io/v1alpha3
metadata:
name: config
stages:
- parallel: true
tests:
- image: quay.io/operator-framework/scorecard-test:v1.22.2
entrypoint:
- scorecard-test
- basic-check-spec
labels:
suite: basic
test: basic-check-spec-test
- image: quay.io/operator-framework/scorecard-test:v1.22.2
entrypoint:
- scorecard-test
- olm-bundle-validation
labels:
suite: olm
test: olm-bundle-validation-test
The configuration file defines each test that scorecard can execute. The following fields of the scorecard configuration file define each test:

| Configuration field | Description |
|---|---|
| image | Test container image name that implements a test |
| entrypoint | Command and arguments that are invoked in the test image to execute a test |
| labels | Scorecard-defined or custom labels that select which tests to run |
5.10.3. Built-in scorecard tests
The scorecard ships with pre-defined tests that are arranged into suites: the basic test suite and the Operator Lifecycle Manager (OLM) suite.
| Test | Description | Short name |
|---|---|---|
| Spec Block Exists | This test checks the custom resource (CR) created in the cluster to make sure that all CRs have a spec block. | basic-check-spec-test |
| Test | Description | Short name |
|---|---|---|
| Bundle Validation | This test validates the bundle manifests found in the bundle that is passed into scorecard. If the bundle contents contain errors, then the test result output includes the validator log as well as error messages from the validation library. | olm-bundle-validation-test |
| Provided APIs Have Validation | This test verifies that the custom resource definitions (CRDs) for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR. | olm-crds-have-validation-test |
| Owned CRDs Have Resources Listed | This test makes sure that the CRDs for each CR provided via the cr-manifest option have a resources subsection in the owned CRDs section of the CSV. | olm-crds-have-resources-test |
| Spec Fields With Descriptors | This test verifies that every field in the CRs spec sections has a corresponding descriptor listed in the CSV. | olm-spec-descriptors-test |
| Status Fields With Descriptors | This test verifies that every field in the CRs status sections has a corresponding descriptor listed in the CSV. | olm-status-descriptors-test |
5.10.4. Running the scorecard tool
A default set of Kustomize files are generated by the Operator SDK after running the init command. These files include a default scorecard configuration, which is written to the bundle/tests/scorecard/config.yaml file when the bundle is generated.
Prerequisites
- Operator project generated by using the Operator SDK
Procedure
Generate or regenerate your bundle manifests and metadata for your Operator:

$ make bundle

This command automatically adds scorecard annotations to your bundle metadata, which is used by the scorecard command to run tests.

Run the scorecard against the on-disk path to your Operator bundle or the name of a bundle image:
$ operator-sdk scorecard <bundle_dir_or_image>
5.10.5. Scorecard output
The scorecard command supports an --output flag that specifies one of two test result output formats: text or json.
Example 5.15. Example JSON output snippet
{
"apiVersion": "scorecard.operatorframework.io/v1alpha3",
"kind": "TestList",
"items": [
{
"kind": "Test",
"apiVersion": "scorecard.operatorframework.io/v1alpha3",
"spec": {
"image": "quay.io/operator-framework/scorecard-test:v1.22.2",
"entrypoint": [
"scorecard-test",
"olm-bundle-validation"
],
"labels": {
"suite": "olm",
"test": "olm-bundle-validation-test"
}
},
"status": {
"results": [
{
"name": "olm-bundle-validation",
"log": "time=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found metadata directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Found annotations file\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test\n",
"state": "pass"
}
]
}
}
]
}
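Because the JSON format follows the v1alpha3 layout, scorecard results can be consumed programmatically, for example in a CI gate. The following Go sketch uses simplified structs that mirror only the fields needed here; the canonical type definitions live in github.com/operator-framework/api, and the failedTests helper is an assumption for this example.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TestList is a simplified mirror of the v1alpha3 scorecard output,
// keeping only the result name and state fields.
type TestList struct {
	Items []struct {
		Status struct {
			Results []struct {
				Name  string `json:"name"`
				State string `json:"state"`
			} `json:"results"`
		} `json:"status"`
	} `json:"items"`
}

// failedTests returns the names of all results whose state is not "pass".
func failedTests(raw []byte) ([]string, error) {
	var list TestList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var failed []string
	for _, item := range list.Items {
		for _, r := range item.Status.Results {
			if r.State != "pass" {
				failed = append(failed, r.Name)
			}
		}
	}
	return failed, nil
}

func main() {
	raw := []byte(`{"items":[{"status":{"results":[{"name":"olm-bundle-validation","state":"pass"}]}}]}`)
	failed, err := failedTests(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(failed)) // 0 when every test passed
}
```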
Example 5.16. Example text output snippet
--------------------------------------------------------------------------------
Image: quay.io/operator-framework/scorecard-test:v1.22.2
Entrypoint: [scorecard-test olm-bundle-validation]
Labels:
"suite":"olm"
"test":"olm-bundle-validation-test"
Results:
Name: olm-bundle-validation
State: pass
Log:
time="2020-07-15T03:19:02Z" level=debug msg="Found manifests directory" name=bundle-test
time="2020-07-15T03:19:02Z" level=debug msg="Found metadata directory" name=bundle-test
time="2020-07-15T03:19:02Z" level=debug msg="Getting mediaType info from manifests directory" name=bundle-test
time="2020-07-15T03:19:02Z" level=info msg="Found annotations file" name=bundle-test
time="2020-07-15T03:19:02Z" level=info msg="Could not find optional dependencies file" name=bundle-test
The output format spec matches the Test type layout.
5.10.6. Selecting tests
Scorecard tests are selected by setting the --selector CLI flag to a list of test names.

Tests are run serially with test results being aggregated by the scorecard and written to standard output, or stdout.
Procedure
To select a single test, for example basic-check-spec-test, specify the test by using the --selector flag:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector=test=basic-check-spec-test

To select a suite of tests, for example olm, specify a label that is used by all of the OLM tests:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector=suite=olm

To select multiple tests, specify the test names by using the selector flag using the following syntax:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'
5.10.7. Enabling parallel testing
As an Operator author, you can define separate stages for your tests using the scorecard configuration file. Stages run sequentially in the order they are defined in the configuration file. A stage contains a list of tests and a configurable parallel setting.

By default, or when a stage explicitly sets parallel to false, tests in a stage are run sequentially in the order they are defined in the configuration file. Running tests one at a time is helpful to guarantee that no two tests interact and conflict with each other.

However, if tests are designed to be fully isolated, they can be parallelized.
Procedure
To run a set of isolated tests in parallel, include them in the same stage and set parallel to true:

apiVersion: scorecard.operatorframework.io/v1alpha3
kind: Configuration
metadata:
  name: config
stages:
- parallel: true 1
  tests:
  - entrypoint:
    - scorecard-test
    - basic-check-spec
    image: quay.io/operator-framework/scorecard-test:v1.22.2
    labels:
      suite: basic
      test: basic-check-spec-test
  - entrypoint:
    - scorecard-test
    - olm-bundle-validation
    image: quay.io/operator-framework/scorecard-test:v1.22.2
    labels:
      suite: olm
      test: olm-bundle-validation-test

1 Enables parallel testing
All tests in a parallel stage are executed simultaneously, and scorecard waits for all of them to finish before proceeding to the next stage. This can make your tests run much faster.
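The stage semantics can be made concrete with a small Go model: tests in a parallel stage start together, and the stage completes only once every test has finished. This is an illustrative sketch of the behavior described above, not scorecard's implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// runStage runs every test in the stage, concurrently when parallel is
// true, and waits for all of them before returning — mirroring how a
// stage must finish before the next stage starts.
func runStage(tests []func() string, parallel bool) []string {
	results := make([]string, len(tests))
	if !parallel {
		// Sequential stage: run tests one at a time, in order.
		for i, t := range tests {
			results[i] = t()
		}
		return results
	}
	var wg sync.WaitGroup
	for i, t := range tests {
		wg.Add(1)
		go func(i int, t func() string) {
			defer wg.Done()
			results[i] = t()
		}(i, t)
	}
	wg.Wait() // the stage ends only when every test has finished
	return results
}

func main() {
	stage := []func() string{
		func() string { return "basic-check-spec-test: pass" },
		func() string { return "olm-bundle-validation-test: pass" },
	}
	for _, r := range runStage(stage, true) {
		fmt.Println(r)
	}
}
```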
5.10.8. Custom scorecard tests
The scorecard tool can run custom tests that follow these mandated conventions:
- Tests are implemented within a container image
- Tests accept an entrypoint which includes a command and arguments
- Tests produce v1alpha3 scorecard output in JSON format with no extraneous logging in the test output
- Tests can obtain the bundle contents at a shared mount point of /bundle
- Tests can access the Kubernetes API using an in-cluster client connection
Writing custom tests in other programming languages is possible if the test image follows the above guidelines.
The following example shows a custom test image written in Go:
Example 5.17. Example custom scorecard test
// Copyright 2020 The Operator-SDK Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"encoding/json"
"fmt"
"log"
"os"
scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
apimanifests "github.com/operator-framework/api/pkg/manifests"
)
// This is the custom scorecard test example binary
// As with the Red Hat scorecard test image, the bundle that is under
// test is expected to be mounted so that tests can inspect the
// bundle contents as part of their test implementations.
// The actual test to be run is named, and that name is passed
// as an argument to this binary. This argument mechanism allows
// this binary to run various tests all from within a single
// test image.
const PodBundleRoot = "/bundle"
func main() {
entrypoint := os.Args[1:]
if len(entrypoint) == 0 {
log.Fatal("Test name argument is required")
}
// Read the pod's untar'd bundle from a well-known path.
cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot)
if err != nil {
log.Fatal(err.Error())
}
var result scapiv1alpha3.TestStatus
// Names of the custom tests which would be passed in the
// `operator-sdk` command.
switch entrypoint[0] {
case CustomTest1Name:
result = CustomTest1(cfg)
case CustomTest2Name:
result = CustomTest2(cfg)
default:
result = printValidTests()
}
// Convert the scapiv1alpha3.TestStatus result to JSON.
prettyJSON, err := json.MarshalIndent(result, "", " ")
if err != nil {
log.Fatal("Failed to generate json", err)
}
fmt.Printf("%s\n", string(prettyJSON))
}
// printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are.
func printValidTests() scapiv1alpha3.TestStatus {
result := scapiv1alpha3.TestResult{}
result.State = scapiv1alpha3.FailState
result.Errors = make([]string, 0)
result.Suggestions = make([]string, 0)
str := fmt.Sprintf("Valid tests for this image include: %s %s",
CustomTest1Name,
CustomTest2Name)
result.Errors = append(result.Errors, str)
return scapiv1alpha3.TestStatus{
Results: []scapiv1alpha3.TestResult{result},
}
}
const (
CustomTest1Name = "customtest1"
CustomTest2Name = "customtest2"
)
// Define any operator specific custom tests here.
// CustomTest1 and CustomTest2 are example test functions. Relevant operator specific
// test logic is to be implemented similarly.
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
r := scapiv1alpha3.TestResult{}
r.Name = CustomTest1Name
r.State = scapiv1alpha3.PassState
r.Errors = make([]string, 0)
r.Suggestions = make([]string, 0)
almExamples := bundle.CSV.GetAnnotations()["alm-examples"]
if almExamples == "" {
fmt.Println("no alm-examples in the bundle CSV")
}
return wrapResult(r)
}
func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
r := scapiv1alpha3.TestResult{}
r.Name = CustomTest2Name
r.State = scapiv1alpha3.PassState
r.Errors = make([]string, 0)
r.Suggestions = make([]string, 0)
almExamples := bundle.CSV.GetAnnotations()["alm-examples"]
if almExamples == "" {
fmt.Println("no alm-examples in the bundle CSV")
}
return wrapResult(r)
}
func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus {
return scapiv1alpha3.TestStatus{
Results: []scapiv1alpha3.TestResult{r},
}
}
5.11. Validating Operator bundles
As an Operator author, you can run the bundle validate command to validate the content and format of an Operator bundle.
5.11.1. About the bundle validate command
While the Operator SDK scorecard command can run tests on your Operator based on a configuration file and test images, the bundle validate subcommand can validate local bundle directories and remote bundle images for content and structure.
bundle validate command syntax
$ operator-sdk bundle validate <bundle_dir_or_image> <flags>
The bundle validate command can test local bundle directories or remote bundle images, and it also runs automatically when you build a bundle with the make bundle command.
Bundle images are pulled from a remote registry and built locally before they are validated. Local bundle directories must contain Operator metadata and manifests. The bundle metadata and manifests must have a structure similar to the following bundle layout:
Example bundle layout
./bundle
├── manifests
│ ├── cache.my.domain_memcacheds.yaml
│ └── memcached-operator.clusterserviceversion.yaml
└── metadata
└── annotations.yaml
Bundle tests that pass validation finish with an exit code of 0.
Example output
INFO[0000] All validation tests have completed successfully
Tests that fail validation finish with an exit code of 1.
Example output
ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD "cache.example.com/v1alpha1, Kind=Memcached" is present in bundle "" but not defined in CSV
Bundle tests that result in warnings can still pass validation with an exit code of 0.
Example output
WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found
INFO[0000] All validation tests have completed successfully
For further information about the bundle validate command, run:
$ operator-sdk bundle validate -h
5.11.2. Built-in bundle validate tests
The Operator SDK ships with pre-defined validators arranged into suites. If you run the bundle validate command without specifying a validator, the default test runs.
You can run optional validators to test for issues such as OperatorHub compatibility or deprecated Kubernetes APIs. Optional validators always run in addition to the default test.
bundle validate command syntax for optional test suites
$ operator-sdk bundle validate <bundle_dir_or_image> \
    --select-optional <test_label>
| Name | Description | Label |
|---|---|---|
| Operator Framework | This validator tests an Operator bundle against the entire suite of validators provided by the Operator Framework. | suite=operatorframework |
| OperatorHub | This validator tests an Operator bundle for compatibility with OperatorHub. | name=operatorhub |
| Good Practices | This validator tests whether an Operator bundle complies with good practices as defined by the Operator Framework. It checks for issues, such as an empty CRD description or unsupported Operator Lifecycle Manager (OLM) resources. | name=good-practices |
5.11.3. Running the bundle validate command
The default validator runs a test every time you enter the bundle validate command. You can run optional validators by using the --select-optional flag. Optional validators run tests in addition to the default test.
Prerequisites
- Operator project generated by using the Operator SDK
Procedure
If you want to run the default validator against a local bundle directory, enter the following command from your Operator project directory:

$ operator-sdk bundle validate ./bundle

If you want to run the default validator against a remote Operator bundle image, enter the following command:

$ operator-sdk bundle validate \
    <bundle_registry>/<bundle_image_name>:<tag>

where:
- <bundle_registry>
  Specifies the registry where the bundle is hosted, such as quay.io/example.
- <bundle_image_name>
  Specifies the name of the bundle image, such as memcached-operator.
- <tag>
  Specifies the tag of the bundle image, such as v1.22.2.

Note: If you want to validate an Operator bundle image, you must host your image in a remote registry. The Operator SDK pulls the image and builds it locally before running tests. The bundle validate command does not support testing local bundle images.
If you want to run an additional validator against an Operator bundle, enter the following command:

$ operator-sdk bundle validate \
    <bundle_dir_or_image> \
    --select-optional <test_label>

where:

- <bundle_dir_or_image>
  Specifies the local bundle directory or remote bundle image, such as ~/projects/memcached or quay.io/example/memcached-operator:v1.22.
- <test_label>
  Specifies the name of the validator you want to run, such as name=good-practices.

Example output
ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object
WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD "k8sevents.k8s.k8sevent.com" has an empty description
5.12. High-availability or single-node cluster detection and support
An OpenShift Container Platform cluster can be configured in high-availability (HA) mode, which uses multiple nodes, or in non-HA mode, which uses a single node. A single-node cluster, also known as single-node OpenShift, is likely to have more conservative resource constraints. Therefore, it is important that Operators installed on a single-node cluster can adjust accordingly and still run well.
By accessing the cluster high-availability mode API provided in OpenShift Container Platform, Operator authors can use the Operator SDK to enable their Operator to detect a cluster’s infrastructure topology, either HA or non-HA mode. Custom Operator logic can be developed that uses the detected cluster topology to automatically switch the resource requirements, both for the Operator and for any Operands or workloads it manages, to a profile that best fits the topology.
5.12.1. About the cluster high-availability mode API
OpenShift Container Platform provides a cluster high-availability mode API that can be used by Operators to help detect infrastructure topology. The Infrastructure API holds cluster-wide information regarding infrastructure. Operators managed by Operator Lifecycle Manager (OLM) can use the Infrastructure API if they need to configure an Operand or managed workload differently based on the high-availability mode.
In the Infrastructure API, the infrastructureTopology status expresses the expectations for infrastructure services that do not run on control plane nodes, usually indicated by a node selector for a role value other than master. The controlPlaneTopology status expresses the expectations for Operands that normally run on control plane nodes.

The default setting for either status is HighlyAvailable, which represents the behavior Operators have in multiple-node clusters. The SingleReplica setting is used in single-node clusters, also known as single-node OpenShift, and indicates that Operators should not configure their Operands for high-availability operation.

The OpenShift Container Platform installer sets the controlPlaneTopology and infrastructureTopology status fields based on the replica counts for the cluster when it is created, according to the following rules:

- When the control plane replica count is less than 3, the controlPlaneTopology status is set to SingleReplica. Otherwise, it is set to HighlyAvailable.
- When the worker replica count is 0, the control plane nodes are also configured as workers. Therefore, the infrastructureTopology status will be the same as the controlPlaneTopology status.
- When the worker replica count is 1, the infrastructureTopology is set to SingleReplica. Otherwise, it is set to HighlyAvailable.
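The installer rules above can be summarized as a small decision function. The following Go sketch is an illustrative model of those rules for reference, not code from the installer:

```go
package main

import "fmt"

const (
	HighlyAvailable = "HighlyAvailable"
	SingleReplica   = "SingleReplica"
)

// topologies applies the installer rules described above to the
// replica counts chosen at cluster creation time.
func topologies(controlPlaneReplicas, workerReplicas int) (controlPlane, infrastructure string) {
	controlPlane = HighlyAvailable
	if controlPlaneReplicas < 3 {
		controlPlane = SingleReplica
	}
	switch {
	case workerReplicas == 0:
		// Control plane nodes double as workers, so the statuses match.
		infrastructure = controlPlane
	case workerReplicas == 1:
		infrastructure = SingleReplica
	default:
		infrastructure = HighlyAvailable
	}
	return controlPlane, infrastructure
}

func main() {
	cp, infra := topologies(1, 0) // single-node OpenShift
	fmt.Println(cp, infra)        // SingleReplica SingleReplica
}
```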
5.12.2. Example API usage in Operator projects
As an Operator author, you can update your Operator project to access the Infrastructure API by using normal Kubernetes constructs and the controller-runtime library, as shown in the following examples:
controller-runtime library example
// Simple query
nn := types.NamespacedName{
Name: "cluster",
}
infraConfig := &configv1.Infrastructure{}
err = crClient.Get(context.Background(), nn, infraConfig)
if err != nil {
return err
}
fmt.Printf("using crclient: %v\n", infraConfig.Status.ControlPlaneTopology)
fmt.Printf("using crclient: %v\n", infraConfig.Status.InfrastructureTopology)
Kubernetes constructs example
operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second)
infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister()
infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), "cluster", metav1.GetOptions{})
if err != nil {
return err
}
// fmt.Printf("%v\n", infraConfig)
fmt.Printf("%v\n", infraConfig.Status.ControlPlaneTopology)
fmt.Printf("%v\n", infraConfig.Status.InfrastructureTopology)
5.13. Configuring built-in monitoring with Prometheus
This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for authors of Go-based and Ansible-based Operators.
5.13.1. Prometheus Operator support
Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.
Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.
5.13.2. Exposing custom metrics for Go-based Operators
As an Operator author, you can publish custom metrics by using the global Prometheus registry from the `controller-runtime/pkg/metrics` package.
Prerequisites
- Go-based Operator generated using the Operator SDK
- Prometheus Operator, which is deployed by default on OpenShift Container Platform clusters
Procedure
1. In your Operator SDK project, uncomment the following line in the `config/default/kustomization.yaml` file:

   ../prometheus

2. Create a custom controller class to publish additional metrics from the Operator. The following example declares the `widgets` and `widgetFailures` collectors as global variables, and then registers them with the `init()` function in the controller's package:

   Example 5.18. `controllers/memcached_controller_test_metrics.go` file

   package controllers

   import (
       "github.com/prometheus/client_golang/prometheus"
       "sigs.k8s.io/controller-runtime/pkg/metrics"
   )

   var (
       widgets = prometheus.NewCounter(
           prometheus.CounterOpts{
               Name: "widgets_total",
               Help: "Number of widgets processed",
           },
       )

       widgetFailures = prometheus.NewCounter(
           prometheus.CounterOpts{
               Name: "widget_failures_total",
               Help: "Number of failed widgets",
           },
       )
   )

   func init() {
       // Register custom metrics with the global prometheus registry
       metrics.Registry.MustRegister(widgets, widgetFailures)
   }

3. Record to these collectors from any part of the reconcile loop in the `main` controller class, which determines the business logic for the metric:

   Example 5.19. `controllers/memcached_controller.go` file

   func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
       ...
       ...
       // Add metrics
       widgets.Inc()
       widgetFailures.Inc()

       return ctrl.Result{}, nil
   }

4. Build and push the Operator:

   $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

5. Deploy the Operator:

   $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

6. Create role and role binding definitions to allow the service monitor of the Operator to be scraped by the Prometheus instance of the OpenShift Container Platform cluster.

   Roles must be assigned so that service accounts have the permissions to scrape the metrics of the namespace:

   Example 5.20. `config/prometheus/role.yaml` role

   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: prometheus-k8s-role
     namespace: <operator_namespace>
   rules:
   - apiGroups:
     - ""
     resources:
     - endpoints
     - pods
     - services
     - nodes
     - secrets
     verbs:
     - get
     - list
     - watch

   Example 5.21. `config/prometheus/rolebinding.yaml` role binding

   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: prometheus-k8s-rolebinding
     namespace: memcached-operator-system
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: prometheus-k8s-role
   subjects:
   - kind: ServiceAccount
     name: prometheus-k8s
     namespace: openshift-monitoring

7. Apply the roles and role bindings for the deployed Operator:

   $ oc apply -f config/prometheus/role.yaml
   $ oc apply -f config/prometheus/rolebinding.yaml

8. Set the labels for the namespace that you want to scrape, which enables OpenShift cluster monitoring for that namespace:

   $ oc label namespace <operator_namespace> openshift.io/cluster-monitoring="true"
Verification
-
Query and view the metrics in the OpenShift Container Platform web console. You can use the names that were set in the custom controller class, for example `widgets_total` and `widget_failures_total`.
5.13.3. Exposing custom metrics for Ansible-based Operators
As an Operator author creating Ansible-based Operators, you can use the Operator SDK's `osdk_metrics` module to expose custom metrics for your Operators.
Prerequisites
- Ansible-based Operator generated using the Operator SDK
- Prometheus Operator, which is deployed by default on OpenShift Container Platform clusters
Procedure
1. Generate an Ansible-based Operator. This example uses a `testmetrics.com` domain:

   $ operator-sdk init \
       --plugins=ansible \
       --domain=testmetrics.com

2. Create a `metrics` API. This example uses a `kind` named `Testmetrics`:

   $ operator-sdk create api \
       --group metrics \
       --version v1 \
       --kind Testmetrics \
       --generate-role

3. Edit the `roles/testmetrics/tasks/main.yml` file and use the `osdk_metric` module to create custom metrics for your Operator project:

   Example 5.22. Example `roles/testmetrics/tasks/main.yml` file

   ---
   # tasks file for Memcached
   - name: start k8sstatus
     k8s:
       definition:
         kind: Deployment
         apiVersion: apps/v1
         metadata:
           name: '{{ ansible_operator_meta.name }}-memcached'
           namespace: '{{ ansible_operator_meta.namespace }}'
         spec:
           replicas: "{{size}}"
           selector:
             matchLabels:
               app: memcached
           template:
             metadata:
               labels:
                 app: memcached
             spec:
               containers:
               - name: memcached
                 command:
                 - memcached
                 - -m=64
                 - -o
                 - modern
                 - -v
                 image: "docker.io/memcached:1.4.36-alpine"
                 ports:
                 - containerPort: 11211

   - osdk_metric:
       name: my_thing_counter
       description: This metric counts things
       counter: {}

   - osdk_metric:
       name: my_counter_metric
       description: Add 3.14 to the counter
       counter:
         increment: yes

   - osdk_metric:
       name: my_gauge_metric
       description: Create my gauge and set it to 2.
       gauge:
         set: 2

   - osdk_metric:
       name: my_histogram_metric
       description: Observe my histogram
       histogram:
         observe: 2

   - osdk_metric:
       name: my_summary_metric
       description: Observe my summary
       summary:
         observe: 2
Verification
Run your Operator on a cluster. For example, to use the "run as a deployment" method:
1. Build the Operator image and push it to a registry:

   $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

2. Install the Operator on a cluster:

   $ make install

3. Deploy the Operator:

   $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a `Testmetrics` custom resource (CR):

1. Define the CR spec:

   Example 5.23. Example `config/samples/metrics_v1_testmetrics.yaml` file

   apiVersion: metrics.testmetrics.com/v1
   kind: Testmetrics
   metadata:
     name: testmetrics-sample
   spec:
     size: 1

2. Create the object:
$ oc create -f config/samples/metrics_v1_testmetrics.yaml
Get the pod details:
$ oc get pods

Example output

NAME                                     READY   STATUS    RESTARTS   AGE
ansiblemetrics-controller-manager-<id>   2/2     Running   0          149m
testmetrics-sample-memcached-<id>        1/1     Running   0          147m

Get the endpoint details:

$ oc get ep

Example output

NAME                                                ENDPOINTS          AGE
ansiblemetrics-controller-manager-metrics-service   10.129.2.70:8443   150m

Request a custom metrics token:

$ token=`oc create token prometheus-k8s -n openshift-monitoring`

Check the metrics values:

1. Check the `my_counter_metric` value:

   $ oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H "Authorization: Bearer $token" 'https://10.129.2.70:8443/metrics' | grep my_counter

   Example output

   HELP my_counter_metric Add 3.14 to the counter
   TYPE my_counter_metric counter
   my_counter_metric 2

2. Check the `my_gauge_metric` value:

   $ oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H "Authorization: Bearer $token" 'https://10.129.2.70:8443/metrics' | grep gauge

   Example output

   HELP my_gauge_metric Create my gauge and set it to 2.

3. Check the `my_histogram_metric` and `my_summary_metric` values:

   $ oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H "Authorization: Bearer $token" 'https://10.129.2.70:8443/metrics' | grep Observe

   Example output

   HELP my_histogram_metric Observe my histogram
   HELP my_summary_metric Observe my summary
5.14. Configuring leader election
During the lifecycle of an Operator, it is possible that there may be more than one instance running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, it is necessary to avoid contention between multiple Operator instances using leader election. This ensures only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.
There are two different leader election implementations to choose from, each with its own trade-off:
- Leader-for-life
-
The leader pod only gives up leadership, using garbage collection, when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders, a state also known as split brain. However, this method can be subject to a delay in electing a new leader. For example, when the leader pod is on an unresponsive or partitioned node, the `pod-eviction-timeout` value dictates how long it takes for the leader pod to be deleted from the node and step down, with a default of `5m`. See the Leader-for-life Go documentation for more.
- Leader-with-lease
- The leader pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.
By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.
5.14.1. Operator leader election examples
The following examples illustrate how to use the two leader election options for an Operator, Leader-for-life and Leader-with-lease.
5.14.1.1. Leader-for-life election
With the Leader-for-life election implementation, a call to `leader.Become()` blocks the Operator as it retries until it can become the leader by creating the config map named `memcached-operator-lock`:
import (
	...
	"github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
	...
	err = leader.Become(context.TODO(), "memcached-operator-lock")
	if err != nil {
		log.Error(err, "Failed to retry for leader lock")
		os.Exit(1)
	}
	...
}
If the Operator is not running inside a cluster, `leader.Become()` simply returns without error to skip the leader election because it cannot detect the namespace of the Operator.
5.14.1.2. Leader-with-lease election
The Leader-with-lease implementation can be enabled using the Manager Options for leader election:
import (
	...
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	...
	opts := manager.Options{
		...
		LeaderElection:   true,
		LeaderElectionID: "memcached-operator-lock",
	}
	mgr, err := manager.New(cfg, opts)
	...
}
When the Operator is not running in a cluster, the Manager returns an error when starting because it cannot detect the namespace of the Operator to create the config map for leader election. You can override this namespace by setting the `LeaderElectionNamespace` option for the Manager.
5.15. Object pruning utility for Go-based Operators
The `operator-lib` pruning utility lets Go-based Operators clean up, or prune, objects when they are no longer needed.
5.15.1. About the operator-lib pruning utility
Objects, such as jobs or pods, are created as a normal part of the Operator life cycle. If the cluster administrator or the Operator does not remove these objects, they can stay in the cluster and consume resources.
Previously, the following options were available for pruning unnecessary objects:
- Operator authors had to create a unique pruning solution for their Operators.
- Cluster administrators had to clean up objects on their own.
The `operator-lib` pruning utility lets Operator authors prune objects from their Go-based Operators without writing a unique pruning solution. The utility is available in version `0.9.0` and later of the `operator-lib` library as part of the Operator Framework.
5.15.2. Pruning utility configuration
The `operator-lib` pruning utility is configured in Go code for the Operator that uses it, as in the following example:
Example configuration
cfg = Config{
	log:           logf.Log.WithName("prune"),
	DryRun:        false,
	Clientset:     client,
	LabelSelector: "app=<operator_name>",
	Resources: []schema.GroupVersionKind{
		{Group: "", Version: "", Kind: PodKind},
	},
	Namespaces: []string{"default"},
	Strategy: StrategyConfig{
		Mode:            MaxCountStrategy,
		MaxCountSetting: 1,
	},
	PreDeleteHook: myhook,
}
The pruning utility configuration file defines pruning actions by using the following fields:
| Configuration field | Description |
|---|---|
| `log` | Logger used to handle library log messages. |
| `DryRun` | Boolean that determines whether resources should be removed. If set to `true`, the utility runs but does not remove resources. |
| `Clientset` | Client-go Kubernetes ClientSet used for Kubernetes API calls. |
| `LabelSelector` | Kubernetes label selector expression used to find resources to prune. |
| `Resources` | Kubernetes resource kinds. `PodKind` and `JobKind` are currently supported. |
| `Namespaces` | List of Kubernetes namespaces to search for resources. |
| `Strategy` | Pruning strategy to run. |
| `Strategy.Mode` | `MaxCountStrategy`, `MaxAgeStrategy`, or `CustomStrategy` are currently supported. |
| `Strategy.MaxCountSetting` | Integer value for `MaxCountStrategy` that specifies how many resources should remain after the pruning utility runs. |
| `Strategy.MaxAgeSetting` | Go `time.Duration` string value, such as `48h`, that specifies the age of resources to prune. |
| `Strategy.CustomSettings` | Go map of values that can be passed into a custom strategy function. |
| `PreDeleteHook` | Optional: Go function to call before pruning a resource. |
| `CustomStrategy` | Optional: Go function that implements a custom pruning strategy. |
Pruning execution
You can call the pruning action by running the execute function on the pruning configuration.
err := cfg.Execute(ctx)
You can also call a pruning action by using a cron package or by calling the pruning utility with a triggering event.
5.16. Migrating package manifest projects to bundle format
Support for the legacy package manifest format for Operators is removed in OpenShift Container Platform 4.8 and later. If you have an Operator project that was initially created using the package manifest format, you can use the Operator SDK to migrate the project to the bundle format. The bundle format is the preferred packaging format for Operator Lifecycle Manager (OLM) starting in OpenShift Container Platform 4.6.
5.16.1. About packaging format migration
The Operator SDK `pkgman-to-bundle` command helps in migrating Operator Lifecycle Manager (OLM) package manifests to bundles. The command takes an input package manifest directory and generates bundles for each of the versions of manifests present in the input directory. You can also then build bundle images for each of the generated bundles.

For example, consider the following `packagemanifests/` directory for a project in the package manifest format:
Example package manifest format layout
packagemanifests/
└── etcd
├── 0.0.1
│ ├── etcdcluster.crd.yaml
│ └── etcdoperator.clusterserviceversion.yaml
├── 0.0.2
│ ├── etcdbackup.crd.yaml
│ ├── etcdcluster.crd.yaml
│ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml
│ └── etcdrestore.crd.yaml
└── etcd.package.yaml
After running the migration, the following bundles are generated in the
bundle/
Example bundle format layout
bundle/
├── bundle-0.0.1
│ ├── bundle.Dockerfile
│ ├── manifests
│ │ ├── etcdcluster.crd.yaml
│ │ ├── etcdoperator.clusterserviceversion.yaml
│ ├── metadata
│ │ └── annotations.yaml
│ └── tests
│ └── scorecard
│ └── config.yaml
└── bundle-0.0.2
├── bundle.Dockerfile
├── manifests
│ ├── etcdbackup.crd.yaml
│ ├── etcdcluster.crd.yaml
│ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml
│ ├── etcdrestore.crd.yaml
├── metadata
│ └── annotations.yaml
└── tests
└── scorecard
└── config.yaml
Based on this generated layout, bundle images for both of the bundles are also built with the following names:
- quay.io/example/etcd:0.0.1
- quay.io/example/etcd:0.0.2
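The `<image_name_base>:<bundle_version>` naming convention above can be sketched as a small Go helper. The `bundleImages` function is illustrative, not part of the Operator SDK:

```go
package main

import "fmt"

// bundleImages derives the bundle image names the migration produces:
// <image_name_base>:<bundle_version>, one per package manifest version.
func bundleImages(base string, versions []string) []string {
	names := make([]string, 0, len(versions))
	for _, v := range versions {
		names = append(names, fmt.Sprintf("%s:%s", base, v))
	}
	return names
}

func main() {
	for _, n := range bundleImages("quay.io/example/etcd", []string{"0.0.1", "0.0.2"}) {
		fmt.Println(n)
	}
}
```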
5.16.2. Migrating a package manifest project to bundle format
Operator authors can use the Operator SDK to migrate a package manifest format Operator project to a bundle format project.
Prerequisites
- Operator SDK CLI installed
- Operator project initially generated using the Operator SDK in package manifest format
Procedure
Use the Operator SDK to migrate your package manifest project to the bundle format and generate bundle images:
$ operator-sdk pkgman-to-bundle <package_manifests_dir> \  [1]
    [--output-dir <directory>] \  [2]
    --image-tag-base <image_name_base>  [3]

[1] Specify the location of the package manifests directory for the project, such as `packagemanifests/` or `manifests/`.
[2] Optional: By default, the generated bundles are written locally to disk to the `bundle/` directory. You can use the `--output-dir` flag to specify an alternative location.
[3] Set the `--image-tag-base` flag to provide the base of the image name, such as `quay.io/example/etcd`, that will be used for the bundles. Provide the name without a tag, because the tag for the images will be set according to the bundle version. For example, the full bundle image names are generated in the format `<image_name_base>:<bundle_version>`.
Verification
Verify that the generated bundle image runs successfully:
$ operator-sdk run bundle <bundle_image_name>:<tag>

Example output

INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4
INFO[0025] Created CatalogSource: etcd-catalog
INFO[0026] OperatorGroup "operator-sdk-og" created
INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub
INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub
INFO[0031] Waiting for ClusterServiceVersion "default/etcdoperator.v0.9.4" to reach 'Succeeded' phase
INFO[0032] Waiting for ClusterServiceVersion "default/etcdoperator.v0.9.4" to appear
INFO[0048] Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Pending
INFO[0049] Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Installing
INFO[0064] Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Succeeded
INFO[0065] OLM has successfully installed "etcdoperator.v0.9.4"
5.17. Operator SDK CLI reference
The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier.
Operator SDK CLI syntax
$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
5.17.1. bundle
The `operator-sdk bundle` command manages Operator bundle metadata.
5.17.1.1. validate
The `bundle validate` subcommand validates an Operator bundle.
| Flag | Description |
|---|---|
| `-h`, `--help` | Help output for the `validate` subcommand. |
| `-b`, `--image-builder` (string) | Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are `docker`, which is the default, `podman`, or `none`. |
| `--list-optional` | List all optional validators available. When set, no validators are run. |
| `--select-optional` (string) | Label selector to select optional validators to run. When run with the `--list-optional` flag, lists available optional validators. |
5.17.2. cleanup
The `operator-sdk cleanup` command destroys and removes temporary resources that were created for an Operator that was deployed with the `run` command.
| Flag | Description |
|---|---|
| `-h`, `--help` | Help output for the `cleanup` command. |
| `--kubeconfig` (string) | Path to the `kubeconfig` file to use for CLI requests. |
| `-n`, `--namespace` (string) | If present, namespace in which to run the CLI request. |
| `--timeout <duration>` | Time to wait for the command to complete before failing. The default value is `2m0s`. |
5.17.3. completion
The `operator-sdk completion` command generates shell completions to make issuing CLI commands quicker and easier.
| Subcommand | Description |
|---|---|
| `bash` | Generate bash completions. |
| `zsh` | Generate zsh completions. |

| Flag | Description |
|---|---|
| `-h`, `--help` | Usage help output. |
For example:
$ operator-sdk completion bash
Example output
# bash completion for operator-sdk -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh
5.17.4. create
The `operator-sdk create` command is used to create, or scaffold, a Kubernetes API.
5.17.4.1. api
The `create api` subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the `init` command.
| Flag | Description |
|---|---|
| `-h`, `--help` | Help output for the `create api` subcommand. |
5.17.5. generate
The `operator-sdk generate` command invokes a specific generator to generate code or manifests.
5.17.5.1. bundle
The `generate bundle` subcommand generates a set of bundle manifests, metadata, and a `bundle.Dockerfile` file for your Operator project.

Typically, you run the `generate kustomize manifests` subcommand first to generate the input Kustomize bases that are used by the `generate bundle` subcommand. However, you can use the `make bundle` command in an initialized project to automate running these commands in sequence.
| Flag | Description |
|---|---|
| `--channels` (string) | Comma-separated list of channels to which the bundle belongs. The default value is `alpha`. |
| `--crds-dir` (string) | Root directory for `CustomResourceDefinition` manifests. |
| `--default-channel` (string) | The default channel for the bundle. |
| `--deploy-dir` (string) | Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the `--input-dir` flag. |
| `-h`, `--help` | Help for `generate bundle`. |
| `--input-dir` (string) | Directory from which to read an existing bundle. This directory is the parent of your bundle `manifests` directory and is different from the `--deploy-dir` directory. |
| `--kustomize-dir` (string) | Directory containing Kustomize bases and a `kustomization.yaml` file for bundle manifests. The default path is `config/manifests`. |
| `--manifests` | Generate bundle manifests. |
| `--metadata` | Generate bundle metadata and Dockerfile. |
| `--output-dir` (string) | Directory to write the bundle to. |
| `--overwrite` | Overwrite the bundle metadata and Dockerfile if they exist. The default value is `true`. |
| `--package` (string) | Package name for the bundle. |
| `-q`, `--quiet` | Run in quiet mode. |
| `--stdout` | Write bundle manifest to standard out. |
| `--version` (string) | Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. |
5.17.5.2. kustomize
The `generate kustomize` subcommand contains subcommands that generate Kustomize data for the Operator.
5.17.5.2.1. manifests
The `generate kustomize manifests` subcommand generates or regenerates Kustomize bases and a `kustomization.yaml` file in the `config/manifests` directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the `--interactive=false` flag.
| Flag | Description |
|---|---|
| `--apis-dir` (string) | Root directory for API type definitions. |
| `-h`, `--help` | Help for `generate kustomize manifests`. |
| `--input-dir` (string) | Directory containing existing Kustomize files. |
| `--interactive` | When set to `false`, if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata. |
| `--output-dir` (string) | Directory where to write Kustomize files. |
| `--package` (string) | Package name. |
| `-q`, `--quiet` | Run in quiet mode. |
5.17.6. init
The `operator-sdk init` command initializes an Operator project and generates, or scaffolds, a default project directory layout for the given plugin.

This command writes the following files:
- Boilerplate license file
- `PROJECT` file with the domain and repository
- `Makefile` to build the project
- `go.mod` file with project dependencies
- `kustomization.yaml` file for customizing manifests
- Patch file for customizing images for manager manifests
- Patch file for enabling Prometheus metrics
- `main.go` file to run
| Flag | Description |
|---|---|
| `--help, -h` | Help output for the `init` command. |
| `--plugins` (string) | Name and optionally version of the plugin to initialize the project with. Available plugins are `ansible.sdk.operatorframework.io/v1`, `go.kubebuilder.io/v2`, `go.kubebuilder.io/v3`, and `helm.sdk.operatorframework.io/v1`. |
| `--project-version` | Project version. Available values are `2` and `3-alpha`, which is the default. |
5.17.7. run
The `operator-sdk run` command provides options that can run or deploy your Operator in various environments.
5.17.7.1. bundle
The `run bundle` subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM).
| Flag | Description |
|---|---|
| `--index-image` (string) | Index image in which to inject a bundle. The default image is `quay.io/operator-framework/upstream-opm-builder:latest`. |
| `--install-mode <install_mode_value>` | Install mode supported by the cluster service version (CSV) of the Operator, for example `AllNamespaces` or `SingleNamespace`. |
| `--timeout <duration>` | Install timeout. The default value is `2m0s`. |
| `--kubeconfig` (string) | Path to the `kubeconfig` file to use for CLI requests. |
| `-n`, `--namespace` (string) | If present, namespace in which to run the CLI request. |
| `-h`, `--help` | Help output for the `run bundle` subcommand. |
5.17.7.2. bundle-upgrade
The `run bundle-upgrade` subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM).
| Flag | Description |
|---|---|
| `--timeout <duration>` | Upgrade timeout. The default value is `2m0s`. |
| `--kubeconfig` (string) | Path to the `kubeconfig` file to use for CLI requests. |
| `-n`, `--namespace` (string) | If present, namespace in which to run the CLI request. |
| `-h`, `--help` | Help output for the `run bundle-upgrade` subcommand. |
5.17.8. scorecard
The `operator-sdk scorecard` command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely.
| Flag | Description |
|---|---|
| `-c`, `--config` (string) | Path to scorecard configuration file. The default path is `bundle/tests/scorecard/config.yaml`. |
| `-h`, `--help` | Help output for the `scorecard` command. |
| `--kubeconfig` (string) | Path to `kubeconfig` file. |
| `-L`, `--list` | List which tests are available to run. |
| `-n`, `--namespace` (string) | Namespace in which to run the test images. |
| `-o`, `--output` (string) | Output format for results. Available values are `text`, which is the default, and `json`. |
| `-l`, `--selector` (string) | Label selector to determine which tests are run. |
| `-s`, `--service-account` (string) | Service account to use for tests. The default value is `default`. |
| `-x`, `--skip-cleanup` | Disable resource cleanup after tests are run. |
| `-w`, `--wait-time <duration>` | Seconds to wait for tests to complete, for example `35s`. The default value is `30s`. |