Chapter 5. Developing Operators
5.1. About the Operator SDK
The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.
Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.
The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator.
Why use the Operator SDK?
The Operator SDK simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring.
The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features:
- High-level APIs and abstractions to write the operational logic more intuitively
- Tools for scaffolding and code generation to quickly bootstrap a new project
- Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster
- Extensions to cover common Operator use cases
- Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
OpenShift Container Platform 4.8 supports Operator SDK v1.8.0 or later.
5.1.1. What are Operators?
For an overview about basic Operator concepts and terminology, see Understanding Operators.
5.1.2. Development workflow
The Operator SDK provides the following workflow to develop a new Operator:
- Create an Operator project by using the Operator SDK command-line interface (CLI).
- Define new resource APIs by adding custom resource definitions (CRDs).
- Specify resources to watch by using the Operator SDK API.
- Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
- Use the Operator SDK CLI to build and generate the Operator deployment manifests.
Figure 5.1. Operator SDK workflow
At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
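The observe-and-act loop described above can be sketched in a few lines of self-contained Go. This is an illustrative reduction, not SDK code: the `state` type and the scaling actions are hypothetical stand-ins for the custom resource spec (desired state) and the cluster (observed state) that a real handler would compare.

```go
package main

import "fmt"

// state is a hypothetical snapshot of an application; in a real Operator the
// desired state comes from the custom resource spec and the observed state
// from querying the cluster.
type state struct{ replicas int }

// reconcile compares desired and observed state and returns the actions
// needed to converge them -- the core loop an Operator handler implements
// for every event on a watched resource.
func reconcile(desired, observed state) []string {
	var actions []string
	switch {
	case observed.replicas < desired.replicas:
		actions = append(actions, fmt.Sprintf("scale up by %d", desired.replicas-observed.replicas))
	case observed.replicas > desired.replicas:
		actions = append(actions, fmt.Sprintf("scale down by %d", observed.replicas-desired.replicas))
	}
	return actions
}

func main() {
	fmt.Println(reconcile(state{replicas: 3}, state{replicas: 1}))
}
```

A real reconciler receives only a resource key, re-reads both states itself, and is expected to be idempotent, since the same event can be delivered more than once.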
5.2. Installing the Operator SDK CLI
The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators.
OpenShift Container Platform 4.8 supports Operator SDK v1.8.0.
5.2.1. Installing the Operator SDK CLI
You can install the Operator SDK CLI tool on Linux.
Prerequisites
- Go v1.16+
- docker v17.03+, podman v1.9.3+, or buildah v1.7+
Procedure
- Navigate to the OpenShift mirror site.
- From the 4.8.4 directory, download the latest version of the tarball for Linux.
- Unpack the archive:

  $ tar xvf operator-sdk-v1.8.0-ocp-linux-x86_64.tar.gz

- Make the file executable:

  $ chmod +x operator-sdk

- Move the extracted operator-sdk binary to a directory that is on your PATH:

  $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk

  Tip: To check your PATH:

  $ echo $PATH
Verification
After you install the Operator SDK CLI, verify that it is available:
$ operator-sdk version

Example output
operator-sdk version: "v1.8.0-ocp", ...
5.3. Upgrading projects for newer Operator SDK versions
OpenShift Container Platform 4.8 supports Operator SDK v1.8.0. If you already have the v1.3.0 CLI installed on your workstation, you can upgrade the CLI to v1.8.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK v1.8.0, upgrade steps are required for the associated breaking changes introduced since v1.3.0. You must perform the upgrade steps manually in any of your Operator projects that were previously created or maintained with v1.3.0.
5.3.1. Upgrading projects for Operator SDK v1.8.0
Perform the following steps to upgrade an existing Operator project for compatibility with v1.8.0.
Prerequisites
- Operator SDK v1.8.0 installed
- Operator project that was previously created or maintained with Operator SDK v1.3.0
Procedure
Make the following changes to your PROJECT file:

- Update the plugins object in your PROJECT file to use manifests and scorecard objects.

  The manifests and scorecard plug-ins, which create Operator Lifecycle Manager (OLM) and scorecard manifests, now have plug-in objects for running create subcommands to create related files.

  For Go-based Operator projects, an existing Go-based plug-in configuration object is already present. While the old configuration is still supported, these new objects will be useful in the future as configuration options are added to their respective plug-ins:

  Old configuration

  version: 3-alpha
  ...
  plugins:
    go.sdk.operatorframework.io/v2-alpha: {}

  New configuration

  version: 3-alpha
  ...
  plugins:
    manifests.sdk.operatorframework.io/v2: {}
    scorecard.sdk.operatorframework.io/v2: {}

  Optional: For Ansible- and Helm-based Operator projects, the plug-in configuration object previously did not exist. While you are not required to add the plug-in configuration objects, these new objects will be useful in the future as configuration options are added to their respective plug-ins:

  version: 3-alpha
  ...
  plugins:
    manifests.sdk.operatorframework.io/v2: {}
    scorecard.sdk.operatorframework.io/v2: {}

- Upgrade the PROJECT config version to 3. The version key in your PROJECT file represents the PROJECT config version:

  Old PROJECT file

  version: 3-alpha
  resources:
  - crdVersion: v1
  ...

  Version 3-alpha has been stabilized as version 3 and contains a set of config fields sufficient to fully describe a project. While this change is not technically breaking, because the spec at that version was alpha, it was used by default in operator-sdk commands, so it should be marked as breaking and have a convenient upgrade path.

  Run the alpha config-3alpha-to-3 command to convert most of your PROJECT file from version 3-alpha to 3:

  $ operator-sdk alpha config-3alpha-to-3

  Example output

  Your PROJECT config file has been converted from version 3-alpha to 3. Please make sure all config data is correct.

  The command also outputs comments with directions where automatic conversion is not possible.

- Verify the change:

  New PROJECT file

  version: "3"
  resources:
  - api:
      crdVersion: v1
  ...
Make the following changes to your config/manager/manager.yaml file:

- For Ansible- and Helm-based Operator projects, add liveness and readiness probes.

  New projects built with the Operator SDK have the probes configured by default. The /healthz and /readyz endpoints are now available in the provided base image. You can update your existing projects to use the probes by updating the Dockerfile to use the latest base image, then adding the following to the manager container in the config/manager/manager.yaml file:

  Example 5.1. Configuration for Ansible-based Operator projects

  livenessProbe:
    httpGet:
      path: /healthz
      port: 6789
    initialDelaySeconds: 15
    periodSeconds: 20
  readinessProbe:
    httpGet:
      path: /readyz
      port: 6789
    initialDelaySeconds: 5
    periodSeconds: 10

  Example 5.2. Configuration for Helm-based Operator projects

  livenessProbe:
    httpGet:
      path: /healthz
      port: 8081
    initialDelaySeconds: 15
    periodSeconds: 20
  readinessProbe:
    httpGet:
      path: /readyz
      port: 8081
    initialDelaySeconds: 5
    periodSeconds: 10

- For Ansible- and Helm-based Operator projects, add security contexts to your manager's deployment.

  In the config/manager/manager.yaml file, add the following security contexts:

  Example 5.3. config/manager/manager.yaml file

  spec:
    ...
    template:
      ...
      spec:
        securityContext:
          runAsNonRoot: true
        containers:
        - name: manager
          securityContext:
            allowPrivilegeEscalation: false
Make the following changes to your Makefile:

- For Ansible- and Helm-based Operator projects, update the ansible-operator and helm-operator URLs in the Makefile.

  For Ansible-based Operator projects, change:

  https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/ansible-operator-v1.3.0-$(ARCHOPER)-$(OSOPER)

  to:

  https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/ansible-operator_$(OS)_$(ARCH)

  For Helm-based Operator projects, change:

  https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/helm-operator-v1.3.0-$(ARCHOPER)-$(OSOPER)

  to:

  https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/helm-operator_$(OS)_$(ARCH)
- For Ansible- and Helm-based Operator projects, update the kustomize, ansible-operator, and helm-operator rules in the Makefile. These rules download a local binary but do not use it if a global binary is present:

  Example 5.4. Makefile diff for Ansible-based Operator projects

  PATH  := $(PATH):$(PWD)/bin
  SHELL := env PATH=$(PATH) /bin/sh
  -OS := $(shell uname -s | tr '[:upper:]' '[:lower:]')
  -ARCH := $(shell uname -m | sed 's/x86_64/amd64/')
  +OS = $(shell uname -s | tr '[:upper:]' '[:lower:]')
  +ARCH = $(shell uname -m | sed 's/x86_64/amd64/')
  +OSOPER = $(shell uname -s | tr '[:upper:]' '[:lower:]' | sed 's/darwin/apple-darwin/' | sed 's/linux/linux-gnu/')
  +ARCHOPER = $(shell uname -m )

  -# Download kustomize locally if necessary, preferring the $(pwd)/bin path over global if both exist.
  -.PHONY: kustomize
  -KUSTOMIZE = $(shell pwd)/bin/kustomize
  kustomize:
  -ifeq (,$(wildcard $(KUSTOMIZE)))
  -ifeq (,$(shell which kustomize 2>/dev/null))
  +ifeq (, $(shell which kustomize 2>/dev/null))
  	@{ \
  	set -e ;\
  -	mkdir -p $(dir $(KUSTOMIZE)) ;\
  -	curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | \
  -	tar xzf - -C bin/ ;\
  +	mkdir -p bin ;\
  +	curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | tar xzf - -C bin/ ;\
  	}
  +KUSTOMIZE=$(realpath ./bin/kustomize)
  else
  -KUSTOMIZE = $(shell which kustomize)
  -endif
  +KUSTOMIZE=$(shell which kustomize)
  endif

  -# Download ansible-operator locally if necessary, preferring the $(pwd)/bin path over global if both exist.
  -.PHONY: ansible-operator
  -ANSIBLE_OPERATOR = $(shell pwd)/bin/ansible-operator
  ansible-operator:
  -ifeq (,$(wildcard $(ANSIBLE_OPERATOR)))
  -ifeq (,$(shell which ansible-operator 2>/dev/null))
  +ifeq (, $(shell which ansible-operator 2>/dev/null))
  	@{ \
  	set -e ;\
  -	mkdir -p $(dir $(ANSIBLE_OPERATOR)) ;\
  -	curl -sSLo $(ANSIBLE_OPERATOR) https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/ansible-operator_$(OS)_$(ARCH) ;\
  -	chmod +x $(ANSIBLE_OPERATOR) ;\
  +	mkdir -p bin ;\
  +	curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/ansible-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ;\
  +	mv ansible-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ./bin/ansible-operator ;\
  +	chmod +x ./bin/ansible-operator ;\
  	}
  +ANSIBLE_OPERATOR=$(realpath ./bin/ansible-operator)
  else
  -ANSIBLE_OPERATOR = $(shell which ansible-operator)
  -endif
  +ANSIBLE_OPERATOR=$(shell which ansible-operator)
  endif

  Example 5.5. Makefile diff for Helm-based Operator projects

  PATH  := $(PATH):$(PWD)/bin
  SHELL := env PATH=$(PATH) /bin/sh
  -OS := $(shell uname -s | tr '[:upper:]' '[:lower:]')
  -ARCH := $(shell uname -m | sed 's/x86_64/amd64/')
  +OS = $(shell uname -s | tr '[:upper:]' '[:lower:]')
  +ARCH = $(shell uname -m | sed 's/x86_64/amd64/')
  +OSOPER = $(shell uname -s | tr '[:upper:]' '[:lower:]' | sed 's/darwin/apple-darwin/' | sed 's/linux/linux-gnu/')
  +ARCHOPER = $(shell uname -m )

  -# Download kustomize locally if necessary, preferring the $(pwd)/bin path over global if both exist.
  -.PHONY: kustomize
  -KUSTOMIZE = $(shell pwd)/bin/kustomize
  kustomize:
  -ifeq (,$(wildcard $(KUSTOMIZE)))
  -ifeq (,$(shell which kustomize 2>/dev/null))
  +ifeq (, $(shell which kustomize 2>/dev/null))
  	@{ \
  	set -e ;\
  -	mkdir -p $(dir $(KUSTOMIZE)) ;\
  -	curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | \
  -	tar xzf - -C bin/ ;\
  +	mkdir -p bin ;\
  +	curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | tar xzf - -C bin/ ;\
  	}
  +KUSTOMIZE=$(realpath ./bin/kustomize)
  else
  -KUSTOMIZE = $(shell which kustomize)
  -endif
  +KUSTOMIZE=$(shell which kustomize)
  endif

  -# Download helm-operator locally if necessary, preferring the $(pwd)/bin path over global if both exist.
  -.PHONY: helm-operator
  -HELM_OPERATOR = $(shell pwd)/bin/helm-operator
  helm-operator:
  -ifeq (,$(wildcard $(HELM_OPERATOR)))
  -ifeq (,$(shell which helm-operator 2>/dev/null))
  +ifeq (, $(shell which helm-operator 2>/dev/null))
  	@{ \
  	set -e ;\
  -	mkdir -p $(dir $(HELM_OPERATOR)) ;\
  -	curl -sSLo $(HELM_OPERATOR) https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/helm-operator_$(OS)_$(ARCH) ;\
  -	chmod +x $(HELM_OPERATOR) ;\
  +	mkdir -p bin ;\
  +	curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/helm-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ;\
  +	mv helm-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ./bin/helm-operator ;\
  +	chmod +x ./bin/helm-operator ;\
  	}
  +HELM_OPERATOR=$(realpath ./bin/helm-operator)
  else
  -HELM_OPERATOR = $(shell which helm-operator)
  -endif
  +HELM_OPERATOR=$(shell which helm-operator)
  endif
- Move the positional directory argument . in the docker-build target.

  The directory argument . in the docker-build target was moved to the last positional argument to align with podman CLI expectations, which makes substitution cleaner:

  Old target

  docker-build:
  	docker build . -t ${IMG}

  New target

  docker-build:
  	docker build -t ${IMG} .

  You can make this change by running the following command:

  $ sed -i 's/docker build . -t ${IMG}/docker build -t ${IMG} ./' $(git grep -l 'docker.*build \. ')
- For Ansible- and Helm-based Operator projects, add a help target to the Makefile.

  Ansible- and Helm-based projects now provide a help target in the Makefile by default, similar to a --help flag. You can manually add this target to your Makefile by using the following lines:

  Example 5.6. help target

  ##@ General

  # The help target prints out all targets with their descriptions organized
  # beneath their categories. The categories are represented by '##@' and the
  # target descriptions by '##'. The awk command is responsible for reading the
  # entire set of makefiles included in this invocation, looking for lines of the
  # file as xyz: ## something, and then pretty-formatting the target and help. Then,
  # if there's a line with ##@ something, that gets pretty-printed as a category.
  # More info on the usage of ANSI control characters for terminal formatting:
  # https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters
  # More info on the awk command:
  # http://linuxcommand.org/lc3_adv_awk.php

  help: ## Display this help.
  	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n  make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf "  \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)

- Add opm and catalog-build targets. You can use these targets to create your own catalogs for your Operator or add your Operator bundles to an existing catalog.

  Add the targets to your Makefile by adding the following lines:

  Example 5.7. opm and catalog-build targets

  .PHONY: opm
  OPM = ./bin/opm
  opm:
  ifeq (,$(wildcard $(OPM)))
  ifeq (,$(shell which opm 2>/dev/null))
  	@{ \
  	set -e ;\
  	mkdir -p $(dir $(OPM)) ;\
  	curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.15.1/$(OS)-$(ARCH)-opm ;\
  	chmod +x $(OPM) ;\
  	}
  else
  OPM = $(shell which opm)
  endif
  endif

  BUNDLE_IMGS ?= $(BUNDLE_IMG)
  CATALOG_IMG ?= $(IMAGE_TAG_BASE)-catalog:v$(VERSION)
  ifneq ($(origin CATALOG_BASE_IMG), undefined)
  FROM_INDEX_OPT := --from-index $(CATALOG_BASE_IMG)
  endif

  .PHONY: catalog-build
  catalog-build: opm
  	$(OPM) index add --container-tool docker --mode semver --tag $(CATALOG_IMG) --bundles $(BUNDLE_IMGS) $(FROM_INDEX_OPT)

  .PHONY: catalog-push
  catalog-push: ## Push the catalog image.
  	$(MAKE) docker-push IMG=$(CATALOG_IMG)

  If you are updating a Go-based Operator project, also add the following Makefile variables:

  Example 5.8. Makefile variables

  OS = $(shell go env GOOS)
  ARCH = $(shell go env GOARCH)
- For Go-based Operator projects, set the SHELL variable in your Makefile to the system bash binary.

  Importing the setup-envtest.sh script requires bash, so the SHELL variable must be set to bash with error options:

  Example 5.9. Makefile diff

  else
  GOBIN=$(shell go env GOBIN)
  endif

  +# Setting SHELL to bash allows bash commands to be executed by recipes.
  +# This is a requirement for 'setup-envtest.sh' in the test target.
  +# Options are set to exit when a recipe line exits non-zero or a piped command fails.
  +SHELL = /usr/bin/env bash -o pipefail
  +.SHELLFLAGS = -ec
  +
  all: build
- For Go-based Operator projects, upgrade controller-runtime to v0.8.3 and Kubernetes dependencies to v0.20.2 by changing the following entries in your go.mod file, then rebuild your project:

  Example 5.10. go.mod file

  ...
  k8s.io/api v0.20.2
  k8s.io/apimachinery v0.20.2
  k8s.io/client-go v0.20.2
  sigs.k8s.io/controller-runtime v0.8.3
- Add a controller-manager service account to your project.

  A non-default service account, controller-manager, is now generated by the operator-sdk init command to improve security for Operators installed in shared namespaces. To add this service account to your existing project, follow these steps:

  Create the ServiceAccount definition in a file:

  Example 5.11. config/rbac/service_account.yaml file

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: controller-manager
    namespace: system

  Add the service account to the list of RBAC resources:

  $ echo "- service_account.yaml" >> config/rbac/kustomization.yaml

  Update all RoleBinding and ClusterRoleBinding objects that reference the Operator's service account:

  $ find config/rbac -name *_binding.yaml -exec sed -i -E 's/  name: default/  name: controller-manager/g' {} \;

  Add the service account name to the manager deployment's spec.template.spec.serviceAccountName field:

  $ sed -i -E 's/([ ]+)(terminationGracePeriodSeconds:)/\1serviceAccountName: controller-manager\n\1\2/g' config/manager/manager.yaml

  Verify the changes look like the following diffs:

  Example 5.12. config/manager/manager.yaml file diff

  ...
          requests:
            cpu: 100m
            memory: 20Mi
  +      serviceAccountName: controller-manager
        terminationGracePeriodSeconds: 10

  Example 5.13. config/rbac/auth_proxy_role_binding.yaml file diff

  ...
    name: proxy-role
  subjects:
  - kind: ServiceAccount
  -  name: default
  +  name: controller-manager
    namespace: system

  Example 5.14. config/rbac/kustomization.yaml file diff

  resources:
  +- service_account.yaml
  - role.yaml
  - role_binding.yaml
  - leader_election_role.yaml

  Example 5.15. config/rbac/leader_election_role_binding.yaml file diff

  ...
    name: leader-election-role
  subjects:
  - kind: ServiceAccount
  -  name: default
  +  name: controller-manager
    namespace: system

  Example 5.16. config/rbac/role_binding.yaml file diff

  ...
    name: manager-role
  subjects:
  - kind: ServiceAccount
  -  name: default
  +  name: controller-manager
    namespace: system

  Example 5.17. config/rbac/service_account.yaml file diff

  +apiVersion: v1
  +kind: ServiceAccount
  +metadata:
  +  name: controller-manager
  +  namespace: system
Make the following changes to your config/manifests/kustomization.yaml file:

- Add a Kustomize patch to remove the cert-manager volume and volumeMount objects from your cluster service version (CSV).

  Because Operator Lifecycle Manager (OLM) does not yet support cert-manager, a JSON patch was added to remove this volume and mount so that OLM can create and manage certificates for your Operator.

  In the config/manifests/kustomization.yaml file, add the following lines:

  Example 5.18. config/manifests/kustomization.yaml file

  patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: controller-manager
      namespace: system
    patch: |-
      # Remove the manager container's "cert" volumeMount, since OLM will create and mount a set of certs.
      # Update the indices in this path if adding or removing containers/volumeMounts in the manager's Deployment.
      - op: remove
        path: /spec/template/spec/containers/1/volumeMounts/0
      # Remove the "cert" volume, since OLM will create and mount a set of certs.
      # Update the indices in this path if adding or removing volumes in the manager's Deployment.
      - op: remove
        path: /spec/template/spec/volumes/0

- Optional: For Ansible- and Helm-based Operator projects, configure ansible-operator and helm-operator with a component config. To add this option, follow these steps:

  Create the following file:

  Example 5.19. config/default/manager_config_patch.yaml file

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: controller-manager
    namespace: system
  spec:
    template:
      spec:
        containers:
        - name: manager
          args:
          - "--config=controller_manager_config.yaml"
          volumeMounts:
          - name: manager-config
            mountPath: /controller_manager_config.yaml
            subPath: controller_manager_config.yaml
        volumes:
        - name: manager-config
          configMap:
            name: manager-config

  Create the following file:

  Example 5.20. config/manager/controller_manager_config.yaml file

  apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
  kind: ControllerManagerConfig
  health:
    healthProbeBindAddress: :6789
  metrics:
    bindAddress: 127.0.0.1:8080
  leaderElection:
    leaderElect: true
    resourceName: <resource_name>

  Update the config/default/kustomization.yaml file by applying the following changes to resources:

  Example 5.21. config/default/kustomization.yaml file

  resources:
  ...
  - manager_config_patch.yaml

  Update the config/manager/kustomization.yaml file by applying the following changes:

  Example 5.22. config/manager/kustomization.yaml file

  generatorOptions:
    disableNameSuffixHash: true

  configMapGenerator:
  - files:
    - controller_manager_config.yaml
    name: manager-config
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  images:
  - name: controller
    newName: quay.io/example/memcached-operator
    newTag: v0.0.1
- Optional: Add a manager config patch to the config/default/kustomization.yaml file.

  The generated --config flag was not added to either the ansible-operator or helm-operator binary when config file support was originally added, so it does not currently work. The --config flag supports configuration of both binaries by file; this method of configuration only applies to the underlying controller manager and not the Operator as a whole.

  To optionally configure the Operator's deployment with a config file, make changes to the config/default/kustomization.yaml file as shown in the following diff:

  Example 5.23. config/default/kustomization.yaml file diff

  # If you want your controller-manager to expose the /metrics
  # endpoint w/o any authn/z, please comment the following line.
  - manager_auth_proxy_patch.yaml

  +# Mount the controller config file for loading manager configurations
  +# through a ComponentConfig type
  +- manager_config_patch.yaml

  Flags can be used as is or to override config file values.

- For Ansible- and Helm-based Operator projects, add role rules for leader election by making the following changes to the config/rbac/leader_election_role.yaml file:

  Example 5.24. config/rbac/leader_election_role.yaml file

  - apiGroups:
    - coordination.k8s.io
    resources:
    - leases
    verbs:
    - get
    - list
    - watch
    - create
    - update
    - patch
    - delete

- For Ansible-based Operator projects, update Ansible collections.

  In your requirements.yml file, change the version field for community.kubernetes to 1.2.1, and the version field for operator_sdk.util to 0.2.0.
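As a sketch of the collection update above, the resulting requirements.yml would contain entries like the following. The collection names and version pins come from this procedure; the surrounding layout is the standard Ansible collection requirements format, shown here for illustration:

```yaml
# requirements.yml -- Ansible collections pinned to the versions
# required by Operator SDK v1.8.0 (illustrative layout)
collections:
- name: community.kubernetes
  version: "1.2.1"
- name: operator_sdk.util
  version: "0.2.0"
```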
Make the following changes to your config/default/manager_auth_proxy_patch.yaml file:

- For Ansible-based Operator projects, add the --health-probe-bind-address=:6789 argument to the config/default/manager_auth_proxy_patch.yaml file:

  Example 5.25. config/default/manager_auth_proxy_patch.yaml file

  spec:
    template:
      spec:
        containers:
        - name: manager
          args:
          - "--health-probe-bind-address=:6789"
          ...

- For Helm-based Operator projects, add the --health-probe-bind-address=:8081 argument to the config/default/manager_auth_proxy_patch.yaml file:

  Example 5.26. config/default/manager_auth_proxy_patch.yaml file

  spec:
    template:
      spec:
        containers:
        - name: manager
          args:
          - "--health-probe-bind-address=:8081"
          ...

- Replace the deprecated --enable-leader-election flag with --leader-elect, and the deprecated --metrics-addr flag with --metrics-bind-address.
Make the following changes to your config/prometheus/monitor.yaml file:

- Add scheme, token, and TLS config to the Prometheus ServiceMonitor metrics endpoint.

  The /metrics endpoint, while specifying the https port on the manager pod, was not actually configured to serve over HTTPS because no tlsConfig was set. Because kube-rbac-proxy secures this endpoint as a manager sidecar, using the service account token mounted into the pod by default corrects this problem.

  Apply the changes to the config/prometheus/monitor.yaml file as shown in the following diff:

  Example 5.27. config/prometheus/monitor.yaml file diff

  spec:
    endpoints:
    - path: /metrics
      port: https
  +   scheme: https
  +   bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  +   tlsConfig:
  +     insecureSkipVerify: true
    selector:
      matchLabels:
        control-plane: controller-manager

  Note: If you removed kube-rbac-proxy from your project, ensure that you secure the /metrics endpoint using a proper TLS configuration.
- Ensure that existing dependent resources have owner annotations.

  For Ansible-based Operator projects, owner reference annotations on cluster-scoped dependent resources and dependent resources in other namespaces were not applied correctly. A workaround was to add these annotations manually, which is no longer required because this bug has been fixed.

- Deprecate support for package manifests.

  The Operator Framework is removing support for the Operator package manifest format in a future release. As part of the ongoing deprecation process, the operator-sdk generate packagemanifests and operator-sdk run packagemanifests commands are now deprecated. To migrate package manifests to bundles, the operator-sdk pkgman-to-bundle command can be used.

  Run the operator-sdk pkgman-to-bundle --help command and see "Migrating package manifest projects to bundle format" for more details.

- Update the finalizer names for your Operator.

  The finalizer name format suggested by Kubernetes documentation is:

  <qualified_group>/<finalizer_name>

  while the format previously documented for Operator SDK was:

  <finalizer_name>.<qualified_group>

  If your Operator uses any finalizers with names that match the incorrect format, change them to match the official format. For example, finalizer.cache.example.com must be changed to cache.example.com/finalizer.
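The rename above is a mechanical swap of the two parts around the first dot. As a minimal, self-contained sketch (the helper name is hypothetical, not an SDK function), the conversion can be expressed like this:

```go
package main

import (
	"fmt"
	"strings"
)

// toQualifiedFinalizer is a hypothetical helper that converts a legacy
// Operator SDK finalizer name such as "finalizer.cache.example.com"
// ("<finalizer_name>.<qualified_group>") into the Kubernetes-recommended
// "<qualified_group>/<finalizer_name>" format, e.g. "cache.example.com/finalizer".
func toQualifiedFinalizer(name string) string {
	parts := strings.SplitN(name, ".", 2)
	if len(parts) != 2 {
		// No group suffix to move; leave the name unchanged.
		return name
	}
	return parts[1] + "/" + parts[0]
}

func main() {
	fmt.Println(toQualifiedFinalizer("finalizer.cache.example.com"))
}
```

In a real project you would apply this rename to the finalizer string constants in your controller code and to any existing objects that already carry the old finalizer.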
Your Operator project is now compatible with Operator SDK v1.8.0.
5.4. Go-based Operators
5.4.1. Getting started with Operator SDK for Go-based Operators
To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.4.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Logged in to an OpenShift Container Platform 4.8 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.4.1.2. Creating and deploying Go-based Operators
You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator

Change into the project directory:

$ cd memcached-operator

Run the operator-sdk init command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator

The command uses the Go plugin by default.
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --resource=true \
    --controller=true \
    --group cache \
    --version v1 \
    --kind Memcached

Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.
Install the CRD:
$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

Watch for the Operator to reconcile the CR:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.4.1.3. Next steps
- See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator.
5.4.2. Operator SDK tutorial for Go-based Operators
Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK: the operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM): installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators.
5.4.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Logged in to an OpenShift Container Platform 4.8 cluster with oc with an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.4.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator

Change to the directory:

$ cd $HOME/projects/memcached-operator

Activate support for Go modules:

$ export GO111MODULE=on

Run the operator-sdk init command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator

Note: The operator-sdk init command uses the Go plugin by default.

The operator-sdk init command generates a go.mod file to be used with Go modules. The --repo flag is required when creating a project outside of $GOPATH/src/, because generated files require a valid module path.
5.4.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, run from the project root read this file and are aware that the project type is Go. For example:
domain: example.com
layout: go.kubebuilder.io/v3
projectName: memcached-operator
repo: github.com/example-inc/memcached-operator
version: 3
plugins:
manifests.sdk.operatorframework.io/v2: {}
scorecard.sdk.operatorframework.io/v2: {}
5.4.2.2.2. About the Manager
The main program for the Operator is the main.go file, which initializes and runs the Manager.
The Manager can restrict the namespace that all controllers watch for resources:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})
By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})
You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces:
var namespaces []string
mgr, err := ctrl.NewManager(cfg, manager.Options{
NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})
5.4.2.2.3. About multi-group APIs
Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command:
$ operator-sdk edit --multigroup=true
This command updates the PROJECT file, which should now look like the following example:
domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...
For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and controllers are created in the controllers/<group>/ directory.
Additional resource
- For more details on migrating to a multi-group project, see the Kubebuilder documentation.
5.4.2.3. Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
Run the following command to create an API with group cache, version v1, and kind Memcached:
$ operator-sdk create api \
    --group=cache \
    --version=v1 \
    --kind=Memcached
When prompted, enter y for creating both the resource and controller:
Create Resource [y/n]
y
Create Controller [y/n]
y
Example output
Writing scaffold for you to edit...
api/v1/memcached_types.go
controllers/memcached_controller.go
...
This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go.
5.4.2.3.1. Defining the API
Define the API for the Memcached custom resource (CR).
Procedure
Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status:

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// +kubebuilder:validation:Minimum=0
	// Size is the size of the memcached deployment
	Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
	// Nodes are the names of the memcached pods
	Nodes []string `json:"nodes"`
}

Update the generated code for the resource type:
$ make generate
Tip: After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.
The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.
5.4.2.3.2. Generating CRD manifests
After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, the CRD manifests can be generated and updated.
Procedure
Run the following command to generate and update CRD manifests:
$ make manifests
This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.
5.4.2.3.2.1. About OpenAPI validation
OpenAPI v3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties of a Memcached custom resource (CR) when it is created or updated.
Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix.
5.4.2.4. Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation:
Example 5.28. Example memcached_controller.go

/*
Copyright 2020.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package controllers

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"

	"reflect"

	"context"

	"github.com/go-logr/logr"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	cachev1alpha1 "github.com/example/memcached-operator/api/v1alpha1"
)

// MemcachedReconciler reconciles a Memcached object
type MemcachedReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}

// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the Memcached object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.7.0/pkg/reconcile
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := r.Log.WithValues("memcached", req.NamespacedName)

	// Fetch the Memcached instance
	memcached := &cachev1alpha1.Memcached{}
	err := r.Get(ctx, req.NamespacedName, memcached)
	if err != nil {
		if errors.IsNotFound(err) {
			// Request object not found, could have been deleted after reconcile request.
			// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
			// Return and don't requeue
			log.Info("Memcached resource not found. Ignoring since object must be deleted")
			return ctrl.Result{}, nil
		}
		// Error reading the object - requeue the request.
		log.Error(err, "Failed to get Memcached")
		return ctrl.Result{}, err
	}

	// Check if the deployment already exists, if not create a new one
	found := &appsv1.Deployment{}
	err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
	if err != nil && errors.IsNotFound(err) {
		// Define a new deployment
		dep := r.deploymentForMemcached(memcached)
		log.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
		err = r.Create(ctx, dep)
		if err != nil {
			log.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
			return ctrl.Result{}, err
		}
		// Deployment created successfully - return and requeue
		return ctrl.Result{Requeue: true}, nil
	} else if err != nil {
		log.Error(err, "Failed to get Deployment")
		return ctrl.Result{}, err
	}

	// Ensure the deployment size is the same as the spec
	size := memcached.Spec.Size
	if *found.Spec.Replicas != size {
		found.Spec.Replicas = &size
		err = r.Update(ctx, found)
		if err != nil {
			log.Error(err, "Failed to update Deployment", "Deployment.Namespace", found.Namespace, "Deployment.Name", found.Name)
			return ctrl.Result{}, err
		}
		// Spec updated - return and requeue
		return ctrl.Result{Requeue: true}, nil
	}

	// Update the Memcached status with the pod names
	// List the pods for this memcached's deployment
	podList := &corev1.PodList{}
	listOpts := []client.ListOption{
		client.InNamespace(memcached.Namespace),
		client.MatchingLabels(labelsForMemcached(memcached.Name)),
	}
	if err = r.List(ctx, podList, listOpts...); err != nil {
		log.Error(err, "Failed to list pods", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name)
		return ctrl.Result{}, err
	}
	podNames := getPodNames(podList.Items)

	// Update status.Nodes if needed
	if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
		memcached.Status.Nodes = podNames
		err := r.Status().Update(ctx, memcached)
		if err != nil {
			log.Error(err, "Failed to update Memcached status")
			return ctrl.Result{}, err
		}
	}

	return ctrl.Result{}, nil
}

// deploymentForMemcached returns a memcached Deployment object
func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1alpha1.Memcached) *appsv1.Deployment {
	ls := labelsForMemcached(m.Name)
	replicas := m.Spec.Size

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      m.Name,
			Namespace: m.Namespace,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{
				MatchLabels: ls,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: ls,
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Image:   "memcached:1.4.36-alpine",
						Name:    "memcached",
						Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
						Ports: []corev1.ContainerPort{{
							ContainerPort: 11211,
							Name:          "memcached",
						}},
					}},
				},
			},
		},
	}
	// Set Memcached instance as the owner and controller
	ctrl.SetControllerReference(m, dep, r.Scheme)
	return dep
}

// labelsForMemcached returns the labels for selecting the resources
// belonging to the given memcached CR name.
func labelsForMemcached(name string) map[string]string {
	return map[string]string{"app": "memcached", "memcached_cr": name}
}

// getPodNames returns the pod names of the array of pods passed in
func getPodNames(pods []corev1.Pod) []string {
	var podNames []string
	for _, pod := range pods {
		podNames = append(podNames, pod.Name)
	}
	return podNames
}

// SetupWithManager sets up the controller with the Manager.
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1alpha1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		Complete(r)
}

The example controller runs the following reconciliation logic for each Memcached custom resource (CR):
- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the memcached pods.
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
5.4.2.4.1. Resources watched by the controller
The SetupWithManager() function in controllers/memcached_controller.go specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller:
import (
...
appsv1 "k8s.io/api/apps/v1"
...
)
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&cachev1.Memcached{}).
Owns(&appsv1.Deployment{}).
Complete(r)
}
NewControllerManagedBy() provides a controller builder that allows various controller configurations.
For(&cachev1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Memcached type Add, Update, or Delete event, the reconcile loop is sent a reconcile Request, a namespace and name key, for that Memcached object.
Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add, Update, or Delete event, the event handler maps the event to a reconcile request for the owner of the deployment, which in this case is the Memcached object for which the deployment was created.
5.4.2.4.2. Controller configurations
You can initialize a controller by using many other useful configurations. For example:
Set the maximum number of concurrent reconciles for the controller by using the MaxConcurrentReconciles option, which defaults to 1:

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		WithOptions(controller.Options{
			MaxConcurrentReconciles: 2,
		}).
		Complete(r)
}

- Filter watch events using predicates.
- Choose the type of EventHandler to change how a watch event translates to reconcile requests for the reconcile loop. For Operator relationships that are more complex than primary and secondary resources, you can use the EnqueueRequestsFromMapFunc handler to transform a watch event into an arbitrary set of reconcile requests.
For more details on these and other configurations, see the upstream Builder and Controller GoDocs.
5.4.2.4.3. Reconcile loop
Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a namespace and name key used to look up the primary resource object, Memcached, from the cache:
import (
ctrl "sigs.k8s.io/controller-runtime"
cachev1 "github.com/example-inc/memcached-operator/api/v1"
...
)
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
// Lookup the Memcached instance for this reconcile request
memcached := &cachev1.Memcached{}
err := r.Get(ctx, req.NamespacedName, memcached)
...
}
Based on the return values, result, and error, the request might be requeued and the reconcile loop might be triggered again:
// Reconcile successful - don't requeue
return ctrl.Result{}, nil
// Reconcile failed due to error - requeue
return ctrl.Result{}, err
// Requeue for any reason other than an error
return ctrl.Result{Requeue: true}, nil
You can set the Result.RequeueAfter field to requeue the request after a grace period:
import "time"
// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
You can return a Result with RequeueAfter set to periodically reconcile a CR.
For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.
5.4.2.4.4. Permissions and RBAC manifests
The controller requires certain RBAC permissions to interact with the resources it manages. These are specified using RBAC markers, such as the following:
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
...
}
The ClusterRole object manifest at config/rbac/role.yaml is generated from these markers by the controller-gen utility every time the make manifests command is run.
5.4.2.5. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
Before running your Go-based Operator as either a deployment on OpenShift Container Platform or as a bundle that uses OLM, ensure that your project has been updated to use supported images.
5.4.2.5.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:
$ make install run
Example output
...
2021-01-10T21:09:29.016-0700	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
2021-01-10T21:09:29.017-0700	INFO	setup	starting manager
2021-01-10T21:09:29.017-0700	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
2021-01-10T21:09:29.018-0700	INFO	controller-runtime.manager.controller.memcached	Starting EventSource	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "source": "kind source: /, Kind="}
2021-01-10T21:09:29.218-0700	INFO	controller-runtime.manager.controller.memcached	Starting Controller	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached"}
2021-01-10T21:09:29.218-0700	INFO	controller-runtime.manager.controller.memcached	Starting workers	{"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "worker count": 1}
5.4.2.5.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Prerequisites
- Prepared your Go-based Operator to run on OpenShift Container Platform by updating the project to use supported images
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.
Verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.4.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.4.2.5.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.8+ installed
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:
$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:
$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.4.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    [-n <namespace>] \
    <registry>/<user>/<bundle_image_name>:<tag>
By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required objects, including RBAC.
5.4.2.6. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:
$ oc project memcached-operator-system
Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
...
spec:
...
  size: 3
Create the CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml
Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:
$ oc get deployments
Example output
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           8m
memcached-sample                        3/3     3            3           1m
Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods
Example output
NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m
Check the CR status:
$ oc get memcached/memcached-sample -o yaml
Example output
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
...
  name: memcached-sample
...
spec:
  size: 3
status:
  nodes:
  - memcached-sample-6fd7c98d8-7dqdr
  - memcached-sample-6fd7c98d8-g5k7v
  - memcached-sample-6fd7c98d8-m7vn7
Update the deployment size.
Update the config/samples/cache_v1_memcached.yaml file to change the spec.size field in the Memcached CR from 3 to 5:
$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge
Confirm that the Operator changes the deployment size:
$ oc get deployments
Example output
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           10m
memcached-sample                        5/5     5            5           3m
Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:
$ make undeploy
If you used the operator-sdk run bundle command to test the Operator, run the following command:
$ operator-sdk cleanup <project_name>
5.4.3. Project layout for Go-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.4.3.1. Go-based project layout
Go-based Operator projects, the default type, generated using the operator-sdk init command contain the following files and directories:
| File or directory | Purpose |
|---|---|
| main.go | Main program of the Operator. This instantiates a new manager that registers all custom resource definitions (CRDs) in the apis/ directory and starts all controllers in the controllers/ directory. |
| apis/ | Directory tree that defines the APIs of the CRDs. You must edit the apis/<version>/<kind>_types.go files to define the API for each resource type. |
| controllers/ | Controller implementations. Edit the controllers/<kind>_controller.go files to define the reconcile logic of the controller for each resource type. |
| config/ | Kubernetes manifests used to deploy your controller on a cluster, including CRDs, RBAC, and certificates. |
| Makefile | Targets used to build and deploy your controller. |
| Dockerfile | Instructions used by a container engine to build your Operator. |
| manifests/ | Kubernetes manifests for registering CRDs, setting up RBAC, and deploying the Operator as a deployment. |
5.5. Ansible-based Operators
5.5.1. Getting started with Operator SDK for Ansible-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running an Ansible-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Ansible-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.5.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Ansible version v2.9.0
- Ansible Runner version v1.1.0+
- Ansible Runner HTTP Event Emitter plugin version v1.0.0+
- OpenShift Python client version v0.11.2+
- Logged into an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.1.2. Creating and deploying Ansible-based Operators
You can build and deploy a simple Ansible-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator
Change into the project directory:
$ cd memcached-operator
Run the operator-sdk init command with the ansible plugin to initialize the project:
$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role
The --generate-role flag generates an Ansible role for the API.
Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:
$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
Run the Operator.
Install the CRD:
$ make install
Deploy the project to the cluster. Set IMG to the image that you pushed:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system
Watch for the Operator to reconcile the CR:
$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Example output
...
I0205 17:48:45.881666       7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator
{"level":"info","ts":1612547325.8819902,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612547325.98242,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612547325.9824686,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}
{"level":"info","ts":1612547348.8311093,"logger":"runner","msg":"Ansible-runner exited successfully","job":"4037200794235010051","name":"memcached-sample","namespace":"memcached-operator-system"}
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.5.1.3. Next steps
- See Operator SDK tutorial for Ansible-based Operators for a more in-depth walkthrough on building an Ansible-based Operator.
5.5.2. Operator SDK tutorial for Ansible-based Operators
Operator developers can take advantage of Ansible support in the Operator SDK to build an example Ansible-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This tutorial walks through the following process:
- Create a Memcached deployment
- Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
- Update the Memcached CR status using the status writer with the names of the memcached pods
This process is accomplished by using two centerpieces of the Operator Framework:
- Operator SDK: the operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM): installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Ansible-based Operators.
5.5.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Ansible version v2.9.0
- Ansible Runner version v1.1.0+
- Ansible Runner HTTP Event Emitter plugin version v1.0.0+
- OpenShift Python client version v0.11.2+
- Logged into an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator
Change to the directory:
$ cd $HOME/projects/memcached-operator
Run the operator-sdk init command with the ansible plugin to initialize the project:
$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
5.5.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Ansible. For example:
domain: example.com
layout: ansible.sdk.operatorframework.io/v1
projectName: memcached-operator
version: 3
5.5.2.3. Creating an API
Use the Operator SDK CLI to create a Memcached API.
Procedure
Run the following command to create an API with group cache, version v1, and kind Memcached:
$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role
The --generate-role flag generates an Ansible role for the API.
After creating the API, your Operator project updates with the following structure:
- Memcached CRD: includes a sample Memcached resource
- Manager: program that reconciles the state of the cluster to the desired state by using:
  - A reconciler, either an Ansible role or playbook
  - A watches.yaml file, which connects the Memcached resource to the memcached Ansible role
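As a sketch, the entry that the create api command appends to watches.yaml for this API resembles the following. The field values are taken from the group, version, and kind used above; the exact role path written by the SDK may differ by version.

```yaml
# Sketch of the watches.yaml entry for the Memcached API; the role path
# is an assumption and may differ depending on the SDK version.
- version: v1
  group: cache.example.com
  kind: Memcached
  role: memcached
```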
5.5.2.4. Modifying the manager
Update your Operator project to provide the reconcile logic, in the form of an Ansible role, which runs every time a Memcached resource is created, updated, or deleted.
Procedure
Update the roles/memcached/tasks/main.yml file with the following structure:

---
- name: start memcached
  community.kubernetes.k8s:
    definition:
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: '{{ ansible_operator_meta.name }}-memcached'
        namespace: '{{ ansible_operator_meta.namespace }}'
      spec:
        replicas: "{{size}}"
        selector:
          matchLabels:
            app: memcached
        template:
          metadata:
            labels:
              app: memcached
          spec:
            containers:
            - name: memcached
              command:
              - memcached
              - -m=64
              - -o
              - modern
              - -v
              image: "docker.io/memcached:1.4.36-alpine"
              ports:
              - containerPort: 11211

This memcached role ensures that a memcached deployment exists and sets the deployment size.

Set default values for variables used in your Ansible role by editing the roles/memcached/defaults/main.yml file:

---
# defaults file for Memcached
size: 1

Update the Memcached sample resource in the config/samples/cache_v1_memcached.yaml file with the following structure:

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 3

The key-value pairs in the custom resource (CR) spec are passed to Ansible as extra variables.
The names of all variables in the spec field are converted to snake case, meaning lowercase with underscores, by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible. You can disable this case conversion by setting the snakeCaseParameters option to false in your watches.yaml file.
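For example, a watches.yaml entry that disables the conversion might look like the following sketch. The group, version, kind, and role are taken from this tutorial's API; the per-GVK placement of snakeCaseParameters is an assumption.

```yaml
# Sketch: disable camelCase-to-snake_case conversion for one GVK
- version: v1
  group: cache.example.com
  kind: Memcached
  role: memcached
  snakeCaseParameters: false
```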
5.5.2.5. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.5.2.5.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:
$ make install run
Example output
... {"level":"info","ts":1612589622.7888272,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"} {"level":"info","ts":1612589622.7897573,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"} {"level":"info","ts":1612589622.789971,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"} {"level":"info","ts":1612589622.7899997,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"} {"level":"info","ts":1612589622.8904517,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"} {"level":"info","ts":1612589622.8905244,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}
5.5.2.5.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.
Verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.5.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.5.2.5.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.8+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:
$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.

Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:
$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.5.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    [-n <namespace>] \
    <registry>/<user>/<bundle_image_name>:<tag>
By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
This command performs the following actions:
- Creates an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required objects, including RBAC.
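For illustration, a Subscription similar to the one the command creates might look like the following sketch. All names and the channel are hypothetical; the actual objects generated by run bundle can differ.

```yaml
# Hypothetical Subscription resembling what operator-sdk run bundle creates;
# the metadata names, channel, and catalog source name are illustrative only.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: memcached-operator
  namespace: memcached-operator-system
spec:
  channel: alpha
  name: memcached-operator
  source: memcached-operator-catalog
  sourceNamespace: memcached-operator-system
```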
5.5.2.6. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:
$ oc project memcached-operator-system
Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
...
spec:
...
  size: 3
Create the CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml
Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:
$ oc get deployments
Example output
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           8m
memcached-sample                        3/3     3            3           1m
Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods
Example output
NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m
Check the CR status:
$ oc get memcached/memcached-sample -o yaml
Example output
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
...
  name: memcached-sample
...
spec:
  size: 3
status:
  nodes:
  - memcached-sample-6fd7c98d8-7dqdr
  - memcached-sample-6fd7c98d8-g5k7v
  - memcached-sample-6fd7c98d8-m7vn7
Update the deployment size.
Update the config/samples/cache_v1_memcached.yaml file to change the spec.size field in the Memcached CR from 3 to 5:
$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge
Confirm that the Operator changes the deployment size:
$ oc get deployments
Example output
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           10m
memcached-sample                        5/5     5            5           3m
Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:
$ make undeploy
If you used the operator-sdk run bundle command to test the Operator, run the following command:
$ operator-sdk cleanup <project_name>
5.5.3. Project layout for Ansible-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.5.3.1. Ansible-based project layout
Ansible-based Operator projects generated by using the operator-sdk init --plugins ansible command contain the following files and directories:
| File or directory | Purpose |
|---|---|
| Dockerfile | Dockerfile for building the container image for the Operator. |
| Makefile | Targets for building, publishing, deploying the container image that wraps the Operator binary, and targets for installing and uninstalling the custom resource definition (CRD). |
| PROJECT | YAML file containing metadata information for the Operator. |
| config/crd | Base CRD files and the kustomization.yaml settings. |
| config/default | Collects all Operator manifests for deployment. Used by the make deploy command. |
| config/manager | Controller manager deployment. |
| config/prometheus | ServiceMonitor resource for monitoring the Operator. |
| config/rbac | Role and role binding for leader election and authentication proxy. |
| config/samples | Sample resources created for the CRDs. |
| config/testing | Sample configurations for testing. |
| playbooks/ | A subdirectory for the playbooks to run. |
| roles/ | Subdirectory for the roles tree to run. |
| watches.yaml | Group/version/kind (GVK) of the resources to watch, and the Ansible invocation method. New entries are added by using the create api command. |
| requirements.yml | YAML file containing the Ansible collections and role dependencies to install during a build. |
| molecule/ | Molecule scenarios for end-to-end testing of your role and Operator. |
5.5.4. Ansible support in Operator SDK
5.5.4.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
| Field | Description |
|---|---|
| apiVersion | Version of the CR to be created. |
| kind | Kind of the CR to be created. |
| metadata | Kubernetes-specific metadata to be created. |
| spec (optional) | Key-value list of variables which are passed to Ansible. This field is empty by default. |
| status (optional) | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information to the resource status. |
| annotations (optional) | Kubernetes-specific annotations to be appended to the CR. |
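As an illustrative sketch, a CR using these fields might look like the following. The group, kind, and spec keys are hypothetical, and the status content is shown only to illustrate what the Operator manages; it is not written by hand.

```yaml
# Hypothetical CR showing the fields described above; status is managed
# by the Operator and appears only after reconciliation.
apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world"
status:
  conditions:
  - type: Running
    status: "True"
```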
The following list of CR annotations modify the behavior of the Operator:
| Annotation | Description |
|---|---|
| ansible.operator-sdk/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed by using the standard Golang package time. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds. |
Example Ansible-based Operator annotation
apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
name: "example"
annotations:
ansible.operator-sdk/reconcile-period: "30s"
5.5.4.2. watches.yaml file
A group/version/kind (GVK) is a unique identifier for a Kubernetes API. The watches.yaml file contains a list of mappings from custom resources, identified by their GVK, to Ansible roles or playbooks. The Operator expects this mapping file in a predefined location at /opt/ansible/watches.yaml.
| Field | Description |
|---|---|
| group | Group of the CR to watch. |
| version | Version of the CR to watch. |
| kind | Kind of the CR to watch. |
| role (default) | Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value would be /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field. |
| playbook | Path to the Ansible playbook added to the container. This playbook is expected to be a way to call roles. This field is mutually exclusive with the role field. |
| reconcilePeriod (optional) | The reconciliation interval, how often the role or playbook is run, for a given CR. |
| manageStatus (optional) | When set to true (default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller. |
Example watches.yaml file
- version: v1alpha1
group: test1.example.com
kind: Test1
role: /opt/ansible/roles/Test1
- version: v1alpha1
group: test2.example.com
kind: Test2
playbook: /opt/ansible/playbook.yml
- version: v1alpha1
group: test3.example.com
kind: Test3
playbook: /opt/ansible/test3.yml
reconcilePeriod: 0
manageStatus: false
5.5.4.2.1. Advanced options
Advanced features can be enabled by adding them to your watches.yaml file per GVK. They can go below the group, version, kind, and playbook or role fields.
Some features can be overridden per resource using an annotation on that CR. The options that can be overridden have the annotation specified below.
| Feature | YAML key | Description | Annotation for override | Default value |
|---|---|---|---|---|
| Reconcile period | reconcilePeriod | Time between reconcile runs for a particular CR. | ansible.operator-sdk/reconcile-period | 1m |
| Manage status | manageStatus | Allows the Operator to manage the conditions section of each CR status section. | | true |
| Watch dependent resources | watchDependentResources | Allows the Operator to dynamically watch resources that are created by Ansible. | | true |
| Watch cluster-scoped resources | watchClusterScopedResources | Allows the Operator to watch cluster-scoped resources that are created by Ansible. | | false |
| Max runner artifacts | maxRunnerArtifacts | Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. | ansible.operator-sdk/max-runner-artifacts | 20 |
Example watches.yaml file with advanced options
- version: v1alpha1
group: app.example.com
kind: AppService
playbook: /opt/ansible/playbook.yml
maxRunnerArtifacts: 30
reconcilePeriod: 5s
manageStatus: False
watchDependentResources: False
5.5.4.3. Extra variables sent to Ansible
Extra variables can be sent to Ansible, which are then managed by the Operator. The spec section of the custom resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook command. The Operator also passes along additional variables under the meta field for the name and namespace of the CR.
For the following CR example:
apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
name: "example"
spec:
message: "Hello world 2"
newParameter: "newParam"
The structure passed to Ansible as extra variables is:
{ "meta": {
"name": "<cr_name>",
"namespace": "<cr_namespace>",
},
"message": "Hello world 2",
"new_parameter": "newParam",
"_app_example_com_database": {
<full_crd>
},
}
The message and newParameter fields are set in the top level as new extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed by using dot notation in Ansible, for example:
---
- debug:
msg: "name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}"
5.5.4.4. Ansible Runner directory
Ansible Runner keeps information about Ansible runs in the container. This information is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.
5.5.5. Kubernetes Collection for Ansible
To manage the lifecycle of your application on Kubernetes using Ansible, you can use the Kubernetes Collection for Ansible. This collection of Ansible modules allows a developer to either leverage their existing Kubernetes resource files written in YAML or express the lifecycle management in native Ansible.
One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
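As a brief sketch of that templating benefit, a resource definition inside an Ansible task can interpolate variables with Jinja syntax. The variable names (app_name, service_port) are hypothetical and not part of this tutorial's project.

```yaml
# Sketch: a Service definition templated with Jinja variables;
# app_name and service_port are illustrative variable names.
- name: create templated service
  community.kubernetes.k8s:
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: "{{ app_name }}"
      spec:
        selector:
          app: "{{ app_name }}"
        ports:
        - port: "{{ service_port }}"
```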
This section goes into detail on usage of the Kubernetes Collection. To get started, install the collection on your local workstation and test it using a playbook before moving on to using it within an Operator.
5.5.5.1. Installing the Kubernetes Collection for Ansible
You can install the Kubernetes Collection for Ansible on your local workstation.
Procedure
Install Ansible 2.9+:
$ sudo dnf install ansible
Install the OpenShift python client package:
$ pip3 install openshift
Install the Kubernetes Collection using one of the following methods:
You can install the collection directly from Ansible Galaxy:
$ ansible-galaxy collection install community.kubernetes
If you have already initialized your Operator, you might have a requirements.yml file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the community.kubernetes collection as well as the operator_sdk.util collection, which provides modules and plugins for Operator-specific functions.
To install the dependent modules from the requirements.yml file:
$ ansible-galaxy collection install -r requirements.yml
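A generated requirements.yml typically resembles the following sketch; the version pins shown here are illustrative, not the exact versions your SDK release writes.

```yaml
# Sketch of a requirements.yml; the version values are hypothetical.
collections:
  - name: community.kubernetes
    version: "1.2.1"
  - name: operator_sdk.util
    version: "0.1.0"
```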
5.5.5.2. Testing the Kubernetes Collection locally
Operator developers can run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.
Prerequisites
- Initialize an Ansible-based Operator project and create an API that has a generated Ansible role by using the Operator SDK
- Install the Kubernetes Collection for Ansible
Procedure
In your Ansible-based Operator project directory, modify the roles/<kind>/tasks/main.yml file with the Ansible logic that you want. The roles/<kind>/ directory is created when you use the --generate-role flag while creating an API. The <kind> replaceable matches the kind that you specified for the API.

The following example creates and deletes a config map based on the value of a variable named state:

---
- name: set ConfigMap example-config to {{ state }}
  community.kubernetes.k8s:
    api_version: v1
    kind: ConfigMap
    name: example-config
    namespace: default
    state: "{{ state }}"
  ignore_errors: true

Modify the roles/<kind>/defaults/main.yml file to set state to present by default:

---
state: present

Create an Ansible playbook by creating a playbook.yml file in the top level of your project directory, and include your <kind> role:

---
- hosts: localhost
  roles:
    - <kind>

Run the playbook:
$ ansible-playbook playbook.yml
Example output
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ********************************************************************************

TASK [Gathering Facts] ********************************************************************************
ok: [localhost]

TASK [memcached : set ConfigMap example-config to present] ********************************************************************************
changed: [localhost]

PLAY RECAP ********************************************************************************
localhost   : ok=2   changed=1   unreachable=0   failed=0   skipped=0   rescued=0   ignored=0

Verify that the config map was created:
$ oc get configmaps
Example output
NAME             DATA   AGE
example-config   0      2m1s
Rerun the playbook setting state to absent:
$ ansible-playbook playbook.yml --extra-vars state=absent
Example output
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ********************************************************************************

TASK [Gathering Facts] ********************************************************************************
ok: [localhost]

TASK [memcached : set ConfigMap example-config to absent] ********************************************************************************
changed: [localhost]

PLAY RECAP ********************************************************************************
localhost   : ok=2   changed=1   unreachable=0   failed=0   skipped=0   rescued=0   ignored=0

Verify that the config map was deleted:
$ oc get configmaps
5.5.5.3. Next steps
- See Using Ansible inside an Operator for details on triggering your custom Ansible logic inside of an Operator when a custom resource (CR) changes.
5.5.6. Using Ansible inside an Operator
After you are familiar with using the Kubernetes Collection for Ansible locally, you can trigger the same Ansible logic inside of an Operator when a custom resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the watches.yaml file.
5.5.6.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
| Field | Description |
|---|---|
| apiVersion | Version of the CR to be created. |
| kind | Kind of the CR to be created. |
| metadata | Kubernetes-specific metadata to be created. |
| spec (optional) | Key-value list of variables which are passed to Ansible. This field is empty by default. |
| status (optional) | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information to the resource status. |
| annotations (optional) | Kubernetes-specific annotations to be appended to the CR. |
The following list of CR annotations modify the behavior of the Operator:
| Annotation | Description |
|---|---|
| ansible.operator-sdk/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed by using the standard Golang package time. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds. |
Example Ansible-based Operator annotation
apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
name: "example"
annotations:
ansible.operator-sdk/reconcile-period: "30s"
5.5.6.2. Testing an Ansible-based Operator locally
You can test the logic inside of an Ansible-based Operator running locally by using the make run command from the top-level directory of your Operator project. The make run Makefile target runs the ansible-operator binary locally, which reads from the watches.yaml file and uses your ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s modules do.

Note: You can customize the roles path by setting the environment variable ANSIBLE_ROLES_PATH or by using the ansible-roles-path flag. If the role is not found in the ANSIBLE_ROLES_PATH value, the Operator looks for it in {{current directory}}/roles.
Prerequisites
- Ansible Runner version v1.1.0+
- Ansible Runner HTTP Event Emitter plugin version v1.0.0+
- Performed the previous steps for testing the Kubernetes Collection locally
Procedure
Install your custom resource definition (CRD) and proper role-based access control (RBAC) definitions for your custom resource (CR):
$ make install
Example output
/usr/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created
Run the make run command:
$ make run
Example output
/home/user/memcached-operator/bin/ansible-operator run
{"level":"info","ts":1612739145.2871568,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.8.0","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
...
{"level":"info","ts":1612739148.347306,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1612739148.3488882,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2}
{"level":"info","ts":1612739148.3490262,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
{"level":"info","ts":1612739148.3490646,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
{"level":"info","ts":1612739148.350217,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
{"level":"info","ts":1612739148.3506632,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1612739148.350784,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612739148.5511978,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612739148.5512562,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":8}

With the Operator now watching your CR for events, the creation of a CR will trigger your Ansible role to run.

Note: Consider an example config/samples/<gvk>.yaml CR manifest:

apiVersion: <group>.example.com/v1alpha1
kind: <kind>
metadata:
  name: "<kind>-sample"

Because the spec field is not set, Ansible is invoked with no extra variables. Passing extra variables from a CR to Ansible is covered in another section. It is important to set reasonable defaults for the Operator.

Create an instance of your CR with the default variable state set to present:
$ oc apply -f config/samples/<gvk>.yaml
config map was created:example-config$ oc get configmapsExample output
NAME STATUS AGE example-config Active 3sModify your
file to set theconfig/samples/<gvk>.yamlfield tostate. For example:absentapiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absentApply the changes:
$ oc apply -f config/samples/<gvk>.yamlConfirm that the config map is deleted:
$ oc get configmap
5.5.6.3. Testing an Ansible-based Operator on the cluster
After you have tested your custom Ansible logic locally inside of an Operator, you can test the Operator inside of a pod on an OpenShift Container Platform cluster, which is preferred for production use.
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.
Verify that the Operator is running:
$ oc get deployment -n <project_name>-system
Example output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.5.6.4. Ansible logs
Ansible-based Operators provide logs about the Ansible run, which can be useful for debugging your Ansible tasks. The logs can also contain detailed information about the internals of the Operator and its interactions with Kubernetes.
5.5.6.4.1. Viewing Ansible logs
Prerequisites
- Ansible-based Operator running as a deployment on a cluster
Procedure
To view logs from an Ansible-based Operator, run the following command:
$ oc logs deployment/<project_name>-controller-manager \
    -c manager \
    -n <namespace>

Example output
{"level":"info","ts":1612732105.0579333,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.8.0","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
{"level":"info","ts":1612732105.0587437,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
I0207 21:08:26.110949       7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s
{"level":"info","ts":1612732107.768025,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1612732107.768796,"logger":"watches","msg":"Environment variable not set; using default value","envVar":"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM","default":2}
{"level":"info","ts":1612732107.7688773,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
{"level":"info","ts":1612732107.7688901,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"cache.example.com","Options.Version":"v1","Options.Kind":"Memcached"}
{"level":"info","ts":1612732107.770032,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
I0207 21:08:27.770185       7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator...
{"level":"info","ts":1612732107.770202,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0207 21:08:27.784854       7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator
{"level":"info","ts":1612732107.7850506,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting EventSource","source":"kind source: cache.example.com/v1, Kind=Memcached"}
{"level":"info","ts":1612732107.8853772,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting Controller"}
{"level":"info","ts":1612732107.8854098,"logger":"controller-runtime.manager.controller.memcached-controller","msg":"Starting workers","worker count":4}
5.5.6.4.2. Enabling full Ansible results in logs
You can set the ANSIBLE_DEBUG_LOGS environment variable to True to enable the full Ansible result in logs, which can be helpful when debugging.
Procedure
Edit the config/default/manager_auth_proxy_patch.yaml and config/manager/manager.yaml files to include the following configuration:

containers:
- name: manager
  env:
  - name: ANSIBLE_DEBUG_LOGS
    value: "True"
5.5.6.4.3. Enabling verbose debugging in logs
While developing an Ansible-based Operator, it can be helpful to enable additional debugging in logs.
Procedure
Add the ansible.sdk.operatorframework.io/verbosity annotation to your custom resource to enable the verbosity level that you want. For example:

apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
  annotations:
    "ansible.sdk.operatorframework.io/verbosity": "4"
spec:
  size: 4
5.5.7. Custom resource status management
5.5.7.1. About custom resource status in Ansible-based Operators
Ansible-based Operators automatically update custom resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:
status:
conditions:
- ansibleResult:
changed: 3
completion: 2018-12-03T13:45:57.13329
failures: 1
ok: 6
skipped: 0
lastTransitionTime: 2018-12-03T13:45:57Z
message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno
113] No route to host>'
reason: Failed
status: "True"
type: Failure
- lastTransitionTime: 2018-12-03T13:46:13Z
message: Running reconciliation
reason: Running
status: "True"
type: Running
Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.
By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.
5.5.7.2. Tracking custom resource status manually
You can use the operator_sdk.util collection in Ansible-based Operators to manually manage custom resource (CR) status from your application.
Prerequisites
- Ansible-based Operator project created by using the Operator SDK
Procedure
Update the watches.yaml file with a manageStatus field set to false:

- version: v1
  group: api.example.com
  kind: <kind>
  role: <role>
  manageStatus: false

Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key test and value data, operator_sdk.util can be used as shown:

- operator_sdk.util.k8s_status:
    api_version: app.example.com/v1
    kind: <kind>
    name: "{{ ansible_operator_meta.name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
    status:
      test: data

You can declare collections in the meta/main.yml file for the role, which is included for scaffolded Ansible-based Operators:

collections:
- operator_sdk.util

After declaring collections in the role meta, you can invoke the k8s_status module directly:

k8s_status:
  ...
  status:
    key1: value1
5.6. Helm-based Operators
5.6.1. Getting started with Operator SDK for Helm-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Helm charts to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running a Helm-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Helm-based Operator for Nginx and deploy it to a cluster.
5.6.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Logged into an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.6.1.2. Creating and deploying Helm-based Operators
You can build and deploy a simple Helm-based Operator for Nginx by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir nginx-operator

Change into the project directory:

$ cd nginx-operator

Run the operator-sdk init command with the helm plugin to initialize the project:

$ operator-sdk init \
    --plugins=helm
Create an API.
Create a simple Nginx API:
$ operator-sdk create api \
    --group demo \
    --version v1 \
    --kind Nginx

This API uses the built-in Helm chart boilerplate from the helm create command.

Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.
Install the CRD:
$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Add a security context constraint (SCC).
The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following SCC to the service account for the nginx-sample pod:

$ oc adm policy add-scc-to-user \
    anyuid system:serviceaccount:nginx-operator-system:nginx-sample

Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/demo_v1_nginx.yaml \
    -n nginx-operator-system

Watch for the Operator to reconcile the CR:

$ oc logs deployment.apps/nginx-operator-controller-manager \
    -c manager \
    -n nginx-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.6.1.3. Next steps
- See Operator SDK tutorial for Helm-based Operators for a more in-depth walkthrough on building a Helm-based Operator.
5.6.2. Operator SDK tutorial for Helm-based Operators
Operator developers can take advantage of Helm support in the Operator SDK to build an example Helm-based Operator for Nginx and manage its lifecycle. This tutorial walks through the following process:
- Create an Nginx deployment
- Ensure that the deployment size is the same as specified by the Nginx custom resource (CR) spec
- Update the Nginx CR status using the status writer with the names of the nginx pods
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK
- The operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM)
- Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Helm-based Operators.
5.6.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Logged into an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.6.2.2. Creating a project
Use the Operator SDK CLI to create a project called nginx-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/nginx-operator

Change to the directory:

$ cd $HOME/projects/nginx-operator

Run the operator-sdk init command with the helm plugin to initialize the project:

$ operator-sdk init \
    --plugins=helm \
    --domain=example.com \
    --group=demo \
    --version=v1 \
    --kind=Nginx

Note: By default, the helm plugin initializes a project using a boilerplate Helm chart. You can use additional flags, such as the --helm-chart flag, to initialize a project using an existing Helm chart.

The init command creates the nginx-operator project specifically for watching a resource with API version demo.example.com/v1 and kind Nginx.

For Helm-based projects, the init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator.
5.6.2.2.1. Existing Helm charts
Instead of creating your project with a boilerplate Helm chart, you can alternatively use an existing chart, either from your local file system or a remote chart repository, by using the following flags:
- --helm-chart
- --helm-chart-repo
- --helm-chart-version
If the --helm-chart flag is specified, the --group, --version, and --kind flags become optional. If left unset, the following default values are used:

| Flag | Value |
|---|---|
| --group | charts |
| --version | v1 |
| --kind | Deduced from the specified chart |
If the --helm-chart flag specifies a local chart archive, for example example-chart-1.2.0.tgz, or a directory, the chart is validated and unpacked or copied into the project. Otherwise, the Operator SDK attempts to fetch the chart from a remote repository.

If a custom repository URL is not specified by the --helm-chart-repo flag, the following chart reference formats are supported:
| Format | Description |
|---|---|
| <repo_name>/<chart_name> | Fetch the Helm chart named <chart_name> from the Helm chart repository named <repo_name>, as specified in the $HELM_HOME configuration. |
| <url> | Fetch the Helm chart archive at the specified URL. |
If a custom repository URL is specified by --helm-chart-repo, the following chart reference format is supported:

| Format | Description |
|---|---|
| <chart_name> | Fetch the Helm chart named <chart_name> in the Helm chart repository specified by the --helm-chart-repo URL value. |
If the --helm-chart-version flag is not set, the Operator SDK fetches the latest available version of the Helm chart. Otherwise, it fetches the specified version. The --helm-chart-version flag is not used when the chart specified by the --helm-chart flag refers to a specific version, for example when it is a local path or a URL.
For more details and examples, run:
$ operator-sdk init --plugins helm --help
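As an illustration of how these flags combine, the following hypothetical invocation initializes a project from a chart fetched from a remote repository. The repository URL, chart name, and chart version shown here are placeholders, not values taken from this tutorial:

```shell
# Hypothetical example: scaffold a Helm-based Operator from an existing
# remote chart. --helm-chart names the chart, --helm-chart-repo points at
# its repository, and --helm-chart-version pins a specific chart version.
operator-sdk init \
    --plugins=helm \
    --domain=example.com \
    --group=demo \
    --version=v1 \
    --kind=Nginx \
    --helm-chart=nginx \
    --helm-chart-repo=https://example.com/charts \
    --helm-chart-version=1.2.3
```

Because --helm-chart is set, the --group, --version, and --kind flags could also be omitted and would fall back to the defaults described above.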
5.6.2.2.2. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Helm. For example:
domain: example.com
layout: helm.sdk.operatorframework.io/v1
projectName: helm-operator
resources:
- group: demo
kind: Nginx
version: v1
version: 3
5.6.2.3. Understanding the Operator logic
For this example, the nginx-operator project executes the following reconciliation logic for each Nginx custom resource (CR):
- Create an Nginx deployment if it does not exist.
- Create an Nginx service if it does not exist.
- Create an Nginx ingress if it is enabled and does not exist.
- Ensure that the deployment, service, and optional ingress match the desired configuration as specified by the Nginx CR, for example the replica count, image, and service type.
By default, the nginx-operator project watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:
# Use the 'create api' subcommand to add watches to this file.
- group: demo
version: v1
kind: Nginx
chart: helm-charts/nginx
# +kubebuilder:scaffold:watch
5.6.2.3.1. Sample Helm chart
When a Helm Operator project is created, the Operator SDK creates a sample Helm chart that contains a set of templates for a simple Nginx release.
For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt template, each of which Helm chart developers use to convey helpful information about a release.
If you are not already familiar with Helm charts, review the Helm developer documentation.
5.6.2.3.2. Modifying the custom resource spec
Helm uses a concept called values to provide customizations to the defaults of a Helm chart, which are defined in the values.yaml file of the chart.
You can override these defaults by setting the desired values in the custom resource (CR) spec. You can use the number of replicas as an example.
Procedure
The helm-charts/nginx/values.yaml file has a value called replicaCount set to 1 by default. To have two Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

Edit the config/samples/demo_v1_nginx.yaml file to set replicaCount: 2:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
...
spec:
...
  replicaCount: 2

Similarly, the default service port is set to 80. To use 8080, edit the config/samples/demo_v1_nginx.yaml file to set spec.port: 8080, which adds the service port override:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 2
  service:
    port: 8080
The Helm Operator applies the entire spec as if it was the contents of a values file, just like the helm install -f ./overrides.yaml command.
5.6.2.4. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.6.2.4.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run

Example output
...
{"level":"info","ts":1612652419.9289865,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1612652419.9296563,"logger":"helm.controller","msg":"Watching resource","apiVersion":"demo.example.com/v1","kind":"Nginx","namespace":"","reconcilePeriod":"1m0s"}
{"level":"info","ts":1612652419.929983,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1612652419.930015,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting EventSource","source":"kind source: demo.example.com/v1, Kind=Nginx"}
{"level":"info","ts":1612652420.2307851,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting Controller"}
{"level":"info","ts":1612652420.2309358,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting workers","worker count":8}
5.6.2.4.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both the commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.6.2.4.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.6.2.4.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.8+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by --platform. With Buildah, the --build-arg flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.6.2.4.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    [-n <namespace>] \
    <registry>/<user>/<bundle_image_name>:<tag>

By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required objects, including RBAC.
5.6.2.5. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Nginx Operator, which provides the Nginx CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

$ oc project nginx-operator-system

Edit the sample Nginx CR manifest at config/samples/demo_v1_nginx.yaml to contain the following specification:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
...
spec:
...
  replicaCount: 3

The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following security context constraint (SCC) to the service account for the nginx-sample pod:

$ oc adm policy add-scc-to-user \
    anyuid system:serviceaccount:nginx-operator-system:nginx-sample

Create the CR:

$ oc apply -f config/samples/demo_v1_nginx.yaml

Ensure that the Nginx Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           8m
nginx-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Nginx pod names.
Check the pods:
$ oc get pods

Example output

NAME                           READY   STATUS    RESTARTS   AGE
nginx-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
nginx-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
nginx-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:

$ oc get nginx/nginx-sample -o yaml

Example output
apiVersion: demo.example.com/v1
kind: Nginx
metadata:
...
  name: nginx-sample
...
spec:
  replicaCount: 3
status:
  nodes:
  - nginx-sample-6fd7c98d8-7dqdr
  - nginx-sample-6fd7c98d8-g5k7v
  - nginx-sample-6fd7c98d8-m7vn7
Update the deployment size.
Change the spec.replicaCount field in the Nginx CR from 3 to 5:

$ oc patch nginx nginx-sample \
    -p '{"spec":{"replicaCount": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:

$ oc get deployments

Example output
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           10m
nginx-sample                        5/5     5            5           3m
Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.6.3. Project layout for Helm-based Operators
The operator-sdk CLI can generate or scaffold a number of packages and files for each Operator project.
5.6.3.1. Helm-based project layout
Helm-based Operator projects generated using the operator-sdk init --plugins helm command contain the following files and directories:
| File/folders | Purpose |
|---|---|
| config/ | Kustomize manifests for deploying the Operator on a Kubernetes cluster. |
| helm-charts/ | Helm chart initialized with the operator-sdk create api command. |
| Dockerfile | Used to build the Operator image with the make docker-build command. |
| watches.yaml | Group/version/kind (GVK) and Helm chart location. |
| Makefile | Targets used to manage the project. |
| PROJECT | YAML file containing metadata information for the Operator. |
5.6.4. Helm support in Operator SDK
5.6.4.1. Helm charts
One of the Operator SDK options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but can be sufficient for a surprising amount of use-cases as shown by the proliferation of Helm charts built by the Kubernetes community.
The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the spec field of the object is a list of configuration options that are typically described in the Helm values.yaml file. Instead of setting these values with flags using the Helm CLI, for example helm install -f values.yaml, you can express them within a custom resource (CR), which, as a native Kubernetes object, enables the benefits of applied RBAC and an audit trail.
For example, the following is a simple CR called Tomcat:
apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
name: example-app
spec:
replicaCount: 2
The replicaCount value, 2 in this case, is propagated into the template of the chart where the following is used:

{{ .Values.replicaCount }}
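A deployment template in the chart might consume that value along the following lines. This fragment is illustrative; the file path and surrounding fields in a real chart may differ:

```yaml
# Illustrative chart template fragment (for example, templates/deployment.yaml)
# consuming the replicaCount value supplied by the CR spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-tomcat
spec:
  replicas: {{ .Values.replicaCount }}
```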
After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:
$ oc get Tomcats --all-namespaces
There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a custom resource definition (CRD). Because it obeys RBAC, you can more easily prevent production changes.
5.7. Defining cluster service versions (CSVs)
A cluster service version (CSV), defined by a ClusterServiceVersion object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster.
The Operator SDK includes the CSV generator to generate a CSV for the current Operator project, customized using information contained in YAML manifests and Operator source files.
A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.
5.7.1. How CSV generation works
Operator bundle manifests, which include cluster service versions (CSVs), describe how to display, create, and manage an application with Operator Lifecycle Manager (OLM). The CSV generator in the Operator SDK, called by the generate bundle subcommand, is the first step towards publishing your Operator to a catalog and deploying it with OLM.

Typically, the generate kustomize manifests subcommand is run first to generate the input Kustomize bases that are consumed by the generate bundle subcommand. The make bundle command automates this by running the following subcommands in order:
- generate kustomize manifests
- generate bundle
- bundle validate
5.7.1.1. Generated files and resources
The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion (CSV) object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
The following resources are typically included in a CSV:
- Role
- Defines Operator permissions within a namespace.
- ClusterRole
- Defines cluster-wide Operator permissions.
- Deployment
- Defines how an Operand of an Operator is run in pods.
- CustomResourceDefinition (CRD)
- Defines custom resources that your Operator reconciles.
- Custom resource examples
- Examples of resources adhering to the spec of a particular CRD.
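As a rough sketch, these resources map into a CSV as shown below. All names here are placeholders, and a generated CSV contains many more fields than this skeleton:

```yaml
# Skeleton of a CSV showing where the listed resources appear.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.0.1
spec:
  install:
    strategy: deployment
    spec:
      permissions:              # namespace-scoped Role rules
      - serviceAccountName: example-operator
        rules: []
      clusterPermissions:       # cluster-wide ClusterRole rules
      - serviceAccountName: example-operator
        rules: []
      deployments:              # how the Operator itself runs in pods
      - name: example-operator-controller-manager
        spec: {}
  customresourcedefinitions:
    owned:                      # CRDs the Operator reconciles
    - name: examples.demo.example.com
      kind: Example
      version: v1
```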
5.7.1.2. Version management
The --version flag for the generate bundle subcommand supplies a semantic version for your bundle when creating one for the first time and when upgrading an existing one.

By setting the VERSION variable in your Makefile, the --version flag is automatically invoked using that value when the generate bundle subcommand is run by the make bundle command.
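A scaffolded Makefile wires these pieces together along the following lines. This is a simplified sketch; the generated file contains additional variables and targets, and details vary by SDK version:

```make
# VERSION feeds the --version flag when `make bundle` runs `generate bundle`.
VERSION ?= 0.0.1

.PHONY: bundle
bundle: manifests kustomize
	operator-sdk generate kustomize manifests -q
	cd config/manager && $(KUSTOMIZE) edit set image controller=$(IMG)
	$(KUSTOMIZE) build config/manifests | \
		operator-sdk generate bundle -q --overwrite --version $(VERSION)
	operator-sdk bundle validate ./bundle
```

Overriding the variable on the command line, for example make bundle VERSION=0.0.2, regenerates the bundle with the new CSV version.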
5.7.2. Manually-defined CSV fields
Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).
Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when a lack of data in any of the required fields is detected.
The following tables detail which manually-defined CSV fields are required and which are optional.
| Field | Description |
|---|---|
| metadata.name | A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1. |
| metadata.capabilities | The capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot. |
| spec.displayName | A public name to identify the Operator. |
| spec.description | A short description of the functionality of the Operator. |
| spec.keywords | Keywords describing the Operator. |
| spec.maintainers | Human or organizational entities maintaining the Operator, with a name and email. |
| spec.provider | The provider of the Operator (usually an organization), with a name. |
| spec.labels | Key-value pairs to be used by Operator internals. |
| spec.version | Semantic version of the Operator, for example 0.1.1. |
| spec.customresourcedefinitions | Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in the project. |
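Put together, the manually populated required fields might look like the following fragment. All values here are placeholders:

```yaml
# Sketch of manually defined required CSV fields.
metadata:
  name: nginx-operator.v0.0.1
  annotations:
    capabilities: Basic Install
spec:
  displayName: Nginx Operator
  description: Deploys and manages Nginx instances.
  keywords:
  - nginx
  - web server
  maintainers:
  - name: Example Maintainer
    email: maintainer@example.com
  provider:
    name: Example Org
  version: 0.0.1
```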
| Field | Description |
|---|---|
| spec.replaces | The name of the CSV being replaced by this CSV. |
| spec.links | URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url. |
| spec.selector | Selectors by which the Operator can pair resources in a cluster. |
| spec.icon | A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype. |
| spec.maturity | The level of maturity the software has achieved at this version. Options include planning, pre-alpha, alpha, beta, stable, mature, inactive, and deprecated. |
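For the spec.icon field, the base64data value can be produced with the base64 utility. The following sketch uses a throwaway placeholder SVG written to /tmp; substitute your Operator's real icon file:

```shell
# Write a minimal placeholder icon, then emit the spec.icon YAML fragment.
printf '<svg xmlns="http://www.w3.org/2000/svg"/>' > /tmp/icon.svg
ICON_B64=$(base64 -w0 < /tmp/icon.svg)   # -w0 disables line wrapping (GNU coreutils)
cat <<EOF
spec:
  icon:
  - base64data: ${ICON_B64}
    mediatype: image/svg+xml
EOF
```

The emitted fragment can then be pasted into the CSV manifest.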
Further details on what data each field above should hold are found in the CSV spec.
Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
5.7.2.1. Operator metadata annotations
Operator developers can manually define certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub.
The following table lists Operator metadata annotations that can be manually defined using metadata.annotations fields.
| Field | Description |
|---|---|
| alm-examples | Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize. |
| operatorframework.io/initialization-resource | Specify a single required custom resource by adding this annotation to the CSV. The user is then prompted to create the custom resource during Operator installation through a template provided in the annotation. |
| operatorframework.io/suggested-namespace | Set a suggested namespace where the Operator should be deployed. |
| operators.openshift.io/infrastructure-features | Infrastructure features supported by the Operator. Users can view and filter by these features when discovering Operators through OperatorHub in the web console. Valid, case-sensitive values include disconnected, cnf, cni, csi, fips, and proxy-aware. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. |
| operators.openshift.io/valid-subscription | Free-form array for listing any specific subscriptions that are required to use the Operator. For example, '["3Scale Commercial License", "Red Hat Managed Integration"]'. |
| operators.operatorframework.io/internal-objects | Hides CRDs in the UI that are not meant for user manipulation. |
Example use cases

Operator supports disconnected and proxy-aware

```yaml
operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
```

Operator requires an OpenShift Container Platform license

```yaml
operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
```

Operator requires a 3scale license

```yaml
operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'
```

Operator supports disconnected and proxy-aware, and requires an OpenShift Container Platform license

```yaml
operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
```
5.7.3. Enabling your Operator for restricted network environments
As an Operator author, you must ensure that your Operator meets additional requirements to run properly in a restricted network, or disconnected, environment.
Operator requirements for supporting disconnected mode
In the cluster service version (CSV) of your Operator:
- List any related images, or other container images that your Operator might require to perform its functions.
- Reference all specified images by a digest (SHA) and not by a tag.
- All dependencies of your Operator must also support running in a disconnected mode.
- Your Operator must not require any off-cluster resources.
For the CSV requirements, you can make the following changes as the Operator author.
Prerequisites
- An Operator project with a CSV.
Procedure
Use SHA references to related images in two places in the CSV for your Operator:
Update the `spec.relatedImages` section:

```yaml
...
spec:
  relatedImages:
    - name: etcd-operator
      image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e43
    - name: etcd-image
      image: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68
...
```
Update the `env` section in the deployment when declaring environment variables that inject the image that the Operator should use:

```yaml
spec:
  install:
    spec:
      deployments:
      - name: etcd-operator-v3.1.1
        spec:
          replicas: 1
          selector:
            matchLabels:
              name: etcd-operator
          strategy:
            type: Recreate
          template:
            metadata:
              labels:
                name: etcd-operator
            spec:
              containers:
              - args:
                - /opt/etcd/bin/etcd_operator_run.sh
                env:
                - name: WATCH_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.annotations['olm.targetNamespaces']
                - name: ETCD_OPERATOR_DEFAULT_ETCD_IMAGE
                  value: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68
                - name: ETCD_LOG_LEVEL
                  value: INFO
                image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e43
                imagePullPolicy: IfNotPresent
                livenessProbe:
                  httpGet:
                    path: /healthy
                    port: 8080
                  initialDelaySeconds: 10
                  periodSeconds: 30
                name: etcd-operator
                readinessProbe:
                  httpGet:
                    path: /ready
                    port: 8080
                  initialDelaySeconds: 10
                  periodSeconds: 30
                resources: {}
              serviceAccountName: etcd-operator
    strategy: deployment
```

Note: When configuring probes, the `timeoutSeconds` value must be lower than the `periodSeconds` value. The `timeoutSeconds` default value is `1`. The `periodSeconds` default value is `10`.
Add the `disconnected` annotation, which indicates that the Operator works in a disconnected environment:

```yaml
metadata:
  annotations:
    operators.openshift.io/infrastructure-features: '["disconnected"]'
```

Operators can be filtered in OperatorHub by this infrastructure feature.
5.7.4. Enabling your Operator for multiple architectures and operating systems
Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.
If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:
```yaml
labels:
  operatorframework.io/arch.<arch>: supported
  operatorframework.io/os.<os>: supported
```
Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the `PackageManifest` API.
If a CSV does not include an `os` label, it is treated as if it has the following Linux support label by default:

```yaml
labels:
  operatorframework.io/os.linux: supported
```
If a CSV does not include an `arch` label, it is treated as if it has the following AMD64 support label by default:

```yaml
labels:
  operatorframework.io/arch.amd64: supported
```
If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.
Prerequisites
- An Operator project with a CSV.
- To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
- For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.
Procedure
Add a label in the `metadata.labels` section of your CSV for each supported architecture and operating system that your Operator supports:

```yaml
labels:
  operatorframework.io/arch.s390x: supported
  operatorframework.io/os.zos: supported
  operatorframework.io/os.linux: supported
  operatorframework.io/arch.amd64: supported
```

Note: After you add a new architecture or operating system label, the default `linux` and `amd64` variants are no longer assumed; include them explicitly if they are still supported.
5.7.4.1. Architecture and operating system support for Operators
The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:
| Architecture | String |
|---|---|
| AMD64 | `amd64` |
| 64-bit PowerPC little-endian | `ppc64le` |
| IBM Z | `s390x` |

| Operating system | String |
|---|---|
| Linux | `linux` |
| z/OS | `zos` |
Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.
5.7.5. Setting a suggested namespace
Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.
As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.
Procedure
In your CSV, set the `operatorframework.io/suggested-namespace` annotation to your suggested namespace:

```yaml
metadata:
  annotations:
    operatorframework.io/suggested-namespace: <namespace>
```

Set your suggested namespace.
5.7.6. Enabling Operator conditions
Operator Lifecycle Manager (OLM) provides Operators with a channel to communicate complex states that influence OLM behavior while managing the Operator. By default, OLM creates an `OperatorCondition` resource when it installs an Operator. Based on the conditions set in the `OperatorCondition` resource, the behavior of OLM changes accordingly.

To support Operator conditions, an Operator must be able to read the `OperatorCondition` resource created by OLM and complete the following tasks:

- Get the specific condition.
- Set the status of a specific condition.

This can be accomplished by using the operator-lib library. An Operator author can provide a controller-runtime client in their Operator for the library to access the `OperatorCondition` resource owned by the Operator in the cluster.

The library provides a generic `Conditions` interface, which has the following methods to `Get` and `Set` a `conditionType` in the `OperatorCondition` resource:
Get: To get the specific condition, the library uses the `client.Get` function from `controller-runtime`, which requires an `ObjectKey` of type `types.NamespacedName` present in `conditionAccessor`.

Set: To update the status of the specific condition, the library uses the `client.Update` function from `controller-runtime`. An error occurs if the `conditionType` is not present in the CRD.
The Operator is allowed to modify only the `status` subresource of the `OperatorCondition` resource. Operators can either delete or update the `status.conditions` array to include the condition.
Operator SDK v1.8.0 supports `operator-lib` v0.3.0.
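For illustration, an `OperatorCondition` resource with the `Upgradeable` condition set by an Operator might look like the following sketch; the resource name, namespace, and condition values are assumptions:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorCondition
metadata:
  name: my-operator-v1.2.3        # hypothetical; OLM names this resource after the installed Operator
  namespace: operators            # hypothetical install namespace
status:
  conditions:
  - type: Upgradeable             # condition type recognized by OLM
    status: "False"               # "False" tells OLM not to upgrade the Operator right now
    reason: "migrationInProgress"
    message: "The Operator is migrating data; upgrades are blocked until it completes."
    lastTransitionTime: "2021-06-01T12:00:00Z"
```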
Prerequisites
- An Operator project generated using the Operator SDK.
Procedure
To enable Operator conditions in your Operator project:
In the `go.mod` file of your Operator project, add `operator-lib` as a required library:

```
module github.com/example-inc/memcached-operator

go 1.15

require (
	k8s.io/apimachinery v0.19.2
	k8s.io/client-go v0.19.2
	sigs.k8s.io/controller-runtime v0.7.0
	github.com/operator-framework/operator-lib v0.3.0
)
```

Write your own constructor in your Operator logic that will result in the following outcomes:
- Accepts a `controller-runtime` client.
- Accepts a `conditionType`.
- Returns a `Condition` interface to update or add conditions.
Because OLM currently supports the `Upgradeable` condition, you can create an interface that has methods to access the `Upgradeable` condition. For example:

```go
import (
	...
	apiv1 "github.com/operator-framework/api/pkg/operators/v1"
)

func NewUpgradeable(cl client.Client) (Condition, error) {
	return NewCondition(cl, apiv1.OperatorUpgradeable)
}

cond, err := NewUpgradeable(cl)
```

In this example, the `NewUpgradeable` constructor is further used to create a variable `cond` of type `Condition`. The `cond` variable would in turn have `Get` and `Set` methods, which can be used for handling the OLM `Upgradeable` condition.
5.7.7. Defining webhooks
Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.
The cluster service version (CSV) resource of an Operator can include a `webhookdefinitions` section to define the following types of webhooks:
- Admission webhooks (validating and mutating)
- Conversion webhooks
Procedure
Add a `webhookdefinitions` section to the `spec` section of the CSV of your Operator and include any webhook definitions using a `type` of `ValidatingAdmissionWebhook`, `MutatingAdmissionWebhook`, or `ConversionWebhook`. The following example contains all three types of webhooks:

CSV containing webhooks

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: webhook-operator.v0.0.1
spec:
  customresourcedefinitions:
    owned:
    - kind: WebhookTest
      name: webhooktests.webhook.operators.coreos.io
      version: v1
  install:
    spec:
      deployments:
      - name: webhook-operator-webhook
        ...
    strategy: deployment
  installModes:
  - supported: false
    type: OwnNamespace
  - supported: false
    type: SingleNamespace
  - supported: false
    type: MultiNamespace
  - supported: true
    type: AllNamespaces
  webhookdefinitions:
  - type: ValidatingAdmissionWebhook
    admissionReviewVersions:
    - v1beta1
    - v1
    containerPort: 443
    targetPort: 4343
    deploymentName: webhook-operator-webhook
    failurePolicy: Fail
    generateName: vwebhooktest.kb.io
    rules:
    - apiGroups:
      - webhook.operators.coreos.io
      apiVersions:
      - v1
      operations:
      - CREATE
      - UPDATE
      resources:
      - webhooktests
    sideEffects: None
    webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest
  - type: MutatingAdmissionWebhook
    admissionReviewVersions:
    - v1beta1
    - v1
    containerPort: 443
    targetPort: 4343
    deploymentName: webhook-operator-webhook
    failurePolicy: Fail
    generateName: mwebhooktest.kb.io
    rules:
    - apiGroups:
      - webhook.operators.coreos.io
      apiVersions:
      - v1
      operations:
      - CREATE
      - UPDATE
      resources:
      - webhooktests
    sideEffects: None
    webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest
  - type: ConversionWebhook
    admissionReviewVersions:
    - v1beta1
    - v1
    containerPort: 443
    targetPort: 4343
    deploymentName: webhook-operator-webhook
    generateName: cwebhooktest.kb.io
    sideEffects: None
    webhookPath: /convert
    conversionCRDs:
    - webhooktests.webhook.operators.coreos.io
...
```
5.7.7.1. Webhook considerations for OLM
When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:
- The `type` field must be set to either `ValidatingAdmissionWebhook`, `MutatingAdmissionWebhook`, or `ConversionWebhook`, or the CSV will be placed in a failed phase.
- The CSV must contain a deployment whose name is equivalent to the value supplied in the `deploymentName` field of the `webhookdefinition`.
When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.
Certificate authority constraints
OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:
- The TLS certificate file is mounted to the deployment at `/apiserver.local.config/certificates/apiserver.crt`.
- The TLS key file is mounted to the deployment at `/apiserver.local.config/certificates/apiserver.key`.
Admission webhook rules constraints
To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:
- Requests that target all groups
- Requests that target the `operators.coreos.com` group
- Requests that target the `ValidatingWebhookConfigurations` or `MutatingWebhookConfigurations` resources
Conversion webhook constraints
OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:
- CSVs featuring a conversion webhook can only support the `AllNamespaces` install mode.
- The CRD targeted by the conversion webhook must have its `spec.preserveUnknownFields` field set to `false` or `nil`.
- The conversion webhook defined in the CSV must target an owned CRD.
- There can only be one conversion webhook on the entire cluster for a given CRD.
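As an illustrative sketch of the `spec.preserveUnknownFields` constraint, a CRD targeted by a conversion webhook might look as follows; apart from the CRD name, which matches the earlier webhook example, the values are assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webhooktests.webhook.operators.coreos.io
spec:
  group: webhook.operators.coreos.io
  # In apiextensions.k8s.io/v1 this field defaults to false, which
  # satisfies the OLM conversion webhook constraint described above.
  preserveUnknownFields: false
  names:
    kind: WebhookTest
    listKind: WebhookTestList
    plural: webhooktests
    singular: webhooktest
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```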
5.7.8. Understanding your custom resource definitions (CRDs)
There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.
5.7.8.1. Owned CRDs
The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.
It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.
| Field | Description | Required/optional |
|---|---|---|
| `Name` | The full name of your CRD. | Required |
| `Version` | The version of that object API. | Required |
| `Kind` | The machine readable name of your CRD. | Required |
| `DisplayName` | A human readable version of your CRD name, for example `MongoDB Standalone`. | Required |
| `Description` | A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD. | Required |
| `Group` | The API group that this CRD belongs to, for example `database.example.com`. | Optional |
| `Resources` | Your CRDs own one or more types of Kubernetes objects. These are listed in the `resources` section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| `Descriptors` | These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs. There are three types of descriptors: `specDescriptors`, `statusDescriptors`, and `actionDescriptors`. All descriptors accept the following fields: `DisplayName`, `Description`, `Path`, and `X-Descriptors`. Also see the openshift/console project for more information on Descriptors in general. | Optional |
The following example depicts a `MongoDB Standalone` CRD that requires some user input in the form of a secret and a config map, and orchestrates services, stateful sets, pods, and config maps:
Example owned CRD
```yaml
- displayName: MongoDB Standalone
  group: mongodb.com
  kind: MongoDbStandalone
  name: mongodbstandalones.mongodb.com
  resources:
  - kind: Service
    name: ''
    version: v1
  - kind: StatefulSet
    name: ''
    version: v1beta2
  - kind: Pod
    name: ''
    version: v1
  - kind: ConfigMap
    name: ''
    version: v1
  specDescriptors:
  - description: Credentials for Ops Manager or Cloud Manager.
    displayName: Credentials
    path: credentials
    x-descriptors:
    - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
  - description: Project this deployment belongs to.
    displayName: Project
    path: project
    x-descriptors:
    - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
  - description: MongoDB version to be installed.
    displayName: Version
    path: version
    x-descriptors:
    - 'urn:alm:descriptor:com.tectonic.ui:label'
  statusDescriptors:
  - description: The status of each of the pods for the MongoDB cluster.
    displayName: Pod Status
    path: pods
    x-descriptors:
    - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
  version: v1
  description: >-
    MongoDB Deployment consisting of only one host. No replication of
    data.
```
5.7.8.2. Required CRDs
Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.
An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.
Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace, and a service account is created for each Operator to create, watch, and modify the Kubernetes resources required.
| Field | Description | Required/optional |
|---|---|---|
| `Name` | The full name of the CRD you require. | Required |
| `Version` | The version of that object API. | Required |
| `Kind` | The Kubernetes object kind. | Required |
| `DisplayName` | A human readable version of the CRD. | Required |
| `Description` | A summary of how the component fits in your larger architecture. | Required |
Example required CRD
```yaml
required:
- name: etcdclusters.etcd.database.coreos.com
  version: v1beta2
  kind: EtcdCluster
  displayName: etcd Cluster
  description: Represents a cluster of etcd nodes.
```
5.7.8.3. CRD upgrades
OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:
- All existing serving versions in the current CRD are present in the new CRD.
- All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
5.7.8.3.1. Adding a new CRD version
Procedure
To add a new version of a CRD to your Operator:
Add a new entry in the CRD resource under the `versions` section of your CSV.

For example, if the current CRD has a version `v1alpha1` and you want to add a new version `v1beta1` and mark it as the new storage version, add a new entry for `v1beta1`:

```yaml
versions:
- name: v1alpha1
  served: true
  storage: false
- name: v1beta1
  served: true
  storage: true
```
Ensure the referencing version of the CRD in the `owned` section of your CSV is updated if the CSV intends to use the new version:

```yaml
customresourcedefinitions:
  owned:
  - name: cluster.example.com
    version: v1beta1
    kind: cluster
    displayName: Cluster
```
- Push the updated CRD and CSV to your bundle.
5.7.8.3.2. Deprecating or removing a CRD version
Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must first be disabled by setting the `served` field in the CRD to `false`. Then, the non-serving version can be removed in a subsequent upgrade.
Procedure
To deprecate and remove a specific version of a CRD:
Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:

```yaml
versions:
- name: v1alpha1
  served: false
  storage: true
```
Switch the `storage` version to a serving version if the version to be deprecated is currently the `storage` version. For example:

```yaml
versions:
- name: v1alpha1
  served: false
  storage: false
- name: v1beta1
  served: true
  storage: true
```

Note: To remove a specific version that is or was the `storage` version from a CRD, that version must be removed from the `storedVersions` in the status of the CRD. OLM will attempt to do this for you if it detects a stored version no longer exists in the new CRD.

- Upgrade the CRD with the above changes.
In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:

```yaml
versions:
- name: v1beta1
  served: true
  storage: true
```
Ensure the referencing CRD version in the `owned` section of your CSV is updated accordingly if that version is removed from the CRD.
5.7.8.4. CRD templates
Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named `alm-examples`. Compatible UIs pre-fill this template for users to further customize.

The annotation consists of a list of the kind, for example, the CRD name and the corresponding `metadata` and `spec` of the Kubernetes object.
The following full example provides templates for `EtcdCluster`, `EtcdBackup`, and `EtcdRestore`:
```yaml
metadata:
  annotations:
    alm-examples: >-
      [{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]
```
5.7.8.5. Hiding internal objects
It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a `Replication` CRD that is created whenever a user creates a `Database` object with `replication: true`.
As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the `operators.operatorframework.io/internal-objects` annotation to the cluster service version (CSV) of your Operator.
Procedure
- Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the `status` or `spec` block of your CR, if applicable to your Operator.
- Add the `operators.operatorframework.io/internal-objects` annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:

Internal object annotation

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
  annotations:
    operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]'
...
```

Set any internal CRDs as an array of strings.
5.7.8.6. Initializing required custom resources
An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.
As an Operator developer, you can specify a single required custom resource by adding the `operatorframework.io/initialization-resource` annotation to the cluster service version (CSV).
If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.
Procedure
Add the `operatorframework.io/initialization-resource` annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of a `StorageCluster` resource and provides a full YAML definition:

Initialization resource annotation

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
  annotations:
    operatorframework.io/initialization-resource: |-
      {
        "apiVersion": "ocs.openshift.io/v1",
        "kind": "StorageCluster",
        "metadata": {
          "name": "example-storagecluster"
        },
        "spec": {
          "manageNodes": false,
          "monPVCTemplate": {
            "spec": {
              "accessModes": [
                "ReadWriteOnce"
              ],
              "resources": {
                "requests": {
                  "storage": "10Gi"
                }
              },
              "storageClassName": "gp2"
            }
          },
          "storageDeviceSets": [
            {
              "count": 3,
              "dataPVCTemplate": {
                "spec": {
                  "accessModes": [
                    "ReadWriteOnce"
                  ],
                  "resources": {
                    "requests": {
                      "storage": "1Ti"
                    }
                  },
                  "storageClassName": "gp2",
                  "volumeMode": "Block"
                }
              },
              "name": "example-deviceset",
              "placement": {},
              "portable": true,
              "resources": {}
            }
          ]
        }
      }
...
```
5.7.9. Understanding your API services
As with CRDs, there are two types of API services that your Operator may use: owned and required.
5.7.9.1. Owned API services
When a CSV owns an API service, it is responsible for describing the deployment of the extension `api-server` that backs it and the group/version that it provides.
An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.
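For illustration, an owned API service entry in a CSV might be sketched as follows, loosely modeled on the OLM package server; the field values are assumptions, not a definitive manifest:

```yaml
apiservicedefinitions:
  owned:
  - group: packages.operators.coreos.com   # group/version uniquely identifies the API service
    version: v1
    kind: PackageManifest                  # kind the API service is expected to provide
    name: packagemanifests                 # plural name
    deploymentName: packageserver          # must match a deployment in the CSV install strategy
    displayName: PackageManifest
    description: Holds information about a package from a catalog.  # illustrative text
```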
| Field | Description | Required/optional |
|---|---|---|
| `Group` | Group that the API service provides, for example `database.example.com`. | Required |
| `Version` | Version of the API service, for example `v1alpha1`. | Required |
| `Kind` | A kind that the API service is expected to provide. | Required |
| `Name` | The plural name for the API service provided. | Required |
| `DeploymentName` | Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the install strategy of the CSV for a deployment spec with a matching name, and if not found, does not transition the CSV to the install ready phase. | Required |
| `DisplayName` | A human readable version of your API service name, for example `MongoDB Standalone`. | Required |
| `Description` | A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
| `Resources` | Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
| `Descriptors` | Essentially the same as for owned CRDs. | Optional |
5.7.9.1.1. API service resource creation
Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:
- Service pod selectors are copied from the CSV deployment matching the `DeploymentName` field of the API service description.
- A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.
5.7.9.1.2. API service serving certificates
OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated `Service` resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.

The certificate is stored as a `kubernetes.io/tls` type secret in the deployment namespace, and a volume named `apiservice-cert` is automatically appended to the volumes section of the deployment in the CSV matching the `DeploymentName` field of the API service description.
If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to `/apiserver.local.config/certificates`, and any existing volume mounts with the same path are replaced.
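A sketch of the volume and volume mount that OLM appends for an owned API service might look like the following; the container and secret names are hypothetical, while the volume name and mount path follow the defaults described above:

```yaml
# Illustrative fragment of the deployment after OLM appends the
# serving certificate volume and mount (not a complete manifest).
spec:
  template:
    spec:
      containers:
      - name: example-apiserver            # hypothetical container name
        volumeMounts:
        - name: apiservice-cert
          mountPath: /apiserver.local.config/certificates
      volumes:
      - name: apiservice-cert
        secret:
          secretName: example-apiserver-cert   # hypothetical secret name
          items:
          - key: tls.crt
            path: apiserver.crt
          - key: tls.key
            path: apiserver.key
```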
5.7.9.2. Required API services
OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.
| Field | Description | Required/optional |
|---|---|---|
| `Group` | Group that the API service provides, for example `database.example.com`. | Required |
| `Version` | Version of the API service, for example `v1alpha1`. | Required |
| `Kind` | A kind that the API service is expected to provide. | Required |
| `DisplayName` | A human readable version of your API service name, for example `MongoDB Standalone`. | Required |
| `Description` | A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
5.8. Working with bundle images
You can use the Operator SDK to package, deploy, and upgrade Operators in the bundle format for use on Operator Lifecycle Manager (OLM).
5.8.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (`oc`) v4.8+ installed
oc - Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following `make` commands in your Operator project directory to build and push your Operator image. Modify the `IMG` argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

```terminal
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
```

Note: The Dockerfile generated by the SDK for the Operator explicitly references `GOARCH=amd64` for `go build`. This can be amended to `GOARCH=$TARGETARCH` for non-AMD64 architectures. Docker will automatically set the environment variable to the value specified by `--platform`. With Buildah, the `--build-arg` flag will need to be used for the purpose. For more information, see Multiple Architectures.

Push the image to a repository:

```terminal
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
```
Create your Operator bundle manifest by running the `make bundle` command, which invokes several commands, including the Operator SDK `generate bundle` and `bundle validate` subcommands:

```terminal
$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
```

Bundle manifests for an Operator describe how to display, create, and manage an application. The `make bundle` command creates the following files and directories in your Operator project:

- A bundle manifests directory named `bundle/manifests` that contains a `ClusterServiceVersion` object
- A bundle metadata directory named `bundle/metadata`
- All custom resource definitions (CRDs) in a `config/crd` directory
- A Dockerfile `bundle.Dockerfile`

These files are then automatically validated by using `operator-sdk bundle validate` to ensure the on-disk bundle representation is correct.
A bundle manifests directory named
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:
$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.8.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    [-n <namespace>] \
    <registry>/<user>/<bundle_image_name>:<tag>

By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
This command performs the following actions:
- Creates an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required objects, including RBAC.
5.8.3. Publishing a catalog containing a bundled Operator
To install and manage Operators, Operator Lifecycle Manager (OLM) requires that Operator bundles are listed in an index image, which is referenced by a catalog on the cluster. As an Operator author, you can use the Operator SDK to create an index containing the bundle for your Operator and all of its dependencies. This is useful for testing on remote clusters and publishing to container registries.
The Operator SDK uses the opm CLI to facilitate index image creation. Experience with the opm command is not required; for more advanced use cases, the opm command can be used directly instead of the Operator SDK.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Run the following make command in your Operator project directory to build an index image containing your Operator bundle:

$ make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>

where the CATALOG_IMG argument references a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Push the built index image to a repository:

$ make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>

Tip: You can use Operator SDK make commands together if you would rather perform multiple actions in sequence at once. For example, if you had not yet built a bundle image for your Operator project, you can build and push both a bundle image and an index image with the following syntax:

$ make bundle-build bundle-push catalog-build catalog-push \
    BUNDLE_IMG=<bundle_image_pull_spec> \
    CATALOG_IMG=<index_image_pull_spec>

Alternatively, you can set the IMAGE_TAG_BASE field in your Makefile to an existing repository:

IMAGE_TAG_BASE=quay.io/example/my-operator

You can then use the following syntax to build and push images with automatically generated names, such as quay.io/example/my-operator-bundle:v0.0.1 for the bundle image and quay.io/example/my-operator-catalog:v0.0.1 for the index image:

$ make bundle-build bundle-push catalog-build catalog-push

Define a CatalogSource object that references the index image you just generated, and then create the object by using the oc apply command or web console:

Example CatalogSource YAML

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-memcached
  namespace: default
spec:
  displayName: My Test
  publisher: Company
  sourceType: grpc
  image: quay.io/example/memcached-catalog:v0.0.1
  updateStrategy:
    registryPoll:
      interval: 10m

Set image to the image pull spec you used previously with the CATALOG_IMG argument.
Check the catalog source:
$ oc get catalogsource

Example output

NAME           DISPLAY   TYPE   PUBLISHER   AGE
cs-memcached   My Test   grpc   Company     4h31m
Verification
Install the Operator using your catalog:
Define an OperatorGroup object and create it by using the oc apply command or web console:

Example OperatorGroup YAML

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-test
  namespace: default
spec:
  targetNamespaces:
  - default

Define a Subscription object and create it by using the oc apply command or web console:

Example Subscription YAML

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: catalogtest
  namespace: default
spec:
  channel: "alpha"
  installPlanApproval: Manual
  name: catalog
  source: cs-memcached
  sourceNamespace: default
  startingCSV: memcached-operator.v0.0.1
Verify the installed Operator is running:
Check the Operator group:
$ oc get og

Example output

NAME      AGE
my-test   4h40m

Check the cluster service version (CSV):

$ oc get csv

Example output

NAME                        DISPLAY   VERSION   REPLACES   PHASE
memcached-operator.v0.0.1   Test      0.0.1                Succeeded

Check the pods for the Operator:

$ oc get pods

Example output

NAME                                                              READY   STATUS      RESTARTS   AGE
9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6   0/1     Completed   0          4h33m
catalog-controller-manager-7fd5b7b987-69s4n                       2/2     Running     0          4h32m
cs-memcached-7622r                                                1/1     Running     0          4h33m
5.8.4. Testing an Operator upgrade on Operator Lifecycle Manager
You can quickly test upgrading your Operator by using Operator Lifecycle Manager (OLM) integration in the Operator SDK, without requiring you to manually manage index images and catalog sources.
The run bundle-upgrade subcommand automates the upgrade by specifying a bundle image for the later version of the Operator.
Prerequisites
- Operator installed with OLM either by using the run bundle subcommand or with traditional OLM installation
run bundle - A bundle image that represents a later version of the installed Operator
Procedure
If your Operator has not already been installed with OLM, install the earlier version either by using the run bundle subcommand or with traditional OLM installation.

Note: If the earlier version of the bundle was installed traditionally using OLM, the newer bundle that you intend to upgrade to must not exist in the index image referenced by the catalog source. Otherwise, running the run bundle-upgrade subcommand causes the registry pod to fail because the newer bundle is already referenced by the index that provides the package and cluster service version (CSV).

For example, you can use the following run bundle subcommand for a Memcached Operator by specifying the earlier bundle image:

$ operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1

Example output

INFO[0009] Successfully created registry pod: quay-io-demo-memcached-operator-v0-0-1
INFO[0009] Created CatalogSource: memcached-operator-catalog
INFO[0010] OperatorGroup "operator-sdk-og" created
INFO[0010] Created Subscription: memcached-operator-v0-0-1-sub
INFO[0013] Approved InstallPlan install-bqggr for the Subscription: memcached-operator-v0-0-1-sub
INFO[0013] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to reach 'Succeeded' phase
INFO[0013] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.1" to appear
INFO[0019] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.1" phase: Succeeded

Upgrade the installed Operator by specifying the bundle image for the later Operator version:

$ operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2

Example output

INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project
INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project
INFO[0009] Successfully created registry pod: quay-io-demo-memcached-operator-v0-0-2
INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations
INFO[0010] Deleted previous registry pod with name "quay-io-demo-memcached-operator-v0-0-1"
INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub
INFO[0042] Waiting for ClusterServiceVersion "my-project/memcached-operator.v0.0.2" to reach 'Succeeded' phase
INFO[0042] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: InstallReady
INFO[0043] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Installing
INFO[0044] Found ClusterServiceVersion "my-project/memcached-operator.v0.0.2" phase: Succeeded
INFO[0044] Successfully upgraded to "memcached-operator.v0.0.2"

Clean up the installed Operators:

$ operator-sdk cleanup memcached-operator
5.8.5. Controlling Operator compatibility with OpenShift Container Platform versions
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If your Operator is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is upgraded to the Kubernetes version where the API has been removed.
As an Operator author, it is strongly recommended that you review the Deprecated API Migration Guide in Kubernetes documentation and keep your Operator projects up to date to avoid using deprecated and removed APIs. Ideally, you should update your Operator before the release of a future version of OpenShift Container Platform that would make the Operator incompatible.
When an API is removed from an OpenShift Container Platform version, Operators running on that cluster version that are still using removed APIs will no longer work properly. As an Operator author, you should plan to update your Operator projects to accommodate API deprecation and removal to avoid interruptions for users of your Operator.
You can check the event alerts of your Operators running on OpenShift Container Platform 4.8 and later to find whether there are any warnings about APIs currently in use. The following alerts fire when they detect an API in use that will be removed in the next release:
- APIRemovedInNextReleaseInUse: APIs that will be removed in the next OpenShift Container Platform release.
- APIRemovedInNextEUSReleaseInUse: APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.
If a cluster administrator has installed your Operator, before they upgrade to the next version of OpenShift Container Platform, they must ensure a version of your Operator is installed that is compatible with that next cluster version. While it is recommended that you update your Operator projects to no longer use deprecated or removed APIs, if you still need to publish your Operator bundles with removed APIs for continued use on earlier versions of OpenShift Container Platform, ensure that the bundle is configured accordingly.
The following procedure helps prevent administrators from installing versions of your Operator on an incompatible version of OpenShift Container Platform. These steps also prevent administrators from upgrading to a newer version of OpenShift Container Platform that is incompatible with the version of your Operator that is currently installed on their cluster.
This procedure is also useful when you know that the current version of your Operator will not work well, for any reason, on a specific OpenShift Container Platform version. By defining the cluster versions where the Operator should be distributed, you ensure that the Operator does not appear in a catalog of a cluster version which is outside of the allowed range.
Operators that use deprecated APIs can adversely impact critical workloads when cluster administrators upgrade to a future version of OpenShift Container Platform where the API is no longer supported. If your Operator is using deprecated APIs, you should configure the following settings in your Operator project as soon as possible.
Prerequisites
- An existing Operator project
Procedure
If you know that a specific bundle of your Operator is not supported and will not work correctly on OpenShift Container Platform later than a certain cluster version, configure the maximum version of OpenShift Container Platform that your Operator is compatible with. In your Operator project's cluster service version (CSV), set the olm.maxOpenShiftVersion annotation to prevent administrators from upgrading their cluster before upgrading the installed Operator to a compatible version:

Important: Use the olm.maxOpenShiftVersion annotation only if your Operator bundle version cannot work in later versions. Be aware that cluster administrators cannot upgrade their clusters while your solution is installed. If you do not provide a later version and a valid upgrade path, administrators may uninstall your Operator so that they can upgrade the cluster version.

Example CSV with olm.maxOpenShiftVersion annotation

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]'

Specify the maximum cluster version of OpenShift Container Platform that your Operator is compatible with. For example, setting value to 4.8 prevents cluster upgrades to OpenShift Container Platform versions later than 4.8 when this bundle is installed on a cluster.
If your bundle is intended for distribution in a Red Hat-provided Operator catalog, configure the compatible versions of OpenShift Container Platform for your Operator by setting the following properties. This configuration ensures your Operator is only included in catalogs that target compatible versions of OpenShift Container Platform:
Note: This step is only valid when publishing Operators in Red Hat-provided catalogs. If your bundle is only intended for distribution in a custom catalog, you can skip this step. For more details, see "Red Hat-provided Operator catalogs".
Set the com.redhat.openshift.versions annotation in your project's bundle/metadata/annotations.yaml file:

Example bundle/metadata/annotations.yaml file with compatible versions

com.redhat.openshift.versions: "v4.6-v4.8"

Set the value to a range or single version.
To prevent your bundle from being carried on to an incompatible version of OpenShift Container Platform, ensure that the index image is generated with the proper com.redhat.openshift.versions label in your Operator's bundle image. For example, if your project was generated using the Operator SDK, update the bundle.Dockerfile file:

Example bundle.Dockerfile with compatible versions

LABEL com.redhat.openshift.versions="<versions>"

Set the value to a range or single version, for example, v4.6-v4.8. This setting defines the cluster versions where the Operator should be distributed; the Operator does not appear in a catalog of a cluster version outside of that range.
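For illustration, the range semantics described above can be sketched in Go. The helpers below are hypothetical and only approximate how a catalog tool might interpret a com.redhat.openshift.versions value; they are not part of any Operator Framework API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf parses a version string such as "v4.8" and returns its minor
// version number. Illustrative helper only.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	minor, _ := strconv.Atoi(parts[1])
	return minor
}

// inRange reports whether clusterVersion falls within a
// com.redhat.openshift.versions value such as "v4.6-v4.8" (a range) or
// "v4.6" (that version and later).
func inRange(spec, clusterVersion string) bool {
	cv := minorOf(clusterVersion)
	if strings.Contains(spec, "-") {
		bounds := strings.SplitN(spec, "-", 2)
		return cv >= minorOf(bounds[0]) && cv <= minorOf(bounds[1])
	}
	return cv >= minorOf(spec)
}

func main() {
	fmt.Println(inRange("v4.6-v4.8", "v4.7")) // within the range
	fmt.Println(inRange("v4.6-v4.8", "v4.9")) // outside the range
	fmt.Println(inRange("v4.6", "v4.9"))      // single version means "and later"
}
```

This sketch only compares minor versions of the same major release, which is sufficient to show why a bundle labeled v4.6-v4.8 is excluded from a 4.9 catalog.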
You can now bundle a new version of your Operator and publish the updated version to a catalog for distribution.
5.9. Validating Operators using the scorecard tool
As an Operator author, you can use the scorecard tool in the Operator SDK to do the following tasks:
- Validate that your Operator project is free of syntax errors and packaged correctly
- Review suggestions about ways you can improve your Operator
5.9.1. About the scorecard tool
While the Operator SDK bundle validate subcommand can validate local bundle directories and remote bundle images for content and structure, you can use the scorecard command to run tests on your Operator based on a configuration file and test images. These tests are implemented within test images that are configured and run by the scorecard.
The scorecard assumes it is run with access to a configured Kubernetes cluster, such as OpenShift Container Platform. The scorecard runs each test within a pod, from which pod logs are aggregated and test results are sent to the console. The scorecard has built-in basic and Operator Lifecycle Manager (OLM) tests and also provides a means to execute custom test definitions.
Scorecard workflow
- Create all resources required by any related custom resources (CRs) and the Operator
- Create a proxy container in the deployment of the Operator to record calls to the API server and run tests
- Examine parameters in the CRs
The scorecard tests make no assumptions as to the state of the Operator being tested. Creating Operators and CRs for an Operator is beyond the scope of the scorecard itself. Scorecard tests can, however, create whatever resources they require if the tests are designed for resource creation.
scorecard command syntax
$ operator-sdk scorecard <bundle_dir_or_image> [flags]
The scorecard requires a positional argument for either the on-disk path to your Operator bundle or the name of a bundle image.
For further information about the flags, run:
$ operator-sdk scorecard -h
5.9.2. Scorecard configuration
The scorecard tool uses a configuration that allows you to configure internal plugins, as well as several global configuration options. Tests are driven by a configuration file named config.yaml, which is generated by the make bundle command and located in your bundle/ directory:
./bundle
...
└── tests
└── scorecard
└── config.yaml
Example scorecard configuration file
kind: Configuration
apiVersion: scorecard.operatorframework.io/v1alpha3
metadata:
name: config
stages:
- parallel: true
tests:
- image: quay.io/operator-framework/scorecard-test:v1.8.0
entrypoint:
- scorecard-test
- basic-check-spec
labels:
suite: basic
test: basic-check-spec-test
- image: quay.io/operator-framework/scorecard-test:v1.8.0
entrypoint:
- scorecard-test
- olm-bundle-validation
labels:
suite: olm
test: olm-bundle-validation-test
The configuration file defines each test that scorecard can execute. The following fields of the scorecard configuration file define a test:

| Configuration field | Description |
|---|---|
| image | Test container image name that implements a test |
| entrypoint | Command and arguments that are invoked in the test image to execute a test |
| labels | Scorecard-defined or custom labels that select which tests to run |
5.9.3. Built-in scorecard tests
The scorecard ships with pre-defined tests that are arranged into suites: the basic test suite and the Operator Lifecycle Manager (OLM) suite.
Basic test suite

| Test | Description | Short name |
|---|---|---|
| Spec Block Exists | This test checks the custom resource (CR) created in the cluster to make sure that all CRs have a spec block. | basic-check-spec-test |

OLM test suite

| Test | Description | Short name |
|---|---|---|
| Bundle Validation | This test validates the bundle manifests found in the bundle that is passed into scorecard. If the bundle contents contain errors, then the test result output includes the validator log as well as error messages from the validation library. | olm-bundle-validation-test |
| Provided APIs Have Validation | This test verifies that the custom resource definitions (CRDs) for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR. | olm-crds-have-validation-test |
| Owned CRDs Have Resources Listed | This test makes sure that the CRDs for each CR provided via the cr-manifest option have a resources subsection, listing every resource type that your Operator uses. | olm-crds-have-resources-test |
| Spec Fields With Descriptors | This test verifies that every field in the CRs spec sections has a corresponding descriptor listed in the cluster service version (CSV). | olm-spec-descriptors-test |
| Status Fields With Descriptors | This test verifies that every field in the CRs status sections has a corresponding descriptor listed in the CSV. | olm-status-descriptors-test |
5.9.4. Running the scorecard tool
A default set of Kustomize files is generated by the Operator SDK after running the init command. The default configuration that is generated in the bundle/tests/scorecard/config.yaml file can be used as-is to run the scorecard tool against your Operator, or modified for your own test definitions.
Prerequisites
- Operator project generated by using the Operator SDK
Procedure
Generate or regenerate your bundle manifests and metadata for your Operator:
$ make bundle

This command automatically adds scorecard annotations to your bundle metadata, which are used by the scorecard command to run tests.

Run the scorecard against the on-disk path to your Operator bundle or the name of a bundle image:
$ operator-sdk scorecard <bundle_dir_or_image>
5.9.5. Scorecard output
The --output flag, or -o, for the scorecard command specifies the test results output format: text or json. The default format is text.
Example 5.29. Example JSON output snippet
{
"apiVersion": "scorecard.operatorframework.io/v1alpha3",
"kind": "TestList",
"items": [
{
"kind": "Test",
"apiVersion": "scorecard.operatorframework.io/v1alpha3",
"spec": {
"image": "quay.io/operator-framework/scorecard-test:v1.8.0",
"entrypoint": [
"scorecard-test",
"olm-bundle-validation"
],
"labels": {
"suite": "olm",
"test": "olm-bundle-validation-test"
}
},
"status": {
"results": [
{
"name": "olm-bundle-validation",
"log": "time=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found metadata directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Found annotations file\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test\n",
"state": "pass"
}
]
}
}
]
}
Example 5.30. Example text output snippet
--------------------------------------------------------------------------------
Image: quay.io/operator-framework/scorecard-test:v1.8.0
Entrypoint: [scorecard-test olm-bundle-validation]
Labels:
"suite":"olm"
"test":"olm-bundle-validation-test"
Results:
Name: olm-bundle-validation
State: pass
Log:
time="2020-07-15T03:19:02Z" level=debug msg="Found manifests directory" name=bundle-test
time="2020-07-15T03:19:02Z" level=debug msg="Found metadata directory" name=bundle-test
time="2020-07-15T03:19:02Z" level=debug msg="Getting mediaType info from manifests directory" name=bundle-test
time="2020-07-15T03:19:02Z" level=info msg="Found annotations file" name=bundle-test
time="2020-07-15T03:19:02Z" level=info msg="Could not find optional dependencies file" name=bundle-test
The output format spec matches the Test type layout.
5.9.6. Selecting tests
Scorecard tests are selected by setting the --selector CLI flag to a set of label strings.
Tests are run serially with test results being aggregated by the scorecard and written to standard output, or stdout.
Procedure
To select a single test, for example basic-check-spec-test, specify the test by using the --selector flag:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector=test=basic-check-spec-test

To select a suite of tests, for example olm, specify a label that is used by all of the OLM tests:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector=suite=olm

To select multiple tests, specify the test names by using the selector flag with the following syntax:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'
5.9.7. Enabling parallel testing
As an Operator author, you can define separate stages for your tests using the scorecard configuration file. Stages run sequentially in the order they are defined in the configuration file. A stage contains a list of tests and a configurable parallel setting.
By default, or when a stage explicitly sets parallel to false, tests in a stage are run sequentially. Running tests one at a time is helpful to guarantee that no two tests interact and conflict with each other.
However, if tests are designed to be fully isolated, they can be parallelized.
Procedure
To run a set of isolated tests in parallel, include them in the same stage and set parallel to true:

apiVersion: scorecard.operatorframework.io/v1alpha3
kind: Configuration
metadata:
  name: config
stages:
- parallel: true
  tests:
  - entrypoint:
    - scorecard-test
    - basic-check-spec
    image: quay.io/operator-framework/scorecard-test:v1.8.0
    labels:
      suite: basic
      test: basic-check-spec-test
  - entrypoint:
    - scorecard-test
    - olm-bundle-validation
    image: quay.io/operator-framework/scorecard-test:v1.8.0
    labels:
      suite: olm
      test: olm-bundle-validation-test

Setting parallel to true enables parallel testing for the stage.
All tests in a parallel stage are executed simultaneously, and scorecard waits for all of them to finish before proceeding to the next stage. This can make your tests run much faster.
5.9.8. Custom scorecard tests
The scorecard tool can run custom tests that follow these mandated conventions:
- Tests are implemented within a container image
- Tests accept an entrypoint which includes a command and arguments
- Tests produce v1alpha3 scorecard output in JSON format with no extraneous logging in the test output
- Tests can obtain the bundle contents at a shared mount point of /bundle
- Tests can access the Kubernetes API using an in-cluster client connection
Writing custom tests in other programming languages is possible if the test image follows the above guidelines.
The following example shows a custom test image written in Go:
Example 5.31. Example custom scorecard test
// Copyright 2020 The Operator-SDK Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"encoding/json"
"fmt"
"log"
"os"
scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
apimanifests "github.com/operator-framework/api/pkg/manifests"
)
// This is the custom scorecard test example binary
// As with the Red Hat scorecard test image, the bundle that is under
// test is expected to be mounted so that tests can inspect the
// bundle contents as part of their test implementations.
// The actual test to be run is named and that name is passed
// as an argument to this binary. This argument mechanism allows
// this binary to run various tests all from within a single
// test image.
const PodBundleRoot = "/bundle"
func main() {
entrypoint := os.Args[1:]
if len(entrypoint) == 0 {
log.Fatal("Test name argument is required")
}
// Read the pod's untar'd bundle from a well-known path.
cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot)
if err != nil {
log.Fatal(err.Error())
}
var result scapiv1alpha3.TestStatus
// Names of the custom tests which would be passed in the
// `operator-sdk` command.
switch entrypoint[0] {
case CustomTest1Name:
result = CustomTest1(cfg)
case CustomTest2Name:
result = CustomTest2(cfg)
default:
result = printValidTests()
}
// Convert scapiv1alpha3.TestStatus to JSON.
prettyJSON, err := json.MarshalIndent(result, "", " ")
if err != nil {
log.Fatal("Failed to generate json", err)
}
fmt.Printf("%s\n", string(prettyJSON))
}
// printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are.
func printValidTests() scapiv1alpha3.TestStatus {
result := scapiv1alpha3.TestResult{}
result.State = scapiv1alpha3.FailState
result.Errors = make([]string, 0)
result.Suggestions = make([]string, 0)
str := fmt.Sprintf("Valid tests for this image include: %s %s",
CustomTest1Name,
CustomTest2Name)
result.Errors = append(result.Errors, str)
return scapiv1alpha3.TestStatus{
Results: []scapiv1alpha3.TestResult{result},
}
}
const (
CustomTest1Name = "customtest1"
CustomTest2Name = "customtest2"
)
// Define any operator specific custom tests here.
// CustomTest1 and CustomTest2 are example test functions. Relevant operator-specific
// test logic is to be implemented similarly.
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
r := scapiv1alpha3.TestResult{}
r.Name = CustomTest1Name
r.State = scapiv1alpha3.PassState
r.Errors = make([]string, 0)
r.Suggestions = make([]string, 0)
almExamples := bundle.CSV.GetAnnotations()["alm-examples"]
if almExamples == "" {
fmt.Println("no alm-examples in the bundle CSV")
}
return wrapResult(r)
}
func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
r := scapiv1alpha3.TestResult{}
r.Name = CustomTest2Name
r.State = scapiv1alpha3.PassState
r.Errors = make([]string, 0)
r.Suggestions = make([]string, 0)
almExamples := bundle.CSV.GetAnnotations()["alm-examples"]
if almExamples == "" {
fmt.Println("no alm-examples in the bundle CSV")
}
return wrapResult(r)
}
func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus {
return scapiv1alpha3.TestStatus{
Results: []scapiv1alpha3.TestResult{r},
}
}
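To have the scorecard run tests from a custom image such as the example above, you would reference it from the stages in bundle/tests/scorecard/config.yaml. In this sketch, the image name and the binary name (custom-scorecard-tests) are hypothetical placeholders for your own build; the entrypoint argument matches the customtest1 name defined in the example:

stages:
- parallel: false
  tests:
  - image: quay.io/example/custom-scorecard-tests:v0.0.1
    entrypoint:
    - custom-scorecard-tests
    - customtest1
    labels:
      suite: custom
      test: customtest1

Running operator-sdk scorecard with --selector=test=customtest1 would then execute only this custom test.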
5.10. Configuring built-in monitoring with Prometheus
This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for Operator authors.
5.10.1. Prometheus Operator support
Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.
Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.
5.10.2. Metrics helper
In Go-based Operators generated using the Operator SDK, the following function exposes general metrics about the running program:
func ExposeMetricsPort(ctx context.Context, port int32) (*v1.Service, error)
These metrics are inherited from the controller-runtime library API. By default, the metrics are served on 0.0.0.0:8383/metrics.

A Service object is created with the metrics port exposed, which can then be accessed by Prometheus. The Service object is garbage collected when the leader pod's root owner is deleted.

The following example is present in the cmd/manager/main.go file in all Operators generated using the Operator SDK:
import(
"github.com/operator-framework/operator-sdk/pkg/metrics"
"sigs.k8s.io/controller-runtime/pkg/manager"
)
var (
// Change the below variables to serve metrics on a different host or port.
metricsHost = "0.0.0.0"
metricsPort int32 = 8383
)
...
func main() {
...
// Pass metrics address to controller-runtime manager
mgr, err := manager.New(cfg, manager.Options{
Namespace: namespace,
MetricsBindAddress: fmt.Sprintf("%s:%d", metricsHost, metricsPort),
})
...
// Create Service object to expose the metrics port.
_, err = metrics.ExposeMetricsPort(ctx, metricsPort)
if err != nil {
// handle error
log.Info(err.Error())
}
...
}
5.10.2.1. Modifying the metrics port
Operator authors can modify the port that metrics are exposed on.
Prerequisites
- Go-based Operator generated using the Operator SDK
- Kubernetes-based cluster with the Prometheus Operator deployed
Procedure
In the cmd/manager/main.go file of the generated Operator, change the value of metricsPort in the following line:

var metricsPort int32 = 8383
5.10.3. Service monitors
A ServiceMonitor is a custom resource provided by the Prometheus Operator that discovers the Endpoints in Service objects and configures Prometheus to monitor those pods.
In Go-based Operators generated using the Operator SDK, the GenerateServiceMonitor() helper function can take a Service object and generate a ServiceMonitor custom resource based on it.
5.10.3.1. Creating service monitors
Operator authors can add service target discovery of created monitoring services using the metrics.CreateServiceMonitor() helper function, which accepts the newly created service.
Prerequisites
- Go-based Operator generated using the Operator SDK
- Kubernetes-based cluster with the Prometheus Operator deployed
Procedure
Add the metrics.CreateServiceMonitor() helper function to your Operator code:

import (
    "k8s.io/api/core/v1"
    "github.com/operator-framework/operator-sdk/pkg/metrics"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
    ...
    // Populate below with the Service(s) for which you want to create ServiceMonitors.
    services := []*v1.Service{}
    // Create one ServiceMonitor per application per namespace.
    // Change the below value to the name of the namespace you want the ServiceMonitor to be created in.
    ns := "default"
    // restConfig is used for talking to the Kubernetes apiserver.
    restConfig, err := config.GetConfig()
    if err != nil {
        // Handle errors here.
    }
    // Pass the Service(s) to the helper function, which in turn returns the array of ServiceMonitor objects.
    serviceMonitors, err := metrics.CreateServiceMonitors(restConfig, ns, services)
    if err != nil {
        // Handle errors here.
    }
    ...
}
5.11. Configuring leader election
During the lifecycle of an Operator, more than one instance can be running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, leader election is necessary to avoid contention between the multiple Operator instances. It ensures that only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.
There are two different leader election implementations to choose from, each with its own trade-off:
- Leader-for-life
- The leader pod only gives up leadership, using garbage collection, when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders, a state also known as split brain. However, this method can be subject to a delay in electing a new leader. For example, when the leader pod is on an unresponsive or partitioned node, the pod-eviction-timeout value dictates how long it takes for the leader pod to be deleted from the node and step down, with a default of 5m. See the Leader-for-life Go documentation for more.
- Leader-with-lease
- The leader pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.
By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.
5.11.1. Operator leader election examples
The following examples illustrate how to use the two leader election options for an Operator, Leader-for-life and Leader-with-lease.
5.11.1.1. Leader-for-life election
With the Leader-for-life election implementation, a call to leader.Become() blocks the Operator as it retries until it can become the leader by creating a config map named memcached-operator-lock:
import (
    ...
    "github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
    ...
    err = leader.Become(context.TODO(), "memcached-operator-lock")
    if err != nil {
        log.Error(err, "Failed to retry for leader lock")
        os.Exit(1)
    }
    ...
}
If the Operator is not running inside a cluster, leader.Become() simply returns without error to skip the leader election, because it cannot detect the namespace of the Operator.
5.11.1.2. Leader-with-lease election
The Leader-with-lease implementation can be enabled using the Manager Options for leader election:
import (
    ...
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
    ...
    opts := manager.Options{
        ...
        LeaderElection:   true,
        LeaderElectionID: "memcached-operator-lock",
    }
    mgr, err := manager.New(cfg, opts)
    ...
}
When the Operator is not running in a cluster, the Manager returns an error when starting because it cannot detect the namespace of the Operator to create the config map for leader election. You can override this namespace by setting the LeaderElectionNamespace option for the Manager.
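For example, when running the Operator locally during development, the namespace override might look like the following. This is a minimal sketch, not the SDK's canonical scaffolding: the cfg variable and the surrounding error handling are assumed from the earlier examples, and the "default" namespace value is an illustrative choice.

```go
import (
    ...
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
    ...
    opts := manager.Options{
        LeaderElection:   true,
        LeaderElectionID: "memcached-operator-lock",
        // Assumed override for local development: without it, the Manager
        // cannot detect the Operator namespace when running outside a cluster.
        LeaderElectionNamespace: "default",
    }
    mgr, err := manager.New(cfg, opts)
    ...
}
```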
5.12. Migrating package manifest projects to bundle format
Support for the legacy package manifest format for Operators is removed in OpenShift Container Platform 4.8 and later. If you have an Operator project that was initially created using the package manifest format, you can use the Operator SDK to migrate the project to the bundle format. The bundle format is the preferred packaging format for Operator Lifecycle Manager (OLM) starting in OpenShift Container Platform 4.6.
5.12.1. About packaging format migration
The Operator SDK pkgman-to-bundle command helps in migrating Operator Lifecycle Manager (OLM) package manifests to bundles. One bundle is generated for each version of the Operator in the input package manifest directory.

For example, consider the following packagemanifests/ directory for an etcd Operator project:
Example package manifest format layout
packagemanifests/
└── etcd
├── 0.0.1
│ ├── etcdcluster.crd.yaml
│ └── etcdoperator.clusterserviceversion.yaml
├── 0.0.2
│ ├── etcdbackup.crd.yaml
│ ├── etcdcluster.crd.yaml
│ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml
│ └── etcdrestore.crd.yaml
└── etcd.package.yaml
After running the migration, the following bundles are generated in the bundle/ directory:
Example bundle format layout
bundle/
├── bundle-0.0.1
│ ├── bundle.Dockerfile
│ ├── manifests
│ │ ├── etcdcluster.crd.yaml
│   │   └── etcdoperator.clusterserviceversion.yaml
│ ├── metadata
│ │ └── annotations.yaml
│ └── tests
│ └── scorecard
│ └── config.yaml
└── bundle-0.0.2
├── bundle.Dockerfile
├── manifests
│ ├── etcdbackup.crd.yaml
│ ├── etcdcluster.crd.yaml
│ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml
│   └── etcdrestore.crd.yaml
├── metadata
│ └── annotations.yaml
└── tests
└── scorecard
└── config.yaml
Based on this generated layout, bundle images for both of the bundles are also built with the following names:
- quay.io/example/etcd:0.0.1
- quay.io/example/etcd:0.0.2
5.12.2. Migrating a package manifest project to bundle format
Operator authors can use the Operator SDK to migrate a package manifest format Operator project to a bundle format project.
Prerequisites
- Operator SDK CLI installed
- Operator project initially generated using the Operator SDK in package manifest format
Procedure
Use the Operator SDK to migrate your package manifest project to the bundle format and generate bundle images:
$ operator-sdk pkgman-to-bundle <package_manifests_dir> \ 1
    [--output-dir <directory>] \ 2
    --image-tag-base <image_name_base> 3

1. Specify the location of the package manifests directory for the project, such as packagemanifests/ or manifests/.
2. Optional: By default, the generated bundles are written locally to disk to the bundle/ directory. You can use the --output-dir flag to specify an alternative location.
3. Set the --image-tag-base flag to provide the base of the image name, such as quay.io/example/etcd, that will be used for the bundles. Provide the name without a tag, because the tag for the images is set according to the bundle version. For example, the full bundle image names are generated in the format <image_name_base>:<bundle_version>.
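As a concrete illustration, migrating the example etcd project shown above might look like the following. The directory and image names are hypothetical placeholders, not values the command requires.

```shell
# Migrate the package manifests under packagemanifests/ into bundles
# under bundle/, building images tagged from the base quay.io/example/etcd
# (producing quay.io/example/etcd:0.0.1 and quay.io/example/etcd:0.0.2).
operator-sdk pkgman-to-bundle packagemanifests/ \
    --output-dir bundle/ \
    --image-tag-base quay.io/example/etcd
```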
Verification
Verify that the generated bundle image runs successfully:
$ operator-sdk run bundle <bundle_image_name>:<tag>

Example output

INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4
INFO[0025] Created CatalogSource: etcd-catalog
INFO[0026] OperatorGroup "operator-sdk-og" created
INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub
INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub
INFO[0031] Waiting for ClusterServiceVersion "default/etcdoperator.v0.9.4" to reach 'Succeeded' phase
INFO[0032] Waiting for ClusterServiceVersion "default/etcdoperator.v0.9.4" to appear
INFO[0048] Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Pending
INFO[0049] Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Installing
INFO[0064] Found ClusterServiceVersion "default/etcdoperator.v0.9.4" phase: Succeeded
INFO[0065] OLM has successfully installed "etcdoperator.v0.9.4"
5.13. Operator SDK CLI reference
The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier.
Operator SDK CLI syntax
$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
5.13.1. bundle
The operator-sdk bundle command manages Operator bundle metadata.
5.13.1.1. validate
The bundle validate subcommand validates an Operator bundle.
| Flag | Description |
|---|---|
| -h, --help | Help output for the validate subcommand. |
| --index-builder (string) | Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker, which is the default, podman, or none. |
| --list-optional | List all optional validators available. When set, no validators are run. |
| --select-optional (string) | Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators. |
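For instance, validating a bundle directory on disk might look like the following. The ./bundle path is a placeholder, and the name=operatorhub selector is the upstream label for the optional OperatorHub.io validator.

```shell
# Validate the bundle layout and manifests in ./bundle.
operator-sdk bundle validate ./bundle

# Additionally run the optional OperatorHub.io validator.
operator-sdk bundle validate ./bundle --select-optional name=operatorhub
```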
5.13.2. cleanup
The operator-sdk cleanup command destroys and removes temporary resources that were created for an Operator deployed with the run command.
| Flag | Description |
|---|---|
| -h, --help | Help output for the cleanup subcommand. |
| --kubeconfig (string) | Path to the kubeconfig file to use for CLI requests. |
| -n, --namespace (string) | If present, namespace in which to run the CLI request. |
| --timeout <duration> | Time to wait for the command to complete before failing. The default value is 2m0s. |
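A typical invocation removes everything that was installed for a given package. The package name memcached-operator and the namespace are hypothetical values for illustration.

```shell
# Remove the Subscription, CSV, CatalogSource, and related resources
# created when the memcached-operator package was installed for testing.
operator-sdk cleanup memcached-operator --namespace default --timeout 3m0s
```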
5.13.3. completion
The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier.
| Subcommand | Description |
|---|---|
| bash | Generate bash completions. |
| zsh | Generate zsh completions. |

| Flag | Description |
|---|---|
| -h, --help | Usage help output. |
For example:
$ operator-sdk completion bash
Example output
# bash completion for operator-sdk -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh
5.13.4. create
The operator-sdk create command is used to create, or scaffold, a Kubernetes API.
5.13.4.1. api
The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command.
| Flag | Description |
|---|---|
| -h, --help | Help output for the api subcommand. |
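In a project initialized with init, scaffolding a new API might look like the following. The group, version, and kind values (cache/v1/Memcached) are illustrative, not required names.

```shell
# Scaffold the Memcached resource type and its controller
# in the cache group at version v1.
operator-sdk create api \
    --group cache --version v1 --kind Memcached \
    --resource --controller
```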
5.13.5. generate
The operator-sdk generate command invokes a specific generator to generate code or manifests.
5.13.5.1. bundle
The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project.
Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence.
| Flag | Description |
|---|---|
| --channels (string) | Comma-separated list of channels to which the bundle belongs. The default value is alpha. |
| --crds-dir (string) | Root directory for CustomResourceDefinition manifests. |
| --default-channel (string) | The default channel for the bundle. |
| --deploy-dir (string) | Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag. |
| -h, --help | Help for generate bundle. |
| --input-dir (string) | Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory. |
| --kustomize-dir (string) | Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests. |
| --manifests | Generate bundle manifests. |
| --metadata | Generate bundle metadata and Dockerfile. |
| --output-dir (string) | Directory to write the bundle to. |
| --overwrite | Overwrite the bundle metadata and Dockerfile if they exist. The default value is true. |
| --package (string) | Package name for the bundle. |
| -q, --quiet | Run in quiet mode. |
| --stdout | Write bundle manifest to standard out. |
| --version (string) | Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. |
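Putting a few of these flags together, generating a first bundle might look like the following. The version and channel values are illustrative choices, not defaults.

```shell
# Generate bundle manifests and metadata for version 0.1.0,
# publishing to the stable channel and making it the default.
operator-sdk generate bundle \
    --version 0.1.0 \
    --channels stable \
    --default-channel stable
```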
5.13.5.2. kustomize
The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator.
5.13.5.2.1. manifests
The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag.
| Flag | Description |
|---|---|
| --apis-dir (string) | Root directory for API type definitions. |
| -h, --help | Help for generate kustomize manifests. |
| --input-dir (string) | Directory containing existing Kustomize files. |
| --interactive | When set to false, if a Kustomize base exists, an interactive command prompt is presented to accept custom metadata. |
| --output-dir (string) | Directory where to write Kustomize files. |
| --package (string) | Package name. |
| -q, --quiet | Run in quiet mode. |
5.13.6. init
The operator-sdk init command initializes an Operator project and generates, or scaffolds, a default project directory layout for the given plugin.
This command writes the following files:
- Boilerplate license file
- PROJECT file with the domain and repository
- Makefile to build the project
- go.mod file with project dependencies
- kustomization.yaml file for customizing manifests
- Patch file for customizing images for manager manifests
- Patch file for enabling Prometheus metrics
- main.go file to run the program
| Flag | Description |
|---|---|
| -h, --help | Help output for the init command. |
| --plugins (string) | Name and optionally version of the plugin to initialize the project with. |
| --project-version (string) | Project version. |
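For example, initializing a Go-based project might look like the following. The plugin key, domain, and repository values are placeholders; check operator-sdk init --help for the plugin versions available in your SDK release.

```shell
# Scaffold a new Go-based Operator project.
operator-sdk init \
    --plugins go.kubebuilder.io/v3 \
    --domain example.com \
    --repo github.com/example/memcached-operator
```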
5.13.7. run
The operator-sdk run command provides options that can run or deploy an Operator in various environments.
5.13.7.1. bundle
The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM).
| Flag | Description |
|---|---|
| --index-image (string) | Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest. |
| --install-mode <install_mode_value> | Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace. |
| --timeout <duration> | Install timeout. The default value is 2m0s. |
| --kubeconfig (string) | Path to the kubeconfig file to use for CLI requests. |
| -n, --namespace (string) | If present, namespace in which to run the CLI request. |
| -h, --help | Help output for the run bundle subcommand. |
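For example, deploying a bundle image cluster-wide with a longer install timeout might look like the following. The bundle image name is a hypothetical placeholder.

```shell
# Install the Operator from a bundle image in all namespaces,
# waiting up to 5 minutes for the CSV to reach the Succeeded phase.
operator-sdk run bundle quay.io/example/memcached-operator-bundle:v0.0.1 \
    --install-mode AllNamespaces \
    --timeout 5m0s
```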
5.13.7.2. bundle-upgrade
The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM).
| Flag | Description |
|---|---|
| --timeout <duration> | Upgrade timeout. The default value is 2m0s. |
| --kubeconfig (string) | Path to the kubeconfig file to use for CLI requests. |
| -n, --namespace (string) | If present, namespace in which to run the CLI request. |
| -h, --help | Help output for the bundle-upgrade subcommand. |
5.13.8. scorecard
The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or a directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely.
| Flag | Description |
|---|---|
| -c, --config (string) | Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml. |
| -h, --help | Help output for the scorecard command. |
| --kubeconfig (string) | Path to kubeconfig file. |
| -L, --list | List which tests are available to run. |
| -n, --namespace (string) | Namespace in which to run the test images. |
| -o, --output (string) | Output format for results. Available values are text, which is the default, and json. |
| -l, --selector (string) | Label selector to determine which tests are run. |
| -s, --service-account (string) | Service account to use for tests. The default value is default. |
| -x, --skip-cleanup | Disable resource cleanup after tests are run. |
| -w, --wait-time <seconds> | Seconds to wait for tests to complete, for example 35s. The default value is 30s. |
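Combining several of these flags, a scorecard run against a local bundle directory might look like the following. The suite=basic label matches the default scorecard configuration that the SDK scaffolds; the ./bundle path is a placeholder.

```shell
# Run only the basic test suite against ./bundle, emit JSON results,
# and wait up to 60 seconds for each test to complete.
operator-sdk scorecard ./bundle \
    --selector=suite=basic \
    --output json \
    --wait-time 60s
```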