Chapter 5. Developing Operators
5.1. About the Operator SDK
The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.
Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.
The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator.
Why use the Operator SDK?
Building Kubernetes-native applications can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring.
The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features:
- High-level APIs and abstractions to write the operational logic more intuitively
- Tools for scaffolding and code generation to quickly bootstrap a new project
- Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster
- Extensions to cover common Operator use cases
- Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
OpenShift Container Platform 4.8 supports Operator SDK v1.8.0 or later.
5.1.1. What are Operators?
For an overview about basic Operator concepts and terminology, see Understanding Operators.
5.1.2. Development workflow
The Operator SDK provides the following workflow to develop a new Operator:
- Create an Operator project by using the Operator SDK command-line interface (CLI).
- Define new resource APIs by adding custom resource definitions (CRDs).
- Specify resources to watch by using the Operator SDK API.
- Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
- Use the Operator SDK CLI to build and generate the Operator deployment manifests.
Figure 5.1. Operator SDK workflow
At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
5.2. Installing the Operator SDK CLI
The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators.
OpenShift Container Platform 4.8 supports Operator SDK v1.8.0.
5.2.1. Installing the Operator SDK CLI
You can install the Operator SDK CLI tool on Linux.
Prerequisites
- Go v1.16+
- docker v17.03+, podman v1.9.3+, or buildah v1.7+
Procedure
- Navigate to the OpenShift mirror site.
- From the 4.8.4 directory, download the latest version of the tarball for Linux.
- Unpack the archive:

  $ tar xvf operator-sdk-v1.8.0-ocp-linux-x86_64.tar.gz

- Make the file executable:

  $ chmod +x operator-sdk

- Move the extracted operator-sdk binary to a directory that is on your PATH.

  Tip: To check your PATH:

  $ echo $PATH

  $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk
Verification
After you install the Operator SDK CLI, verify that it is available:
$ operator-sdk version

Example output

operator-sdk version: "v1.8.0-ocp", ...
5.3. Upgrading projects for newer Operator SDK versions
OpenShift Container Platform 4.8 supports Operator SDK v1.8.0. If you already have the v1.3.0 CLI installed on your workstation, you can upgrade the CLI to v1.8.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK v1.8.0, upgrade steps are required for the associated breaking changes introduced since v1.3.0. You must perform the upgrade steps manually in any of your Operator projects that were previously created or maintained with v1.3.0.
5.3.1. Upgrading projects for Operator SDK v1.8.0
The following upgrade steps must be performed to upgrade an existing Operator project for compatibility with v1.8.0.
Prerequisites
- Operator SDK v1.8.0 installed
- Operator project that was previously created or maintained with Operator SDK v1.3.0
Procedure
Make the following changes to your PROJECT file:

Update the PROJECT file plugins object to use manifests and scorecard objects.

The manifests and scorecard plug-ins that create Operator Lifecycle Manager (OLM) and scorecard manifests now have plug-in objects for running create subcommands to create related files.

For Go-based Operator projects, an existing Go-based plug-in configuration object is already present. While the old configuration is still supported, these new objects will be useful in the future as configuration options are added to their respective plug-ins:

Old configuration

version: 3-alpha
...
plugins:
  go.sdk.operatorframework.io/v2-alpha: {}

New configuration

version: 3-alpha
...
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}

Optional: For Ansible- and Helm-based Operator projects, the plug-in configuration object previously did not exist. While you are not required to add the plug-in configuration objects, these new objects will be useful in the future as configuration options are added to their respective plug-ins:

version: 3-alpha
...
plugins:
  manifests.sdk.operatorframework.io/v2: {}
  scorecard.sdk.operatorframework.io/v2: {}
The PROJECT config version 3-alpha must be upgraded to 3. The version key in your PROJECT file represents the PROJECT config version:

Old PROJECT file

version: 3-alpha
resources:
- crdVersion: v1
...

Version 3-alpha has been stabilized as version 3 and contains a set of config fields sufficient to fully describe a project. Although this change is not technically breaking, because the spec at that version was alpha, it was used by default by operator-sdk commands, so it is treated as a breaking change with a convenient upgrade path.

Run the alpha config-3alpha-to-3 command to convert most of your PROJECT file from version 3-alpha to 3:

$ operator-sdk alpha config-3alpha-to-3

Example output

Your PROJECT config file has been converted from version 3-alpha to 3. Please make sure all config data is correct.

The command also outputs comments with directions where automatic conversion is not possible.

Verify the change:

New PROJECT file

version: "3"
resources:
- api:
    crdVersion: v1
...
Make the following changes to your config/manager/manager.yaml file:

For Ansible- and Helm-based Operator projects, add liveness and readiness probes.

New projects built with the Operator SDK have the probes configured by default. The endpoints /healthz and /readyz are now available in the provided image base. You can update your existing projects to use the probes by updating the Dockerfile to use the latest base image, then adding the following to the manager container in the config/manager/manager.yaml file:

Example 5.1. Configuration for Ansible-based Operator projects

Example 5.2. Configuration for Helm-based Operator projects

For Ansible- and Helm-based Operator projects, add security contexts to your manager's deployment.

In the config/manager/manager.yaml file, add the following security contexts:

Example 5.3. config/manager/manager.yaml file
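The bodies of Examples 5.1 to 5.3 are not preserved in this copy. As a rough sketch only, not the verbatim upstream configuration, the additions to the manager container typically look like the following; the probe port 6789 is an assumption based on the Ansible --health-probe-bind-address value shown later in this procedure (Helm-based projects would use 8081):

# Sketch: probe and security-context additions for the manager container
# in config/manager/manager.yaml. Values are illustrative.
livenessProbe:
  httpGet:
    path: /healthz
    port: 6789
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /readyz
    port: 6789
  initialDelaySeconds: 5
  periodSeconds: 10
securityContext:
  allowPrivilegeEscalation: false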
Make the following changes to your Makefile:

For Ansible- and Helm-based Operator projects, update the helm-operator and ansible-operator URLs in the Makefile:

For Ansible-based Operator projects, change:

https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/ansible-operator-v1.3.0-$(ARCHOPER)-$(OSOPER)

to:

https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/ansible-operator_$(OS)_$(ARCH)

For Helm-based Operator projects, change:

https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/helm-operator-v1.3.0-$(ARCHOPER)-$(OSOPER)

to:

https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/helm-operator_$(OS)_$(ARCH)
For Ansible- and Helm-based Operator projects, update the helm-operator, ansible-operator, and kustomize rules in the Makefile. These rules download a local binary but do not use it if a global binary is present:

Example 5.4. Makefile diff for Ansible-based Operator projects

Example 5.5. Makefile diff for Helm-based Operator projects

Move the positional directory argument . in the make target for docker-build.

The directory argument . in the docker-build target was moved to the last positional argument to align with podman CLI expectations, which makes substitution cleaner:

Old target

docker-build:
    docker build . -t ${IMG}

New target

docker-build:
    docker build -t ${IMG} .

You can make this change by running the following command:

$ sed -i 's/docker build . -t ${IMG}/docker build -t ${IMG} ./' $(git grep -l 'docker.*build \. ')

For Ansible- and Helm-based Operator projects, add a help target to the Makefile.

Ansible- and Helm-based projects now provide a help target in the Makefile by default, similar to a --help flag. You can manually add this target to your Makefile using the following lines:

Example 5.6. help target

Add opm and catalog-build targets. You can use these targets to create your own catalogs for your Operator or add your Operator bundles to an existing catalog. Add the targets to your Makefile by adding the following lines:

Example 5.7. opm and catalog-build targets

If you are updating a Go-based Operator project, also add the following Makefile variables:

Example 5.8. Makefile variables

OS = $(shell go env GOOS)
ARCH = $(shell go env GOARCH)

For Go-based Operator projects, set the SHELL variable in your Makefile to the system bash binary.

Importing the setup-envtest.sh script requires bash, so the SHELL variable must be set to bash with error options:

Example 5.9. Makefile diff
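The diff body for Example 5.9 is not preserved here. As an illustrative sketch based on common Kubebuilder scaffolding (verify against a freshly generated project), the setting looks similar to:

# Run Makefile recipes with bash and strict error options, as required
# for importing setup-envtest.sh.
SHELL = /usr/bin/env bash -o pipefail
.SHELLFLAGS = -ec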
For Go-based Operator projects, upgrade controller-runtime to v0.8.3 and Kubernetes dependencies to v0.20.2 by changing the following entries in your go.mod file, then rebuild your project:

Example 5.10. go.mod file

...
k8s.io/api v0.20.2
k8s.io/apimachinery v0.20.2
k8s.io/client-go v0.20.2
sigs.k8s.io/controller-runtime v0.8.3
Add a system:controller-manager service account to your project. A non-default service account controller-manager is now generated by the operator-sdk init command to improve security for Operators installed in shared namespaces. To add this service account to your existing project, follow these steps:

Create the ServiceAccount definition in a file:

Example 5.11. config/rbac/service_account.yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: controller-manager
  namespace: system

Add the service account to the list of RBAC resources:

$ echo "- service_account.yaml" >> config/rbac/kustomization.yaml

Update all RoleBinding and ClusterRoleBinding objects that reference the Operator's service account:

$ find config/rbac -name *_binding.yaml -exec sed -i -E 's/ name: default/ name: controller-manager/g' {} \;

Add the service account name to the manager deployment's spec.template.spec.serviceAccountName field:

$ sed -i -E 's/([ ]+)(terminationGracePeriodSeconds:)/\1serviceAccountName: controller-manager\n\1\2/g' config/manager/manager.yaml

Verify the changes look like the following diffs:

Example 5.12. config/manager/manager.yaml file diff

Example 5.13. config/rbac/auth_proxy_role_binding.yaml file diff

Example 5.14. config/rbac/kustomization.yaml file diff

 resources:
+- service_account.yaml
 - role.yaml
 - role_binding.yaml
 - leader_election_role.yaml

Example 5.15. config/rbac/leader_election_role_binding.yaml file diff

Example 5.16. config/rbac/role_binding.yaml file diff

Example 5.17. config/rbac/service_account.yaml file diff

+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: controller-manager
+  namespace: system
Make the following changes to your config/manifests/kustomization.yaml file:

Add a Kustomize patch to remove the cert-manager volume and volumeMount objects from your cluster service version (CSV).

Because Operator Lifecycle Manager (OLM) does not yet support cert-manager, a JSON patch was added to remove this volume and mount so OLM can create and manage certificates for your Operator.

In the config/manifests/kustomization.yaml file, add the following lines:

Example 5.18. config/manifests/kustomization.yaml file

Optional: For Ansible- and Helm-based Operator projects, configure ansible-operator and helm-operator with a component config. To add this option, follow these steps:

Create the following file:

Example 5.19. config/default/manager_config_patch.yaml file

Create the following file:

Example 5.20. config/manager/controller_manager_config.yaml file

Update the config/default/kustomization.yaml file by applying the following changes to resources:

Example 5.21. config/default/kustomization.yaml file

resources:
...
- manager_config_patch.yaml

Update the config/manager/kustomization.yaml file by applying the following changes:

Example 5.22. config/manager/kustomization.yaml file
Optional: Add a manager config patch to the config/default/kustomization.yaml file.

The generated --config flag was not added to either the ansible-operator or helm-operator binary when config file support was originally added, so it does not currently work. The --config flag supports configuration of both binaries by file; this method of configuration only applies to the underlying controller manager and not the Operator as a whole.

To optionally configure the Operator's deployment with a config file, make changes to the config/default/kustomization.yaml file as shown in the following diff:

Example 5.23. config/default/kustomization.yaml file diff

 # If you want your controller-manager to expose the /metrics
 # endpoint w/o any authn/z, please comment the following line.
 - manager_auth_proxy_patch.yaml
+# Mount the controller config file for loading manager configurations
+# through a ComponentConfig type
+- manager_config_patch.yaml

Flags can be used as is or to override config file values.
For Ansible- and Helm-based Operator projects, add role rules for leader election by making the following changes to the config/rbac/leader_election_role.yaml file:

Example 5.24. config/rbac/leader_election_role.yaml file
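The body of Example 5.24 is not preserved here. As a sketch of the kind of rules involved, based on the resources that controller-runtime leader election typically uses (verify against a freshly scaffolded project), the role grants access to leases and events:

- apiGroups:
    - coordination.k8s.io
  resources:
    - leases
  verbs:
    - get
    - list
    - watch
    - create
    - update
    - patch
    - delete
- apiGroups:
    - ""
  resources:
    - events
  verbs:
    - create
    - patch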
For Ansible-based Operator projects, update Ansible collections.

In your requirements.yml file, change the version field for community.kubernetes to 1.2.1, and the version field for operator_sdk.util to 0.2.0.
Make the following changes to your config/default/manager_auth_proxy_patch.yaml file:

For Ansible-based Operator projects, add the --health-probe-bind-address=:6789 argument to the config/default/manager_auth_proxy_patch.yaml file:

Example 5.25. config/default/manager_auth_proxy_patch.yaml file

For Helm-based Operator projects, add the --health-probe-bind-address=:8081 argument to the config/default/manager_auth_proxy_patch.yaml file:

Example 5.26. config/default/manager_auth_proxy_patch.yaml file

Replace the deprecated flag --enable-leader-election with --leader-elect, and the deprecated flag --metrics-addr with --metrics-bind-address.
Make the following changes to your config/prometheus/monitor.yaml file:

Add scheme, token, and TLS config to the Prometheus ServiceMonitor metrics endpoint.

The /metrics endpoint, while specifying the https port on the manager pod, was not actually configured to serve over HTTPS because no tlsConfig was set. Because kube-rbac-proxy secures this endpoint as a manager sidecar, using the service account token mounted into the pod by default corrects this problem.

Apply the changes to the config/prometheus/monitor.yaml file as shown in the following diff:

Example 5.27. config/prometheus/monitor.yaml file diff
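The diff body for Example 5.27 is missing from this copy. A hedged sketch of the endpoint configuration it describes, with scheme, bearer token, and TLS config on the https endpoint (field values are illustrative, not authoritative):

spec:
  endpoints:
    - path: /metrics
      port: https
      scheme: https
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true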
Note: If you removed kube-rbac-proxy from your project, ensure that you secure the /metrics endpoint using a proper TLS configuration.
Ensure that existing dependent resources have owner annotations.
For Ansible-based Operator projects, owner reference annotations on cluster-scoped dependent resources and dependent resources in other namespaces were not applied correctly. A workaround was to add these annotations manually, which is no longer required as this bug has been fixed.
Deprecate support for package manifests.
The Operator Framework is removing support for the Operator package manifest format in a future release. As part of the ongoing deprecation process, the operator-sdk generate packagemanifests and operator-sdk run packagemanifests commands are now deprecated. To migrate package manifests to bundles, the operator-sdk pkgman-to-bundle command can be used.

Run the operator-sdk pkgman-to-bundle --help command and see "Migrating package manifest projects to bundle format" for more details.

Update the finalizer names for your Operator.

The finalizer name format suggested by Kubernetes documentation is:

<qualified_group>/<finalizer_name>

while the format previously documented for Operator SDK was:

<finalizer_name>.<qualified_group>

If your Operator uses any finalizers with names that match the incorrect format, change them to match the official format. For example, finalizer.cache.example.com must be changed to cache.example.com/finalizer.
Your Operator project is now compatible with Operator SDK v1.8.0.
5.4. Go-based Operators
5.4.1. Getting started with Operator SDK for Go-based Operators
To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.4.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Logged in to an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.4.1.2. Creating and deploying Go-based Operators
You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator

Change into the project directory:

$ cd memcached-operator

Run the operator-sdk init command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator

The command uses the Go plugin by default.
Create an API.
Create a simple Memcached API:
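The command for this step is not preserved in this copy of the document. A sketch of a likely invocation, mirroring the group, version, and kind used in the tutorial later in this chapter (the --resource and --controller flags are an assumption for a non-interactive run):

$ operator-sdk create api \
    --resource=true \
    --controller=true \
    --group=cache \
    --version=v1 \
    --kind=Memcached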
Build and push the Operator image.

Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>
Run the Operator.

Install the CRD:

$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

Watch for the Operator to reconcile the CR:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.4.1.3. Next steps
- See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator.
5.4.2. Operator SDK tutorial for Go-based Operators
Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK: the operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM): installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
5.4.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Logged in to an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.4.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator

Change to the directory:

$ cd $HOME/projects/memcached-operator

Activate support for Go modules:

$ export GO111MODULE=on

Run the operator-sdk init command to initialize the project:

$ operator-sdk init \
    --domain=example.com \
    --repo=github.com/example-inc/memcached-operator

Note: The operator-sdk init command uses the Go plugin by default.

The operator-sdk init command generates a go.mod file to be used with Go modules. The --repo flag is required when creating a project outside of $GOPATH/src/, because generated files require a valid module path.
5.4.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Go. For example:
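The example content is missing from this copy. As a sketch only, a PROJECT file for a Go-based project created with the flags used in this tutorial typically looks similar to the following (field values are illustrative):

domain: example.com
layout: go.kubebuilder.io/v3
projectName: memcached-operator
repo: github.com/example-inc/memcached-operator
version: "3"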
5.4.2.2.2. About the Manager
The main program for the Operator is the main.go file, which initializes and runs the Manager. The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks.
The Manager can restrict the namespace that all controllers watch for resources:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})
By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})
You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces:
var namespaces []string
mgr, err := ctrl.NewManager(cfg, manager.Options{
    NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})
5.4.2.2.3. About multi-group APIs
Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command:
$ operator-sdk edit --multigroup=true
This command updates the PROJECT file, which should look like the following example:
domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...
For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly.
Additional resource
- For more details on migrating to a multi-group project, see the Kubebuilder documentation.
5.4.2.3. Creating an API and controller
Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.
Procedure
Run the following command to create an API with group cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --group=cache \
    --version=v1 \
    --kind=Memcached

When prompted, enter y for creating both the resource and controller:

Create Resource [y/n]
y
Create Controller [y/n]
y

Example output

Writing scaffold for you to edit...
api/v1/memcached_types.go
controllers/memcached_controller.go
...
This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go.
5.4.2.3.1. Defining the API
Define the API for the Memcached custom resource (CR).
Procedure
Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status:
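The type definitions themselves are not preserved in this copy. A minimal sketch of the spec and status fields this tutorial relies on (the Size and Nodes names follow from the reconciliation logic described later; validation markers are omitted), placed inside the existing api/v1/memcached_types.go file:

// MemcachedSpec defines the desired state of Memcached.
type MemcachedSpec struct {
	// Size is the number of memcached pods to run.
	Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached.
type MemcachedStatus struct {
	// Nodes holds the names of the memcached pods.
	Nodes []string `json:"nodes"`
}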
Update the generated code for the resource type:

$ make generate

Tip: After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type.

The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.
5.4.2.3.2. Generating CRD manifests
After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests.
Procedure
Run the following command to generate and update CRD manifests:
$ make manifests

This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.
5.4.2.3.2.1. About OpenAPI validation
OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated.
Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix.
5.4.2.4. Implementing the controller
After creating a new API and controller, you can implement the controller logic.
Procedure
For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation:

Example 5.28. Example memcached_controller.go

The example controller runs the following reconciliation logic for each Memcached custom resource (CR), as sketched after this list:

- Create a Memcached deployment if it does not exist.
- Ensure that the deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the memcached pods.
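Example 5.28 itself is not preserved in this copy. The following condensed sketch shows the shape of that reconciliation logic only; it omits logging and detailed error handling, and it assumes a deploymentForMemcached helper (not shown) that builds the Deployment object, plus the imports generated by the scaffold (context, reflect, appsv1 "k8s.io/api/apps/v1", corev1 "k8s.io/api/core/v1", apierrors "k8s.io/apimachinery/pkg/api/errors", types "k8s.io/apimachinery/pkg/types", ctrl "sigs.k8s.io/controller-runtime", client "sigs.k8s.io/controller-runtime/pkg/client", and the cachev1 API package):

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the Memcached instance; if it was deleted, owned objects are garbage collected.
	memcached := &cachev1.Memcached{}
	if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
		if apierrors.IsNotFound(err) {
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

	// 1. Create a Memcached deployment if it does not exist.
	found := &appsv1.Deployment{}
	err := r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
	if err != nil && apierrors.IsNotFound(err) {
		dep := r.deploymentForMemcached(memcached) // assumed helper that builds the Deployment
		if err := r.Create(ctx, dep); err != nil {
			return ctrl.Result{}, err
		}
		return ctrl.Result{Requeue: true}, nil
	} else if err != nil {
		return ctrl.Result{}, err
	}

	// 2. Ensure the deployment size matches the Memcached CR spec.
	size := memcached.Spec.Size
	if *found.Spec.Replicas != size {
		found.Spec.Replicas = &size
		if err := r.Update(ctx, found); err != nil {
			return ctrl.Result{}, err
		}
		return ctrl.Result{Requeue: true}, nil
	}

	// 3. Update the Memcached CR status with the names of the memcached pods.
	podList := &corev1.PodList{}
	labels := map[string]string{"app": "memcached", "memcached_cr": memcached.Name}
	if err := r.List(ctx, podList, client.InNamespace(memcached.Namespace), client.MatchingLabels(labels)); err != nil {
		return ctrl.Result{}, err
	}
	podNames := []string{}
	for _, pod := range podList.Items {
		podNames = append(podNames, pod.Name)
	}
	if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
		memcached.Status.Nodes = podNames
		if err := r.Status().Update(ctx, memcached); err != nil {
			return ctrl.Result{}, err
		}
	}

	return ctrl.Result{}, nil
}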
The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to Running the Operator.
5.4.2.4.1. Resources watched by the controller
The SetupWithManager() function in controllers/memcached_controller.go specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller.
NewControllerManagedBy() provides a controller builder that allows various controller configurations.
For(&cachev1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Add, Update, or Delete event for a Memcached type, the reconcile loop is sent a reconcile Request argument, which consists of a namespace and name key, for that Memcached object.
Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add, Update, or Delete event, the event handler maps each event to a reconcile request for the owner of the deployment. In this case, the owner is the Memcached object for which the deployment was created.
5.4.2.4.2. Controller configurations
You can initialize a controller by using many other useful configurations. For example:
- Set the maximum number of concurrent reconciles for the controller by using the MaxConcurrentReconciles option, which defaults to 1. A sketch of this option appears at the end of this section.
- Filter watch events using predicates.
- Choose the type of EventHandler to change how a watch event translates to reconcile requests for the reconcile loop. For Operator relationships that are more complex than primary and secondary resources, you can use the EnqueueRequestsFromMapFunc handler to transform a watch event into an arbitrary set of reconcile requests.
For more details on these and other configurations, see the upstream Builder and Controller GoDocs.
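The code for the MaxConcurrentReconciles option mentioned above is not preserved in this copy. A sketch of how such an option is commonly passed through the builder (the value 2 is arbitrary, and controller here refers to the sigs.k8s.io/controller-runtime/pkg/controller package):

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		WithOptions(controller.Options{MaxConcurrentReconciles: 2}). // defaults to 1
		Complete(r)
}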
5.4.2.4.3. Reconcile loop
Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a namespace and name key used to find the primary resource object, Memcached, from the cache. Based on the return values, result, and error, the request might be requeued and the reconcile loop might be triggered again, as the sketch below illustrates:
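The code blocks that originally accompanied the two sentences above are missing from this copy. As a sketch of the standard controller-runtime return forms (errorOccurred and needsRequeue are placeholder conditions, not real API):

// Inside Reconcile(), the returned Result and error control requeueing.
if errorOccurred {
	return ctrl.Result{}, err // requeue because an error occurred
}
if needsRequeue {
	return ctrl.Result{Requeue: true}, nil // requeue the request explicitly
}
return ctrl.Result{}, nil // success; do not requeue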
You can set the Result.RequeueAfter to requeue the request after a grace period as well:
import "time"
// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
import "time"
// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
You can return Result with RequeueAfter set to periodically reconcile a CR.
For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.
5.4.2.4.4. Permissions and RBAC manifests
The controller requires certain RBAC permissions to interact with the resources it manages. These are specified using RBAC markers, such as the following:
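The marker lines themselves are missing from this copy. A sketch of the kind of markers the Memcached example relies on, placed above the Reconcile() method (group and resource names follow the API created earlier in this tutorial; adjust them to your project):

//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch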
The ClusterRole object manifest at config/rbac/role.yaml is generated from the previous markers by using the controller-gen utility whenever the make manifests command is run.
5.4.2.5. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
Before running your Go-based Operator as either a deployment on OpenShift Container Platform or as a bundle that uses OLM, ensure that your project has been updated to use supported images.
5.4.2.5.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run
5.4.2.5.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Prerequisites
- Prepared your Go-based Operator to run on OpenShift Container Platform by updating the project to use supported images
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.

Verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager    1/1     1            1           8m
5.4.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.4.2.5.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.8+ installed
- Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used instead. For more information, see Multiple Architectures.

Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>

Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile

These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.4.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    [-n <namespace>] \
    <registry>/<user>/<bundle_image_name>:<tag>

By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
-
Deploy your Operator to your cluster by creating an
OperatorGroup,Subscription,InstallPlan, and all other required objects, including RBAC.
5.4.2.6. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

$ oc project memcached-operator-system

Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:
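The sample specification is missing from this copy. A sketch that is consistent with the three-replica deployment shown in the output below (the size field matches the MemcachedSpec sketched earlier in this tutorial):

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 3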
Create the CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml

Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           8m
memcached-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Memcached pod names.

Check the pods:

$ oc get pods

Example output

NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:

$ oc get memcached/memcached-sample -o yaml
Update the deployment size.
Update the spec.size field in the Memcached CR from 3 to 5, either by editing the config/samples/cache_v1_memcached.yaml file or by patching the CR directly:

$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:

$ oc get deployments

Example output

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager   1/1     1            1           10m
memcached-sample                        5/5     5            5           3m
Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.4.3. Project layout for Go-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.4.3.1. Go-based project layout
Go-based Operator projects, the default type, generated using the operator-sdk init command contain the following files and directories:
| File or directory | Purpose |
|---|---|
| main.go | Main program of the Operator. This instantiates a new manager that registers all custom resource definitions (CRDs) and sets up and runs the controllers. |
| api/ | Directory tree that defines the APIs of the CRDs. You must edit the *_types.go files in this tree to define the spec and status of each resource type. |
| controllers/ | Controller implementations. Edit the *_controller.go files to define the reconcile logic that handles each resource type. |
| config/ | Kubernetes manifests used to deploy your controller on a cluster, including CRDs, RBAC, and certificates. |
| Makefile | Targets used to build and deploy your controller. |
| Dockerfile | Instructions used by a container engine to build your Operator. |
|  | Kubernetes manifests for registering CRDs, setting up RBAC, and deploying the Operator as a deployment. |
5.5. Ansible-based Operators
5.5.1. Getting started with Operator SDK for Ansible-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running an Ansible-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Ansible-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster.
5.5.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Ansible version v2.9.0
- Ansible Runner version v1.1.0+
- Ansible Runner HTTP Event Emitter plugin version v1.0.0+
- OpenShift Python client version v0.11.2+
- Logged in to an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.1.2. Creating and deploying Ansible-based Operators
You can build and deploy a simple Ansible-based Operator for Memcached by using the Operator SDK.
Procedure
Create a project.
Create your project directory:
$ mkdir memcached-operator

Change into the project directory:

$ cd memcached-operator

Run the operator-sdk init command with the ansible plugin to initialize the project:

$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
Create an API.
Create a simple Memcached API:
$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role

The --generate-role flag generates an Ansible role for the API.
Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.

Install the CRD:

$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Create a sample custom resource (CR).
Create a sample CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml \
    -n memcached-operator-system

Watch for the Operator to reconcile the CR:

$ oc logs deployment.apps/memcached-operator-controller-manager \
    -c manager \
    -n memcached-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.5.1.3. Next steps
- See Operator SDK tutorial for Ansible-based Operators for a more in-depth walkthrough on building an Ansible-based Operator.
5.5.2. Operator SDK tutorial for Ansible-based Operators
Operator developers can take advantage of Ansible support in the Operator SDK to build an example Ansible-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This tutorial walks through the following process:
- Create a Memcached deployment
- Ensure that the deployment size is the same as specified by the Memcached custom resource (CR) spec
- Update the Memcached CR status using the status writer with the names of the memcached pods
This process is accomplished by using two centerpieces of the Operator Framework:
- Operator SDK
- The operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM)
- Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Ansible-based Operators.
5.5.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Ansible version v2.9.0
- Ansible Runner version v1.1.0+
- Ansible Runner HTTP Event Emitter plugin version v1.0.0+
- OpenShift Python client version v0.11.2+
- Logged in to an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.5.2.2. Creating a project
Use the Operator SDK CLI to create a project called memcached-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator

Change to the directory:

$ cd $HOME/projects/memcached-operator

Run the operator-sdk init command with the ansible plugin to initialize the project:

$ operator-sdk init \
    --plugins=ansible \
    --domain=example.com
5.5.2.2.1. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Ansible. For example:
domain: example.com
layout: ansible.sdk.operatorframework.io/v1
projectName: memcached-operator
version: 3
5.5.2.3. Creating an API
Use the Operator SDK CLI to create a Memcached API.
Procedure
Run the following command to create an API with group cache, version v1, and kind Memcached:

$ operator-sdk create api \
    --group cache \
    --version v1 \
    --kind Memcached \
    --generate-role

The --generate-role flag generates an Ansible role for the API.
After creating the API, your Operator project updates with the following structure:
- Memcached CRD
- Includes a sample Memcached resource
- Manager
- Program that reconciles the state of the cluster to the desired state by using:
  - A reconciler, either an Ansible role or playbook
  - A watches.yaml file, which connects the Memcached resource to the memcached Ansible role
5.5.2.4. Modifying the manager
Update your Operator project to provide the reconcile logic, in the form of an Ansible role, which runs every time a Memcached resource is created, updated, or deleted.
Procedure
Update the roles/memcached/tasks/main.yml file with the following structure; a minimal sketch is shown below. This memcached role ensures a memcached deployment exists and sets the deployment size.
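A sketch of the role tasks, using the community.kubernetes.k8s module to manage the deployment (the deployment name pattern and image tag are illustrative, and the size variable is supplied by the CR spec):

---
# tasks file for Memcached
- name: start memcached
  community.kubernetes.k8s:
    definition:
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: '{{ ansible_operator_meta.name }}-memcached'
        namespace: '{{ ansible_operator_meta.namespace }}'
      spec:
        replicas: "{{ size }}"
        selector:
          matchLabels:
            app: memcached
        template:
          metadata:
            labels:
              app: memcached
          spec:
            containers:
            - name: memcached
              image: "docker.io/memcached:1.4.36-alpine"
              command:
                - memcached
                - -m=64
                - -o
                - modern
                - -v
              ports:
                - containerPort: 11211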
Set default values for variables used in your Ansible role by editing the roles/memcached/defaults/main.yml file:

---
# defaults file for Memcached
size: 1
Update the Memcached sample resource in the config/samples/cache_v1_memcached.yaml file with the following structure; a sketch is shown below. The key-value pairs in the custom resource (CR) spec are passed to Ansible as extra variables.
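A minimal sketch of the sample CR, assuming the cache.example.com/v1 API created earlier (the size value is illustrative):

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 3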
The names of all variables in the spec field are converted to snake case, meaning lowercase with an underscore, by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.
You can disable this case conversion by setting the snakeCaseParameters option to false in your watches.yaml file. It is recommended that you perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.
5.5.2.5. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.5.2.5.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run
5.5.2.5.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used for this purpose. For more information, see "Multiple Architectures".
Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.
Verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.5.2.5.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.5.2.5.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.8+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used for this purpose. For more information, see "Multiple Architectures".
Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.5.2.5.3.2. Deploying an Operator with Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:

$ operator-sdk run bundle \
    [-n <namespace>] \
    <registry>/<user>/<bundle_image_name>:<tag>

By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
This command performs the following actions:
- Create an index image referencing your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Create a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploy your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required objects, including RBAC.
5.5.2.6. Creating a custom resource
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

$ oc project memcached-operator-system

Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification; a sketch is shown below.
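A minimal sketch of the specification, assuming the spec.size variable used by the role (the value 3 matches the deployment check later in this procedure):

apiVersion: cache.example.com/v1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 3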
Create the CR:

$ oc apply -f config/samples/cache_v1_memcached.yaml
Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager    1/1     1            1           8m
memcached-sample                         3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods

Example output

NAME                               READY   STATUS    RESTARTS   AGE
memcached-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
memcached-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
memcached-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:
$ oc get memcached/memcached-sample -o yaml
Update the deployment size.
Change the spec.size field in the Memcached CR from 3 to 5, for example by patching the CR directly:

$ oc patch memcached memcached-sample \
    -p '{"spec":{"size": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:
$ oc get deployments

Example output

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
memcached-operator-controller-manager    1/1     1            1           10m
memcached-sample                         5/5     5            5           3m
Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.5.3. Project layout for Ansible-based Operators
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.5.3.1. Ansible-based project layout
Ansible-based Operator projects generated using the operator-sdk init --plugins ansible command contain the following directories and files:
| File or directory | Purpose |
|---|---|
| Dockerfile | Dockerfile for building the container image for the Operator. |
| Makefile | Targets for building, publishing, deploying the container image that wraps the Operator binary, and targets for installing and uninstalling the custom resource definition (CRD). |
| PROJECT | YAML file containing metadata information for the Operator. |
| config/crd | Base CRD files and the kustomization.yaml settings. |
| config/default | Collects all Operator manifests for deployment. Used by the kustomize build command. |
| config/manager | Controller manager deployment. |
| config/prometheus | ServiceMonitor resource for monitoring the Operator. |
| config/rbac | Role and role binding for leader election and authentication proxy. |
| config/samples | Sample resources created for the CRDs. |
| config/testing | Sample configurations for testing. |
| playbooks/ | A subdirectory for the playbooks to run. |
| roles/ | Subdirectory for the roles tree to run. |
| watches.yaml | Group/version/kind (GVK) of the resources to watch, and the Ansible invocation method. New entries are added by using the create api command. |
| requirements.yml | YAML file containing the Ansible collections and role dependencies to install during a build. |
| molecule/ | Molecule scenarios for end-to-end testing of your role and Operator. |
5.5.4. Ansible support in Operator SDK
5.5.4.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
| Field | Description |
|---|---|
| apiVersion | Version of the CR to be created. |
| kind | Kind of the CR to be created. |
| metadata | Kubernetes-specific metadata to be created. |
| spec | Key-value list of variables which are passed to Ansible. This field is empty by default. |
| status | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information to the status of the CR. |
| annotations | Kubernetes-specific annotations to be appended to the CR. |
The following CR annotations modify the behavior of the Operator:
| Annotation | Description |
|---|---|
| ansible.sdk.operatorframework.io/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds. |
Example Ansible-based Operator annotation
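A sketch of a CR carrying this annotation (the group, kind, and interval shown are illustrative):

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
  annotations:
    ansible.sdk.operatorframework.io/reconcile-period: "30s"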
5.5.4.2. watches.yaml file
A group/version/kind (GVK) is a unique identifier for a Kubernetes API. The watches.yaml file contains a list of mappings from custom resources (CRs), identified by its GVK, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location at /opt/ansible/watches.yaml.
| Field | Description |
|---|---|
| group | Group of CR to watch. |
| version | Version of CR to watch. |
| kind | Kind of CR to watch. |
| role (default) | Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value is /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field. |
| playbook | Path to the Ansible playbook added to the container. This playbook is expected to be a way to call roles. This field is mutually exclusive with the role field. |
| reconcilePeriod (optional) | The reconciliation interval, how often the role or playbook is run, for a given CR. |
| manageStatus (optional) | When set to true (the default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller. |
Example watches.yaml file
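A minimal sketch mapping a Memcached CR to an Ansible role (the group, version, and role path are illustrative):

- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: /opt/ansible/roles/memcached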
5.5.4.2.1. Advanced options
Advanced features can be enabled by adding them to your watches.yaml file per GVK. They can go below the group, version, kind and playbook or role fields.
Some features can be overridden per resource using an annotation on that CR. The options that can be overridden have the annotation specified below.
| Feature | YAML key | Description | Annotation for override | Default value |
|---|---|---|---|---|
| Reconcile period | reconcilePeriod | Time between reconcile runs for a particular CR. | ansible.sdk.operatorframework.io/reconcile-period | 1m |
| Manage status | manageStatus | Allows the Operator to manage the conditions section of each CR status section. |  | true |
| Watch dependent resources | watchDependentResources | Allows the Operator to dynamically watch resources that are created by Ansible. |  | true |
| Watch cluster-scoped resources | watchClusterScopedResources | Allows the Operator to watch cluster-scoped resources that are created by Ansible. |  | false |
| Max runner artifacts | maxRunnerArtifacts | Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. | ansible.sdk.operatorframework.io/max-runner-artifacts | 20 |
Example watches.yml file with advanced options
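A sketch of a watches.yaml entry with the advanced options set (all values are illustrative):

- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: /opt/ansible/roles/memcached
  reconcilePeriod: 0s
  manageStatus: false
  watchDependentResources: false
  watchClusterScopedResources: false
  maxRunnerArtifacts: 10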
5.5.4.3. Extra variables sent to Ansible
Extra variables can be sent to Ansible, which are then managed by the Operator. The spec section of the custom resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook command.
The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.
For the following CR example:
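A sketch of such a CR (the group, kind, and field names are illustrative; note the camelCase field in the spec):

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world 2"
  newParameter: "newParam"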
The structure passed to Ansible as extra variables is:
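Continuing the sketch above, the extra variables would look similar to the following (the structure is illustrative; the Operator also passes the full CR object under a variable derived from the GVK):

{
  "meta": {
    "name": "example",
    "namespace": "default"
  },
  "message": "Hello world 2",
  "new_parameter": "newParam"
}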
The message and newParameter fields are set in the top level as extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:
---
- debug:
    msg: "name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}"
5.5.4.4. Ansible Runner directory
Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.
5.5.5. Kubernetes Collection for Ansible
To manage the lifecycle of your application on Kubernetes using Ansible, you can use the Kubernetes Collection for Ansible. This collection of Ansible modules allows a developer to either leverage their existing Kubernetes resource files written in YAML or express the lifecycle management in native Ansible.
One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
This section goes into detail on usage of the Kubernetes Collection. To get started, install the collection on your local workstation and test it using a playbook before moving on to using it within an Operator.
5.5.5.1. Installing the Kubernetes Collection for Ansible
You can install the Kubernetes Collection for Ansible on your local workstation.
Procedure
Install Ansible 2.9+:

$ sudo dnf install ansible

Install the OpenShift Python client package:

$ pip3 install openshift

Install the Kubernetes Collection using one of the following methods:
You can install the collection directly from Ansible Galaxy:

$ ansible-galaxy collection install community.kubernetes

If you have already initialized your Operator, you might have a requirements.yml file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the community.kubernetes collection as well as the operator_sdk.util collection, which provides modules and plugins for Operator-specific functions.
To install the dependent modules from the requirements.yml file:

$ ansible-galaxy collection install -r requirements.yml
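For reference, a scaffolded requirements.yml typically resembles the following sketch; the exact version pins depend on the Operator SDK release, so treat them as illustrative:

---
collections:
  - name: community.kubernetes
    version: "1.2.1"
  - name: operator_sdk.util
    version: "0.2.0"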
5.5.5.2. Testing the Kubernetes Collection locally
Operator developers can run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.
Prerequisites
- Initialize an Ansible-based Operator project and create an API that has a generated Ansible role by using the Operator SDK
- Install the Kubernetes Collection for Ansible
Procedure
In your Ansible-based Operator project directory, modify the roles/<kind>/tasks/main.yml file with the Ansible logic that you want. The roles/<kind>/ directory is created when you use the --generate-role flag while creating an API. The <kind> replaceable matches the kind that you specified for the API.
The following example creates and deletes a config map based on the value of a variable named state; a sketch is shown below.
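A minimal sketch of such a task, using the community.kubernetes.k8s module (the config map name example-config matches the verification steps that follow; the namespace is illustrative):

---
- name: set ConfigMap example-config to {{ state }}
  community.kubernetes.k8s:
    api_version: v1
    kind: ConfigMap
    name: example-config
    namespace: default
    state: "{{ state }}"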
Modify the roles/<kind>/defaults/main.yml file to set state to present by default:

---
state: present

Create an Ansible playbook by creating a playbook.yml file in the top level of your project directory, and include your <kind> role:
---
- hosts: localhost
  roles:
    - <kind>

Run the playbook:

$ ansible-playbook playbook.yml

Verify that the config map was created:
$ oc get configmaps

Example output

NAME             DATA   AGE
example-config   0      2m1s

Rerun the playbook setting state to absent:
$ ansible-playbook playbook.yml --extra-vars state=absent

Verify that the config map was deleted:

$ oc get configmaps
5.5.5.3. Next steps
- See Using Ansible inside an Operator for details on triggering your custom Ansible logic inside of an Operator when a custom resource (CR) changes.
5.5.6. Using Ansible inside an Operator
After you are familiar with using the Kubernetes Collection for Ansible locally, you can trigger the same Ansible logic inside of an Operator when a custom resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the watches.yaml file.
5.5.6.1. Custom resource files
Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
| Field | Description |
|---|---|
| apiVersion | Version of the CR to be created. |
| kind | Kind of the CR to be created. |
| metadata | Kubernetes-specific metadata to be created. |
| spec | Key-value list of variables which are passed to Ansible. This field is empty by default. |
| status | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information to the status of the CR. |
| annotations | Kubernetes-specific annotations to be appended to the CR. |
The following CR annotations modify the behavior of the Operator:
| Annotation | Description |
|---|---|
| ansible.sdk.operatorframework.io/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds. |
Example Ansible-based Operator annotation
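As in the reference section earlier, a sketch of a CR carrying this annotation (group, kind, and interval are illustrative):

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
  annotations:
    ansible.sdk.operatorframework.io/reconcile-period: "30s"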
5.5.6.2. Testing an Ansible-based Operator locally
You can test the logic inside of an Ansible-based Operator running locally by using the make run command from the top-level directory of your Operator project. The make run Makefile target runs the ansible-operator binary locally, which reads from the watches.yaml file and uses your ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s modules do.
You can customize the roles path by setting the environment variable ANSIBLE_ROLES_PATH or by using the ansible-roles-path flag. If the role is not found in the ANSIBLE_ROLES_PATH value, the Operator looks for it in {{current directory}}/roles.
Prerequisites
- Ansible Runner version v1.1.0+
- Ansible Runner HTTP Event Emitter plugin version v1.0.0+
- Performed the previous steps for testing the Kubernetes Collection locally
Procedure
Install your custom resource definition (CRD) and proper role-based access control (RBAC) definitions for your custom resource (CR):
$ make install

Example output

/usr/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created

Run the make run command:

$ make run

With the Operator now watching your CR for events, the creation of a CR will trigger your Ansible role to run.
Note: Consider an example config/samples/<gvk>.yaml CR manifest:

apiVersion: <group>.example.com/v1alpha1
kind: <kind>
metadata:
  name: "<kind>-sample"

Because the spec field is not set, Ansible is invoked with no extra variables. Passing extra variables from a CR to Ansible is covered in another section. It is important to set reasonable defaults for the Operator.
Create an instance of your CR with the default variable state set to present:

$ oc apply -f config/samples/<gvk>.yaml
Check that the example-config config map was created:

$ oc get configmaps

Example output

NAME             STATUS   AGE
example-config   Active   3s
Modify your config/samples/<gvk>.yaml file to set the state field to absent. For example, see the sketch below.
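A sketch of the modified CR, using the same group and kind placeholders as above:

apiVersion: <group>.example.com/v1alpha1
kind: <kind>
metadata:
  name: "<kind>-sample"
spec:
  state: absent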
Apply the changes:

$ oc apply -f config/samples/<gvk>.yaml

Confirm that the config map is deleted:

$ oc get configmap
5.5.6.3. Testing an Ansible-based Operator on the cluster
After you have tested your custom Ansible logic locally inside of an Operator, you can test the Operator inside of a pod on an OpenShift Container Platform cluster, which is preferred for production use.
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used for this purpose. For more information, see "Multiple Architectures".
Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.
Verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.5.6.4. Ansible logs
Ansible-based Operators provide logs about the Ansible run, which can be useful for debugging your Ansible tasks. The logs can also contain detailed information about the internals of the Operator and its interactions with Kubernetes.
5.5.6.4.1. Viewing Ansible logs
Prerequisites
- Ansible-based Operator running as a deployment on a cluster
Procedure
To view logs from an Ansible-based Operator, run the following command:
$ oc logs deployment/<project_name>-controller-manager \
    -c manager \
    -n <namespace>
5.5.6.4.2. Enabling full Ansible results in logs
You can set the environment variable ANSIBLE_DEBUG_LOGS to True to enable checking the full Ansible result in logs, which can be helpful when debugging.
Procedure
Edit the config/manager/manager.yaml and config/default/manager_auth_proxy_patch.yaml files to include the following configuration:

      containers:
      - name: manager
        env:
        - name: ANSIBLE_DEBUG_LOGS
          value: "True"
5.5.6.4.3. Enabling verbose debugging in logs
While developing an Ansible-based Operator, it can be helpful to enable additional debugging in logs.
Procedure
Add the ansible.sdk.operatorframework.io/verbosity annotation to your custom resource to enable the verbosity level that you want. For example, see the sketch below.
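A sketch of a CR with the verbosity annotation set (the group, kind, and level are illustrative):

apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
  annotations:
    "ansible.sdk.operatorframework.io/verbosity": "4"
spec:
  size: 4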
5.5.7. Custom resource status management
5.5.7.1. About custom resource status in Ansible-based Operators
Ansible-based Operators automatically update custom resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:
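A sketch of such a status subresource (all values are illustrative):

status:
  conditions:
    - ansibleResult:
        changed: 3
        completion: 2018-12-03T13:45:57.13329
        failures: 1
        ok: 6
        skipped: 0
      lastTransitionTime: 2018-12-03T13:45:57Z
      message: 'Status code was -1 and not [200]: Request failed'
      reason: Failed
      status: "True"
      type: Failure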
Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.
By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.
5.5.7.2. Tracking custom resource status manually
You can use the operator_sdk.util collection to modify your Ansible-based Operator to track custom resource (CR) status manually from your application.
Prerequisites
- Ansible-based Operator project created by using the Operator SDK
Procedure
Update the watches.yaml file with a manageStatus field set to false:

- version: v1
  group: api.example.com
  kind: <kind>
  role: <role>
  manageStatus: false

Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key test and value data, operator_sdk.util can be used as shown in the sketch below.
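A minimal sketch of the task; the API version follows the watches.yaml entry above, and the test: data pair matches the example in the text:

- operator_sdk.util.k8s_status:
    api_version: api.example.com/v1
    kind: <kind>
    name: "{{ ansible_operator_meta.name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
    status:
      test: data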
You can declare collections in the meta/main.yml file for the role, which is included for scaffolded Ansible-based Operators:

collections:
  - operator_sdk.util

After declaring collections in the role meta, you can invoke the k8s_status module directly:

k8s_status:
  ...
  status:
    key1: value1
5.6. Helm-based Operators
5.6.1. Getting started with Operator SDK for Helm-based Operators
The Operator SDK includes options for generating an Operator project that leverages existing Helm charts to deploy Kubernetes resources as a unified application, without having to write any Go code.
To demonstrate the basics of setting up and running a Helm-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Helm-based Operator for Nginx and deploy it to a cluster.
5.6.1.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Logged in to an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.6.1.2. Creating and deploying Helm-based Operators
You can build and deploy a simple Helm-based Operator for Nginx by using the Operator SDK.
Procedure
Create a project.
Create your project directory:

$ mkdir nginx-operator

Change into the project directory:

$ cd nginx-operator

Run the operator-sdk init command with the helm plugin to initialize the project:

$ operator-sdk init \
    --plugins=helm
Create an API.
Create a simple Nginx API:
$ operator-sdk create api \
    --group demo \
    --version v1 \
    --kind Nginx

This API uses the built-in Helm chart boilerplate from the helm create command.
Build and push the Operator image.
Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to:

$ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>

Run the Operator.
Install the CRD:
$ make install

Deploy the project to the cluster. Set IMG to the image that you pushed:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
Add a security context constraint (SCC).
The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following SCC to the service account for the nginx-sample pod:

$ oc adm policy add-scc-to-user \
    anyuid system:serviceaccount:nginx-operator-system:nginx-sample

Create a sample custom resource (CR).
Create a sample CR:

$ oc apply -f config/samples/demo_v1_nginx.yaml \
    -n nginx-operator-system

Watch for the CR to reconcile the Operator:

$ oc logs deployment.apps/nginx-operator-controller-manager \
    -c manager \
    -n nginx-operator-system
Clean up.
Run the following command to clean up the resources that have been created as part of this procedure:
$ make undeploy
5.6.1.3. Next steps
- See Operator SDK tutorial for Helm-based Operators for a more in-depth walkthrough on building a Helm-based Operator.
5.6.2. Operator SDK tutorial for Helm-based Operators
Operator developers can take advantage of Helm support in the Operator SDK to build an example Helm-based Operator for Nginx and manage its lifecycle. This tutorial walks through the following process:
- Create a Nginx deployment
- Ensure that the deployment size is the same as specified by the Nginx custom resource (CR) spec
- Update the Nginx CR status using the status writer with the names of the nginx pods
This process is accomplished using two centerpieces of the Operator Framework:
- Operator SDK
- The operator-sdk CLI tool and controller-runtime library API
- Operator Lifecycle Manager (OLM)
- Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
This tutorial goes into greater detail than Getting started with Operator SDK for Helm-based Operators.
5.6.2.1. Prerequisites
- Operator SDK CLI installed
- OpenShift CLI (oc) v4.8+ installed
- Logged in to an OpenShift Container Platform 4.8 cluster with oc, using an account that has cluster-admin permissions
- To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret
5.6.2.2. Creating a project
Use the Operator SDK CLI to create a project called nginx-operator.
Procedure
Create a directory for the project:
$ mkdir -p $HOME/projects/nginx-operator

Change to the directory:

$ cd $HOME/projects/nginx-operator

Run the operator-sdk init command with the helm plugin to initialize the project; one possible invocation is shown below.
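A sketch of the command, assuming the example.com domain and the demo group used elsewhere in this tutorial (treat the flags as illustrative and adjust them to the API you want to watch):

$ operator-sdk init \
    --plugins=helm \
    --domain=example.com \
    --group=demo \
    --version=v1 \
    --kind=Nginx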
Note: By default, the helm plugin initializes a project using a boilerplate Helm chart. You can use additional flags, such as the --helm-chart flag, to initialize a project using an existing Helm chart.
The init command creates the nginx-operator project specifically for watching a resource with API version example.com/v1 and kind Nginx.
For Helm-based projects, the init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator.
5.6.2.2.1. Existing Helm charts
Instead of creating your project with a boilerplate Helm chart, you can alternatively use an existing chart, either from your local file system or a remote chart repository, by using the following flags:
- --helm-chart
- --helm-chart-repo
- --helm-chart-version
If the --helm-chart flag is specified, the --group, --version, and --kind flags become optional. If left unset, the following default values are used:
| Flag | Value |
|---|---|
| --group | demo |
| --version | v1 |
| --kind | Deduced from the specified chart |
If the --helm-chart flag specifies a local chart archive, for example example-chart-1.2.0.tgz, or directory, the chart is validated and unpacked or copied into the project. Otherwise, the Operator SDK attempts to fetch the chart from a remote repository.
If a custom repository URL is not specified by the --helm-chart-repo flag, the following chart reference formats are supported:
| Format | Description |
|---|---|
| <repo_name>/<chart_name> | Fetch the Helm chart named <chart_name> from the Helm chart repository named <repo_name>, as specified in the $HELM_HOME/repositories.yaml file. |
| <url> | Fetch the Helm chart archive at the specified URL. |
If a custom repository URL is specified by --helm-chart-repo, the following chart reference format is supported:
| Format | Description |
|---|---|
| <chart_name> | Fetch the Helm chart named <chart_name> in the Helm chart repository specified by the --helm-chart-repo URL value. |
If the --helm-chart-version flag is unset, the Operator SDK fetches the latest available version of the Helm chart. Otherwise, it fetches the specified version. The optional --helm-chart-version flag is not used when the chart specified with the --helm-chart flag refers to a specific version, for example when it is a local path or a URL.
For more details and examples, run:
$ operator-sdk init --plugins helm --help
5.6.2.2.2. PROJECT file
Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Helm. For example:
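A sketch of the file, mirroring the Ansible example shown earlier; the exact values depend on your init flags:

domain: example.com
layout: helm.sdk.operatorframework.io/v1
projectName: nginx-operator
version: 3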
5.6.2.3. Understanding the Operator logic
For this example, the nginx-operator project executes the following reconciliation logic for each Nginx custom resource (CR):
- Create an Nginx deployment if it does not exist.
- Create an Nginx service if it does not exist.
- Create an Nginx ingress if it is enabled and does not exist.
- Ensure that the deployment, service, and optional ingress match the desired configuration as specified by the Nginx CR, for example the replica count, image, and service type.
By default, the nginx-operator project watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:
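A sketch of the generated watches.yaml entry; the group shown assumes the demo group and example.com domain, and the chart path matches the scaffolded helm-charts directory:

# Use the 'create api' subcommand to add watches to this file.
- group: demo.example.com
  version: v1
  kind: Nginx
  chart: helm-charts/nginx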
5.6.2.3.1. Sample Helm chart
When a Helm Operator project is created, the Operator SDK creates a sample Helm chart that contains a set of templates for a simple Nginx release.
For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.
If you are not already familiar with Helm charts, review the Helm developer documentation.
5.6.2.3.2. Modifying the custom resource spec
Helm uses a concept called values to provide customizations to the defaults of a Helm chart, which are defined in the values.yaml file.
You can override these defaults by setting the desired values in the custom resource (CR) spec. You can use the number of replicas as an example.
Procedure
The helm-charts/nginx/values.yaml file has a value called replicaCount set to 1 by default. To have two Nginx instances in your deployment, your CR spec must contain replicaCount: 2. Edit the config/samples/demo_v1_nginx.yaml file to set replicaCount: 2.
Similarly, the default service port is set to 80. To use 8080, edit the config/samples/demo_v1_nginx.yaml file to set spec.port: 8080, which adds the service port override; a sketch of the resulting CR follows.
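A sketch of the sample CR with both overrides applied; the apiVersion shown assumes the demo group, so use the group and version from your generated sample file:

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 2
  port: 8080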
The Helm Operator applies the entire spec as if it was the contents of a values file, just like the helm install -f ./overrides.yaml command.
5.6.2.4. Running the Operator
There are three ways you can use the Operator SDK CLI to build and run your Operator:
- Run locally outside the cluster as a Go program.
- Run as a deployment on the cluster.
- Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.
5.6.2.4.1. Running locally outside the cluster
You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.
Procedure
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

$ make install run
5.6.2.4.2. Running as a deployment on the cluster
You can run your Operator project as a deployment on your cluster.
Procedure
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:

$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used for this purpose. For more information, see "Multiple Architectures".
Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

Note: The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:

$ make deploy IMG=<registry>/<user>/<image_name>:<tag>

By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system and uses it for the deployment. This command also installs the RBAC manifests from config/rbac.
Verify that the Operator is running:

$ oc get deployment -n <project_name>-system

Example output

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
<project_name>-controller-manager   1/1     1            1           8m
5.6.2.4.3. Bundling an Operator and deploying with Operator Lifecycle Manager
5.6.2.4.3.1. Bundling an Operator
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.8+ installed
- Operator project initialized by using the Operator SDK
Procedure
Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:

$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used for this purpose. For more information, see "Multiple Architectures".
Push the image to a repository:

$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>

Push the bundle image:

$ docker push <registry>/<user>/<bundle_image_name>:<tag>
5.6.2.4.3.2. Deploying an Operator with Operator Lifecycle Manager Copia collegamentoCollegamento copiato negli appunti!
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Enter the following command to run the Operator on the cluster:
$ operator-sdk run bundle \
    [-n <namespace>] \
    <registry>/<user>/<bundle_image_name>:<tag>

By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.
This command performs the following actions:
- Creates an index image that references your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required objects, including RBAC.
5.6.2.5. Creating a custom resource Copia collegamentoCollegamento copiato negli appunti!
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Prerequisites
- Example Nginx Operator, which provides the Nginx CR, installed on a cluster
Procedure
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

$ oc project nginx-operator-system

Edit the sample Nginx CR manifest at config/samples/demo_v1_nginx.yaml to contain the following specification:
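The tutorial does not reproduce the full sample here. A minimal sketch of such a specification, assuming the demo API group scaffolded earlier in this tutorial (your group, version, and name may differ):

apiVersion: demo.example.com/v1
kind: Nginx
metadata:
  name: nginx-sample
spec:
  replicaCount: 3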
The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following security context constraint (SCC) to the service account for the nginx-sample pod:

$ oc adm policy add-scc-to-user \
    anyuid system:serviceaccount:nginx-operator-system:nginx-sample

Create the CR:
$ oc apply -f config/samples/demo_v1_nginx.yaml

Ensure that the Nginx Operator creates the deployment for the sample CR with the correct size:

$ oc get deployments

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           8m
nginx-sample                        3/3     3            3           1m

Check the pods and CR status to confirm the status is updated with the Nginx pod names.
Check the pods:
$ oc get pods

Example output

NAME                           READY   STATUS    RESTARTS   AGE
nginx-sample-6fd7c98d8-7dqdr   1/1     Running   0          1m
nginx-sample-6fd7c98d8-g5k7v   1/1     Running   0          1m
nginx-sample-6fd7c98d8-m7vn7   1/1     Running   0          1m

Check the CR status:
$ oc get nginx/nginx-sample -o yaml
Update the deployment size.
Update the spec.replicaCount field in the Nginx CR from 3 to 5, either by editing the config/samples/demo_v1_nginx.yaml file and reapplying it, or by patching the resource directly:

$ oc patch nginx nginx-sample \
    -p '{"spec":{"replicaCount": 5}}' \
    --type=merge

Confirm that the Operator changes the deployment size:

$ oc get deployments

Example output

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-operator-controller-manager   1/1     1            1           10m
nginx-sample                        5/5     5            5           3m
Clean up the resources that have been created as part of this tutorial.
If you used the make deploy command to test the Operator, run the following command:

$ make undeploy

If you used the operator-sdk run bundle command to test the Operator, run the following command:

$ operator-sdk cleanup <project_name>
5.6.3. Project layout for Helm-based Operators Copia collegamentoCollegamento copiato negli appunti!
The operator-sdk CLI can generate, or scaffold, a number of packages and files for each Operator project.
5.6.3.1. Helm-based project layout Copia collegamentoCollegamento copiato negli appunti!
Helm-based Operator projects generated using the operator-sdk init --plugins helm command contain the following directories and files:
| File/folders | Purpose |
|---|---|
| config/ | Kustomize manifests for deploying the Operator on a Kubernetes cluster. |
| helm-charts/ | Helm chart initialized with the operator-sdk create api command. |
| Dockerfile | Used to build the Operator image with the make docker-build command. |
| watches.yaml | Group/version/kind (GVK) and Helm chart location. |
| Makefile | Targets used to manage the project. |
| PROJECT | YAML file containing metadata information for the Operator. |
5.6.4. Helm support in Operator SDK Copia collegamentoCollegamento copiato negli appunti!
5.6.4.1. Helm charts Copia collegamentoCollegamento copiato negli appunti!
One of the Operator SDK options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but can be sufficient for a surprising amount of use-cases as shown by the proliferation of Helm charts built by the Kubernetes community.
The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the spec field of the object is a list of configuration options that are typically described in the Helm values.yaml file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml), you can express them within a custom resource (CR), which, as a native Kubernetes object, enables the benefits of RBAC applied to it and an audit trail.
For example, consider a simple CR called Tomcat.
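A minimal sketch of such a CR; the API group and version shown are illustrative and depend on how the project was initialized:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2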
The replicaCount value, 2 in this case, is propagated into the template of the chart where the following is used:
{{ .Values.replicaCount }}
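For instance, a chart's deployment template might consume the value as follows (hypothetical excerpt from templates/deployment.yaml):

spec:
  replicas: {{ .Values.replicaCount }}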
After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:
$ oc get Tomcats --all-namespaces
There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a custom resource definition (CRD). Because it obeys RBAC, you can more easily prevent production changes.
5.7. Defining cluster service versions (CSVs) Copia collegamentoCollegamento copiato negli appunti!
A cluster service version (CSV), defined by a ClusterServiceVersion object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.
The Operator SDK includes the CSV generator to generate a CSV for the current Operator project, customized using information contained in YAML manifests and Operator source files.
A CSV-generating command removes the responsibility of Operator authors having in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.
5.7.1. How CSV generation works Copia collegamentoCollegamento copiato negli appunti!
Operator bundle manifests, which include cluster service versions (CSVs), describe how to display, create, and manage an application with Operator Lifecycle Manager (OLM). The CSV generator in the Operator SDK, called by the generate bundle subcommand, is the first step towards publishing your Operator to a catalog and deploying it with OLM. The subcommand requires certain input manifests to construct a CSV manifest; all inputs are read when the command is invoked, along with a CSV base, to idempotently generate or regenerate a CSV.
Typically, the generate kustomize manifests subcommand would be run first to generate the input Kustomize bases that are consumed by the generate bundle subcommand. However, the Operator SDK provides the make bundle command, which automates several tasks, including running the following subcommands in order:
- generate kustomize manifests
- generate bundle
- bundle validate
5.7.1.1. Generated files and resources Copia collegamentoCollegamento copiato negli appunti!
The make bundle command creates the following files and directories in your Operator project:
- A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion (CSV) object
- A bundle metadata directory named bundle/metadata
- All custom resource definitions (CRDs) in a config/crd directory
- A Dockerfile bundle.Dockerfile
The following resources are typically included in a CSV:
- Role
- Defines Operator permissions within a namespace.
- ClusterRole
- Defines cluster-wide Operator permissions.
- Deployment
- Defines how an Operand of an Operator is run in pods.
- CustomResourceDefinition (CRD)
- Defines custom resources that your Operator reconciles.
- Custom resource examples
- Examples of resources adhering to the spec of a particular CRD.
5.7.1.2. Version management Copia collegamentoCollegamento copiato negli appunti!
The --version flag for the generate bundle subcommand supplies a semantic version for your bundle when creating one for the first time and when upgrading an existing one.
By setting the VERSION variable in your Makefile, the --version flag is automatically invoked using that value when the generate bundle subcommand is run by the make bundle command. The CSV version is the same as the Operator version, and a new CSV is generated when upgrading Operator versions.
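A simplified sketch of how a generated Makefile typically wires this together; the exact variables and targets in your project may differ:

VERSION ?= 0.0.2

.PHONY: bundle
bundle: manifests kustomize
	operator-sdk generate kustomize manifests -q
	$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION)
	operator-sdk bundle validate ./bundle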
5.7.2. Manually-defined CSV fields Copia collegamentoCollegamento copiato negli appunti!
Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).
Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when a lack of data in any of the required fields is detected.
The following tables detail which manually-defined CSV fields are required and which are optional.
| Field | Description |
|---|---|
|
|
A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example |
|
|
The capability level according to the Operator maturity model. Options include |
|
| A public name to identify the Operator. |
|
| A short description of the functionality of the Operator. |
|
| Keywords describing the Operator. |
|
|
Human or organizational entities maintaining the Operator, with a |
|
|
The provider of the Operator (usually an organization), with a |
|
| Key-value pairs to be used by Operator internals. |
|
|
Semantic version of the Operator, for example |
|
|
Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in
|
| Field | Description |
|---|---|
|
| The name of the CSV being replaced by this CSV. |
|
|
URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a |
|
| Selectors by which the Operator can pair resources in a cluster. |
|
|
A base64-encoded icon unique to the Operator, set in a |
|
|
The level of maturity the software has achieved at this version. Options include |
Further details on what data each field above should hold are found in the CSV spec.
Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
5.7.2.1. Operator metadata annotations Copia collegamentoCollegamento copiato negli appunti!
Operator developers can manually define certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub.
The following table lists Operator metadata annotations that can be manually defined using metadata.annotations fields.
| Field | Description |
|---|---|
|
| Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize. |
|
|
Specify a single required custom resource by adding |
|
| Set a suggested namespace where the Operator should be deployed. |
|
| Infrastructure features supported by the Operator. Users can view and filter by these features when discovering Operators through OperatorHub in the web console. Valid, case-sensitive values:
Important
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the
|
|
|
Free-form array for listing any specific subscriptions that are required to use the Operator. For example, |
|
| Hides CRDs in the UI that are not meant for user manipulation. |
Example use cases
Operator supports disconnected and proxy-aware
operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
Operator requires an OpenShift Container Platform license
operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
Operator requires a 3scale license
operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'
Operator supports disconnected and proxy-aware, and requires an OpenShift Container Platform license
operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'
5.7.3. Enabling your Operator for restricted network environments Copia collegamentoCollegamento copiato negli appunti!
As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.
Operator requirements for supporting disconnected mode
In the cluster service version (CSV) of your Operator:
- List any related images, or other container images that your Operator might require to perform its functions.
- Reference all specified images by a digest (SHA) and not by a tag.
- All dependencies of your Operator must also support running in a disconnected mode.
- Your Operator must not require any off-cluster resources.
For the CSV requirements, you can make the following changes as the Operator author.
Prerequisites
- An Operator project with a CSV.
Procedure
Use SHA references to related images in two places in the CSV for your Operator:
Update spec.relatedImages:
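A sketch of what this section might look like; the image name and digest are placeholders:

spec:
  relatedImages:
  - name: memcached
    image: quay.io/example/memcached@sha256:<digest>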
Update the env section in the deployment when declaring environment variables that inject the image that the Operator should use:
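A sketch of the relevant container excerpt in the CSV deployment; the RELATED_IMAGE_* variable name and the digest are placeholders:

containers:
- name: manager
  env:
  - name: RELATED_IMAGE_MEMCACHED
    value: quay.io/example/memcached@sha256:<digest>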
Note: When configuring probes, the timeoutSeconds value must be lower than the periodSeconds value. The timeoutSeconds default value is 1. The periodSeconds default value is 10.
Add the disconnected annotation, which indicates that the Operator works in a disconnected environment:

metadata:
  annotations:
    operators.openshift.io/infrastructure-features: '["disconnected"]'

Operators can be filtered in OperatorHub by this infrastructure feature.
5.7.4. Enabling your Operator for multiple architectures and operating systems Copia collegamentoCollegamento copiato negli appunti!
Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.
If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:
labels:
operatorframework.io/arch.<arch>: supported
operatorframework.io/os.<os>: supported
Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.
If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:
labels:
operatorframework.io/os.linux: supported
If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:
labels:
operatorframework.io/arch.amd64: supported
If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.
Prerequisites
- An Operator project with a CSV.
- To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
- For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.
Procedure
Add a label in the metadata.labels section of your CSV for each architecture and operating system that your Operator supports:

labels:
  operatorframework.io/arch.s390x: supported
  operatorframework.io/os.zos: supported
  operatorframework.io/os.linux: supported
  operatorframework.io/arch.amd64: supported
5.7.4.1. Architecture and operating system support for Operators Copia collegamentoCollegamento copiato negli appunti!
The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:
| Architecture | String |
|---|---|
| AMD64 | amd64 |
| 64-bit PowerPC little-endian | ppc64le |
| IBM Z | s390x |

| Operating system | String |
|---|---|
| Linux | linux |
| z/OS | zos |
Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.
5.7.5. Setting a suggested namespace Copia collegamentoCollegamento copiato negli appunti!
Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.
As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.
Procedure
In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

metadata:
  annotations:
    operatorframework.io/suggested-namespace: <namespace>
5.7.6. Enabling Operator conditions Copia collegamentoCollegamento copiato negli appunti!
Operator Lifecycle Manager (OLM) provides Operators with a channel to communicate complex states that influence OLM behavior while managing the Operator. By default, OLM creates an OperatorCondition custom resource definition (CRD) when it installs an Operator. Based on the conditions set in the OperatorCondition custom resource (CR), the behavior of OLM changes accordingly.
To support Operator conditions, an Operator must be able to read the OperatorCondition CR created by OLM and have the ability to complete the following tasks:
- Get the specific condition.
- Set the status of a specific condition.
This can be accomplished by using the operator-lib library. An Operator author can provide a controller-runtime client in their Operator for the library to access the OperatorCondition CR owned by the Operator in the cluster.
The library provides a generic Conditions interface, which has the following methods to Get and Set a conditionType in the OperatorCondition CR:
- Get: To get the specific condition, the library uses the client.Get function from controller-runtime, which requires an ObjectKey of type types.NamespacedName present in conditionAccessor.
- Set: To update the status of the specific condition, the library uses the client.Update function from controller-runtime. An error occurs if the conditionType is not present in the CRD.
The Operator is allowed to modify only the status subresource of the CR. Operators can either delete or update the status.conditions array to include the condition. For more details on the format and description of the fields present in the conditions, see the upstream Condition GoDocs.
Operator SDK v1.8.0 supports operator-lib v0.3.0.
Prerequisites
- An Operator project generated using the Operator SDK.
Procedure
To enable Operator conditions in your Operator project:
In the go.mod file of your Operator project, add operator-framework/operator-lib as a required library:
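A sketch of the required module entry, using the operator-lib version noted earlier in this section:

require (
    github.com/operator-framework/operator-lib v0.3.0
)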
Write your own constructor in your Operator logic that will result in the following outcomes:

- Accepts a controller-runtime client.
- Accepts a conditionType.
- Returns a Condition interface to update or add conditions.
Because OLM currently supports the Upgradeable condition, you can create an interface that has methods to access the Upgradeable condition. For example:
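A minimal sketch of such a constructor; the import paths and the conditions.NewCondition helper reflect the operator-lib conditions package and should be verified against the operator-lib version you use:

package controllers

import (
	apiv1 "github.com/operator-framework/api/pkg/operators/v1"
	"github.com/operator-framework/operator-lib/conditions"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// NewUpgradeable returns a Condition accessor scoped to the OLM Upgradeable condition type.
func NewUpgradeable(cl client.Client) (conditions.Condition, error) {
	return conditions.NewCondition(cl, apiv1.ConditionType(apiv1.Upgradeable))
}

// Typical usage from your Operator setup code:
// cond, err := NewUpgradeable(mgr.GetClient())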
In this example, the NewUpgradeable constructor is further used to create a variable cond of type Condition. The cond variable would in turn have Get and Set methods, which can be used for handling the OLM Upgradeable condition.
5.7.7. Defining webhooks Copia collegamentoCollegamento copiato negli appunti!
Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.
The cluster service version (CSV) resource of an Operator can include a webhookdefinitions section to define the following types of webhooks:
- Admission webhooks (validating and mutating)
- Conversion webhooks
Procedure
Add a webhookdefinitions section to the spec section of the CSV of your Operator and include any webhook definitions using a type of ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook. The following example contains all three types of webhooks:
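The original three-webhook example is not reproduced here. As an illustrative sketch only, a single validating webhook entry might look like the following; the field names follow the OLM webhook description schema and all values are placeholders:

spec:
  webhookdefinitions:
  - type: ValidatingAdmissionWebhook
    generateName: vwebhook.example.com
    deploymentName: my-operator-controller-manager
    containerPort: 443
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
    - v1beta1
    - v1
    webhookPath: /validate
    rules:
    - apiGroups:
      - example.com
      apiVersions:
      - v1alpha1
      operations:
      - CREATE
      - UPDATE
      resources:
      - myresources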
5.7.7.1. Webhook considerations for OLM Copia collegamentoCollegamento copiato negli appunti!
When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:
- The type field must be set to either ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook, or the CSV will be placed in a failed phase.
- The CSV must contain a deployment whose name is equivalent to the value supplied in the deploymentName field of the webhookdefinition.
When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.
Certificate authority constraints
OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:
- The TLS certificate file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.crt.
- The TLS key file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.key.
Admission webhook rules constraints
To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:
- Requests that target all groups
- Requests that target the operators.coreos.com group
- Requests that target the ValidatingWebhookConfigurations or MutatingWebhookConfigurations resources
Conversion webhook constraints
OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:
- CSVs featuring a conversion webhook can only support the AllNamespaces install mode.
- The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil.
- The conversion webhook defined in the CSV must target an owned CRD.
- There can only be one conversion webhook on the entire cluster for a given CRD.
5.7.8. Understanding your custom resource definitions (CRDs) Copia collegamentoCollegamento copiato negli appunti!
There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.
5.7.8.1. Owned CRDs Copia collegamentoCollegamento copiato negli appunti!
The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.
It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.
| Field | Description | Required/optional |
|---|---|---|
|
| The full name of your CRD. | Required |
|
| The version of that object API. | Required |
|
| The machine readable name of your CRD. | Required |
|
|
A human readable version of your CRD name, for example | Required |
|
| A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD. | Required |
|
|
The API group that this CRD belongs to, for example | Optional |
|
|
Your CRDs own one or more types of Kubernetes objects. These are listed in the It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
|
| These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs. There are three types of descriptors:
All descriptors accept the following fields:
Also see the openshift/console project for more information on Descriptors in general. | Optional |
The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a secret and config map, and orchestrates services, stateful sets, pods and config maps:
Example owned CRD
5.7.8.2. Required CRDs Copia collegamentoCollegamento copiato negli appunti!
Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.
An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.
Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a service account created for each Operator to create, watch, and modify the Kubernetes resources required.
| Field | Description | Required/optional |
|---|---|---|
|
| The full name of the CRD you require. | Required |
|
| The version of that object API. | Required |
|
| The Kubernetes object kind. | Required |
|
| A human readable version of the CRD. | Required |
|
| A summary of how the component fits in your larger architecture. | Required |
Example required CRD
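The original example is not reproduced here. A minimal sketch of a required entry, using illustrative etcd values:

customresourcedefinitions:
  required:
  - name: etcdclusters.etcd.database.coreos.com
    version: v1beta2
    kind: EtcdCluster
    displayName: etcd Cluster
    description: Represents a cluster of etcd nodes.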
5.7.8.3. CRD upgrades Copia collegamentoCollegamento copiato negli appunti!
OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:
- All existing serving versions in the current CRD are present in the new CRD.
- All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
5.7.8.3.1. Adding a new CRD version Copia collegamentoCollegamento copiato negli appunti!
Procedure
To add a new version of a CRD to your Operator:
Add a new entry in the CRD resource under the versions section of your CSV.

For example, if the current CRD has a version v1alpha1 and you want to add a new version v1beta1 and mark it as the new storage version, add a new entry for v1beta1:
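A sketch of the resulting versions list; whether the existing version remains served depends on your upgrade plan:

versions:
- name: v1alpha1
  served: true
  storage: false
- name: v1beta1   # new entry
  served: true
  storage: true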
Ensure the referencing version of the CRD in the owned section of your CSV is updated if the CSV intends to use the new version:
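A sketch of the owned entry; the CRD name and kind are placeholders:

customresourcedefinitions:
  owned:
  - name: backups.example.com
    version: v1beta1   # update the version
    kind: Backup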
- Push the updated CRD and CSV to your bundle.
5.7.8.3.2. Deprecating or removing a CRD version Copia collegamentoCollegamento copiato negli appunti!
Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must be first disabled by setting the served field in the CRD to false. Then, the non-serving version can be removed on the subsequent CRD upgrade.
Procedure
To deprecate and remove a specific version of a CRD:
Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:
versions:
- name: v1alpha1
  served: false   # set to false
  storage: true
Switch the storage version to a serving version if the version to be deprecated is currently the storage version. For example:
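A sketch of the versions list after the switch:

versions:
- name: v1alpha1
  served: false
  storage: false
- name: v1beta1
  served: true
  storage: true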
Note: To remove a specific version that is or was the storage version from a CRD, that version must be removed from the storedVersion in the status of the CRD. OLM will attempt to do this for you if it detects a stored version no longer exists in the new CRD.

Upgrade the CRD with the above changes.
In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:
versions:
- name: v1beta1
  served: true
  storage: true
Ensure the referencing CRD version in the owned section of your CSV is updated accordingly if that version is removed from the CRD.
5.7.8.4. CRD templates Copia collegamentoCollegamento copiato negli appunti!
Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples. Compatible UIs will pre-fill this template for users to further customize.
The annotation consists of a list of the kind, for example, the CRD name and the corresponding metadata and spec of the Kubernetes object.
The following full example provides templates for EtcdCluster, EtcdBackup and EtcdRestore:
metadata:
annotations:
alm-examples: >-
[{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]
5.7.8.5. Hiding internal objects Copia collegamentoCollegamento copiato negli appunti!
It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.
As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects annotation to the cluster service version (CSV) of your Operator.
Procedure
- Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec block of your CR, if applicable to your Operator.
- Add the operators.operatorframework.io/internal-objects annotation to the CSV of your Operator to specify any internal objects to hide in the user interface. Set any internal CRDs as an array of strings in the annotation value:
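A sketch of the annotation; the CSV name and the internal CRD names are placeholders:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
  annotations:
    operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]'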
5.7.8.6. Initializing required custom resources Copia collegamentoCollegamento copiato negli appunti!
An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.
As an Operator developer, you can specify a single required custom resource by adding operatorframework.io/initialization-resource to the cluster service version (CSV) during Operator installation. You are then prompted to create the custom resource through a template that is provided in the CSV. The annotation must include a template that contains a complete YAML definition that is required to initialize the resource during installation.
If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.
Procedure
Add the operatorframework.io/initialization-resource annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of a StorageCluster resource and provides a full YAML definition:
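The full StorageCluster definition from the original example is not reproduced here. As an illustrative sketch only, with the group, version, and spec contents as placeholders:

metadata:
  annotations:
    operatorframework.io/initialization-resource: |-
      {
        "apiVersion": "ocs.openshift.io/v1",
        "kind": "StorageCluster",
        "metadata": {
          "name": "example-storagecluster"
        },
        "spec": {}
      }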
5.7.9. Understanding your API services Copia collegamentoCollegamento copiato negli appunti!
As with CRDs, there are two types of API services that your Operator may use: owned and required.
5.7.9.1. Owned API services Copia collegamentoCollegamento copiato negli appunti!
When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server that backs it and the group/version/kind (GVK) it provides.
An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.
| Field | Description | Required/optional |
|---|---|---|
|
|
Group that the API service provides, for example | Required |
|
|
Version of the API service, for example | Required |
|
| A kind that the API service is expected to provide. | Required |
|
| The plural name for the API service provided. | Required |
|
|
Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the | Required |
|
|
A human readable version of your API service name, for example | Required |
|
| A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
|
| Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user. | Optional |
|
| Essentially the same as for owned CRDs. | Optional |
5.7.9.1.1. API service resource creation Copia collegamentoCollegamento copiato negli appunti!
Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:
- Service pod selectors are copied from the CSV deployment matching the DeploymentName field of the API service description.
- A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.
5.7.9.1.2. API service serving certificates Copia collegamentoCollegamento copiato negli appunti!
OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.
The certificate is stored as a type kubernetes.io/tls secret in the deployment namespace, and a volume named apiservice-cert is automatically appended to the volumes section of the deployment in the CSV matching the DeploymentName field of the API service description.
If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates and any existing volume mounts with the same path are replaced.
5.7.9.2. Required API services Copia collegamentoCollegamento copiato negli appunti!
OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.
| Field | Description | Required/optional |
|---|---|---|
|
|
Group that the API service provides, for example | Required |
|
|
Version of the API service, for example | Required |
|
| A kind that the API service is expected to provide. | Required |
|
|
A human readable version of your API service name, for example | Required |
|
| A short description of how this API service is used by the Operator or a description of the functionality provided by the API service. | Required |
5.8. Working with bundle images Copia collegamentoCollegamento copiato negli appunti!
You can use the Operator SDK to package, deploy, and upgrade Operators in the bundle format for use on Operator Lifecycle Manager (OLM).
5.8.1. Bundling an Operator Copia collegamentoCollegamento copiato negli appunti!
The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.
Prerequisites
- Operator SDK CLI installed on a development workstation
- OpenShift CLI (oc) v4.8+ installed
- Operator project initialized by using the Operator SDK
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Run the following
makecommands in your Operator project directory to build and push your Operator image. Modify theIMGargument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.Build the image:
make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
$ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

Note: The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg flag must be used for this purpose. For more information, see Multiple Architectures.

Push the image to a repository:
make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
$ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create your Operator bundle manifest by running the
make bundlecommand, which invokes several commands, including the Operator SDKgenerate bundleandbundle validatesubcommands:make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
$ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Bundle manifests for an Operator describe how to display, create, and manage an application. The
make bundlecommand creates the following files and directories in your Operator project:-
A bundle manifests directory named
bundle/manifeststhat contains aClusterServiceVersionobject -
A bundle metadata directory named
bundle/metadata -
All custom resource definitions (CRDs) in a
config/crddirectory -
A Dockerfile
bundle.Dockerfile
These files are then automatically validated by using
operator-sdk bundle validateto ensure the on-disk bundle representation is correct.-
A bundle manifests directory named
Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.
Build the bundle image. Set
BUNDLE_IMGwith the details for the registry, user namespace, and image tag where you intend to push the image:make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
$ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Push the bundle image:
docker push <registry>/<user>/<bundle_image_name>:<tag>
$ docker push <registry>/<user>/<bundle_image_name>:<tag>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.8.2. Deploying an Operator with Operator Lifecycle Manager Copia collegamentoCollegamento copiato negli appunti!
Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.
The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
- If your Operator is Go-based, your project must be updated to use supported images for running on OpenShift Container Platform
Procedure
Enter the following command to run the Operator on the cluster:
operator-sdk run bundle \ [-n <namespace>] \ <registry>/<user>/<bundle_image_name>:<tag>$ operator-sdk run bundle \ [-n <namespace>] \1 <registry>/<user>/<bundle_image_name>:<tag>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- By default, the command installs the Operator in the currently active project in your
~/.kube/configfile. You can add the-nflag to set a different namespace scope for the installation.
This command performs the following actions:
- Creates an index image that references your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.
- Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.
- Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required objects, including RBAC.
5.8.3. Publishing a catalog containing a bundled Operator Copia collegamentoCollegamento copiato negli appunti!
To install and manage Operators, Operator Lifecycle Manager (OLM) requires that Operator bundles are listed in an index image, which is referenced by a catalog on the cluster. As an Operator author, you can use the Operator SDK to create an index containing the bundle for your Operator and all of its dependencies. This is useful for testing on remote clusters and publishing to container registries.
The Operator SDK uses the opm CLI to facilitate index image creation. Experience with the opm command is not required. For advanced use cases, the opm command can be used directly instead of the Operator SDK.
Prerequisites
- Operator SDK CLI installed on a development workstation
- Operator bundle image built and pushed to a registry
- OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.8)
- Logged in to the cluster with oc using an account with cluster-admin permissions
Procedure
Run the following
makecommand in your Operator project directory to build an index image containing your Operator bundle:make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>
$ make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where the
CATALOG_IMGargument references a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.Push the built index image to a repository:
$ make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>

Tip: You can use Operator SDK make commands together if you would rather perform multiple actions in sequence at once. For example, if you had not yet built a bundle image for your Operator project, you can build and push both a bundle image and an index image with the following syntax:

$ make bundle-build bundle-push catalog-build catalog-push \
    BUNDLE_IMG=<bundle_image_pull_spec> \
    CATALOG_IMG=<index_image_pull_spec>

Alternatively, you can set the
IMAGE_TAG_BASEfield in yourMakefileto an existing repository:IMAGE_TAG_BASE=quay.io/example/my-operator
IMAGE_TAG_BASE=quay.io/example/my-operatorCopy to Clipboard Copied! Toggle word wrap Toggle overflow You can then use the following syntax to build and push images with automatically-generated names, such as
quay.io/example/my-operator-bundle:v0.0.1for the bundle image andquay.io/example/my-operator-catalog:v0.0.1for the index image:make bundle-build bundle-push catalog-build catalog-push
$ make bundle-build bundle-push catalog-build catalog-push

Define a CatalogSource object that references the index image you just generated, and then create the object by using the oc apply command or web console. Set image to the image pull spec you used previously with the CATALOG_IMG argument:
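A sketch of such an object, assuming the cs-memcached name shown in the output below; adjust the namespace and set image to your index image pull spec:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-memcached
  namespace: <operator_namespace>
spec:
  sourceType: grpc
  image: <registry>/<user>/<index_image_name>:<tag>
  displayName: My Test
  publisher: Company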
Check the catalog source:
$ oc get catalogsource

Example output

NAME           DISPLAY   TYPE   PUBLISHER   AGE
cs-memcached   My Test   grpc   Company     4h31m
Verification
Install the Operator using your catalog:
Define an OperatorGroup object and create it by using the oc apply command or web console:
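A sketch, assuming the my-test name shown in the verification output and a single target namespace:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-test
  namespace: <operator_namespace>
spec:
  targetNamespaces:
  - <operator_namespace>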
Define a Subscription object and create it by using the oc apply command or web console:
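A sketch, assuming the memcached-operator package and the cs-memcached catalog source defined earlier; the channel name is a placeholder:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: memcached-operator
  namespace: <operator_namespace>
spec:
  channel: <channel>
  name: memcached-operator
  source: cs-memcached
  sourceNamespace: <operator_namespace>
  installPlanApproval: Automatic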
Verify the installed Operator is running:
Check the Operator group:
$ oc get og

Example output

NAME      AGE
my-test   4h40m

Check the cluster service version (CSV):

$ oc get csv

Example output

NAME                        DISPLAY   VERSION   REPLACES   PHASE
memcached-operator.v0.0.1   Test      0.0.1                Succeeded

Check the pods for the Operator:

$ oc get pods

Example output

NAME                                                              READY   STATUS      RESTARTS   AGE
9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6   0/1     Completed   0          4h33m
catalog-controller-manager-7fd5b7b987-69s4n                       2/2     Running     0          4h32m
cs-memcached-7622r                                                1/1     Running     0          4h33m
5.8.4. Testing an Operator upgrade on Operator Lifecycle Manager Copia collegamentoCollegamento copiato negli appunti!
You can quickly test upgrading your Operator by using Operator Lifecycle Manager (OLM) integration in the Operator SDK, without requiring you to manually manage index images and catalog sources.
The run bundle-upgrade subcommand automates triggering an installed Operator to upgrade to a later version by specifying a bundle image for the later version.
Prerequisites
- Operator installed with OLM either by using the run bundle subcommand or with traditional OLM installation
- A bundle image that represents a later version of the installed Operator
Procedure
If your Operator has not already been installed with OLM, install the earlier version either by using the run bundle subcommand or with traditional OLM installation.

Note: If the earlier version of the bundle was installed traditionally using OLM, the newer bundle that you intend to upgrade to must not exist in the index image referenced by the catalog source. Otherwise, running the run bundle-upgrade subcommand will cause the registry pod to fail because the newer bundle is already referenced by the index that provides the package and cluster service version (CSV).

For example, you can use the following run bundle subcommand for a Memcached Operator by specifying the earlier bundle image:

$ operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1

Upgrade the installed Operator by specifying the bundle image for the later Operator version:

$ operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2

Clean up the installed Operators:

$ operator-sdk cleanup memcached-operator
5.8.5. Controlling Operator compatibility with OpenShift Container Platform versions
Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. If your Operator is using a deprecated API, it might no longer work after the OpenShift Container Platform cluster is upgraded to the Kubernetes version where the API has been removed.
As an Operator author, it is strongly recommended that you review the Deprecated API Migration Guide in Kubernetes documentation and keep your Operator projects up to date to avoid using deprecated and removed APIs. Ideally, you should update your Operator before the release of a future version of OpenShift Container Platform that would make the Operator incompatible.
When an API is removed from an OpenShift Container Platform version, Operators running on that cluster version that are still using removed APIs will no longer work properly. As an Operator author, you should plan to update your Operator projects to accommodate API deprecation and removal to avoid interruptions for users of your Operator.
You can check the event alerts of your Operators running on OpenShift Container Platform 4.8 and later to find whether there are any warnings about APIs currently in use. The following alerts fire when they detect an API in use that will be removed in the next release:
- APIRemovedInNextReleaseInUse - APIs that will be removed in the next OpenShift Container Platform release.
- APIRemovedInNextEUSReleaseInUse - APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.
If a cluster administrator has installed your Operator, before they upgrade to the next version of OpenShift Container Platform, they must ensure a version of your Operator is installed that is compatible with that next cluster version. While it is recommended that you update your Operator projects to no longer use deprecated or removed APIs, if you still need to publish your Operator bundles with removed APIs for continued use on earlier versions of OpenShift Container Platform, ensure that the bundle is configured accordingly.
The following procedure helps prevent administrators from installing versions of your Operator on an incompatible version of OpenShift Container Platform. These steps also prevent administrators from upgrading to a newer version of OpenShift Container Platform that is incompatible with the version of your Operator that is currently installed on their cluster.
This procedure is also useful when you know that the current version of your Operator will not work well, for any reason, on a specific OpenShift Container Platform version. By defining the cluster versions where the Operator should be distributed, you ensure that the Operator does not appear in a catalog of a cluster version which is outside of the allowed range.
Operators that use deprecated APIs can adversely impact critical workloads when cluster administrators upgrade to a future version of OpenShift Container Platform where the API is no longer supported. If your Operator is using deprecated APIs, you should configure the following settings in your Operator project as soon as possible.
Prerequisites
- An existing Operator project
Procedure
If you know that a specific bundle of your Operator is not supported and will not work correctly on OpenShift Container Platform later than a certain cluster version, configure the maximum version of OpenShift Container Platform that your Operator is compatible with. In your Operator project's cluster service version (CSV), set the olm.maxOpenShiftVersion annotation to prevent administrators from upgrading their cluster before upgrading the installed Operator to a compatible version:

Important

Use the olm.maxOpenShiftVersion annotation only if your Operator bundle version cannot work in later cluster versions. Be aware that cluster administrators cannot upgrade their clusters while your solution is installed. If you do not provide a later version and a valid upgrade path, administrators might uninstall your Operator so that they can upgrade the cluster version.

Example CSV with olm.maxOpenShiftVersion annotation

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<cluster_version>"}]' 1

1. Specify the maximum cluster version of OpenShift Container Platform that your Operator is compatible with. For example, setting value to 4.8 prevents cluster upgrades to OpenShift Container Platform versions later than 4.8 when this bundle is installed on a cluster.
If your bundle is intended for distribution in a Red Hat-provided Operator catalog, configure the compatible versions of OpenShift Container Platform for your Operator by setting the following properties. This configuration ensures your Operator is only included in catalogs that target compatible versions of OpenShift Container Platform:
Note

This step is only valid when publishing Operators in Red Hat-provided catalogs. If your bundle is only intended for distribution in a custom catalog, you can skip this step. For more details, see "Red Hat-provided Operator catalogs".
Set the com.redhat.openshift.versions annotation in your project's bundle/metadata/annotations.yaml file:

Example bundle/metadata/annotations.yaml file with compatible versions

com.redhat.openshift.versions: "v4.6-v4.8" 1

1. Set to a range or single version.

To prevent your bundle from being carried on to an incompatible version of OpenShift Container Platform, ensure that the index image is generated with the proper com.redhat.openshift.versions label in your Operator's bundle image. For example, if your project was generated using the Operator SDK, update the bundle.Dockerfile file:

Example bundle.Dockerfile with compatible versions

LABEL com.redhat.openshift.versions="<versions>" 1

1. Set to a range or single version, for example, v4.6-v4.8. This setting defines the cluster versions where the Operator should be distributed, and the Operator does not appear in a catalog of a cluster version that is outside of the range.
You can now bundle a new version of your Operator and publish the updated version to a catalog for distribution.
5.9. Validating Operators using the scorecard tool
As an Operator author, you can use the scorecard tool in the Operator SDK to do the following tasks:
- Validate that your Operator project is free of syntax errors and packaged correctly
- Review suggestions about ways you can improve your Operator
5.9.1. About the scorecard tool
While the Operator SDK bundle validate subcommand can validate local bundle directories and remote bundle images for content and structure, you can use the scorecard command to run tests on your Operator based on a configuration file and test images. These tests are implemented within test images that are configured and constructed to be executed by the scorecard.
The scorecard assumes it is run with access to a configured Kubernetes cluster, such as OpenShift Container Platform. The scorecard runs each test within a pod, from which pod logs are aggregated and test results are sent to the console. The scorecard has built-in basic and Operator Lifecycle Manager (OLM) tests and also provides a means to execute custom test definitions.
Scorecard workflow
- Create all resources required by any related custom resources (CRs) and the Operator
- Create a proxy container in the deployment of the Operator to record calls to the API server and run tests
- Examine parameters in the CRs
The scorecard tests make no assumptions as to the state of the Operator being tested. Creating Operators and CRs for an Operator is beyond the scope of the scorecard itself. Scorecard tests can, however, create whatever resources they require if the tests are designed for resource creation.
scorecard command syntax
$ operator-sdk scorecard <bundle_dir_or_image> [flags]
The scorecard requires a positional argument for either the on-disk path to your Operator bundle or the name of a bundle image.
For further information about the flags, run:
$ operator-sdk scorecard -h
5.9.2. Scorecard configuration
The scorecard tool uses a configuration that allows you to configure internal plugins, as well as several global configuration options. Tests are driven by a configuration file named config.yaml, which is generated by the make bundle command, located in your bundle/ directory:
./bundle
...
└── tests
    └── scorecard
        └── config.yaml
Example scorecard configuration file
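The generated file looks similar to the following minimal sketch. The test image tag depends on your Operator SDK version, so treat quay.io/operator-framework/scorecard-test:v1.8.0 as an assumption; the field names and test labels match those described below.

apiVersion: scorecard.operatorframework.io/v1alpha3
kind: Configuration
metadata:
  name: config
stages:
- parallel: true
  tests:
  - entrypoint:
    - scorecard-test
    - basic-check-spec
    image: quay.io/operator-framework/scorecard-test:v1.8.0
    labels:
      suite: basic
      test: basic-check-spec-test
  - entrypoint:
    - scorecard-test
    - olm-bundle-validation
    image: quay.io/operator-framework/scorecard-test:v1.8.0
    labels:
      suite: olm
      test: olm-bundle-validation-test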
The configuration file defines each test that scorecard can execute. The following fields of the scorecard configuration file define each test:
| Configuration field | Description |
|---|---|
| image | Test container image name that implements a test |
| entrypoint | Command and arguments that are invoked in the test image to execute a test |
| labels | Scorecard-defined or custom labels that select which tests to run |
5.9.3. Built-in scorecard tests
The scorecard ships with pre-defined tests that are arranged into suites: the basic test suite and the Operator Lifecycle Manager (OLM) suite.
Basic test suite

| Test | Description | Short name |
|---|---|---|
| Spec Block Exists | This test checks the custom resource (CR) created in the cluster to make sure that all CRs have a spec block. | basic-check-spec-test |
OLM test suite

| Test | Description | Short name |
|---|---|---|
| Bundle Validation | This test validates the bundle manifests found in the bundle that is passed into scorecard. If the bundle contents contain errors, then the test result output includes the validator log as well as error messages from the validation library. | olm-bundle-validation-test |
| Provided APIs Have Validation | This test verifies that the custom resource definitions (CRDs) for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR. | olm-crds-have-validation-test |
| Owned CRDs Have Resources Listed | This test makes sure that the CRDs for each CR provided via the cr-manifest option have a resources subsection. | olm-crds-have-resources-test |
| Spec Fields With Descriptors | This test verifies that every field in the CRs spec sections has a corresponding descriptor listed in the CSV. | olm-spec-descriptors-test |
| Status Fields With Descriptors | This test verifies that every field in the CRs status sections has a corresponding descriptor listed in the CSV. | olm-status-descriptors-test |
5.9.4. Running the scorecard tool
A default set of Kustomize files is generated by the Operator SDK after running the init command. The default bundle/tests/scorecard/config.yaml file that is generated can be used immediately to run the scorecard tool against your Operator, or you can modify this file to your test specifications.
Prerequisites
- Operator project generated by using the Operator SDK
Procedure
Generate or regenerate your bundle manifests and metadata for your Operator:
$ make bundle

This command automatically adds scorecard annotations to your bundle metadata, which are used by the scorecard command to run tests.

Run the scorecard against the on-disk path to your Operator bundle or the name of a bundle image:

$ operator-sdk scorecard <bundle_dir_or_image>
5.9.5. Scorecard output
The --output flag for the scorecard command specifies the scorecard results output format: either text or json.
Example 5.29. Example JSON output snippet
Example 5.30. Example text output snippet
The output format spec matches the Test type layout.
5.9.6. Selecting tests
Scorecard tests are selected by setting the --selector CLI flag to a set of label strings. If a selector flag is not supplied, then all the tests within the scorecard configuration file are run.
Tests are run serially with test results being aggregated by the scorecard and written to standard output, or stdout.
Procedure
To select a single test, for example basic-check-spec-test, specify the test by using the --selector flag:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector=test=basic-check-spec-test

To select a suite of tests, for example olm, specify a label that is used by all of the OLM tests:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector=suite=olm

To select multiple tests, specify the test names by using the --selector flag with the following syntax:

$ operator-sdk scorecard <bundle_dir_or_image> \
    -o text \
    --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'
5.9.7. Enabling parallel testing
As an Operator author, you can define separate stages for your tests using the scorecard configuration file. Stages run sequentially in the order they are defined in the configuration file. A stage contains a list of tests and a configurable parallel setting.
By default, or when a stage explicitly sets parallel to false, tests in a stage are run sequentially in the order they are defined in the configuration file. Running tests one at a time is helpful to guarantee that no two tests interact and conflict with each other.
However, if tests are designed to be fully isolated, they can be parallelized.
Procedure
To run a set of isolated tests in parallel, include them in the same stage and set parallel to true:
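The original example is not reproduced here; a minimal sketch of such a stage in bundle/tests/scorecard/config.yaml follows, reusing the default built-in tests and treating the test image tag as an assumption:

stages:
- parallel: true # 1
  tests:
  - entrypoint:
    - scorecard-test
    - basic-check-spec
    image: quay.io/operator-framework/scorecard-test:v1.8.0
    labels:
      suite: basic
      test: basic-check-spec-test
  - entrypoint:
    - scorecard-test
    - olm-bundle-validation
    image: quay.io/operator-framework/scorecard-test:v1.8.0
    labels:
      suite: olm
      test: olm-bundle-validation-test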
1. Enables parallel testing
All tests in a parallel stage are executed simultaneously, and scorecard waits for all of them to finish before proceeding to the next stage. This can make your tests run much faster.
5.9.8. Custom scorecard tests
The scorecard tool can run custom tests that follow these mandated conventions:
- Tests are implemented within a container image
- Tests accept an entrypoint that includes a command and arguments
- Tests produce v1alpha3 scorecard output in JSON format with no extraneous logging in the test output
- Tests can obtain the bundle contents at a shared mount point of /bundle
- Tests can access the Kubernetes API using an in-cluster client connection
Writing custom tests in other programming languages is possible if the test image follows the above guidelines.
The following example shows a custom test image written in Go:
Example 5.31. Example custom scorecard test
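The full sample is not reproduced here; the following is a reduced sketch of the same idea, assuming the scorecard v1alpha3 types and bundle-parsing helpers from the operator-framework/api module. The test name customtest1 is hypothetical.

// main.go: a minimal custom scorecard test binary (sketch).
// It reads the bundle from the shared /bundle mount point, runs the test named
// by its entrypoint argument, and prints a v1alpha3 result as JSON.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"

	scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
	apimanifests "github.com/operator-framework/api/pkg/manifests"
)

const podBundleRoot = "/bundle"

func main() {
	entrypoint := os.Args[1:]
	if len(entrypoint) == 0 {
		log.Fatal("a test name argument is required")
	}

	// Read the bundle that scorecard mounts into the test pod.
	bundle, err := apimanifests.GetBundleFromDir(podBundleRoot)
	if err != nil {
		log.Fatalf("could not read bundle: %v", err)
	}

	var result scapiv1alpha3.TestStatus
	switch entrypoint[0] {
	case "customtest1": // hypothetical test name
		result = customTest1(bundle)
	default:
		result = scapiv1alpha3.TestStatus{
			Results: []scapiv1alpha3.TestResult{{
				Name:   entrypoint[0],
				State:  scapiv1alpha3.FailState,
				Errors: []string{"unknown test name"},
			}},
		}
	}

	// Scorecard expects JSON output only, with no extra logging on stdout.
	out, err := json.MarshalIndent(result, "", "    ")
	if err != nil {
		log.Fatalf("could not marshal result: %v", err)
	}
	fmt.Println(string(out))
}

// customTest1 is a hypothetical check that simply reports the CSV name it found.
func customTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
	return scapiv1alpha3.TestStatus{
		Results: []scapiv1alpha3.TestResult{{
			Name:  "customtest1",
			State: scapiv1alpha3.PassState,
			Log:   "found CSV " + bundle.CSV.GetName(),
		}},
	}
}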
5.10. Configuring built-in monitoring with Prometheus
This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for Operator authors.
5.10.1. Prometheus Operator support
Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.
Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.
5.10.2. Metrics helper
In Go-based Operators generated using the Operator SDK, the following function exposes general metrics about the running program:
func ExposeMetricsPort(ctx context.Context, port int32) (*v1.Service, error)
These metrics are inherited from the controller-runtime library API. By default, the metrics are served on 0.0.0.0:8383/metrics.
A Service object is created with the metrics port exposed, which can be then accessed by Prometheus. The Service object is garbage collected when the leader pod’s root owner is deleted.
The following example is present in the cmd/manager/main.go file in all Operators generated using the Operator SDK:
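The generated code is not reproduced here; a condensed sketch of how the call typically appears follows, assuming the legacy pkg/metrics package from the Operator SDK. The manager setup around it is omitted.

package main

import (
	"context"

	"github.com/operator-framework/operator-sdk/pkg/metrics"
)

var (
	// Change the value below to modify the port on which metrics are exposed.
	metricsPort int32 = 8383
)

func main() {
	// ... manager and controller setup omitted ...

	ctx := context.TODO()

	// Create a Service object to expose the metrics port for Prometheus.
	_, err := metrics.ExposeMetricsPort(ctx, metricsPort)
	if err != nil {
		// handle the error, for example by logging it
	}

	// ... start the manager ...
}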
5.10.2.1. Modifying the metrics port
Operator authors can modify the port that metrics are exposed on.
Prerequisites
- Go-based Operator generated using the Operator SDK
- Kubernetes-based cluster with the Prometheus Operator deployed
Procedure
In the cmd/manager/main.go file of the generated Operator, change the value of metricsPort in the following line:

var metricsPort int32 = 8383
5.10.3. Service monitors
A ServiceMonitor is a custom resource provided by the Prometheus Operator that discovers the Endpoints in Service objects and configures Prometheus to monitor those pods.
In Go-based Operators generated using the Operator SDK, the GenerateServiceMonitor() helper function can take a Service object and generate a ServiceMonitor object based on it.
5.10.3.1. Creating service monitors
Operator authors can add service target discovery of created monitoring services using the metrics.CreateServiceMonitor() helper function, which accepts the newly created service.
Prerequisites
- Go-based Operator generated using the Operator SDK
- Kubernetes-based cluster with the Prometheus Operator deployed
Procedure
Add the metrics.CreateServiceMonitor() helper function to your Operator code:
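The generated example is not reproduced here; the following condensed sketch assumes the legacy pkg/metrics package, in which the helper that accepts a list of services is exposed as CreateServiceMonitors. The namespace placeholder is hypothetical.

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/config"

	"github.com/operator-framework/operator-sdk/pkg/metrics"
)

var metricsPort int32 = 8383

func main() {
	// ... manager and controller setup omitted ...

	// Expose the metrics port and keep the returned Service.
	service, err := metrics.ExposeMetricsPort(context.TODO(), metricsPort)
	if err != nil {
		// handle the error
		return
	}

	restConfig, err := config.GetConfig()
	if err != nil {
		// handle the error
		return
	}

	// Create one ServiceMonitor per Service so that the Prometheus Operator
	// begins scraping the exposed metrics endpoint.
	namespace := "<operator_namespace>" // hypothetical placeholder
	_, err = metrics.CreateServiceMonitors(restConfig, namespace, []*v1.Service{service})
	if err != nil {
		// handle the error, for example by logging it
	}
}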
5.11. Configuring leader election
During the lifecycle of an Operator, it is possible that there may be more than one instance running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, it is necessary to avoid contention between multiple Operator instances using leader election. This ensures only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.
There are two different leader election implementations to choose from, each with its own trade-off:
- Leader-for-life
- The leader pod only gives up leadership, using garbage collection, when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders, a state also known as split brain. However, this method can be subject to a delay in electing a new leader. For example, when the leader pod is on an unresponsive or partitioned node, the pod-eviction-timeout setting dictates how long it takes for the leader pod to be deleted from the node and step down, with a default of 5m. See the Leader-for-life Go documentation for more.
- Leader-with-lease
- The leader pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.
By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.
5.11.1. Operator leader election examples
The following examples illustrate how to use the two leader election options for an Operator, Leader-for-life and Leader-with-lease.
5.11.1.1. Leader-for-life election
With the Leader-for-life election implementation, a call to leader.Become() blocks the Operator as it retries until it can become the leader by creating the config map named memcached-operator-lock:
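The generated example is not reproduced here; a condensed sketch follows, assuming the legacy pkg/leader package from the Operator SDK. The surrounding manager setup is omitted.

package main

import (
	"context"
	"os"

	"github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
	ctx := context.TODO()

	// Block until this instance becomes the leader by creating the
	// memcached-operator-lock config map, retrying until it succeeds.
	err := leader.Become(ctx, "memcached-operator-lock")
	if err != nil {
		// failed to obtain the leader lock
		os.Exit(1)
	}

	// ... continue with manager setup and mgr.Start() ...
}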
If the Operator is not running inside a cluster, leader.Become() simply returns without error to skip the leader election since it cannot detect the name of the Operator.
5.11.1.2. Leader-with-lease election
The Leader-with-lease implementation can be enabled using the Manager Options for leader election:
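The generated example is not reproduced here; a condensed sketch using the controller-runtime Manager options follows. The lock ID and namespace values are assumptions for illustration.

package main

import (
	"os"

	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	cfg, err := config.GetConfig()
	if err != nil {
		os.Exit(1)
	}

	// Enable lease-based leader election through the Manager options.
	opts := manager.Options{
		LeaderElection:          true,
		LeaderElectionID:        "memcached-operator-lock", // hypothetical lock name
		LeaderElectionNamespace: "<operator_namespace>",    // override needed when running outside a cluster
	}

	mgr, err := manager.New(cfg, opts)
	if err != nil {
		os.Exit(1)
	}

	// ... register controllers, then call mgr.Start() ...
	_ = mgr
}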
When the Operator is not running in a cluster, the Manager returns an error when starting because it cannot detect the namespace of the Operator to create the config map for leader election. You can override this namespace by setting the LeaderElectionNamespace option for the Manager.
5.12. Migrating package manifest projects to bundle format
Support for the legacy package manifest format for Operators is removed in OpenShift Container Platform 4.8 and later. If you have an Operator project that was initially created using the package manifest format, you can use the Operator SDK to migrate the project to the bundle format. The bundle format is the preferred packaging format for Operator Lifecycle Manager (OLM) starting in OpenShift Container Platform 4.6.
5.12.1. About packaging format migration
The Operator SDK pkgman-to-bundle command helps in migrating Operator Lifecycle Manager (OLM) package manifests to bundles. The command takes an input package manifest directory and generates bundles for each of the versions of manifests present in the input directory. You can also then build bundle images for each of the generated bundles.
For example, consider the following packagemanifests/ directory for a project in the package manifest format:
Example package manifest format layout
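The original listing is not reproduced here; a representative layout, assuming an etcd Operator with versions 0.0.1 and 0.0.2 (matching the bundle image names listed below), might look like this:

packagemanifests/
└── etcd
    ├── 0.0.1
    │   ├── etcdcluster.crd.yaml
    │   └── etcdoperator.clusterserviceversion.yaml
    ├── 0.0.2
    │   ├── etcdcluster.crd.yaml
    │   └── etcdoperator.v0.0.2.clusterserviceversion.yaml
    └── etcd.package.yaml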
After running the migration, the following bundles are generated in the bundle/ directory:
Example bundle format layout
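Correspondingly, a representative generated layout might look like the following sketch; directory and file names are illustrative:

bundle/
├── bundle-0.0.1
│   ├── bundle.Dockerfile
│   ├── manifests
│   │   ├── etcdcluster.crd.yaml
│   │   └── etcdoperator.clusterserviceversion.yaml
│   ├── metadata
│   │   └── annotations.yaml
│   └── tests
│       └── scorecard
│           └── config.yaml
└── bundle-0.0.2
    └── ...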
Based on this generated layout, bundle images for both of the bundles are also built with the following names:
- quay.io/example/etcd:0.0.1
- quay.io/example/etcd:0.0.2
5.12.2. Migrating a package manifest project to bundle format
Operator authors can use the Operator SDK to migrate a package manifest format Operator project to a bundle format project.
Prerequisites
- Operator SDK CLI installed
- Operator project initially generated using the Operator SDK in package manifest format
Procedure
Use the Operator SDK to migrate your package manifest project to the bundle format and generate bundle images:
$ operator-sdk pkgman-to-bundle <package_manifests_dir> \ 1
    [--output-dir <directory>] \ 2
    --image-tag-base <image_name_base> 3

1. Specify the location of the package manifests directory for the project, such as packagemanifests/ or manifests/.
2. Optional: By default, the generated bundles are written locally to disk to the bundle/ directory. You can use the --output-dir flag to specify an alternative location.
3. Set the --image-tag-base flag to provide the base of the image name, such as quay.io/example/etcd, that will be used for the bundles. Provide the name without a tag, because the tag for the images will be set according to the bundle version. For example, the full bundle image names are generated in the format <image_name_base>:<bundle_version>.
Verification
Verify that the generated bundle image runs successfully:
$ operator-sdk run bundle <bundle_image_name>:<tag>

Example output
5.13. Operator SDK CLI reference
The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier.
Operator SDK CLI syntax
$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
5.13.1. bundle
The operator-sdk bundle command manages Operator bundle metadata.
5.13.1.1. validate
The bundle validate subcommand validates an Operator bundle.
| Flag | Description |
|---|---|
| -h, --help | Help output for the validate subcommand |
| --image-builder (string) | Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker, which is the default, podman, or none |
| --list-optional | List all optional validators available. When set, no validators are run. |
| --select-optional (string) | Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators |
5.13.2. cleanup
The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command.
| Flag | Description |
|---|---|
| -h, --help | Help output for the cleanup subcommand |
| --kubeconfig (string) | Path to the kubeconfig file to use for CLI requests |
| -n, --namespace (string) | If present, namespace in which to run the CLI request. |
| --timeout (duration) | Time to wait for the command to complete before failing. The default value is 2m0s |
5.13.3. completion
The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier.
| Subcommand | Description |
|---|---|
| bash | Generate bash completions. |
| zsh | Generate zsh completions. |
| Flag | Description |
|---|---|
| -h, --help | Usage help output. |
For example:
$ operator-sdk completion bash
Example output
# bash completion for operator-sdk -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh
5.13.4. create
The operator-sdk create command is used to create, or scaffold, a Kubernetes API.
5.13.4.1. api
The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command.
| Flag | Description |
|---|---|
| -h, --help | Help output for the api subcommand |
5.13.5. generate
The operator-sdk generate command invokes a specific generator to generate code or manifests.
5.13.5.1. bundle
The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project.
Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence.
| Flag | Description |
|---|---|
| --channels (string) | Comma-separated list of channels to which the bundle belongs. The default value is alpha |
| --crds-dir (string) | Root directory for CustomResourceDefinition manifests |
| --default-channel (string) | The default channel for the bundle. |
| --deploy-dir (string) | Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag |
| -h, --help | Help for generate bundle |
| --input-dir (string) | Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory |
| --kustomize-dir (string) | Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests |
| --manifests | Generate bundle manifests. |
| --metadata | Generate bundle metadata and Dockerfile. |
| --output-dir (string) | Directory to write the bundle to. |
| --overwrite | Overwrite the bundle metadata and Dockerfile if they exist. The default value is true |
| --package (string) | Package name for the bundle. |
| -q, --quiet | Run in quiet mode. |
| --stdout | Write bundle manifest to standard out. |
| --version (string) | Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. |
5.13.5.2. kustomize
The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator.
5.13.5.2.1. manifests
The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag.
| Flag | Description |
|---|---|
| --apis-dir (string) | Root directory for API type definitions. |
| -h, --help | Help for generate kustomize manifests |
| --input-dir (string) | Directory containing existing Kustomize files. |
| --interactive | When set to false, if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata |
| --output-dir (string) | Directory where to write Kustomize files. |
| --package (string) | Package name. |
| -q, --quiet | Run in quiet mode. |
5.13.6. init
The operator-sdk init command initializes an Operator project and generates, or scaffolds, a default project directory layout for the given plugin.
This command writes the following files:
- Boilerplate license file
- PROJECT file with the domain and repository
- Makefile to build the project
- go.mod file with project dependencies
- kustomization.yaml file for customizing manifests
- Patch file for customizing images for manager manifests
- Patch file for enabling Prometheus metrics
- main.go file to run
| Flag | Description |
|---|---|
| --help, -h | Help output for the init command |
| --plugins (string) | Name and optionally version of the plugin to initialize the project with. Available plugins are ansible.sdk.operatorframework.io/v1, go.kubebuilder.io/v2, go.kubebuilder.io/v3, and helm.sdk.operatorframework.io/v1 |
| --project-version | Project version. Available values are 2 and 3-alpha, which is the default |
5.13.7. run
The operator-sdk run command provides options that can launch the Operator in various environments.
5.13.7.1. bundle
The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM).
| Flag | Description |
|---|---|
| --index-image (string) | Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest |
| --install-mode (string) | Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace |
| --timeout (duration) | Install timeout. The default value is 2m0s |
| --kubeconfig (string) | Path to the kubeconfig file to use for CLI requests |
| -n, --namespace (string) | If present, namespace in which to run the CLI request. |
| -h, --help | Help output for the run bundle subcommand |
5.13.7.2. bundle-upgrade
The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM).
| Flag | Description |
|---|---|
| --timeout (duration) | Upgrade timeout. The default value is 2m0s |
| --kubeconfig (string) | Path to the kubeconfig file to use for CLI requests |
| -n, --namespace (string) | If present, namespace in which to run the CLI request. |
| -h, --help | Help output for the run bundle-upgrade subcommand |
5.13.8. scorecard
The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely.
| Flag | Description |
|---|---|
| -c, --config (string) | Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml |
| -h, --help | Help output for the scorecard command |
| --kubeconfig (string) | Path to the kubeconfig file |
| -L, --list | List which tests are available to run. |
| -n, --namespace (string) | Namespace in which to run the test images. |
| -o, --output (string) | Output format for results. Available values are text, which is the default, and json |
| -l, --selector (string) | Label selector to determine which tests are run. |
| -s, --service-account (string) | Service account to use for tests. The default value is default |
| -x, --skip-cleanup | Disable resource cleanup after tests are run. |
| -w, --wait-time (duration) | Seconds to wait for tests to complete, for example 35s. The default value is 30s |