Chapter 12. Operator SDK
12.1. Getting started with the Operator SDK
This guide outlines the basics of the Operator SDK and walks Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) through an example of building a simple Go-based Memcached Operator and managing its lifecycle from installation to upgrade.
This is accomplished using two centerpieces of the Operator Framework: the Operator SDK (the operator-sdk CLI tool and controller-runtime library API) and the Operator Lifecycle Manager (OLM).
OpenShift Container Platform 4.4 supports Operator SDK v0.15.0 or later.
12.1.1. Architecture of the Operator SDK
The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes' extensibility to deliver the automation advantages of cloud services like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.
Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.
The Operator SDK is a framework designed to make writing Operators easier by providing:
- High-level APIs and abstractions to write the operational logic more intuitively
- Tools for scaffolding and code generation to quickly bootstrap a new project
- Extensions to cover common Operator use cases
12.1.1.1. Workflow
The Operator SDK provides the following workflow to develop a new Operator:
- Create a new Operator project using the Operator SDK command line interface (CLI).
- Define new resource APIs by adding Custom Resource Definitions (CRDs).
- Specify resources to watch using the Operator SDK API.
- Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
- Use the Operator SDK CLI to build and generate the Operator deployment manifests.
Figure 12.1. Operator SDK workflow
At a high level, an Operator using the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
12.1.1.2. Manager file
The main program for the Operator is the manager file at cmd/manager/main.go. The manager automatically registers the scheme for all Custom Resources (CRs) defined under pkg/apis/ and runs all controllers under pkg/controller/.
The manager can restrict the namespace that all controllers watch for resources:
mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})
By default, this is the namespace that the Operator is running in. To watch all namespaces, you can leave the namespace option empty:
mgr, err := manager.New(cfg, manager.Options{Namespace: ""})
12.1.1.3. Prometheus Operator support
Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.
Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.
12.1.2. Installing the Operator SDK CLI
The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.
12.1.2.1. Installing from GitHub release
You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.
Prerequisites
- Go v1.13+
- docker v17.03+, podman v1.2.0+, or buildah v1.7+
- OpenShift CLI (oc) 4.4+ installed
- Access to a cluster based on Kubernetes v1.12.0+
- Access to a container registry
Procedure
Set the release version variable:
RELEASE_VERSION=v0.15.0
Download the release binary.
For Linux:
$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
For macOS:
$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
Verify the downloaded release binary.
Download the provided ASC file.
For Linux:
$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
For macOS:
$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:
For Linux:
$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
For macOS:
$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
If you do not have the maintainer’s public key on your workstation, you will get the following error:
$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
gpg: Signature made Fri Apr 5 20:03:22 2019 CEST
gpg:                using RSA key <key_id>
gpg: Can't check signature: No public key
Here, <key_id> is the RSA key string.
To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:
$ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>"
If you do not have a key server configured, specify one with the --keyserver option.
Install the release binary in your PATH:
For Linux:
$ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
$ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
$ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
For macOS:
$ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
$ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
$ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
Verify that the CLI tool was installed correctly:
$ operator-sdk version
12.1.2.2. Installing from Homebrew
You can install the SDK CLI using Homebrew.
Prerequisites
- Homebrew
- docker v17.03+, podman v1.2.0+, or buildah v1.7+
- OpenShift CLI (oc) 4.4+ installed
- Access to a cluster based on Kubernetes v1.12.0+
- Access to a container registry
Procedure
Install the SDK CLI using the brew command:
$ brew install operator-sdk
Verify that the CLI tool was installed correctly:
$ operator-sdk version
12.1.2.3. Compiling and installing from source
You can obtain the Operator SDK source code to compile and install the SDK CLI.
Prerequisites
Procedure
Clone the operator-sdk repository:
$ mkdir -p $GOPATH/src/github.com/operator-framework
$ cd $GOPATH/src/github.com/operator-framework
$ git clone https://github.com/operator-framework/operator-sdk
$ cd operator-sdk
Check out the desired release branch:
$ git checkout master
Compile and install the SDK CLI:
$ make dep
$ make install
This installs the CLI binary operator-sdk at $GOPATH/bin.
Verify that the CLI tool was installed correctly:
$ operator-sdk version
12.1.3. Building a Go-based Operator using the Operator SDK
The Operator SDK makes it easier to build Kubernetes native applications, a process that can require deep, application-specific operational knowledge. The SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code needed for many common management capabilities, such as metering or monitoring.
This procedure walks through an example of building a simple Memcached Operator using tools and libraries provided by the SDK.
Prerequisites
- Operator SDK CLI installed on the development workstation
- Operator Lifecycle Manager (OLM) installed on a Kubernetes-based cluster (v1.8 or above to support the apps/v1beta2 API group), for example OpenShift Container Platform 4.4
- Access to the cluster using an account with cluster-admin permissions
- OpenShift CLI (oc) v4.1+ installed
Procedure
Create a new project.
Use the CLI to create a new memcached-operator project:
$ mkdir -p $GOPATH/src/github.com/example-inc/
$ cd $GOPATH/src/github.com/example-inc/
$ operator-sdk new memcached-operator
$ cd memcached-operator
Add a new Custom Resource Definition (CRD).
Use the CLI to add a new CRD API called Memcached, with APIVersion set to cache.example.com/v1alpha1 and Kind set to Memcached:
$ operator-sdk add api \
    --api-version=cache.example.com/v1alpha1 \
    --kind=Memcached
This scaffolds the Memcached resource API under pkg/apis/cache/v1alpha1/.
Modify the spec and status of the Memcached Custom Resource (CR) in the pkg/apis/cache/v1alpha1/memcached_types.go file:
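For example, a minimal spec and status for this walkthrough might look like the following sketch (the size and nodes fields follow the upstream Memcached example and are illustrative):

type MemcachedSpec struct {
	// Size is the number of memcached pods to run.
	Size int32 `json:"size"`
}

type MemcachedStatus struct {
	// Nodes are the names of the memcached pods.
	Nodes []string `json:"nodes"`
}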
After modifying the *_types.go file, always run the following command to update the generated code for that resource type:
$ operator-sdk generate k8s
Optional: Add custom validation to your CRD.
OpenAPI v3.0 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached CR when it is created or updated.
Additionally, a pkg/apis/<group>/<version>/zz_generated.openapi.go file is generated. This file contains the Go representation of this validation block if the +k8s:openapi-gen=true annotation is present above the Kind type declaration, which is present by default. This auto-generated code is your Go Kind type's OpenAPI model, from which you can create a full OpenAPI Specification and generate a client.
As an Operator author, you can use Kubebuilder markers (annotations) to configure custom validations for your API. These markers must always have a +kubebuilder:validation prefix. For example, adding an enum-type specification can be done by adding the following marker:
// +kubebuilder:validation:Enum=Lion;Wolf;Dragon
type Alias string
Usage of markers in API code is discussed in the Kubebuilder Generating CRDs and Markers for Config/Code Generation documentation. A full list of OpenAPIv3 validation markers is also available in the Kubebuilder CRD Validation documentation.
If you add any custom validations, run the following command to update the OpenAPI validation section in the CRD's deploy/crds/cache.example.com_memcacheds_crd.yaml file:
$ operator-sdk generate crds
Example generated YAML:
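A sketch of what the generated validation section can look like (property names follow the example spec above):

spec:
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            size:
              format: int32
              type: integer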
Add a new Controller.
Add a new Controller to the project to watch and reconcile the Memcached resource:
$ operator-sdk add controller \
    --api-version=cache.example.com/v1alpha1 \
    --kind=Memcached
This scaffolds a new Controller implementation under pkg/controller/memcached/.
For this example, replace the generated controller file pkg/controller/memcached/memcached_controller.go with the example implementation.
The example controller executes the following reconciliation logic for each Memcached CR:
- Create a Memcached Deployment if it does not exist.
- Ensure that the Deployment size is the same as specified by the Memcached CR spec.
- Update the Memcached CR status with the names of the Memcached pods.
The next two sub-steps inspect how the Controller watches resources and how the reconcile loop is triggered. You can skip these steps to go directly to building and running the Operator.
Inspect the Controller implementation at the pkg/controller/memcached/memcached_controller.go file to see how the Controller watches resources.
The first watch is for the Memcached type as the primary resource. For each Add, Update, or Delete event, the reconcile loop is sent a reconcile Request (a <namespace>:<name> key) for that Memcached object:
err := c.Watch(
    &source.Kind{Type: &cachev1alpha1.Memcached{}},
    &handler.EnqueueRequestForObject{})
The next watch is for Deployments, but the event handler maps each event to a reconcile Request for the owner of the Deployment. In this case, this is the Memcached object for which the Deployment was created. This allows the controller to watch Deployments as a secondary resource:
err := c.Watch(&source.Kind{Type: &appsv1.Deployment{}},
    &handler.EnqueueRequestForOwner{
        IsController: true,
        OwnerType:    &cachev1alpha1.Memcached{},
    })
Every Controller has a Reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a <namespace>:<name> key used to look up the primary resource object, Memcached, from the cache:
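A minimal sketch of that lookup, following the upstream example (the ReconcileMemcached type name is illustrative):

func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	// Fetch the Memcached instance from the cache
	memcached := &cachev1alpha1.Memcached{}
	err := r.client.Get(context.TODO(), request.NamespacedName, memcached)
	if err != nil {
		if errors.IsNotFound(err) {
			// The object was deleted; nothing left to reconcile.
			return reconcile.Result{}, nil
		}
		// Error reading the object; requeue the request.
		return reconcile.Result{}, err
	}
	// ... reconcile the Deployment for this Memcached instance ...
	return reconcile.Result{}, nil
}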
Based on the return value of Reconcile(), the reconcile Request may be requeued and the loop may be triggered again:
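The typical return patterns are:

// Reconcile successful - do not requeue
return reconcile.Result{}, nil
// Reconcile failed due to an error - requeue
return reconcile.Result{}, err
// Requeue for any reason other than an error
return reconcile.Result{Requeue: true}, nil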
Build and run the Operator.
Before running the Operator, the CRD must be registered with the Kubernetes API server:
$ oc create \
    -f deploy/crds/cache_v1alpha1_memcached_crd.yaml
After registering the CRD, there are two options for running the Operator:
- As a Deployment inside a Kubernetes cluster
- As a Go program outside a cluster
Choose one of the following methods.
Option A: Running as a Deployment inside the cluster.
Build the memcached-operator image and push it to a registry:
$ operator-sdk build quay.io/example/memcached-operator:v0.0.1
The Deployment manifest is generated at deploy/operator.yaml. Update the Deployment image as follows, since the default is just a placeholder:
$ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
Ensure you have an account on Quay.io for the next step, or substitute your preferred container registry. On the registry, create a new public image repository named memcached-operator.
Push the image to the registry:
$ podman push quay.io/example/memcached-operator:v0.0.1
Set up RBAC and deploy memcached-operator:
$ oc create -f deploy/role.yaml
$ oc create -f deploy/role_binding.yaml
$ oc create -f deploy/service_account.yaml
$ oc create -f deploy/operator.yaml
Verify that memcached-operator is up and running:
$ oc get deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
memcached-operator   1         1         1            1           1m
Option B: Running locally outside the cluster.
This method is preferred during the development cycle to deploy and test faster.
Run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:
$ operator-sdk run --local --namespace=default
You can use a specific kubeconfig by using the --kubeconfig=<path/to/kubeconfig> flag.
Verify that the Operator can deploy a Memcached application by creating a Memcached CR.
Create the example Memcached CR that was generated at deploy/crds/cache_v1alpha1_memcached_cr.yaml:
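For example (values follow the upstream walkthrough):

apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: example-memcached
spec:
  size: 3

$ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml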
Ensure that memcached-operator creates the Deployment for the CR:
$ oc get deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
memcached-operator   1         1         1            1           2m
example-memcached    3         3         3            3           1m
Check the pods and CR status to confirm the status is updated with the memcached pod names:
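For example (commands follow the upstream walkthrough):

$ oc get pods
$ oc get memcached/example-memcached -o yaml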
Verify that the Operator can manage a deployed Memcached application by updating the size of the deployment.
Change the spec.size field in the memcached CR from 3 to 4:
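For example:

apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: example-memcached
spec:
  size: 4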
Apply the change:
$ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
Confirm that the Operator changes the Deployment size:
$ oc get deployment
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-memcached   4         4         4            4           5m
Clean up the resources:
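For example, deleting the resources in reverse order of creation (a sketch; adjust file names to your project):

$ oc delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
$ oc delete -f deploy/operator.yaml
$ oc delete -f deploy/role_binding.yaml
$ oc delete -f deploy/role.yaml
$ oc delete -f deploy/service_account.yaml
$ oc delete -f deploy/crds/cache_v1alpha1_memcached_crd.yaml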
Additional resources
- For more information about OpenAPI v3.0 validation schemas in CRDs, refer to the Kubernetes documentation.
12.1.4. Managing a Go-based Operator using the Operator Lifecycle Manager
The previous section covered manually running an Operator. The next sections explore using the Operator Lifecycle Manager (OLM), which enables a more robust deployment model for Operators run in production environments.
The OLM helps you to install, update, and generally manage the lifecycle of all of the Operators (and their associated services) on a Kubernetes cluster. It runs as a Kubernetes extension and lets you use oc for all the lifecycle management functions without any additional tools.
Prerequisites
- OLM installed on a Kubernetes-based cluster (v1.8 or above to support the apps/v1beta2 API group), for example OpenShift Container Platform 4.4
- Memcached Operator built
Procedure
Generate an Operator manifest.
An Operator manifest describes how to display, create, and manage the application, in this case Memcached, as a whole. It is defined by a ClusterServiceVersion (CSV) object and is required for the OLM to function.
From the memcached-operator/ directory that was created when you built the Memcached Operator, generate the CSV manifest:
$ operator-sdk generate csv --csv-version 0.0.1
Note: See Building a CSV for the Operator Framework for more information on manually defining a manifest file.
Create an OperatorGroup that specifies the namespaces that the Operator will target. Create the following OperatorGroup in the namespace where you will create the CSV. In this example, the default namespace is used:
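A minimal sketch of such an OperatorGroup (the metadata name is illustrative):

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: memcached-operator-group
  namespace: default
spec:
  targetNamespaces:
  - default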
Deploy the Operator. Use the files that were generated into the deploy/ directory by the Operator SDK when you built the Memcached Operator.
Apply the Operator's CSV manifest to the specified namespace in the cluster:
$ oc apply -f deploy/olm-catalog/memcached-operator/0.0.1/memcached-operator.v0.0.1.clusterserviceversion.yaml
When you apply this manifest, the cluster does not immediately update because it does not yet meet the requirements specified in the manifest.
Create the role, role binding, and service account to grant resource permissions to the Operator, and the Custom Resource Definition (CRD) to create the Memcached type that the Operator manages:
$ oc create -f deploy/crds/cache.example.com_memcacheds_crd.yaml
$ oc create -f deploy/service_account.yaml
$ oc create -f deploy/role.yaml
$ oc create -f deploy/role_binding.yaml
Because the OLM creates Operators in a particular namespace when a manifest is applied, administrators can leverage the native Kubernetes RBAC permission model to restrict which users are allowed to install Operators.
Create an application instance.
The Memcached Operator is now running in the default namespace. Users interact with Operators via instances of CustomResources; in this case, the resource has the kind Memcached. Native Kubernetes RBAC also applies to CustomResources, providing administrators control over who can interact with each Operator.
Creating instances of Memcached in this namespace will now trigger the Memcached Operator to instantiate pods running the memcached server that are managed by the Operator. The more CustomResources you create, the more unique instances of Memcached are managed by the Memcached Operator running in this namespace.
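For example, reusing the sample CR from the previous section:

$ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml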
Update an application.
Manually apply an update to the Operator by creating a new Operator manifest with a replaces field that references the old Operator manifest. The OLM ensures that all resources being managed by the old Operator have their ownership moved to the new Operator without fear of any programs stopping execution. It is up to the Operators themselves to execute any data migrations required to upgrade resources to run under a new version of the Operator.
The following command demonstrates applying a new Operator manifest file using a new version of the Operator and shows that the pods remain executing:
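A hypothetical example, assuming a 0.0.2 CSV has already been generated for the new version:

$ oc apply -f deploy/olm-catalog/memcached-operator/0.0.2/memcached-operator.v0.0.2.clusterserviceversion.yaml
$ oc get pods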
12.1.5. Additional resources
- See Appendices to learn about the project directory structures created by the Operator SDK.
- Operator Development Guide for Red Hat Partners
12.2. Creating Ansible-based Operators
This guide outlines Ansible support in the Operator SDK and walks Operator authors through examples of building and running Ansible-based Operators with the operator-sdk CLI tool that use Ansible playbooks and modules.
12.2.1. Ansible support in the Operator SDK
The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.
One of the Operator SDK’s options for generating an Operator project includes leveraging existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.
12.2.1.1. Custom Resource files
Operators use the Kubernetes extension mechanism, Custom Resource Definitions (CRDs), so your Custom Resource (CR) looks and acts just like the built-in, native Kubernetes objects.
The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:
Field | Description
---|---
apiVersion | Version of the CR to be created.
kind | Kind of the CR to be created.
metadata | Kubernetes-specific metadata to be created.
spec (optional) | Key-value list of variables which are passed to Ansible. This field is empty by default.
status | Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information to the CR's status.
annotations | Kubernetes-specific annotations to be appended to the CR.
The following list of CR annotations modifies the behavior of the Operator:
Annotation | Description
---|---
ansible.operator-sdk/reconcile-period | Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang time package. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds.
Example Ansible-based Operator annotation
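A sketch of a CR that sets this annotation (the group and kind names are illustrative):

apiVersion: "test1.example.com/v1alpha1"
kind: "Test1"
metadata:
  name: "example"
  annotations:
    ansible.operator-sdk/reconcile-period: "30s"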
12.2.1.2. Watches file
The Watches file contains a list of mappings from Custom Resources (CRs), identified by their Group, Version, and Kind, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location, /opt/ansible/watches.yaml.
Field | Description
---|---
group | Group of CR to watch.
version | Version of CR to watch.
kind | Kind of CR to watch.
role (default) | Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value is /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field.
playbook | Path to the Ansible playbook added to the container. This playbook is expected to be simply a way to call roles. This field is mutually exclusive with the role field.
reconcilePeriod (optional) | The reconciliation interval, how often the role or playbook is run, for a given CR.
manageStatus (optional) | When set to true (the default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller.
Example Watches file
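A sketch of a Watches file that maps one kind to a role and another to a playbook (the group and kind names are illustrative):

- version: v1alpha1
  group: test1.example.com
  kind: Test1
  role: /opt/ansible/roles/Test1

- version: v1alpha1
  group: test2.example.com
  kind: Test2
  playbook: /opt/ansible/playbook.yml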
12.2.1.2.1. Advanced options
Advanced features can be enabled by adding them to your Watches file per GVK (group, version, and kind). They can go below the group, version, kind, and playbook or role fields.
Some features can be overridden per resource using an annotation on that Custom Resource (CR). The options that can be overridden have the annotation specified below.
Feature | YAML key | Description | Annotation for override | Default value
---|---|---|---|---
Reconcile period | reconcilePeriod | Time between reconcile runs for a particular CR. | ansible.operator-sdk/reconcile-period | 1m
Manage status | manageStatus | Allows the Operator to manage the conditions section of each CR's status section. | | true
Watch dependent resources | watchDependentResources | Allows the Operator to dynamically watch resources that are created by Ansible. | | true
Watch cluster-scoped resources | watchClusterScopedResources | Allows the Operator to watch cluster-scoped resources that are created by Ansible. | | false
Max runner artifacts | maxRunnerArtifacts | Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. | ansible.operator-sdk/max-runner-artifacts | 20
Example Watches file with advanced options
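A sketch with the advanced options applied to a single GVK (the names and values are illustrative):

- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: false
  watchDependentResources: false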
12.2.1.3. Extra variables sent to Ansible
Extra variables can be sent to Ansible, which are then managed by the Operator. The spec section of the Custom Resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook command.
The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.
For the following CR example:
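A sketch of such a CR (the group, kind, and field values are illustrative and match the discussion below):

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message: "Hello world 2"
  newParameter: "newParam"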
The structure passed to Ansible as extra variables is:
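A sketch of the resulting structure for the CR above (key names are converted to snake case by the Operator):

{ "meta": {
        "name": "example",
        "namespace": "default"
  },
  "message": "Hello world 2",
  "new_parameter": "newParam"
}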
The message and newParameter fields are set in the top level as extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:
- debug:
msg: "name: {{ meta.name }}, {{ meta.namespace }}"
12.2.1.4. Ansible Runner directory
Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.
Additional resources
- To learn more about the runner directory, see the Ansible Runner documentation.
12.2.2. Installing the Operator SDK CLI
The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.
12.2.2.1. Installing from GitHub release
You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.
Prerequisites
- Go v1.13+
- docker v17.03+, podman v1.2.0+, or buildah v1.7+
- OpenShift CLI (oc) 4.4+ installed
- Access to a cluster based on Kubernetes v1.12.0+
- Access to a container registry
Procedure
Set the release version variable:
RELEASE_VERSION=v0.15.0
Download the release binary.
For Linux:
$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
For macOS:
$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
Verify the downloaded release binary.
Download the provided ASC file.
For Linux:
$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
For macOS:
$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:
For Linux:
$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
For macOS:
$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
If you do not have the maintainer’s public key on your workstation, you will get the following error:
$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
gpg: Signature made Fri Apr 5 20:03:22 2019 CEST
gpg:                using RSA key <key_id>
gpg: Can't check signature: No public key
Here, <key_id> is the RSA key string.
To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:
$ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>"
If you do not have a key server configured, specify one with the --keyserver option.
Install the release binary in your PATH:
For Linux:
$ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
$ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
$ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
For macOS:
$ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
$ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
$ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
Verify that the CLI tool was installed correctly:
$ operator-sdk version
12.2.2.2. Installing from Homebrew
You can install the SDK CLI using Homebrew.
Prerequisites
- Homebrew
- docker v17.03+, podman v1.2.0+, or buildah v1.7+
- OpenShift CLI (oc) 4.4+ installed
- Access to a cluster based on Kubernetes v1.12.0+
- Access to a container registry
Procedure
Install the SDK CLI using the brew command:
$ brew install operator-sdk
Verify that the CLI tool was installed correctly:
$ operator-sdk version
12.2.2.3. Compiling and installing from source
You can obtain the Operator SDK source code to compile and install the SDK CLI.
Prerequisites
Procedure
Clone the operator-sdk repository:
$ mkdir -p $GOPATH/src/github.com/operator-framework
$ cd $GOPATH/src/github.com/operator-framework
$ git clone https://github.com/operator-framework/operator-sdk
$ cd operator-sdk
Check out the desired release branch:
$ git checkout master
Compile and install the SDK CLI:
$ make dep
$ make install
This installs the CLI binary operator-sdk at $GOPATH/bin.
Verify that the CLI tool was installed correctly:
$ operator-sdk version
12.2.3. Building an Ansible-based Operator using the Operator SDK
This procedure walks through an example of building a simple Memcached Operator powered by Ansible playbooks and modules using tools and libraries provided by the Operator SDK.
Prerequisites
- Operator SDK CLI installed on the development workstation
- Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.4) using an account with cluster-admin permissions
- OpenShift CLI (oc) v4.1+ installed
- ansible v2.6.0+
- ansible-runner v1.1.0+
- ansible-runner-http v1.0.0+
Procedure
Create a new Operator project. A namespace-scoped Operator watches and manages resources in a single namespace. Namespace-scoped Operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.
To create a new Ansible-based, namespace-scoped memcached-operator project and change to its directory, use the following commands:
$ operator-sdk new memcached-operator \
    --api-version=cache.example.com/v1alpha1 \
    --kind=Memcached \
    --type=ansible
$ cd memcached-operator
This creates the memcached-operator project specifically for watching the Memcached resource with APIVersion cache.example.com/v1alpha1 and Kind Memcached.
Customize the Operator logic.
For this example, the memcached-operator executes the following reconciliation logic for each Memcached Custom Resource (CR):
- Create a memcached Deployment if it does not exist.
- Ensure that the Deployment size is the same as specified by the Memcached CR.
By default, the memcached-operator watches Memcached resource events as shown in the watches.yaml file and executes the Ansible role Memcached:
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
You can optionally customize the following logic in the watches.yaml file:
Specifying a role option configures the Operator to use this specified path when launching ansible-runner with an Ansible role. By default, the new command fills in an absolute path to where your role should go:
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: /opt/ansible/roles/memcached
Specifying a playbook option in the watches.yaml file configures the Operator to use this specified path when launching ansible-runner with an Ansible playbook:
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  playbook: /opt/ansible/playbook.yaml
Build the Memcached Ansible role.
Modify the generated Ansible role under the roles/memcached/ directory. This Ansible role controls the logic that is executed when a resource is modified.
Define the Memcached spec.
Defining the spec for an Ansible-based Operator can be done entirely in Ansible. The Ansible Operator passes all key-value pairs listed in the CR spec field along to Ansible as variables. The names of all variables in the spec field are converted to snake case (lowercase with an underscore) by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.
Tip: You should perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.
In case the user does not set the spec field, set a default by modifying the roles/memcached/defaults/main.yml file:
size: 1
Memcached
Deployment.With the
Memcached
spec now defined, you can define what Ansible is actually executed on resource changes. Because this is an Ansible role, the default behavior executes the tasks in theroles/memcached/tasks/main.yml
file.The goal is for Ansible to create a Deployment if it does not exist, which runs the
memcached:1.4.36-alpine
image. Ansible 2.7+ supports the k8s Ansible module, which this example leverages to control the Deployment definition.Modify the
roles/memcached/tasks/main.yml
to match the following:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThis example used the
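A sketch of such a tasks file, using the k8s module to manage the Deployment (this follows the upstream example; the exact listing may differ):

- name: start memcached
  k8s:
    definition:
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: '{{ meta.name }}-memcached'
        namespace: '{{ meta.namespace }}'
      spec:
        replicas: "{{ size }}"
        selector:
          matchLabels:
            app: memcached
        template:
          metadata:
            labels:
              app: memcached
          spec:
            containers:
            - name: memcached
              command:
              - memcached
              - -m=64
              - -o
              - modern
              - -v
              image: "docker.io/memcached:1.4.36-alpine"
              ports:
                - containerPort: 11211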
Note: This example uses the size variable to control the number of replicas of the Memcached Deployment. The default is set to 1, but any user can create a CR that overrides the default.
Deploy the CRD.
Before running the Operator, Kubernetes needs to know about the new Custom Resource Definition (CRD) the Operator will be watching. Deploy the Memcached CRD:
$ oc create -f deploy/crds/cache.example.com_memcacheds_crd.yaml
Build and run the Operator.
There are two ways to build and run the Operator:
- As a Pod inside a Kubernetes cluster.
- As a Go program outside the cluster, using the operator-sdk run --local command.
Choose one of the following methods:
Run as a Pod inside a Kubernetes cluster. This is the preferred method for production use.
Build the memcached-operator image and push it to a registry:
$ operator-sdk build quay.io/example/memcached-operator:v0.0.1
$ podman push quay.io/example/memcached-operator:v0.0.1
Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file needs to be modified from the placeholder REPLACE_IMAGE to the previously built image. To do this, run:
$ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
Deploy the memcached-operator:
$ oc create -f deploy/service_account.yaml
$ oc create -f deploy/role.yaml
$ oc create -f deploy/role_binding.yaml
$ oc create -f deploy/operator.yaml
Verify that the memcached-operator is up and running:
$ oc get deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
memcached-operator   1         1         1            1           1m
Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.
Ensure that Ansible Runner and Ansible Runner HTTP Plug-in are installed or else you will see unexpected errors from Ansible Runner when a CR is created.
It is also important that the role path referenced in the watches.yaml file exists on your machine. Because normally a container is used where the role is put on disk, the role must be manually copied to the configured Ansible roles path (for example /etc/ansible/roles).
To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:
$ operator-sdk run --local
To run the Operator locally with a provided Kubernetes configuration file:
$ operator-sdk run --local --kubeconfig=config
Create a Memcached CR.
Modify the deploy/crds/cache_v1alpha1_memcached_cr.yaml file as shown and create a Memcached CR:
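For example (values are illustrative):

apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
spec:
  size: 3

$ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml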
Ensure that the memcached-operator creates the Deployment for the CR:
$ oc get deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
memcached-operator   1         1         1            1           2m
example-memcached    3         3         3            3           1m
Check the pods to confirm three replicas were created:
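For example:

$ oc get pods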
Update the size.
Change the spec.size field in the memcached CR from 3 to 4 and apply the change:
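For example:

apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
spec:
  size: 4

$ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml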
Confirm that the Operator changes the Deployment size:
$ oc get deployment
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-memcached   4         4         4            4           5m
Clean up the resources:
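For example (a sketch; adjust file names to your project):

$ oc delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
$ oc delete -f deploy/operator.yaml
$ oc delete -f deploy/role_binding.yaml
$ oc delete -f deploy/role.yaml
$ oc delete -f deploy/service_account.yaml
$ oc delete -f deploy/crds/cache.example.com_memcacheds_crd.yaml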
12.2.4. Managing application lifecycle using the k8s Ansible module
To manage the lifecycle of your application on Kubernetes using Ansible, you can use the k8s Ansible module. This Ansible module allows a developer to either leverage their existing Kubernetes resource files (written in YAML) or express the lifecycle management in native Ansible.
One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
This section goes into detail on usage of the k8s Ansible module. To get started, install the module on your local workstation and test it using a playbook before moving on to using it within an Operator.
12.2.4.1. Installing the k8s Ansible module
To install the k8s Ansible module on your local workstation:
Procedure
Install Ansible 2.6+:
$ sudo yum install ansible
Install the OpenShift python client package using pip:
$ pip install openshift
12.2.4.2. Testing the k8s Ansible module locally
Sometimes, it is beneficial for a developer to run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.
Procedure
Initialize a new Ansible-based Operator project:
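The project name and API group used in the rest of this section suggest a command along these lines (a sketch; the flags follow the operator-sdk new syntax shown earlier):

$ operator-sdk new foo-operator \
    --api-version=foo.example.com/v1alpha1 \
    --kind=Foo \
    --type=ansible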
$ cd foo-operator
Modify the roles/foo/tasks/main.yml file with the desired Ansible logic. This example creates and deletes a namespace with the switch of a variable:
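A sketch of such a tasks file, using the k8s module and the state variable referenced below (the namespace name test matches the later steps):

- name: set test namespace to {{ state }}
  k8s:
    api_version: v1
    kind: Namespace
    definition:
      metadata:
        name: test
    state: "{{ state }}"
  ignore_errors: true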
Note: Setting ignore_errors: true ensures that deleting a nonexistent project does not fail.
Modify the roles/foo/defaults/main.yml file to set state to present by default:
state: present
Create an Ansible playbook playbook.yml in the top-level directory, which includes the Foo role:
- hosts: localhost
  roles:
    - Foo
Run the playbook:
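For example:

$ ansible-playbook playbook.yml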
Check that the namespace was created:
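For example, the test namespace should now appear in the listing:

$ oc get namespace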
Rerun the playbook, setting state to absent:
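For example, passing the variable on the command line:

$ ansible-playbook playbook.yml --extra-vars state=absent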
Check that the namespace was deleted:
$ oc get namespace
NAME          STATUS   AGE
default       Active   28d
kube-public   Active   28d
kube-system   Active   28d
12.2.4.3. Testing the k8s Ansible module inside an Operator
After you are familiar with using the k8s Ansible module locally, you can trigger the same Ansible logic inside of an Operator when a Custom Resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the Watches file.
12.2.4.3.1. Testing an Ansible-based Operator locally
After getting comfortable testing Ansible workflows locally, you can test the logic inside of an Ansible-based Operator running locally.
To do so, use the operator-sdk run --local command from the top-level directory of your Operator project. This command reads from the ./watches.yaml file and uses the ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s Ansible module does.
Procedure
Because the run --local command reads from the ./watches.yaml file, there are options available to the Operator author. If role is left alone (by default, /opt/ansible/roles/<name>), you must copy the role over to the /opt/ansible/roles/ directory from the Operator directly.
This is cumbersome because changes are not reflected from the current directory. Instead, change the role field to point to the current directory and comment out the existing line:
- version: v1alpha1
  group: foo.example.com
  kind: Foo
  # role: /opt/ansible/roles/Foo
  role: /home/user/foo-operator/Foo
Create a Custom Resource Definition (CRD) and proper role-based access control (RBAC) definitions for the Custom Resource (CR) Foo. The operator-sdk command autogenerates these files inside of the deploy/ directory:
$ oc create -f deploy/crds/foo_v1alpha1_foo_crd.yaml
$ oc create -f deploy/service_account.yaml
$ oc create -f deploy/role.yaml
$ oc create -f deploy/role_binding.yaml
Run the run --local command:
$ operator-sdk run --local
[...]
INFO[0000] Starting to serve on 127.0.0.1:8888
INFO[0000] Watching foo.example.com/v1alpha1, Foo, default
Now that the Operator is watching the resource Foo for events, the creation of a CR triggers your Ansible role to execute. View the deploy/cr.yaml file:
apiVersion: "foo.example.com/v1alpha1"
kind: "Foo"
metadata:
  name: "example"
Because the spec field is not set, Ansible is invoked with no extra variables. The next section covers how extra variables are passed from a CR to Ansible. This is why it is important to set sane defaults for the Operator.
Create a CR instance of Foo with the default variable state set to present:
$ oc create -f deploy/cr.yaml
Check that the namespace test was created:
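For example:

$ oc get namespace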
Modify the deploy/cr.yaml file to set the state field to absent:
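A sketch of the modified CR:

apiVersion: "foo.example.com/v1alpha1"
kind: "Foo"
metadata:
  name: "example"
spec:
  state: "absent"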
Apply the changes and confirm that the namespace is deleted:
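For example:

$ oc apply -f deploy/cr.yaml
$ oc get namespace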
12.2.4.3.2. Testing an Ansible-based Operator on a cluster
After getting familiar with running Ansible logic inside of an Ansible-based Operator locally, you can test the Operator inside of a Pod on a Kubernetes cluster, such as OpenShift Container Platform. Running as a Pod on a cluster is preferred for production use.
Procedure
Build the foo-operator image and push it to a registry:
$ operator-sdk build quay.io/example/foo-operator:v0.0.1
$ podman push quay.io/example/foo-operator:v0.0.1
Deployment manifests are generated in the deploy/operator.yaml file. The Deployment image in this file must be modified from the placeholder REPLACE_IMAGE to the previously built image. To do so, run the following command:
$ sed -i 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml
If you are performing these steps on macOS, use the following command instead:
$ sed -i "" 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml
Deploy the foo-operator:
$ oc create -f deploy/crds/foo_v1alpha1_foo_crd.yaml # if CRD doesn't exist already
$ oc create -f deploy/service_account.yaml
$ oc create -f deploy/role.yaml
$ oc create -f deploy/role_binding.yaml
$ oc create -f deploy/operator.yaml
Verify that the foo-operator is up and running:
$ oc get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
foo-operator   1         1         1            1           1m
12.2.5. Managing Custom Resource status using the operator_sdk.util Ansible collection
Ansible-based Operators automatically update Custom Resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages, as in the following example.
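An illustrative sketch of the kind of condition an Ansible run records on the CR status; the field values shown are hypothetical:

status:
  conditions:
    - ansibleResult:
        changed: 3
        completion: 2019-12-03T13:45:57.13329
        failures: 1
        ok: 6
        skipped: 0
      lastTransitionTime: "2019-12-03T13:45:57Z"
      message: 'Status code was -1 and not [200]: Request failed'
      reason: Failed
      status: "False"
      type: Failure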
Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.
By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.
Procedure
To track CR status manually from your application, update the Watches file with a manageStatus field set to false:

- version: v1
  group: api.example.com
  kind: Foo
  role: Foo
  manageStatus: false
Then, use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update the status with key foo and value bar, operator_sdk.util can be used as in the following sketch.
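A minimal example task, assuming the module's api_version, kind, name, and namespace parameters and the Foo resource from the Watches file above; adjust these to match your CR:

- operator_sdk.util.k8s_status:
    api_version: api.example.com/v1
    kind: Foo
    name: "{{ meta.name }}"
    namespace: "{{ meta.namespace }}"
    status:
      foo: bar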
Collections can also be declared in the role’s meta/main.yml file, which is included for new scaffolded Ansible Operators:

collections:
  - operator_sdk.util
Declaring collections in the role meta allows you to invoke the k8s_status module directly:

k8s_status:
  <snip>
  status:
    foo: bar
Additional resources
- For more details about user-driven status management from Ansible-based Operators, see the Ansible-based Operator Status Proposal for Operator SDK.
12.2.6. Additional resources
- See Appendices to learn about the project directory structures created by the Operator SDK.
- Reaching for the Stars with Ansible Operator - Red Hat OpenShift Blog
- Operator Development Guide for Red Hat Partners
12.3. Creating Helm-based Operators
This guide outlines Helm chart support in the Operator SDK and walks Operator authors through an example of building and running an Nginx Operator with the operator-sdk
CLI tool that uses an existing Helm chart.
12.3.1. Helm chart support in the Operator SDK
The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.
One of the Operator SDK’s options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but can be sufficient for a surprising number of use cases, as shown by the proliferation of Helm charts built by the Kubernetes community.
The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the object’s spec field is a list of configuration options that are typically described in Helm’s values.yaml
file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml
), you can express them within a Custom Resource (CR), which, as a native Kubernetes object, enables the benefits of RBAC applied to it and an audit trail.
An example of a simple CR called Tomcat is shown below.
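A minimal sketch of such a CR; the group apache.org/v1alpha1 and the name example-app are illustrative placeholders:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2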
The replicaCount value, 2 in this case, is propagated into the chart’s templates where the following is used:

{{ .Values.replicaCount }}
After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:

$ oc get Tomcats --all-namespaces
There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a Custom Resource Definition (CRD). Because it obeys RBAC, you can more easily prevent production changes.
12.3.2. Installing the Operator SDK CLI
The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.
12.3.2.1. Installing from GitHub release
You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.
Prerequisites
- Go v1.13+
- docker v17.03+, podman v1.2.0+, or buildah v1.7+
- OpenShift CLI (oc) 4.4+ installed
- Access to a cluster based on Kubernetes v1.12.0+
- Access to a container registry
Procedure
Set the release version variable:

RELEASE_VERSION=v0.15.0
Download the release binary.

For Linux:

$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu

For macOS:

$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
Verify the downloaded release binary.

Download the provided ASC file.

For Linux:

$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc

For macOS:

$ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:

For Linux:

$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc

For macOS:

$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
If you do not have the maintainer’s public key on your workstation, you will get the following error:

$ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
gpg: Signature made Fri Apr 5 20:03:22 2019 CEST
gpg:                using RSA key <key_id>
gpg: Can't check signature: No public key

where <key_id> is the RSA key string.
To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

$ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>"

If you do not have a key server configured, specify one with the --keyserver option.
Install the release binary in your PATH:

For Linux:

$ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
$ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
$ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu

For macOS:

$ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
$ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
$ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
Verify that the CLI tool was installed correctly:

$ operator-sdk version
12.3.2.2. Installing from Homebrew
You can install the SDK CLI using Homebrew.
Prerequisites
- Homebrew
- docker v17.03+, podman v1.2.0+, or buildah v1.7+
- OpenShift CLI (oc) 4.4+ installed
- Access to a cluster based on Kubernetes v1.12.0+
- Access to a container registry
Procedure
Install the SDK CLI using the brew command:

$ brew install operator-sdk

Verify that the CLI tool was installed correctly:

$ operator-sdk version
12.3.2.3. Compiling and installing from source
You can obtain the Operator SDK source code to compile and install the SDK CLI.
Prerequisites
Procedure
Clone the operator-sdk repository:

$ mkdir -p $GOPATH/src/github.com/operator-framework
$ cd $GOPATH/src/github.com/operator-framework
$ git clone https://github.com/operator-framework/operator-sdk
$ cd operator-sdk
Check out the desired release branch:

$ git checkout master
Compile and install the SDK CLI:

$ make dep
$ make install

This installs the CLI binary operator-sdk at $GOPATH/bin.

Verify that the CLI tool was installed correctly:

$ operator-sdk version
12.3.3. Building a Helm-based Operator using the Operator SDK
This procedure walks through an example of building a simple Nginx Operator powered by a Helm chart using tools and libraries provided by the Operator SDK.
It is best practice to build a new Operator for each chart. This can allow for more native-behaving Kubernetes APIs (for example, oc get Nginx
) and flexibility if you ever want to write a fully-fledged Operator in Go, migrating away from a Helm-based Operator.
Prerequisites
- Operator SDK CLI installed on the development workstation
- Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.4) using an account with cluster-admin permissions
- OpenShift CLI (oc) v4.1+ installed
Procedure
Create a new Operator project. A namespace-scoped Operator watches and manages resources in a single namespace. Namespace-scoped Operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.
To create a new Helm-based, namespace-scoped nginx-operator project, use the following command:

$ operator-sdk new nginx-operator \
  --api-version=example.com/v1alpha1 \
  --kind=Nginx \
  --type=helm
$ cd nginx-operator

This creates the nginx-operator project specifically for watching the Nginx resource with APIVersion example.com/v1alpha1 and Kind Nginx.

Customize the Operator logic.
For this example, the nginx-operator executes the following reconciliation logic for each Nginx Custom Resource (CR):
- Create an Nginx Deployment if it does not exist.
- Create an Nginx Service if it does not exist.
- Create an Nginx Ingress if it is enabled and does not exist.
- Ensure that the Deployment, Service, and optional Ingress match the desired configuration (for example, replica count, image, service type) as specified by the Nginx CR.
By default, the nginx-operator watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:

- version: v1alpha1
  group: example.com
  kind: Nginx
  chart: /opt/helm/helm-charts/nginx
Review the Nginx Helm chart.

When a Helm Operator project is created, the Operator SDK creates an example Helm chart that contains a set of templates for a simple Nginx release.

For this example, templates are available for Deployment, Service, and Ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.

If you are not already familiar with Helm Charts, take a moment to review the Helm Chart developer documentation.
Understand the Nginx CR spec.

Helm uses a concept called values to provide customizations to a Helm chart’s defaults, which are defined in the Helm chart’s values.yaml file.

Override these defaults by setting the desired values in the CR spec. You can use the number of replicas as an example:

First, inspect the helm-charts/nginx/values.yaml file to find that the chart has a value called replicaCount and that it is set to 1 by default. To have 2 Nginx instances in your deployment, your CR spec must contain replicaCount: 2.
Update the deploy/crds/example.com_v1alpha1_nginx_cr.yaml file so that its spec sets replicaCount: 2. Similarly, the default service port is set to 80. To instead use 8080, update the same file again by adding a service port override to the spec; a sketch of the resulting CR is shown below. The Helm Operator applies the entire spec as if it was the contents of a values file, just like the helm install -f ./overrides.yaml command works.
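A sketch of the updated CR with both overrides applied; the metadata name example-nginx is an illustrative placeholder and can differ in your project:

apiVersion: example.com/v1alpha1
kind: Nginx
metadata:
  name: example-nginx
spec:
  replicaCount: 2
  service:
    port: 8080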
Deploy the CRD.

Before running the Operator, Kubernetes needs to know about the new custom resource definition (CRD) the operator will be watching. Deploy the following CRD:

$ oc create -f deploy/crds/example_v1alpha1_nginx_crd.yaml
Build and run the Operator.

There are two ways to build and run the Operator:

- As a Pod inside a Kubernetes cluster.
- As a Go program outside the cluster using the operator-sdk run --local command.
Choose one of the following methods:

Run as a Pod inside a Kubernetes cluster. This is the preferred method for production use.

Build the nginx-operator image and push it to a registry:

$ operator-sdk build quay.io/example/nginx-operator:v0.0.1
$ podman push quay.io/example/nginx-operator:v0.0.1

Deployment manifests are generated in the deploy/operator.yaml file. The Deployment image in this file must be modified from the placeholder REPLACE_IMAGE to the previously built image. To do this, run:

$ sed -i 's|REPLACE_IMAGE|quay.io/example/nginx-operator:v0.0.1|g' deploy/operator.yaml

Deploy the nginx-operator:

$ oc create -f deploy/service_account.yaml
$ oc create -f deploy/role.yaml
$ oc create -f deploy/role_binding.yaml
$ oc create -f deploy/operator.yaml

Verify that the nginx-operator is up and running:

$ oc get deployment
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-operator   1         1         1            1           1m
Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.

It is important that the chart path referenced in the watches.yaml file exists on your machine. By default, the watches.yaml file is scaffolded to work with an Operator image built with the operator-sdk build command. When developing and testing your Operator with the operator-sdk run --local command, the SDK looks in your local file system for this path.

Create a symlink at this location to point to your Helm chart’s path:

$ sudo mkdir -p /opt/helm/helm-charts
$ sudo ln -s $PWD/helm-charts/nginx /opt/helm/helm-charts/nginx

To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

$ operator-sdk run --local

To run the Operator locally with a provided Kubernetes configuration file:

$ operator-sdk run --local --kubeconfig=<path_to_config>
Deploy the Nginx CR.

Apply the Nginx CR that you modified earlier:

$ oc apply -f deploy/crds/example.com_v1alpha1_nginx_cr.yaml
Ensure that the nginx-operator creates the Deployment for the CR:

$ oc get deployment
NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-nginx-b9phnoz9spckcrua7ihrbkrt1   2         2         2            2           1m
Check the pods to confirm two replicas were created:

$ oc get pods
NAME                                                      READY   STATUS    RESTARTS   AGE
example-nginx-b9phnoz9spckcrua7ihrbkrt1-f8f9c875d-fjcr9   1/1     Running   0          1m
example-nginx-b9phnoz9spckcrua7ihrbkrt1-f8f9c875d-ljbzl   1/1     Running   0          1m
Check that the Service port is set to 8080:

$ oc get service
NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
example-nginx-b9phnoz9spckcrua7ihrbkrt1   ClusterIP   10.96.26.3   <none>        8080/TCP   1m
Update the replicaCount and remove the port.

Change the spec.replicaCount field from 2 to 3, remove the spec.service field, and apply the change, as in the sketch that follows this step.
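A sketch of the updated CR, again using the illustrative metadata name example-nginx:

apiVersion: example.com/v1alpha1
kind: Nginx
metadata:
  name: example-nginx
spec:
  replicaCount: 3

Then apply it with the same oc apply command used earlier.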
Confirm that the Operator changes the Deployment size:

$ oc get deployment
NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-nginx-b9phnoz9spckcrua7ihrbkrt1   3         3         3            3           1m
Check that the Service port is set to the default 80:

$ oc get service
NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
example-nginx-b9phnoz9spckcrua7ihrbkrt1   ClusterIP   10.96.26.3   <none>        80/TCP    1m
Clean up the resources, as in the sketch below.
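A sketch of the cleanup, assuming you delete the same manifests that were created earlier in this procedure:

$ oc delete -f deploy/crds/example.com_v1alpha1_nginx_cr.yaml
$ oc delete -f deploy/operator.yaml
$ oc delete -f deploy/role_binding.yaml
$ oc delete -f deploy/role.yaml
$ oc delete -f deploy/service_account.yaml
$ oc delete -f deploy/crds/example_v1alpha1_nginx_crd.yaml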
12.3.4. Additional resources
- See Appendices to learn about the project directory structures created by the Operator SDK.
- Operator Development Guide for Red Hat Partners
12.4. Generating a ClusterServiceVersion (CSV)
A ClusterServiceVersion (CSV) is a YAML manifest created from Operator metadata that assists the Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information like its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which Custom Resources (CRs) it manages or depends on.
The Operator SDK includes the generate csv
subcommand to generate a ClusterServiceVersion (CSV) for the current Operator project customized using information contained in manually-defined YAML manifests and Operator source files.
A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.
The CSV version is the same as the Operator’s, and a new CSV is generated when upgrading Operator versions. Operator authors can use the --csv-version
flag to have their Operators' state encapsulated in a CSV with the supplied semantic version:
$ operator-sdk generate csv --csv-version <version>
This action is idempotent and only updates the CSV file when a new version is supplied, or a YAML manifest or source file is changed. Operator authors should not have to directly modify most fields in a CSV manifest. Those that require modification are defined in this guide. For example, the CSV version must be included in metadata.name
.
12.4.1. How CSV generation works
An Operator project’s deploy/
directory is the standard location for all manifests required to deploy an Operator. The Operator SDK can use data from manifests in deploy/
to write a CSV. The following command:

$ operator-sdk generate csv --csv-version <version>

writes a CSV YAML file to the deploy/olm-catalog/ directory by default.
Exactly three types of manifests are required to generate a CSV:
- operator.yaml
- *_{crd,cr}.yaml
- RBAC role files, for example role.yaml
Operator authors may have different versioning requirements for these files and can configure which specific files are included in the deploy/olm-catalog/csv-config.yaml
file.
Workflow
Depending on whether an existing CSV is detected, and assuming all configuration defaults are used, the generate csv
subcommand either:
Creates a new CSV, with the same location and naming convention as exists currently, using available data in YAML manifests and source files.
- The update mechanism checks for an existing CSV in deploy/. When one is not found, it creates a ClusterServiceVersion object, referred to here as a cache, and populates fields easily derived from Operator metadata, such as Kubernetes API ObjectMeta.
- The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
- After the search completes, every cache field populated is written back to a CSV YAML file.

or:

Updates an existing CSV at the currently pre-defined location, using available data in YAML manifests and source files.
- The update mechanism checks for an existing CSV in deploy/. When one is found, the CSV YAML file contents are marshaled into a ClusterServiceVersion cache.
- The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
- After the search completes, every cache field populated is written back to a CSV YAML file.
Individual YAML fields are overwritten and not the entire file, as descriptions and other non-generated parts of a CSV should be preserved.
12.4.2. CSV composition configuration
Operator authors can configure CSV composition by populating several fields in the deploy/olm-catalog/csv-config.yaml
file:
Field | Description |
---|---|
|
The Operator resource manifest file path. Defaults to |
|
A list of CRD and CR manifest file paths. Defaults to |
|
A list of RBAC role manifest file paths. Defaults to |
12.4.3. Manually-defined CSV fields
Many CSV fields cannot be populated using generated, non-SDK-specific manifests. These fields are mostly human-written, English metadata about the Operator and various Custom Resource Definitions (CRDs).
Operator authors must directly modify their CSV YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when a lack of data in any of the required fields is detected.
Field | Description |
---|---|
|
A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example |
|
The Operator’s capability level according to the Operator maturity model. Options include |
| A public name to identify the Operator. |
| A short description of the Operator’s functionality. |
| Keywords describing the operator. |
|
Human or organizational entities maintaining the Operator, with a |
|
The Operators' provider (usually an organization), with a |
| Key-value pairs to be used by Operator internals. |
|
Semantic version of the Operator, for example |
|
Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in
|
Field | Description |
---|---|
| The name of the CSV being replaced by this CSV. |
|
URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a |
| Selectors by which the Operator can pair resources in a cluster. |
|
A base64-encoded icon unique to the Operator, set in a |
|
The level of maturity the software has achieved at this version. Options include |
Further details on what data each field above should hold are found in the CSV spec.
Several YAML fields currently requiring user intervention can potentially be parsed from Operator code; such Operator SDK functionality will be addressed in a future design document.
Additional resources
12.4.4. Generating a CSV
Prerequisites
- An Operator project generated using the Operator SDK
Procedure
- In your Operator project, configure your CSV composition by modifying the deploy/olm-catalog/csv-config.yaml file, if desired.
- Generate the CSV:

$ operator-sdk generate csv --csv-version <version>

- In the new CSV generated in the deploy/olm-catalog/ directory, ensure all required, manually-defined fields are set appropriately.
12.4.5. Enabling your Operator for restricted network environments
As an Operator author, your CSV must meet the following additional requirements for your Operator to run properly in a restricted network environment:
- List any related images, or other container images that your Operator might require to perform its functions.
- Reference all specified images by a digest (SHA) and not by a tag.
You must use SHA references to related images in two places in the Operator’s CSV:
- In spec.relatedImages.
- In the env section of the Operator’s Deployments, when declaring environment variables that inject the image that the Operator should use.

Sketches of both are shown below.
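Illustrative sketches, assuming an Operator that manages a hypothetical memcached image; the registry, name, digest, and environment variable name are placeholders.

In spec.relatedImages:

spec:
  relatedImages:
    - name: memcached
      image: registry.example.com/memcached@sha256:<digest>

In the env section of a Deployment defined in the CSV:

spec:
  install:
    spec:
      deployments:
        - name: example-operator
          spec:
            template:
              spec:
                containers:
                  - name: example-operator
                    env:
                      - name: RELATED_IMAGE_MEMCACHED
                        value: registry.example.com/memcached@sha256:<digest>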
12.4.6. Enabling your Operator for multiple architectures and operating systems
Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.
If your Operator supports variants other than AMD64 and Linux, you can add labels to the CSV that provides the Operator in order to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:
labels:
  operatorframework.io/arch.<arch>: supported
  operatorframework.io/os.<os>: supported
Only the labels on the channel head of the default channel are considered for filtering PackageManifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.
If a CSV does not include an os
label, it is treated as if it has the following Linux support label by default:
labels:
  operatorframework.io/os.linux: supported
If a CSV does not include an arch
label, it is treated as if it has the following AMD64 support label by default:
labels:
  operatorframework.io/arch.amd64: supported
If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.
Prerequisites
- An Operator project with a CSV.
- To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
- For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.
Procedure
Add a label in your CSV’s
metadata.labels
for each supported architecture and operating system that your Operator supports:

labels:
  operatorframework.io/arch.s390x: supported
  operatorframework.io/os.zos: supported
  operatorframework.io/os.linux: supported
  operatorframework.io/arch.amd64: supported
Additional resources
- See the Image Manifest V 2, Schema 2 specification for more information on manifest lists.
12.4.6.1. Architecture and operating system support for Operators
The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:
Architecture | String
---|---
AMD64 | amd64
64-bit PowerPC little-endian | ppc64le
IBM Z | s390x

Operating system | String
---|---
Linux | linux
z/OS | zos
Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.
12.4.7. Setting a suggested namespace
Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, in order to work properly. If resolved from a Subscription, OLM defaults the namespaced resources of an Operator to the namespace of its Subscription.
As an Operator author, you can instead express a desired target namespace as part of your CSV to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.
Procedure
In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

metadata:
  annotations:
    operatorframework.io/suggested-namespace: <namespace>

Replace <namespace> with your suggested namespace.
12.4.8. Understanding your Custom Resource Definitions (CRDs)
There are two types of Custom Resource Definitions (CRDs) that your Operator may use: ones that are owned by it and ones that it depends on, which are required.
12.4.8.1. Owned CRDs
The CRDs owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.
It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of ReplicaSets in another. Each one should be listed out in the CSV file.
Field | Description | Required/Optional |
---|---|---|
| The full name of your CRD. | Required |
| The version of that object API. | Required |
| The machine readable name of your CRD. | Required |
|
A human readable version of your CRD name, for example | Required |
| A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD. | Required |
|
The API group that this CRD belongs to, for example | Optional |
| Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the Service or Ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, ConfigMaps that store internal state that should not be modified by a user should not appear here. | Optional |
| These Descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a Secret or ConfigMap that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs. There are three types of descriptors:
All Descriptors accept the following fields:
Also see the openshift/console project for more information on Descriptors in general. | Optional |
The following example depicts a MongoDB Standalone
CRD that requires some user input in the form of a Secret and ConfigMap, and orchestrates Services, StatefulSets, pods and ConfigMaps:
Example owned CRD
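A sketch of what such an entry can look like under spec.customresourcedefinitions.owned in the CSV; the names, resources, and descriptor shown are illustrative rather than taken from a real MongoDB Operator:

customresourcedefinitions:
  owned:
    - name: mongodbstandalones.mongodb.com
      version: v1
      kind: MongoDbStandalone
      displayName: MongoDB Standalone
      description: Deploys a single MongoDB instance. Not for production use.
      resources:
        - kind: Service
          version: v1
        - kind: StatefulSet
          version: v1beta2
        - kind: Pod
          version: v1
        - kind: ConfigMap
          version: v1
      specDescriptors:
        - displayName: Credentials
          description: The name of the Secret holding the admin credentials.
          path: credentials
          x-descriptors:
            - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'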
12.4.8.2. Required CRDs
Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.
An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.
The Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a Service Account created for each Operator to create, watch, and modify the Kubernetes resources required.
Field | Description | Required/Optional |
---|---|---|
| The full name of the CRD you require. | Required |
| The version of that object API. | Required |
| The Kubernetes object kind. | Required |
| A human readable version of the CRD. | Required |
| A summary of how the component fits in your larger architecture. | Required |
Example required CRD
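A sketch of a required CRD entry under spec.customresourcedefinitions.required, reusing the etcd group and kind that appear in the CRD templates later in this section; the display name and description are illustrative:

required:
  - name: etcdclusters.etcd.database.coreos.com
    version: v1beta2
    kind: EtcdCluster
    displayName: etcd Cluster
    description: Represents a cluster of etcd nodes used by the application for distributed locking.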
12.4.8.3. CRD templates
Users of your Operator will need to be aware of which options are required versus optional. You can provide templates for each of your Custom Resource Definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples
. Compatible UIs will pre-fill this template for users to further customize.
The annotation consists of a list of the kind
, for example, the CRD name and the corresponding metadata
and spec
of the Kubernetes object.
The following full example provides templates for EtcdCluster, EtcdBackup, and EtcdRestore:
metadata:
annotations:
alm-examples: >-
[{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]
12.4.8.4. Hiding internal objects
It is common practice for Operators to use Custom Resource Definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true
.
If any CRDs are not meant for manipulation by users, they can be hidden in the user interface using the operators.operatorframework.io/internal-objects
annotation in the Operator’s ClusterServiceVersion (CSV):
Internal object annotation: set any internal CRDs as an array of strings, as in the sketch below.
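A minimal sketch, assuming the Replication CRD from the example above is the internal object and a placeholder CSV name; the annotation value is a JSON array of CRD names encoded as a string:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
  annotations:
    operators.operatorframework.io/internal-objects: '["replications.example.com"]'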
Before marking one of your CRDs as internal, make sure that any debugging information or configuration that might be required to manage the application is reflected on the CR’s status or spec
block, if applicable to your Operator.
12.4.9. Understanding your API services
As with CRDs, there are two types of APIServices that your Operator may use: owned and required.
12.4.9.1. Owned APIServices
When a CSV owns an APIService, it is responsible for describing the deployment of the extension api-server
that backs it and the group-version-kinds
it provides.
An APIService is uniquely identified by the group-version
it provides and can be listed multiple times to denote the different kinds it is expected to provide.
Field | Description | Required/Optional |
---|---|---|
|
Group that the APIService provides, for example | Required |
|
Version of the APIService, for example | Required |
| A kind that the APIService is expected to provide. | Required |
| The plural name for the APIService provided | Required |
| Name of the deployment defined by your CSV that corresponds to your APIService (required for owned APIServices). During the CSV pending phase, the OLM Operator searches your CSV’s InstallStrategy for a deployment spec with a matching name, and if not found, does not transition the CSV to the install ready phase. | Required |
|
A human readable version of your APIService name, for example | Required |
| A short description of how this APIService is used by the Operator or a description of the functionality provided by the APIService. | Required |
| Your APIServices own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the Service or Ingress rule that exposes a database. It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, ConfigMaps that store internal state that should not be modified by a user should not appear here. | Optional |
| Essentially the same as for owned CRDs. | Optional |
12.4.9.1.1. APIService Resource Creation
The Operator Lifecycle Manager (OLM) is responsible for creating or replacing the Service and APIService resources for each unique owned APIService:
- Service Pod selectors are copied from the CSV deployment matching the APIServiceDescription’s DeploymentName.
- A new CA key/cert pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective APIService resource.
12.4.9.1.2. APIService Serving Certs
The OLM handles generating a serving key/cert pair whenever an owned APIService is being installed. The serving certificate has a CN containing the host name of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding APIService resource.
The cert is stored as a type kubernetes.io/tls
Secret in the deployment namespace, and a Volume named apiservice-cert
is automatically appended to the Volumes section of the deployment in the CSV matching the APIServiceDescription’s DeploymentName
field.
If one does not already exist, a VolumeMount with a matching name is also appended to all containers of that deployment. This allows users to define a VolumeMount with the expected name to accommodate any custom path requirements. The generated VolumeMount’s path defaults to /apiserver.local.config/certificates
and any existing VolumeMounts with the same path are replaced.
12.4.9.2. Required APIServices
The OLM ensures all required CSVs have an APIService that is available and all expected group-version-kinds
are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by APIServices it does not own.
Field | Description | Required/Optional |
---|---|---|
|
Group that the APIService provides, for example | Required |
|
Version of the APIService, for example | Required |
| A kind that the APIService is expected to provide. | Required |
|
A human readable version of your APIService name, for example | Required |
| A short description of how this APIService is used by the Operator or a description of the functionality provided by the APIService. | Required |
12.5. Validating Operators using the scorecard
Operator authors should validate that their Operator is packaged correctly and free of syntax errors. As an Operator author, you can use the Operator SDK’s scorecard tool to validate your Operator packaging and run tests.
OpenShift Container Platform 4.4 supports Operator SDK v0.15.0.
12.5.1. About the scorecard tool
To validate an Operator, the Operator SDK’s scorecard tool begins by creating all resources required by any related Custom Resources (CRs) and the Operator. The scorecard then creates a proxy container in the Operator’s Deployment which is used to record calls to the API server and run some of the tests. The tests performed also examine some of the parameters in the CRs.
12.5.2. Scorecard configuration
The scorecard tool uses a configuration file that allows you to configure internal plug-ins, as well as several global configuration options.
12.5.2.1. Configuration file
The default location for the scorecard tool’s configuration is <project_dir>/.osdk-scorecard.*. The following is an example of a YAML-formatted configuration file:
Scorecard configuration file
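A sketch of such a file, assuming the basic and olm plug-ins and illustrative manifest paths; adjust the paths to match your project:

scorecard:
  output: json
  plugins:
    - basic:
        cr-manifest:
          - deploy/crds/app_v1alpha1_app_cr.yaml
    - olm:
        cr-manifest:
          - deploy/crds/app_v1alpha1_app_cr.yaml
        csv-path: deploy/olm-catalog/app-operator/app-operator.clusterserviceversion.yaml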
Configuration methods for global options take the following priority, highest to lowest:
- Command arguments (if available)
- Configuration file values
- Defaults
The configuration file must be in YAML format. As the configuration file might be extended to allow configuration of all operator-sdk
subcommands in the future, the scorecard’s configuration must be under a scorecard
subsection.
Configuration file support is provided by the viper
package. For more info on how viper configuration works, see the viper
package README.
12.5.2.2. Command arguments
While most of the scorecard tool’s configuration is done using a configuration file, you can also use the following arguments:
Flag | Type | Description |
---|---|---|
| string | The path to a bundle directory used for the bundle validation test. |
| string |
The path to the scorecard configuration file. The default is |
| string |
Output format. Valid options are |
| string |
The path to the |
| string |
The version of scorecard to run. The default and only valid option is |
| string | The label selector to filter tests on. |
| bool |
If |
12.5.2.3. Configuration file options
The scorecard configuration file provides the following options:
Option | Type | Description |
---|---|---|
| string |
Equivalent of the |
| string |
Equivalent of the |
| string |
Equivalent of the |
| array | An array of plug-in names. |
12.5.2.3.1. Basic and OLM plug-ins
The scorecard supports the internal basic
and olm
plug-ins, which are configured by a plugins
section in the configuration file.
Option | Type | Description |
---|---|---|
| []string |
The path(s) for CRs being tested. Required if |
| string |
The path to the CSV for the Operator. Required for OLM tests or if |
| bool | Indicates that the CSV and relevant CRDs have been deployed onto the cluster by OLM. |
| string |
The path to the |
| string |
The namespace to run the plug-ins in. If unset, the default specified by the |
| int | Time in seconds until a timeout during initialization of the Operator. |
| string | The path to the directory containing CRDs that must be deployed to the cluster. |
| string |
The manifest file with all resources that run within a namespace. By default, the scorecard combines the |
| string |
The manifest containing required resources that run globally (not namespaced). By default, the scorecard combines all CRDs in the |
Currently, using the scorecard with a CSV does not permit multiple CR manifests to be set through the CLI, configuration file, or CSV annotations. You must tear down your Operator in the cluster, re-deploy, and re-run the scorecard for each CR that is tested.
Additional resources
- You can either set cr-manifest or your CSV’s metadata.annotations['alm-examples'] to provide CRs to the scorecard, but not both. See CRD templates for details.
12.5.3. Tests performed
By default, the scorecard tool has eight internal tests available to run across two internal plug-ins. If multiple CRs are specified for a plug-in, the test environment is fully cleaned up after each CR so that each CR gets a clean testing environment.
Each test has a short name that uniquely identifies the test. This is useful when selecting a specific test or tests to run. For example:
$ operator-sdk scorecard -o text --selector=test=checkspectest
$ operator-sdk scorecard -o text --selector='test in (checkspectest,checkstatustest)'
12.5.3.1. Basic plug-in
The following basic Operator tests are available from the basic
plug-in:
Test | Description | Short name |
---|---|---|
Spec Block Exists |
This test checks the Custom Resource(s) created in the cluster to make sure that all CRs have a |
|
Status Block Exists |
This test checks the Custom Resource(s) created in the cluster to make sure that all CRs have a |
|
Writing Into CRs Has An Effect |
This test reads the scorecard proxy’s logs to verify that the Operator is making |
|
12.5.3.2. OLM plug-in
The following OLM integration tests are available from the olm
plug-in:
Test | Description | Short name |
---|---|---|
OLM Bundle Validation | This test validates the OLM bundle manifests found in the bundle directory as specified by the bundle flag. If the bundle contents contain errors, then the test result output includes the validator log as well as error messages from the validation library. |
|
Provided APIs Have Validation |
This test verifies that the CRDs for the provided CRs contain a validation section and that there is validation for each |
|
Owned CRDs Have Resources Listed |
This test makes sure that the CRDs for each CR provided by the |
|
Spec Fields With Descriptors |
This test verifies that every field in the Custom Resources' |
|
Status Fields With Descriptors |
This test verifies that every field in the Custom Resources' |
|
Additional resources
12.5.4. Running the scorecard
Prerequisites
The following prerequisites for the Operator project are checked by the scorecard tool:
- Access to a cluster running Kubernetes 1.11.3 or later.
- If you want to use the scorecard to check the integration of your Operator project with Operator Lifecycle Manager (OLM), then a ClusterServiceVersion (CSV) file is also required. This is a requirement when the olm-deployed option is used.
- For Operators that were not generated using the Operator SDK (non-SDK Operators):
  - Resource manifests for installing and configuring the Operator and CRs.
  - Configuration getter that supports reading from the KUBECONFIG environment variable, such as the clientcmd or controller-runtime configuration getters. This is required for the scorecard proxy to work correctly.
Procedure
- Define a .osdk-scorecard.yaml configuration file in your Operator project.
- Create the namespace defined in the RBAC files (role_binding).
- Run the scorecard from the root directory of your Operator project:

$ operator-sdk scorecard

The scorecard return code is 1 if any of the executed tests did not pass and 0 if all selected tests passed.
12.5.5. Running the scorecard with an OLM-managed Operator
The scorecard can be run using a ClusterServiceVersion (CSV), providing a way to test cluster-ready and non-SDK Operators.
Procedure
The scorecard requires a proxy container in the Operator’s Deployment Pod to read Operator logs. A few modifications to your CSV and creation of one extra object are required to run the proxy before deploying your Operator with OLM.

This step can be performed manually or automated using bash functions. Choose one of the following methods.
Manual method:
Create a proxy server secret containing a local Kubeconfig:
Generate a user name using the scorecard proxy’s namespaced owner reference.
$ echo '{"apiVersion":"","kind":"","name":"scorecard","uid":"","Namespace":"'<namespace>'"}' | base64 -w 0

Replace <namespace> with the namespace your Operator will deploy in.
Write a Config manifest scorecard-config.yaml using the following template, replacing <username> with the base64 user name generated in the previous step.

Encode the Config as base64:

$ cat scorecard-config.yaml | base64 -w 0

Create a Secret manifest scorecard-secret.yaml and apply it:

$ oc apply -f scorecard-secret.yaml

Insert a volume referring to the Secret (the scorecard kubeconfig volume) into the Operator’s Deployment.

Insert a volume mount and KUBECONFIG environment variable into each container in your Operator’s Deployment.

Insert the scorecard proxy container into the Operator’s Deployment.
Automated method:
The community-operators repository has several bash functions that can perform the previous steps in the procedure for you.
- After inserting the proxy container, follow the steps in the Getting started with the Operator SDK guide to bundle your CSV and CRDs and deploy your Operator on OLM.
- After your Operator has been deployed on OLM, define a .osdk-scorecard.yaml configuration file in your Operator project and ensure both the csv-path: <csv_manifest_path> and olm-deployed options are set.
- Run the scorecard with both the csv-path: <csv_manifest_path> and olm-deployed options set in your scorecard configuration file:

$ operator-sdk scorecard
Additional resources
12.6. Configuring built-in monitoring with Prometheus
This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for Operator authors.
12.6.1. Prometheus Operator support
Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.
Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.
12.6.2. Metrics helper
In Go-based Operators generated using the Operator SDK, the following function exposes general metrics about the running program:
func ExposeMetricsPort(ctx context.Context, port int32) (*v1.Service, error)
These metrics are inherited from the controller-runtime
library API. By default, the metrics are served on 0.0.0.0:8383/metrics
.
A Service object is created with the metrics port exposed, which can then be accessed by Prometheus. The Service object is garbage collected when the leader Pod’s root owner is deleted.
The following example is present in the cmd/manager/main.go
file in all Operators generated using the Operator SDK:
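The scaffolded example is not reproduced in this extract; the following is a minimal sketch of the relevant part of cmd/manager/main.go. The use of k8sutil.GetWatchNamespace(), config.GetConfig(), and the error handling are assumptions; ExposeMetricsPort() and the default port are described above:

package main

import (
	"context"
	"fmt"

	"github.com/operator-framework/operator-sdk/pkg/k8sutil"
	"github.com/operator-framework/operator-sdk/pkg/metrics"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

var (
	// Change these values to serve metrics on a different host or port.
	metricsHost       = "0.0.0.0"
	metricsPort int32 = 8383
)

func main() {
	// Namespace the Operator watches, read from the WATCH_NAMESPACE environment variable.
	namespace, err := k8sutil.GetWatchNamespace()
	if err != nil {
		panic(err)
	}

	// Kubernetes client configuration for the cluster the Operator runs against.
	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}

	// Pass the metrics address to the controller-runtime manager.
	mgr, err := manager.New(cfg, manager.Options{
		Namespace:          namespace,
		MetricsBindAddress: fmt.Sprintf("%s:%d", metricsHost, metricsPort),
	})
	if err != nil {
		panic(err)
	}

	// Create a Service object to expose the metrics port; it is garbage
	// collected when the leader Pod's root owner is deleted.
	if _, err := metrics.ExposeMetricsPort(context.TODO(), metricsPort); err != nil {
		panic(err)
	}

	_ = mgr // registration of APIs/controllers and mgr.Start() omitted from this sketch
}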
12.6.2.1. Modifying the metrics port
Operator authors can modify the port that metrics are exposed on.
Prerequisites
- Go-based Operator generated using the Operator SDK
- Kubernetes-based cluster with the Prometheus Operator deployed
Procedure
- In the generated Operator’s cmd/manager/main.go file, change the value of metricsPort in the line var metricsPort int32 = 8383.
12.6.3. ServiceMonitor resources
A ServiceMonitor is a Custom Resource Definition (CRD) provided by the Prometheus Operator that discovers the Endpoints
in Service objects and configures Prometheus to monitor those pods.
In Go-based Operators generated using the Operator SDK, the GenerateServiceMonitor()
helper function can take a Service object and generate a ServiceMonitor Custom Resource (CR) based on it.
Additional resources
- See the Prometheus Operator documentation for more information about the ServiceMonitor CRD.
12.6.3.1. Creating ServiceMonitor resources
Operator authors can add target discovery for their created monitoring Services by using the metrics.CreateServiceMonitor() helper function, which accepts the newly created Service.
Prerequisites
- Go-based Operator generated using the Operator SDK
- Kubernetes-based cluster with the Prometheus Operator deployed
Procedure
Add the metrics.CreateServiceMonitor() helper function to your Operator code:
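A minimal sketch of wiring the helper into main.go. Note that in upstream SDK versions the helper is exposed as metrics.CreateServiceMonitors() (plural), taking a rest.Config, a namespace, and a slice of Services; verify the exact name and signature against your SDK version:

package main

import (
	"context"

	"github.com/operator-framework/operator-sdk/pkg/metrics"
	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	// Kubernetes client configuration used to talk to the API server.
	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}

	// The metrics Service created earlier, for example by metrics.ExposeMetricsPort().
	service, err := metrics.ExposeMetricsPort(context.TODO(), 8383)
	if err != nil {
		panic(err)
	}

	// Namespace in which the ServiceMonitor resources are created.
	ns := "default"

	// Generate and create ServiceMonitor resources for the metrics Service.
	if _, err := metrics.CreateServiceMonitors(cfg, ns, []*v1.Service{service}); err != nil {
		panic(err)
	}
}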
12.7. Configuring leader election
During the lifecycle of an Operator, more than one instance might be running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, it is necessary to avoid contention between multiple Operator instances by using leader election, which ensures that only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.
There are two different leader election implementations to choose from, each with its own trade-off:
- Leader-for-life: The leader Pod only gives up leadership (using garbage collection) when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders (split brain). However, this method can be subject to a delay in electing a new leader. For example, when the leader Pod is on an unresponsive or partitioned node, the pod-eviction-timeout dictates how long it takes for the leader Pod to be deleted from the node and step down (default 5m). See the Leader-for-life Go documentation for more.
- Leader-with-lease: The leader Pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.
By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.
The following examples illustrate how to use the two options.
12.7.1. Using Leader-for-life election
With the Leader-for-life election implementation, a call to leader.Become()
blocks the Operator as it retries until it can become the leader by creating the ConfigMap named memcached-operator-lock
:
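A minimal sketch of the call, assuming the memcached-operator example; the error handling is an assumption:

package main

import (
	"context"

	"github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
	// Block until this instance becomes the leader by creating the
	// memcached-operator-lock ConfigMap in the Operator's namespace.
	if err := leader.Become(context.TODO(), "memcached-operator-lock"); err != nil {
		panic(err)
	}

	// ... set up and start the manager as usual ...
}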
If the Operator is not running inside a cluster, leader.Become()
simply returns without error to skip the leader election since it cannot detect the Operator’s namespace.
12.7.2. Using Leader-with-lease election
The Leader-with-lease implementation can be enabled using the Manager Options for leader election:
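A minimal sketch using the controller-runtime Manager options; the lock ID and namespace values are assumptions for the memcached-operator example:

package main

import (
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}

	// Enable lease-based leader election through the Manager options.
	// LeaderElectionNamespace only needs to be set when running outside a cluster.
	mgr, err := manager.New(cfg, manager.Options{
		LeaderElection:          true,
		LeaderElectionID:        "memcached-operator-lock",
		LeaderElectionNamespace: "memcached-operator",
	})
	if err != nil {
		panic(err)
	}

	_ = mgr // register APIs/controllers and call mgr.Start() as usual
}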
When the Operator is not running in a cluster, the Manager returns an error when starting since it cannot detect the Operator’s namespace in order to create the ConfigMap for leader election. You can override this namespace by setting the Manager’s LeaderElectionNamespace
option.
12.8. Operator SDK CLI reference
This guide documents the Operator SDK CLI commands and their syntax:
$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]
12.8.1. build
The operator-sdk build command compiles the code and builds the executables. After build completes, the image is built locally in Docker. It must then be pushed to a remote registry.
Argument | Description
---|---
<image> | The container image to be built.
Flag | Description
---|---
--enable-tests | Enable in-cluster testing by adding the test binary to the image.
 | Path of namespaced resources manifest for tests. Default:
 | Location of tests. Default:
-h, --help | Usage help output.
If --enable-tests
is set, the build
command also builds the testing binary, adds it to the container image, and generates a deploy/test-pod.yaml
file that allows a user to run the tests as a Pod on a cluster.
Example output
12.8.2. completion
The operator-sdk completion
command generates shell completions to make issuing CLI commands quicker and easier.
Subcommand | Description
---|---
bash | Generate bash completions.
zsh | Generate zsh completions.

Flag | Description
---|---
-h, --help | Usage help output.
Example output
$ operator-sdk completion bash
# bash completion for operator-sdk -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh
12.8.3. print-deps
The operator-sdk print-deps
command prints the most recent Golang packages and versions required by Operators. It prints in columnar format by default.
Flag | Description
---|---
 | Print packages and versions in file format.
Example output
12.8.4. generate
The operator-sdk generate
command invokes a specific generator to generate code as needed.
12.8.4.1. crds
The generate crds
subcommand generates CRDs or updates them if they exist, under deploy/crds/__crd.yaml
. OpenAPI V3 validation YAML is generated as a validation
object.
Flag | Description
---|---
 | CRD version to generate. Default:
-h, --help | Help for generate crds.
Example output
$ operator-sdk generate crds
$ tree deploy/crds
├── deploy/crds/app.example.com_v1alpha1_appservice_cr.yaml
└── deploy/crds/app.example.com_appservices_crd.yaml
12.8.4.2. csv
The csv
subcommand writes a Cluster Service Version (CSV) manifest for use with Operator Lifecycle Manager (OLM). It also optionally writes Custom Resource Definition (CRD) files to deploy/olm-catalog/<operator_name>/<csv_version>
.
Flag | Description
---|---
 | The channel the CSV should be registered under in the package manifest.
 | The path to the CSV configuration file. Default:
 | The semantic version of the CSV manifest. Required.
 | Use the channel passed to
 | The semantic version of the CSV manifest to use as a base for a new version.
 | The Operator name to use while generating the CSV.
 | Updates CRD manifests in
Example output
12.8.4.3. k8s
The k8s
subcommand runs the Kubernetes code-generators for all CRD APIs under pkg/apis/
. Currently, k8s
only runs deepcopy-gen
to generate the required DeepCopy()
functions for all Custom Resource (CR) types.
This command must be run every time the API (spec
and status
) for a custom resource type is updated.
Example output
12.8.5. new
The operator-sdk new
command creates a new Operator application and generates (or scaffolds) a default project directory layout based on the input <project_name>
.
Argument | Description
---|---
<project_name> | Name of the new project.
Flag | Description
---|---
--api-version | CRD APIVersion, for example app.example.com/v1alpha1.
 | Generate an Ansible playbook skeleton. Used with --type=ansible.
 | Path to file containing headers for generated Go files. Copied to
 | Initialize Helm operator with existing Helm chart:
 | Chart repository URL for the requested Helm chart.
 | Specific version of the Helm chart. (Default: latest version)
-h, --help | Usage and help output.
--kind | CRD kind, for example AppService.
 | Do not initialize the directory as a Git repository.
--type | Type of Operator to initialize: go, ansible, or helm. Default: go.
Starting with Operator SDK v0.12.0, the --dep-manager
flag and support for dep
-based projects have been removed. Go projects are now scaffolded to use Go modules.
Example usage for Go project
$ mkdir $GOPATH/src/github.com/example.com/
$ cd $GOPATH/src/github.com/example.com/
$ operator-sdk new app-operator
Example usage for Ansible project
$ operator-sdk new app-operator \
--type=ansible \
--api-version=app.example.com/v1alpha1 \
--kind=AppService
12.8.6. add
The operator-sdk add
command adds a controller or resource to the project. The command must be run from the Operator project root directory.
Subcommand | Description
---|---
api | Adds a new API definition for a new Custom Resource (CR) under
controller | Adds a new controller under
crd | Adds a CRD and the CR files.
Flag | Description
---|---
--api-version | CRD APIVersion, for example app.example.com/v1alpha1.
--kind | CRD kind, for example AppService.
Example add api
output
Example add controller
output
Example add crd
output
$ operator-sdk add crd --api-version app.example.com/v1alpha1 --kind AppService
Generating Custom Resource Definition (CRD) files
Create deploy/crds/app_v1alpha1_appservice_crd.yaml
Create deploy/crds/app_v1alpha1_appservice_cr.yaml
12.8.7. test
The operator-sdk test
command can test the Operator locally.
12.8.7.1. local
The local
subcommand runs Go tests built using the Operator SDK’s test framework locally.
Arguments | Description
---|---
 | Location of e2e test files (e.g., ./test/e2e/).

Flags | Description
---|---
 | Location of
 | Path to manifest for global resources. Default:
 | Path to manifest for per-test, namespaced resources. Default: combines
 | If non-empty, a single namespace to run tests in (e.g.,
 | Extra arguments to pass to
 | Enable running the Operator locally with
 | Disable test resource creation.
 | Use a different Operator image from the one specified in the namespaced manifest.
-h, --help | Usage help output.
Example output
$ operator-sdk test local ./test/e2e/
# Output:
ok github.com/operator-framework/operator-sdk-samples/memcached-operator/test/e2e 20.410s
12.8.8. run
The operator-sdk run
command provides options that can launch the Operator in various environments.
Arguments | Description
---|---
--kubeconfig | The file path to a Kubernetes configuration file. Defaults:
--local | The Operator is run locally by building the Operator binary with the ability to access a Kubernetes cluster using a kubeconfig file.
--namespace | The namespace where the Operator watches for changes. Default:
--operator-flags | Flags that the local Operator may need. Example:
-h, --help | Usage help output.
12.8.8.1. --local
The --local
flag launches the Operator on the local machine by building the Operator binary with the ability to access a Kubernetes cluster using a kubeconfig
file.
Example output
$ operator-sdk run --local \
--kubeconfig "mycluster.kubecfg" \
--namespace "default" \
--operator-flags "--flag1 value1 --flag2=value2"
The following example uses the default kubeconfig
, the default namespace environment variable, and passes in flags for the Operator. To use the Operator flags, your Operator must know how to handle the option. For example, for an Operator that understands the resync-interval
flag:
operator-sdk run --local --operator-flags "--resync-interval 10"
$ operator-sdk run --local --operator-flags "--resync-interval 10"
If you are planning on using a different namespace than the default, use the --namespace
flag to change where the Operator is watching for Custom Resources (CRs) to be created:
operator-sdk run --local --namespace "testing"
$ operator-sdk run --local --namespace "testing"
For this to work, your Operator must handle the WATCH_NAMESPACE
environment variable. This can be accomplished using the utility function k8sutil.GetWatchNamespace
in your Operator.
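A minimal sketch of reading the watched namespace with the k8sutil helper; the logging via fmt is an assumption:

package main

import (
	"fmt"

	"github.com/operator-framework/operator-sdk/pkg/k8sutil"
)

func main() {
	// Read the namespace to watch from the WATCH_NAMESPACE environment
	// variable, which `operator-sdk run --local --namespace <ns>` sets.
	namespace, err := k8sutil.GetWatchNamespace()
	if err != nil {
		panic(err)
	}
	fmt.Println("watching namespace:", namespace)
}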
12.9. Appendices
12.9.1. Operator project scaffolding layout
The operator-sdk
CLI generates a number of packages for each Operator project. The following sections provide a basic rundown of each generated file and directory.
12.9.1.1. Go-based projects
Go-based Operator projects (the default type) generated using the operator-sdk new
command contain the following directories and files:
File/folders | Purpose
---|---
cmd/ | Contains
pkg/apis/ | Contains the directory tree that defines the APIs of the Custom Resource Definitions (CRDs). Users are expected to edit the
pkg/controller/ | This
build/ | Contains the
deploy/ | Contains various YAML manifests for registering CRDs, setting up RBAC, and deploying the Operator as a Deployment.
Gopkg.toml, Gopkg.lock | The Go Dep manifests that describe the external dependencies of this Operator.
vendor/ | The golang vendor folder that contains the local copies of the external dependencies that satisfy the imports of this project. Go Dep manages the vendor directly.
12.9.1.2. Helm-based projects
Helm-based Operator projects generated using the operator-sdk new --type helm
command contain the following directories and files:
File/folders | Purpose
---|---
deploy/ | Contains various YAML manifests for registering CRDs, setting up RBAC, and deploying the Operator as a Deployment.
helm-charts/ | Contains a Helm chart initialized using the equivalent of the
build/ | Contains the
watches.yaml | Contains