4.6. Validating Operators using the scorecard
Operator authors should validate that their Operator is packaged correctly and free of syntax errors. As an Operator author, you can use the Operator SDK scorecard tool to validate your Operator packaging and run tests.
OpenShift Container Platform 4.5 supports Operator SDK v0.17.2.
4.6.1. About the scorecard tool
To validate an Operator, the scorecard tool provided by the Operator SDK begins by creating all resources required by the Operator and its related custom resources (CRs). The scorecard then creates a proxy container in the Operator deployment, which is used to record calls to the API server and run some of the tests. The tests performed also examine some of the parameters in the CRs.
4.6.2. Scorecard configuration
The scorecard tool uses a configuration file that allows you to configure internal plug-ins, as well as several global configuration options.
4.6.2.1. Configuration file
The default location for the scorecard tool configuration file is `<project_dir>/.osdk-scorecard.*`. The following is an example of a YAML-formatted configuration file:
Scorecard configuration file
```yaml
scorecard:
  output: json
  plugins:
    - basic:
        cr-manifest:
          - "deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml"
          - "deploy/crds/cache.example.com_v1alpha1_memcachedrs_cr.yaml"
    - olm:
        cr-manifest:
          - "deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml"
          - "deploy/crds/cache.example.com_v1alpha1_memcachedrs_cr.yaml"
        csv-path: "deploy/olm-catalog/memcached-operator/0.0.3/memcached-operator.v0.0.3.clusterserviceversion.yaml"
```
Configuration methods for global options take the following priority, highest to lowest:

- Command arguments (if available)
- Configuration file (if available)
- Default
The configuration file must be in YAML format. Because the configuration file might be extended to allow configuration of all `operator-sdk` subcommands in the future, the scorecard configuration must be under a `scorecard` subsection.

Configuration file support is provided by the `viper` package. For more information on how `viper` configuration works, see its README.
4.6.2.2. Command arguments
While most of the scorecard tool configuration is done using a configuration file, you can also use the following arguments:
Flag | Type | Description
---|---|---
`--bundle`, `-b` | string | The path to a bundle directory used for the bundle validation test.
`--config` | string | The path to the scorecard configuration file. The default is `<project_dir>/.osdk-scorecard`.
`--output`, `-o` | string | Output format. Valid options are `text` and `json`. The default is `text`.
`--kubeconfig` | string | The path to the `kubeconfig` file. It sets the `kubeconfig` internally for the scorecard.
`--version` | string | The version of scorecard to run. The default and only valid option is `v1alpha2`.
`--selector`, `-l` | string | The label selector to filter tests on.
`--list`, `-L` | bool | If `true`, only print the tests that would be run based on the selector filter.
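Because command arguments take the highest priority, flags can override configured values at run time. For example, the following hypothetical invocation switches the output format to JSON and filters to the OLM tests, assuming those tests are labeled `suite=olm`:

$ operator-sdk scorecard -o json --selector=suite=olm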
4.6.2.3. Configuration file options
The scorecard configuration file provides the following options:
Option | Type | Description
---|---|---
`bundle` | string | Equivalent of the `--bundle` flag. The OLM bundle directory path; when specified, runs bundle validation.
`output` | string | Equivalent of the `--output` flag. If this option is defined by both the configuration file and the flag, the flag's value takes priority.
`kubeconfig` | string | Equivalent of the `--kubeconfig` flag. If this option is defined by both the configuration file and the flag, the flag's value takes priority.
`plugins` | array | An array of plug-in names.
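As a sketch of how these options fit together, global options sit directly under the `scorecard` key, alongside the `plugins` array; the paths below are illustrative:

```yaml
scorecard:
  output: text                       # equivalent of the --output flag
  kubeconfig: "/path/to/kubeconfig"  # equivalent of the --kubeconfig flag
  bundle: "deploy/olm-catalog/memcached-operator"  # equivalent of the --bundle flag
  plugins:
    - basic:
        cr-manifest:
          - "deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml"
```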
4.6.2.3.1. Basic and OLM plug-ins
The scorecard supports the internal `basic` and `olm` plug-ins, which are configured by a `plugins` section in the configuration file.
Option | Type | Description
---|---|---
`cr-manifest` | []string | The path(s) for CRs being tested. Required if `olm-deployed` is unset or `false`.
`csv-path` | string | The path to the cluster service version (CSV) for the Operator. Required for OLM tests or if `olm-deployed` is set to `true`.
`olm-deployed` | bool | Indicates that the CSV and relevant CRDs have been deployed onto the cluster by OLM.
`kubeconfig` | string | The path to the `kubeconfig` file. If both the global `kubeconfig` option and this option are set, this option is used for the plug-in tests.
`namespace` | string | The namespace to run the plug-ins in. If unset, the default specified by the `kubeconfig` file is used.
`init-timeout` | int | Time in seconds until a timeout during initialization of the Operator.
`crds-dir` | string | The path to the directory containing CRDs that must be deployed to the cluster.
`namespaced-manifest` | string | The manifest file with all resources that run within a namespace. By default, the scorecard combines the `service_account.yaml`, `role.yaml`, `role_binding.yaml`, and `operator.yaml` files from the `deploy` directory into a temporary manifest to use as the namespaced manifest.
`global-manifest` | string | The manifest containing required resources that run globally (not namespaced). By default, the scorecard combines all CRDs in the `crds-dir` directory into a temporary manifest to use as the global manifest.
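Combining several of these plug-in options, a `plugins` entry might look like the following sketch; the namespace, timeout, and paths are illustrative values, not defaults:

```yaml
scorecard:
  plugins:
    - basic:
        cr-manifest:
          - "deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml"
        namespace: scorecard-test  # run the plug-in tests in this namespace
        init-timeout: 60           # allow 60 seconds for Operator initialization
        crds-dir: "deploy/crds"    # CRDs that must be deployed to the cluster
```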
Currently, using the scorecard with a CSV does not permit multiple CR manifests to be set through the CLI, configuration file, or CSV annotations. You must tear down your Operator in the cluster, re-deploy, and re-run the scorecard for each CR that is tested.
Additional resources

- You can either set `cr-manifest` or your CSV `metadata.annotations['alm-examples']` to provide CRs to the scorecard, but not both. See CRD templates for details.
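For example, instead of listing `cr-manifest` paths, the same CRs can be embedded in the CSV through the `alm-examples` annotation. A minimal sketch using the hypothetical Memcached CR from the configuration example:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: memcached-operator.v0.0.3
  annotations:
    # JSON array of example CRs; the scorecard uses these when cr-manifest is not set
    alm-examples: |-
      [
        {
          "apiVersion": "cache.example.com/v1alpha1",
          "kind": "Memcached",
          "metadata": {"name": "example-memcached"},
          "spec": {"size": 3}
        }
      ]
```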
4.6.3. Tests performed
By default, the scorecard tool provides a set of internal tests spread across two internal plug-ins. If multiple CRs are specified for a plug-in, the test environment is fully cleaned up after each CR so that each CR gets a clean testing environment.
Each test has a short name that uniquely identifies the test. This is useful when selecting a specific test or tests to run. For example:
$ operator-sdk scorecard -o text --selector=test=checkspectest
$ operator-sdk scorecard -o text --selector='test in (checkspectest,checkstatustest)'
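To preview which tests a selector matches without running them, you can combine the selector with the `--list` flag from the flag table above; `suite=basic` here assumes the basic tests also carry a suite label:

$ operator-sdk scorecard -o text --list --selector=suite=basic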
4.6.3.1. Basic plug-in
The following basic Operator tests are available from the `basic` plug-in:
Test | Description | Short name
---|---|---
Spec Block Exists | This test checks the custom resources (CRs) created in the cluster to make sure that all CRs have a `spec` block. | `checkspectest`
Status Block Exists | This test checks the CRs created in the cluster to make sure that all CRs have a `status` block. | `checkstatustest`
Writing Into CRs Has An Effect | This test reads the scorecard proxy logs to verify that the Operator is making `PUT` or `POST` requests to the API server, indicating resource modification. | `writingintocrshaseffecttest`
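For illustration, a CR shaped like the following would pass the first two tests once the Operator populates its status; the `Memcached` kind and its fields are hypothetical:

```yaml
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: example-memcached
spec:         # presence of this block satisfies checkspectest
  size: 3
status:       # written by the Operator; presence satisfies checkstatustest
  nodes:
    - example-memcached-0
    - example-memcached-1
    - example-memcached-2
```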
4.6.3.2. OLM plug-in
The following Operator Lifecycle Manager (OLM) integration tests are available from the `olm` plug-in:
Test | Description | Short name
---|---|---
OLM Bundle Validation | This test validates the OLM bundle manifests found in the bundle directory as specified by the `bundle` flag. If the bundle contents contain errors, the test result output includes the validator log as well as error messages from the validation library. | `bundlevalidationtest`
Provided APIs Have Validation | This test verifies that the CRDs for the provided CRs contain a validation section and that there is validation for each `spec` and `status` field detected in the CR. | `crdshavevalidationtest`
Owned CRDs Have Resources Listed | This test makes sure that the CRDs for each CR provided by the `cr-manifest` option have a `resources` subsection in the `owned` CRDs section of the CSV. If the test detects used resources that are not listed in the `resources` section, it lists them in the suggestions at the end of the test. | `crdshaveresourcestest`
Spec Fields With Descriptors | This test verifies that every field in the `spec` sections of the CRs has a corresponding descriptor listed in the CSV. | `specdescriptorstest`
Status Fields With Descriptors | This test verifies that every field in the `status` sections of the CRs has a corresponding descriptor listed in the CSV. | `statusdescriptorstest`
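As a sketch of what the resource and descriptor tests inspect, the `owned` CRDs section of a CSV might look like the following; the Memcached fields are hypothetical:

```yaml
spec:
  customresourcedefinitions:
    owned:
      - name: memcacheds.cache.example.com
        version: v1alpha1
        kind: Memcached
        resources:             # checked by crdshaveresourcestest
          - kind: Deployment
            version: v1
          - kind: Pod
            version: v1
        specDescriptors:       # checked by specdescriptorstest
          - path: size
            displayName: Size
            description: The desired number of Memcached replicas.
        statusDescriptors:     # checked by statusdescriptorstest
          - path: nodes
            displayName: Nodes
            description: The names of the Memcached pods.
```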
4.6.4. Running the scorecard
Prerequisites
The following prerequisites for the Operator project are checked by the scorecard tool:
- Access to a cluster running Kubernetes 1.11.3 or later.
- If you want to use the scorecard to check the integration of your Operator project with Operator Lifecycle Manager (OLM), then a cluster service version (CSV) file is also required. This is a requirement when the `olm-deployed` option is used.
- For Operators that were not generated using the Operator SDK (non-SDK Operators):
    - Resource manifests for installing and configuring the Operator and custom resources (CRs).
    - A configuration getter that supports reading from the `KUBECONFIG` environment variable, such as the `clientcmd` or `controller-runtime` configuration getters. This is required for the scorecard proxy to work correctly.
Procedure
1. Define a `.osdk-scorecard.yaml` configuration file in your Operator project.
2. Create the namespace defined in the RBAC files (`role_binding`).
3. Run the scorecard from the root directory of your Operator project:
$ operator-sdk scorecard
The scorecard return code is `1` if any of the executed tests did not pass and `0` if all selected tests passed.
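Because of this convention, the scorecard drops directly into a CI gate. A minimal sketch, assuming a `.osdk-scorecard.yaml` file is already defined in the project root:

```sh
# Run the scorecard and fail the build on any failed test
if ! operator-sdk scorecard -o json > scorecard-results.json; then
    echo "scorecard reported failing tests; see scorecard-results.json" >&2
    exit 1
fi
```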
4.6.5. Running the scorecard with an OLM-managed Operator
The scorecard can be run using a cluster service version (CSV), providing a way to test cluster-ready and non-Operator SDK Operators.
Procedure
The scorecard requires a proxy container in the deployment pod of the Operator to read Operator logs. A few modifications to your CSV and creation of one extra object are required to run the proxy before deploying your Operator with Operator Lifecycle Manager (OLM).
This step can be performed manually or automated using bash functions. Choose one of the following methods.
Manual method:
1. Create a proxy server secret containing a local `kubeconfig` file:

    a. Generate a user name using the namespaced owner reference of the scorecard proxy:
$ echo '{"apiVersion":"","kind":"","name":"scorecard","uid":"","Namespace":"'<namespace>'"}' | base64 -w 0 1
1: Replace `<namespace>` with the namespace your Operator will deploy in.
    b. Write a `Config` manifest `scorecard-config.yaml` using the following template, replacing `<username>` with the base64 user name generated in the previous step:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: http://<username>@localhost:8889
  name: proxy-server
contexts:
- context:
    cluster: proxy-server
    user: admin/proxy-server
  name: <namespace>/proxy-server
current-context: <namespace>/proxy-server
preferences: {}
users:
- name: admin/proxy-server
  user:
    username: <username>
    password: unused
```
    c. Encode the `Config` as base64:

$ cat scorecard-config.yaml | base64 -w 0
    d. Create a `Secret` manifest `scorecard-secret.yaml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: scorecard-kubeconfig
  namespace: <namespace> 1
data:
  kubeconfig: <kubeconfig_base64> 2
```

1: Replace `<namespace>` with the namespace your Operator will deploy in.
2: Replace `<kubeconfig_base64>` with the base64-encoded `Config` from the previous step.
    e. Apply the secret:
$ oc apply -f scorecard-secret.yaml
2. Insert a volume referring to the secret into the deployment for the Operator:

```yaml
spec:
  install:
    spec:
      deployments:
      - name: memcached-operator
        spec:
          ...
          template:
            ...
            spec:
              containers:
              ...
              volumes:
              - name: scorecard-kubeconfig 1
                secret:
                  secretName: scorecard-kubeconfig
                  items:
                  - key: kubeconfig
                    path: config
```
1: Scorecard `kubeconfig` volume.
3. Insert a volume mount and `KUBECONFIG` environment variable into each container in the deployment of your Operator:

```yaml
spec:
  install:
    spec:
      deployments:
      - name: memcached-operator
        spec:
          ...
          template:
            ...
            spec:
              containers:
              - name: container1
                ...
                volumeMounts:
                - name: scorecard-kubeconfig 1
                  mountPath: /scorecard-secret
                env:
                - name: KUBECONFIG 2
                  value: /scorecard-secret/config
              - name: container2 3
                ...
```

1: Scorecard `kubeconfig` volume mount.
2: `KUBECONFIG` environment variable pointing at the mounted `kubeconfig` file.
3: Repeat for each additional container in the deployment.
4. Insert the scorecard proxy container into the deployment of your Operator:

```yaml
spec:
  install:
    spec:
      deployments:
      - name: memcached-operator
        spec:
          ...
          template:
            ...
            spec:
              containers:
              ...
              - name: scorecard-proxy 1
                command:
                - scorecard-proxy
                env:
                - name: WATCH_NAMESPACE
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                image: quay.io/operator-framework/scorecard-proxy:master
                imagePullPolicy: Always
                ports:
                - name: proxy
                  containerPort: 8889
```
1: Scorecard proxy container.
Automated method:
The `community-operators` repository has several bash functions that can perform the previous steps in the procedure for you.

1. Run the following `curl` command:

$ curl -Lo csv-manifest-modifiers.sh \
    https://raw.githubusercontent.com/operator-framework/community-operators/master/scripts/lib/file
2. Source the `csv-manifest-modifiers.sh` file:

$ . ./csv-manifest-modifiers.sh
3. Create the `kubeconfig` secret file:

$ create_kubeconfig_secret_file scorecard-secret.yaml "<namespace>" 1
1: Replace `<namespace>` with the namespace your Operator will deploy in.
4. Apply the secret:
$ oc apply -f scorecard-secret.yaml
5. Insert the `kubeconfig` volume:

$ insert_kubeconfig_volume "<csv_file>" 1
1: Replace `<csv_file>` with the path to your CSV manifest.
6. Insert the `kubeconfig` secret mount:

$ insert_kubeconfig_secret_mount "<csv_file>"
7. Insert the proxy container:
$ insert_proxy_container "<csv_file>" "quay.io/operator-framework/scorecard-proxy:master"
- After inserting the proxy container, follow the steps in the Getting started with the Operator SDK guide to bundle your CSV and custom resource definitions (CRDs) and deploy your Operator on OLM.
- After your Operator has been deployed on OLM, define a `.osdk-scorecard.yaml` configuration file in your Operator project and ensure both the `csv-path: <csv_manifest_path>` and `olm-deployed` options are set.
- Run the scorecard with both the `csv-path: <csv_manifest_path>` and `olm-deployed` options set in your scorecard configuration file:

$ operator-sdk scorecard
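For reference, a minimal sketch of such a configuration, reusing the CSV path from the earlier example:

```yaml
scorecard:
  output: text
  plugins:
    - olm:
        olm-deployed: true  # the CSV and CRDs are already on the cluster via OLM
        csv-path: "deploy/olm-catalog/memcached-operator/0.0.3/memcached-operator.v0.0.3.clusterserviceversion.yaml"
```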