
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator


3.1. Release notes

3.1.1. Custom Metrics Autoscaler Operator release notes

The release notes for the Custom Metrics Autoscaler Operator for Red Hat OpenShift describe new features and enhancements, deprecated features, and known issues.

The Custom Metrics Autoscaler Operator uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the OpenShift Dedicated horizontal pod autoscaler (HPA).

Note

The Custom Metrics Autoscaler Operator for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Dedicated. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

3.1.1.1. Supported versions

The following table defines the Custom Metrics Autoscaler Operator versions for each OpenShift Dedicated version.

Version    OpenShift Dedicated version    General availability
2.14.1     4.16                           General availability
2.14.1     4.15                           General availability
2.14.1     4.14                           General availability
2.14.1     4.13                           General availability
2.14.1     4.12                           General availability

3.1.1.2. Custom Metrics Autoscaler Operator 2.14.1-467 release notes

This release of the Custom Metrics Autoscaler Operator 2.14.1-467 provides a CVE fix and a bug fix for running the Operator in an OpenShift Dedicated cluster. The following advisory is available: RHSA-2024:7348.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA).

3.1.1.2.1. Bug fixes
  • Previously, the root file system of the Custom Metrics Autoscaler Operator pod was writable, which is unnecessary and could present security issues. This update makes the pod root file system read-only, which addresses the potential security issue. (OCPBUGS-37989)

3.1.2. Release notes for past releases of the Custom Metrics Autoscaler Operator

The following release notes are for previous versions of the Custom Metrics Autoscaler Operator.

For the current version, see Custom Metrics Autoscaler Operator release notes.

3.1.2.1. Custom Metrics Autoscaler Operator 2.14.1-454 release notes

This release of the Custom Metrics Autoscaler Operator 2.14.1-454 provides a CVE fix, a new feature, and bug fixes for running the Operator in an OpenShift Dedicated cluster. The following advisory is available: RHBA-2024:5865.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA).

3.1.2.1.1. New features and enhancements
3.1.2.1.1.1. Support for the Cron trigger with the Custom Metrics Autoscaler Operator

The Custom Metrics Autoscaler Operator can now use the Cron trigger to scale pods based on an hourly schedule. When your specified time frame starts, the Custom Metrics Autoscaler Operator scales pods to your desired amount. When the time frame ends, the Operator scales back down to the previous level.

For more information, see Understanding the Cron trigger.

3.1.2.1.2. Bug fixes
  • Previously, if you made changes to audit configuration parameters in the KedaController custom resource, the keda-metrics-server-audit-policy config map would not get updated. As a consequence, you could not change the audit configuration parameters after the initial deployment of the Custom Metrics Autoscaler. With this fix, changes to the audit configuration now render properly in the config map, allowing you to change the audit configuration any time after installation. (OCPBUGS-32521)

3.1.2.2. Custom Metrics Autoscaler Operator 2.13.1-421 release notes

This release of the Custom Metrics Autoscaler Operator 2.13.1-421 provides a new feature and a bug fix for running the Operator in an OpenShift Dedicated cluster. The following advisory is available: RHBA-2024:4837.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA).

3.1.2.2.1. New features and enhancements
3.1.2.2.1.1. Support for custom certificates with the Custom Metrics Autoscaler Operator

The Custom Metrics Autoscaler Operator can now use custom service CA certificates to connect securely to TLS-enabled metrics sources, such as an external Kafka cluster or an external Prometheus service. By default, the Operator uses automatically-generated service certificates to connect to on-cluster services only. There is a new field in the KedaController object that allows you to load custom server CA certificates for connecting to external services by using config maps.

For more information, see Custom CA certificates for the Custom Metrics Autoscaler.

3.1.2.2.2. Bug fixes
  • Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. Note that scaled objects containing cron triggers are not supported for the custom metrics autoscaler in this release. (OCPBUGS-34018)

3.1.2.3. Custom Metrics Autoscaler Operator 2.12.1-394 release notes

This release of the Custom Metrics Autoscaler Operator 2.12.1-394 provides bug fixes for running the Operator in an OpenShift Dedicated cluster. The following advisory is available: RHSA-2024:2901.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA).

3.1.2.3.1. Bug fixes
  • Previously, the protojson.Unmarshal function entered into an infinite loop when unmarshaling certain forms of invalid JSON. This condition could occur when unmarshaling into a message that contains a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option is set. This release fixes this issue. (OCPBUGS-30305)
  • Previously, when parsing a multipart form, either explicitly with the Request.ParseMultipartForm method or implicitly with the Request.FormValue, Request.PostFormValue, or Request.FormFile method, the limits on the total size of the parsed form were not applied to the memory consumed. This could cause memory exhaustion. With this fix, the parsing process now correctly limits the maximum size of form lines while reading a single form line. (OCPBUGS-30360)
  • Previously, when following an HTTP redirect to a domain that is not a subdomain match or an exact match of the initial domain, an HTTP client would not forward sensitive headers, such as Authorization or Cookie. For example, a redirect from example.com to www.example.com would forward the Authorization header, but a redirect to www.example.org would not forward the header. This release fixes this issue. (OCPBUGS-30365)
  • Previously, verifying a certificate chain that contains a certificate with an unknown public key algorithm caused the certificate verification process to panic. This condition affected all crypto and Transport Layer Security (TLS) clients and servers that set the Config.ClientAuth parameter to the VerifyClientCertIfGiven or RequireAndVerifyClientCert value. The default behavior is for TLS servers to not verify client certificates. This release fixes this issue. (OCPBUGS-30370)
  • Previously, if errors returned from the MarshalJSON method contained user-controlled data, an attacker could have used the data to break the contextual auto-escaping behavior of the HTML template package. This condition would allow for subsequent actions to inject unexpected content into the templates. This release fixes this issue. (OCPBUGS-30397)
  • Previously, the net/http and golang.org/x/net/http2 Go packages did not limit the number of CONTINUATION frames for an HTTP/2 request. This condition could result in excessive CPU consumption. This release fixes this issue. (OCPBUGS-30894)

3.1.2.4. Custom Metrics Autoscaler Operator 2.12.1-384 release notes

This release of the Custom Metrics Autoscaler Operator 2.12.1-384 provides a bug fix for running the Operator in an OpenShift Dedicated cluster. The following advisory is available: RHBA-2024:2043.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA.

3.1.2.4.1. Bug fixes
  • Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. (OCPBUGS-32395)

3.1.2.5. Custom Metrics Autoscaler Operator 2.12.1-376 release notes

This release of the Custom Metrics Autoscaler Operator 2.12.1-376 provides security updates and bug fixes for running the Operator in an OpenShift Dedicated cluster. The following advisory is available: RHSA-2024:1812.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA.

3.1.2.5.1. Bug fixes
  • Previously, if invalid values such as nonexistent namespaces were specified in scaled object metadata, the underlying scaler clients would not free, or close, their client descriptors, resulting in a slow memory leak. This fix properly closes the underlying client descriptors when there are errors, preventing memory from leaking. (OCPBUGS-30145)
  • Previously, the ServiceMonitor custom resource (CR) for the keda-metrics-apiserver pod was not functioning, because the CR referenced an incorrect metrics port name of http. This fix corrects the ServiceMonitor CR to reference the proper port name of metrics. As a result, the ServiceMonitor functions properly. (OCPBUGS-25806)

3.1.2.6. Custom Metrics Autoscaler Operator 2.11.2-322 release notes

This release of the Custom Metrics Autoscaler Operator 2.11.2-322 provides security updates and bug fixes for running the Operator in an OpenShift Dedicated cluster. The following advisory is available: RHSA-2023:6144.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA.

3.1.2.6.1. Bug fixes
  • Because the Custom Metrics Autoscaler Operator version 2.11.2-311 was released without a required volume mount in the Operator deployment, the Custom Metrics Autoscaler Operator pod would restart every 15 minutes. This fix adds the required volume mount to the Operator deployment. As a result, the Operator no longer restarts every 15 minutes. (OCPBUGS-22361)

3.1.2.7. Custom Metrics Autoscaler Operator 2.11.2-311 release notes

This release of the Custom Metrics Autoscaler Operator 2.11.2-311 provides new features and bug fixes for running the Operator in an OpenShift Dedicated cluster. The components of the Custom Metrics Autoscaler Operator 2.11.2-311 were released in RHBA-2023:5981.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA.

3.1.2.7.1. New features and enhancements
3.1.2.7.1.1. Red Hat OpenShift Service on AWS (ROSA) and OpenShift Dedicated are now supported

The Custom Metrics Autoscaler Operator 2.11.2-311 can be installed on Red Hat OpenShift Service on AWS (ROSA) and OpenShift Dedicated managed clusters. Previous versions of the Custom Metrics Autoscaler Operator could be installed only in the openshift-keda namespace, which prevented the Operator from being installed on ROSA and OpenShift Dedicated clusters. This version of the Custom Metrics Autoscaler Operator allows installation in other namespaces, such as openshift-operators or keda, which enables installation on ROSA and OpenShift Dedicated clusters.

3.1.2.7.2. Bug fixes
  • Previously, if the Custom Metrics Autoscaler Operator was installed and configured, but not in use, the OpenShift CLI reported the couldn’t get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1 error after any oc command was entered. The message, although harmless, could have caused confusion. With this fix, the Got empty response for: external.metrics... error no longer appears inappropriately. (OCPBUGS-15779)
  • Previously, any annotation or label change to objects managed by the Custom Metrics Autoscaler was reverted by the Custom Metrics Autoscaler Operator any time the Keda Controller was modified, for example after a configuration change. This caused continuous changing of labels in your objects. The Custom Metrics Autoscaler now uses its own annotation to manage labels and annotations, and annotations and labels are no longer inappropriately reverted. (OCPBUGS-15590)

3.1.2.8. Custom Metrics Autoscaler Operator 2.10.1-267 release notes

This release of the Custom Metrics Autoscaler Operator 2.10.1-267 provides new features and bug fixes for running the Operator in an OpenShift Dedicated cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1-267 were released in RHBA-2023:4089.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA.

3.1.2.8.1. Bug fixes
  • Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images did not contain time zone information. Because of this, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds now include time zone information. As a result, scaled objects containing cron triggers now function properly. (OCPBUGS-15264)
  • Previously, the Custom Metrics Autoscaler Operator would attempt to take ownership of all managed objects, including objects in other namespaces and cluster-scoped objects. Because of this, the Custom Metrics Autoscaler Operator was unable to create the role binding for reading the credentials necessary to be an API server. This caused errors in the kube-system namespace. With this fix, the Custom Metrics Autoscaler Operator skips adding the ownerReference field to any object in another namespace or any cluster-scoped object. As a result, the role binding is now created without any errors. (OCPBUGS-15038)
  • Previously, the Custom Metrics Autoscaler Operator added an ownerReferences field to the openshift-keda namespace. While this did not cause functionality problems, the presence of this field could have caused confusion for cluster administrators. With this fix, the Custom Metrics Autoscaler Operator does not add the ownerReference field to the openshift-keda namespace. As a result, the openshift-keda namespace no longer has a superfluous ownerReference field. (OCPBUGS-15293)
  • Previously, if you used a Prometheus trigger configured with an authentication method other than pod identity, and the podIdentity parameter was set to none, the trigger would fail to scale. With this fix, the Custom Metrics Autoscaler for OpenShift now properly handles the none pod identity provider type. As a result, a Prometheus trigger configured with an authentication method other than pod identity, and the podIdentity parameter set to none, now properly scales. (OCPBUGS-15274)

3.1.2.9. Custom Metrics Autoscaler Operator 2.10.1 release notes

This release of the Custom Metrics Autoscaler Operator 2.10.1 provides new features and bug fixes for running the Operator in an OpenShift Dedicated cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1 were released in RHEA-2023:3199.

Important

Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA.

3.1.2.9.1. New features and enhancements
3.1.2.9.1.1. Custom Metrics Autoscaler Operator general availability

The Custom Metrics Autoscaler Operator is now generally available as of Custom Metrics Autoscaler Operator version 2.10.1.

Important

Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

3.1.2.9.1.2. Performance metrics

You can now use the Prometheus Query Language (PromQL) to query metrics on the Custom Metrics Autoscaler Operator.

3.1.2.9.1.3. Pausing the custom metrics autoscaling for scaled objects

You can now pause the autoscaling of a scaled object, as needed, and resume autoscaling when ready.

3.1.2.9.1.4. Replica fall back for scaled objects

You can now specify the number of replicas to fall back to if a scaled object fails to get metrics from the source.

3.1.2.9.1.5. Customizable HPA naming for scaled objects

You can now specify a custom name for the horizontal pod autoscaler in scaled objects.

3.1.2.9.1.6. Activation and scaling thresholds

Because the horizontal pod autoscaler (HPA) cannot scale to or from 0 replicas, the Custom Metrics Autoscaler Operator does that scaling, after which the HPA performs the scaling. You can now specify when the HPA takes over autoscaling, based on the number of replicas. This allows for more flexibility with your scaling policies.

3.1.2.10. Custom Metrics Autoscaler Operator 2.8.2-174 release notes

This release of the Custom Metrics Autoscaler Operator 2.8.2-174 provides new features and bug fixes for running the Operator in an OpenShift Dedicated cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2-174 were released in RHEA-2023:1683.

Important

The Custom Metrics Autoscaler Operator version 2.8.2-174 is a Technology Preview feature.

3.1.2.10.1. New features and enhancements
3.1.2.10.1.1. Operator upgrade support

You can now upgrade from a prior version of the Custom Metrics Autoscaler Operator. See "Changing the update channel for an Operator" in the "Additional resources" for information on upgrading an Operator.

3.1.2.10.1.2. must-gather support

You can now collect data about the Custom Metrics Autoscaler Operator and its components by using the OpenShift Dedicated must-gather tool. Currently, the process for using the must-gather tool with the Custom Metrics Autoscaler is different from that for other Operators. See "Gathering debugging data" in the "Additional resources" section for more information.

3.1.2.11. Custom Metrics Autoscaler Operator 2.8.2 release notes

This release of the Custom Metrics Autoscaler Operator 2.8.2 provides new features and bug fixes for running the Operator in an OpenShift Dedicated cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2 were released in RHSA-2023:1042.

Important

The Custom Metrics Autoscaler Operator version 2.8.2 is a Technology Preview feature.

3.1.2.11.1. New features and enhancements
3.1.2.11.1.1. Audit Logging

You can now gather and view audit logs for the Custom Metrics Autoscaler Operator and its associated components. Audit logs are security-relevant chronological sets of records that document the sequence of activities that have affected the system by individual users, administrators, or other components of the system.

3.1.2.11.1.2. Scale applications based on Apache Kafka metrics

You can now use the KEDA Apache Kafka trigger/scaler to scale deployments based on an Apache Kafka topic.

3.1.2.11.1.3. Scale applications based on CPU metrics

You can now use the KEDA CPU trigger/scaler to scale deployments based on CPU metrics.

3.1.2.11.1.4. Scale applications based on memory metrics

You can now use the KEDA memory trigger/scaler to scale deployments based on memory metrics.

3.2. Custom Metrics Autoscaler Operator overview

As a developer, you can use the Custom Metrics Autoscaler Operator for Red Hat OpenShift to specify how OpenShift Dedicated should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job, based on custom metrics rather than only CPU or memory.

The Custom Metrics Autoscaler Operator is an optional Operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled using additional metrics sources other than pod metrics.

The custom metrics autoscaler currently supports only the Prometheus, CPU, memory, and Apache Kafka metrics.

The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers, also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that OpenShift Dedicated can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling.

To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object for a workload, which is a custom resource (CR) that defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed.

Note

You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.

The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the minReplicaCount value in the custom metrics autoscaler CR to 0, the custom metrics autoscaler scales the workload down from 1 replica to 0, or up from 0 replicas to 1. This is known as the activation phase. After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the scaling phase.

Some triggers allow you to change the number of replicas that are scaled by the custom metrics autoscaler. In all cases, the parameter that configures the activation phase uses the same name as the corresponding scaling parameter, prefixed with activation. For example, if the threshold parameter configures scaling, activationThreshold configures activation. Configuring the activation and scaling phases gives you more flexibility with your scaling policies. For example, you can configure a higher activation threshold to prevent scaling up or down if the metric is particularly low.

The activation value takes precedence over the scaling value if the two lead to different decisions. For example, if the threshold is set to 10 and the activationThreshold is set to 50, and the metric reports 40, the scaler is not active and the pods are scaled to zero even though the HPA would require 4 instances.
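
The following minimal sketch shows how the scaling and activation values might be paired in a scaled object that uses a Prometheus trigger. The workload name, query, and replica counts are placeholder values, and the authentication parameters that OpenShift Dedicated monitoring requires are omitted here; they are covered in "Understanding the Prometheus trigger". Only threshold, activationThreshold, and minReplicaCount illustrate the phases described above.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject
  namespace: my-namespace
spec:
  scaleTargetRef:
    name: example-deployment # placeholder workload to scale
  minReplicaCount: 0 # allows the custom metrics autoscaler to scale to zero
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      metricName: http_requests_total
      threshold: '10' # scaling phase: target value used by the HPA
      activationThreshold: '50' # activation phase: the metric must exceed this value before scaling from zero
      query: sum(rate(http_requests_total{job="test-app"}[1m]))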

Figure 3.1. Custom metrics autoscaler workflow

  1. You create or modify a scaled object custom resource for a workload on a cluster. The object contains the scaling configuration for that workload. Prior to accepting the new object, the OpenShift API server sends it to the custom metrics autoscaler admission webhooks process to ensure that the object is valid. If validation succeeds, the API server persists the object.
  2. The custom metrics autoscaler controller watches for new or modified scaled objects. When the OpenShift API server notifies the controller of a change, the controller monitors any external trigger sources, also known as data sources, that are specified in the object for changes to the metrics data. One or more scalers request scaling data from the external trigger source. For example, for a Kafka trigger type, the controller uses the Kafka scaler to communicate with a Kafka instance to obtain the data requested by the trigger.
  3. The controller creates a horizontal pod autoscaler object for the scaled object. As a result, the Horizontal Pod Autoscaler (HPA) Operator starts monitoring the scaling data associated with the trigger. The HPA requests scaling data from the cluster OpenShift API server endpoint.
  4. The OpenShift API server endpoint is served by the custom metrics autoscaler metrics adapter. When the metrics adapter receives a request for custom metrics, it uses a gRPC connection to the controller to request the most recent trigger data received from the scaler.
  5. The HPA makes scaling decisions based upon the data received from the metrics adapter and scales the workload up or down by increasing or decreasing the replicas.
  6. As it operates, a workload can affect the scaling metrics. For example, if a workload is scaled up to handle work in a Kafka queue, the queue size decreases after the workload processes all the work. As a result, the workload is scaled down.
  7. If the metrics are in a range specified by the minReplicaCount value, the custom metrics autoscaler controller disables all scaling, and leaves the replica count at a fixed level. If the metrics exceed that range, the custom metrics autoscaler controller enables scaling and allows the HPA to scale the workload. While scaling is disabled, the HPA does not take any action.

3.2.1. Custom CA certificates for the Custom Metrics Autoscaler

By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificates to connect to on-cluster services.

If you want to use off-cluster services that require custom CA certificates, you can add the required certificates to a config map. Then, add the config map to the KedaController custom resource as described in Installing the custom metrics autoscaler. The Operator loads those certificates on startup and registers them as trusted.

The config maps can contain one or more certificate files that hold one or more PEM-encoded CA certificates. Alternatively, you can use a separate config map for each certificate file.
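
For example, a config map can be created from a PEM file and then referenced in the KedaController custom resource, as in the following sketch. The config map name thanos-cert, the file name ca-cert.pem, and the keda namespace mirror the examples used elsewhere in this chapter; substitute your own values.

$ oc create configmap -n keda thanos-cert --from-file=ca-cert.pem

kind: KedaController
apiVersion: keda.sh/v1alpha1
metadata:
  name: keda
  namespace: keda
spec:
  operator:
    caConfigMaps: # config maps that contain the custom CA certificates
    - thanos-cert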

Note

If you later update the config map to add additional certificates, you must restart the keda-operator-* pod for the changes to take effect.

3.3. Installing the custom metrics autoscaler

You can use the OpenShift Dedicated web console to install the Custom Metrics Autoscaler Operator.

The installation creates the following five CRDs:

  • ClusterTriggerAuthentication
  • KedaController
  • ScaledJob
  • ScaledObject
  • TriggerAuthentication

3.3.1. Installing the custom metrics autoscaler

You can use the following procedure to install the Custom Metrics Autoscaler Operator.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

    If your OpenShift Dedicated cluster is in a cloud account that is owned by Red Hat (non-CCS), you must request cluster-admin privileges.

  • Remove any previously-installed Technology Preview versions of the Custom Metrics Autoscaler Operator.
  • Remove any versions of the community-based KEDA.

    Also, remove the KEDA 1.x custom resource definitions by running the following commands:

    $ oc delete crd scaledobjects.keda.k8s.io
    $ oc delete crd triggerauthentications.keda.k8s.io
  • Ensure that the keda namespace exists. If it does not, you must manually create the keda namespace. An example command follows this list.
  • Optional: If you need the Custom Metrics Autoscaler Operator to connect to off-cluster services, such as an external Kafka cluster or an external Prometheus service, put any required service CA certificates into a config map. The config map must exist in the same namespace where the Operator is installed. For example:

    $ oc create configmap -n keda thanos-cert --from-file=ca-cert.pem
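
If you prefer to create the keda namespace from the command line rather than through the web console during the installation procedure, a command similar to the following can be used:

    $ oc create namespace keda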

Procedure

  1. In the OpenShift Dedicated web console, click Operators → OperatorHub.
  2. Choose Custom Metrics Autoscaler from the list of available Operators, and click Install.
  3. On the Install Operator page, ensure that the A specific namespace on the cluster option is selected for Installation Mode.
  4. For Installed Namespace, click Select a namespace.
  5. Click Select Project:

    • If the keda namespace exists, select keda from the list.
    • If the keda namespace does not exist:

      1. Select Create Project to open the Create Project window.
      2. In the Name field, enter keda.
      3. In the Display Name field, enter a descriptive name, such as keda.
      4. Optional: In the Description field, add a description for the namespace.
      5. Click Create.
  6. Click Install.
  7. Verify the installation by listing the Custom Metrics Autoscaler Operator components:

    1. Navigate to Workloads → Pods.
    2. Select the keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running.
    3. Navigate to Workloads → Deployments to verify that the custom-metrics-autoscaler-operator deployment is running.
  8. Optional: Verify the installation in the OpenShift CLI using the following command:

    $ oc get all -n keda

    The output appears similar to the following:

    Example output

    NAME                                                      READY   STATUS    RESTARTS   AGE
    pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp   1/1     Running   0          18m
    
    NAME                                                 READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/custom-metrics-autoscaler-operator   1/1     1            1           18m
    
    NAME                                                            DESIRED   CURRENT   READY   AGE
    replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8   1         1         1       18m

  9. Install the KedaController custom resource, which creates the required CRDs:

    1. In the OpenShift Dedicated web console, click Operators → Installed Operators.
    2. Click Custom Metrics Autoscaler.
    3. On the Operator Details page, click the KedaController tab.
    4. On the KedaController tab, click Create KedaController and edit the file.

      kind: KedaController
      apiVersion: keda.sh/v1alpha1
      metadata:
        name: keda
        namespace: keda
      spec:
        watchNamespace: '' 1
        operator:
          logLevel: info 2
          logEncoder: console 3
          caConfigMaps: 4
          - thanos-cert
          - kafka-cert
        metricsServer:
          logLevel: '0' 5
          auditConfig: 6
            logFormat: "json"
            logOutputVolumeClaim: "persistentVolumeClaimName"
            policy:
              rules:
              - level: Metadata
              omitStages: ["RequestReceived"]
              omitManagedFields: false
            lifetime:
              maxAge: "2"
              maxBackup: "1"
              maxSize: "50"
        serviceAccount: {}
      1
      Specifies a single namespace in which the Custom Metrics Autoscaler Operator should scale applications. Specify a namespace or leave the field empty to scale applications in all namespaces. The default value is empty.
      2
      Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug, info, error. The default is info.
      3
      Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json. The default is console.
      4
      Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources.
      5
      Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug. The default is 0.
      6
      Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section.
    5. Click Create to create the KEDA controller.
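
After the KedaController resource is created, you can optionally confirm from the CLI that the KEDA components defined by the resource have started. The exact pod names vary by version, so this check simply lists the pods in the installation namespace, assumed here to be keda as in the earlier steps:

    $ oc get pods -n keda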

3.4. Understanding custom metrics autoscaler triggers

Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods.

The custom metrics autoscaler currently supports only the Prometheus, CPU, memory, and Apache Kafka triggers.

You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow.

You can configure a certificate authority to use with your scaled objects or for all scalers in the cluster.

3.4.1. Understanding the Prometheus trigger

You can scale pods based on Prometheus metrics, which can use the installed OpenShift Dedicated monitoring or an external Prometheus server as the metrics source. See "Configuring the custom metrics autoscaler to use OpenShift Dedicated monitoring" for information on the configurations required to use the OpenShift Dedicated monitoring as a source for metrics.

Note

If Prometheus is collecting metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler does not have any metrics to scale on.

Example scaled object with a Prometheus target

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prom-scaledobject
  namespace: my-namespace
spec:
# ...
  triggers:
  - type: prometheus 1
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2
      namespace: kedatest 3
      metricName: http_requests_total 4
      threshold: '5' 5
      query: sum(rate(http_requests_total{job="test-app"}[1m])) 6
      authModes: basic 7
      cortexOrgID: my-org 8
      ignoreNullValues: "false" 9
      unsafeSsl: "false" 10

1
Specifies Prometheus as the trigger type.
2
Specifies the address of the Prometheus server. This example uses OpenShift Dedicated monitoring.
3
Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if using OpenShift Dedicated monitoring as a source for the metrics.
4
Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique.
5
Specifies the value that triggers scaling. Must be specified as a quoted string value.
6
Specifies the Prometheus query to use.
7
Specifies the authentication method to use. Prometheus scalers support bearer authentication (bearer), basic authentication (basic), or TLS authentication (tls). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret.
8
Optional: Passes the X-Scope-OrgID header to multi-tenant Cortex or Mimir storage for Prometheus. This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return.
9
Optional: Specifies how the trigger should proceed if the Prometheus target is lost.
  • If true, the trigger continues to operate if the Prometheus target is lost. This is the default behavior.
  • If false, the trigger returns an error if the Prometheus target is lost.
10
Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you are running in a test environment and using self-signed certificates at the Prometheus endpoint.
  • If false, the certificate check is performed. This is the default behavior.
  • If true, the certificate check is not performed.

    Important

    Skipping the check is not recommended.

3.4.1.1. Configuring the custom metrics autoscaler to use OpenShift Dedicated monitoring

You can use the installed OpenShift Dedicated Prometheus monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform.

Note

These steps are not required for an external Prometheus source.

You must perform the following tasks, as described in this section:

  • Create a service account.
  • Create a secret that generates a token for the service account.
  • Create the trigger authentication.
  • Create a role.
  • Add that role to the service account.
  • Reference the token in the trigger authentication object used by Prometheus.

Prerequisites

  • OpenShift Dedicated monitoring must be installed.
  • Monitoring of user-defined workloads must be enabled in OpenShift Dedicated monitoring, as described in the Creating a user-defined workload monitoring config map section.
  • The Custom Metrics Autoscaler Operator must be installed.

Procedure

  1. Change to the project with the object you want to scale:

    $ oc project my-project
  2. Create a service account and token, if your cluster does not have one:

    1. Create a service account object by using the following command:

      $ oc create serviceaccount thanos 1
      1
      Specifies the name of the service account.
    2. Create a secret YAML to generate a service account token:

      apiVersion: v1
      kind: Secret
      metadata:
        name: thanos-token
        annotations:
          kubernetes.io/service-account.name: thanos 1
      type: kubernetes.io/service-account-token
      1
      Specifies the name of the service account.
    3. Create the secret object by using the following command:

      $ oc create -f <file_name>.yaml
    4. Use the following command to locate the token assigned to the service account:

      $ oc describe serviceaccount thanos 1
      1
      Specifies the name of the service account.

      Example output

      Name:                thanos
      Namespace:           my-project
      Labels:              <none>
      Annotations:         <none>
      Image pull secrets:  thanos-dockercfg-nnwgj
      Mountable secrets:   thanos-dockercfg-nnwgj
      Tokens:              thanos-token 1
      Events:              <none>

      1
      Use this token in the trigger authentication.
  3. Create a trigger authentication with the service account token:

    1. Create a YAML file similar to the following:

      apiVersion: keda.sh/v1alpha1
      kind: TriggerAuthentication
      metadata:
        name: keda-trigger-auth-prometheus
      spec:
        secretTargetRef: 1
        - parameter: bearerToken 2
          name: thanos-token 3
          key: token 4
        - parameter: ca
          name: thanos-token
          key: ca.crt
      1
      Specifies that this object uses a secret for authorization.
      2
      Specifies the authentication parameter to supply by using the token.
      3
      Specifies the name of the token to use.
      4
      Specifies the key in the token to use with the specified parameter.
    2. Create the CR object:

      $ oc create -f <file-name>.yaml
  4. Create a role for reading Thanos metrics:

    1. Create a YAML file with the following parameters:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: thanos-metrics-reader
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        verbs:
        - get
      - apiGroups:
        - metrics.k8s.io
        resources:
        - pods
        - nodes
        verbs:
        - get
        - list
        - watch
    2. Create the CR object:

      $ oc create -f <file-name>.yaml
  5. Create a role binding for reading Thanos metrics:

    1. Create a YAML file similar to the following:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: thanos-metrics-reader 1
        namespace: my-project 2
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: thanos-metrics-reader
      subjects:
      - kind: ServiceAccount
        name: thanos 3
        namespace: my-project 4
      1
      Specifies the name of the role you created.
      2
      Specifies the namespace of the object you want to scale.
      3
      Specifies the name of the service account to bind to the role.
      4
      Specifies the namespace of the object you want to scale.
    2. Create the CR object:

      $ oc create -f <file-name>.yaml

You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in "Understanding how to add custom metrics autoscalers". To use OpenShift Dedicated monitoring as the source, the trigger, or scaler, must include the following parameters. A combined example follows the list.

  • triggers.type must be prometheus
  • triggers.metadata.serverAddress must be https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
  • triggers.metadata.authModes must be bearer
  • triggers.metadata.namespace must be set to the namespace of the object to scale
  • triggers.authenticationRef must point to the trigger authentication resource specified in the previous step
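
The following minimal sketch shows how these parameters fit together in a scaled object. The deployment name example-deployment, the replica counts, the query, and the threshold are placeholder values; the trigger authentication name keda-trigger-auth-prometheus and the my-project namespace follow the objects created earlier in this procedure.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prom-scaledobject
  namespace: my-project
spec:
  scaleTargetRef:
    name: example-deployment # placeholder workload to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      authModes: bearer # required when using OpenShift Dedicated monitoring
      namespace: my-project # namespace of the object to scale
      metricName: http_requests_total
      threshold: '5'
      query: sum(rate(http_requests_total{job="test-app"}[1m]))
    authenticationRef:
      name: keda-trigger-auth-prometheus # trigger authentication created earlier in this procedure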

3.4.2. Understanding the CPU trigger

You can scale pods based on CPU metrics. This trigger uses cluster metrics as the source for metrics.

The custom metrics autoscaler scales the pods associated with an object to maintain the CPU usage that you specify. The autoscaler increases or decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. The CPU trigger considers the CPU utilization of the entire pod. If the pod has multiple containers, the trigger uses the total CPU utilization of all containers in the pod.

Note
  • This trigger cannot be used with the ScaledJob custom resource.
  • When using a CPU trigger to scale an object, the object does not scale to 0, even if you are using multiple triggers.

Example scaled object with a CPU target

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaledobject
  namespace: my-namespace
spec:
# ...
  triggers:
  - type: cpu 1
    metricType: Utilization 2
    metadata:
      value: '60' 3
  minReplicaCount: 1 4

1
Specifies CPU as the trigger type.
2
Specifies the type of metric to use, either Utilization or AverageValue.
3
Specifies the value that triggers scaling. Must be specified as a quoted string value.
  • When using Utilization, the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
  • When using AverageValue, the target value is the average of the metrics across all relevant pods.
4
Specifies the minimum number of replicas when scaling down. For a CPU trigger, enter a value of 1 or greater, because the HPA cannot scale to zero if you are using only CPU metrics.

3.4.3. Understanding the memory trigger

You can scale pods based on memory metrics. This trigger uses cluster metrics as the source for metrics.

The custom metrics autoscaler scales the pods associated with an object to maintain the average memory usage that you specify. The autoscaler increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. The memory trigger considers the memory utilization of the entire pod. If the pod has multiple containers, the memory utilization is the sum of the memory utilization of all of the containers.

Note
  • This trigger cannot be used with the ScaledJob custom resource.
  • When using a memory trigger to scale an object, the object does not scale to 0, even if you are using multiple triggers.

Example scaled object with a memory target

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: memory-scaledobject
  namespace: my-namespace
spec:
# ...
  triggers:
  - type: memory 1
    metricType: Utilization 2
    metadata:
      value: '60' 3
      containerName: api 4

1
Specifies memory as the trigger type.
2
Specifies the type of metric to use, either Utilization or AverageValue.
3
Specifies the value that triggers scaling. Must be specified as a quoted string value.
  • When using Utilization, the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
  • When using AverageValue, the target value is the average of the metrics across all relevant pods.
4
Optional: Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. In this example, only the container named api is to be scaled.

3.4.4. Understanding the Kafka trigger

You can scale pods based on an Apache Kafka topic or other services that support the Kafka protocol. The custom metrics autoscaler does not scale higher than the number of Kafka partitions, unless you set the allowIdleConsumers parameter to true in the scaled object or scaled job.

Note

If the number of consumer groups exceeds the number of partitions in a topic, the extra consumer groups remain idle. To avoid this, by default the number of replicas does not exceed:

  • The number of partitions on a topic, if a topic is specified
  • The number of partitions of all topics in the consumer group, if no topic is specified
  • The maxReplicaCount specified in the scaled object or scaled job CR

You can use the allowIdleConsumers parameter to disable these default behaviors.

Example scaled object with a Kafka target

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaledobject
  namespace: my-namespace
spec:
# ...
  triggers:
  - type: kafka 1
    metadata:
      topic: my-topic 2
      bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3
      consumerGroup: my-group 4
      lagThreshold: '10' 5
      activationLagThreshold: '5' 6
      offsetResetPolicy: latest 7
      allowIdleConsumers: true 8
      scaleToZeroOnInvalidOffset: false 9
      excludePersistentLag: false 10
      version: '1.0.0' 11
      partitionLimitation: '1,2,10-20,31' 12
      tls: enable 13

1
Specifies Kafka as the trigger type.
2
Specifies the name of the Kafka topic on which Kafka is processing the offset lag.
3
Specifies a comma-separated list of Kafka brokers to connect to.
4
Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag.
5
Optional: Specifies the average target value that triggers scaling. Must be specified as a quoted string value. The default is 5.
6
Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value.
7
Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are: latest and earliest. The default is latest.
8
Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic.
  • If true, the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers.
  • If false, the number of Kafka replicas cannot exceed the number of partitions on a topic. This is the default.
9
Specifies how the trigger behaves when a Kafka partition does not have a valid offset.
  • If true, the consumers are scaled to zero for that partition.
  • If false, the scaler keeps a single consumer for that partition. This is the default.
10
Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the previous polling cycle.
  • If true, the scaler excludes partition lag in these partitions.
  • If false, the trigger includes all consumer lag in all partitions. This is the default.
11
Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is 1.0.0.
12
Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions.
13
Optional: Specifies whether to use TLS client authentication for Kafka. The default is disable. For information on configuring TLS, see "Understanding custom metrics autoscaler trigger authentications".

3.4.5. Understanding the Cron trigger

You can scale pods based on a time range.

When the time range starts, the custom metrics autoscaler scales the pods associated with an object from the configured minimum number of pods to the specified number of desired pods. At the end of the time range, the pods are scaled back to the configured minimum. The time period must be configured in cron format.

The following example scales the pods associated with this scaled object from 0 to 100 from 6:00 AM to 6:30 PM India Standard Time.

Example scaled object with a Cron trigger

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  minReplicaCount: 0 1
  maxReplicaCount: 100 2
  cooldownPeriod: 300
  triggers:
  - type: cron 3
    metadata:
      timezone: Asia/Kolkata 4
      start: "0 6 * * *" 5
      end: "30 18 * * *" 6
      desiredReplicas: "100" 7

1
Specifies the minimum number of pods to scale down to at the end of the time frame.
2
Specifies the maximum number of replicas when scaling up. This value should be the same as desiredReplicas. The default is 100.
3
Specifies a Cron trigger.
4
Specifies the timezone for the time frame. This value must be from the IANA Time Zone Database.
5
Specifies the start of the time frame.
6
Specifies the end of the time frame.
7
Specifies the number of pods to scale to between the start and end of the time frame. This value should be the same as maxReplicaCount.

3.5. Understanding custom metrics autoscaler trigger authentications

A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass OpenShift Dedicated secrets, platform-native pod authentication mechanisms, environment variables, and so on.

You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace.

Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces.

Trigger authentications and cluster trigger authentication use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object.

Example secret for Basic authentication

apiVersion: v1
kind: Secret
metadata:
  name: my-basic-secret
  namespace: default
data:
  username: "dXNlcm5hbWU=" 1
  password: "cGFzc3dvcmQ="

1
User name and password to supply to the trigger authentication. The values in a data stanza must be base-64 encoded.

Example trigger authentication using a secret for Basic authentication

kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: secret-triggerauthentication
  namespace: my-namespace 1
spec:
  secretTargetRef: 2
  - parameter: username 3
    name: my-basic-secret 4
    key: username 5
  - parameter: password
    name: my-basic-secret
    key: password

1
Specifies the namespace of the object you want to scale.
2
Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint.
3
Specifies the authentication parameter to supply by using the secret.
4
Specifies the name of the secret to use.
5
Specifies the key in the secret to use with the specified parameter.

Example cluster trigger authentication with a secret for Basic authentication

kind: ClusterTriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata: 1
  name: secret-cluster-triggerauthentication
spec:
  secretTargetRef: 2
  - parameter: username 3
    name: my-basic-secret 4
    key: username 5
  - parameter: password
    name: my-basic-secret
    key: password

1
Note that no namespace is used with a cluster trigger authentication.
2
Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint.
3
Specifies the authentication parameter to supply by using the secret.
4
Specifies the name of the secret to use.
5
Specifies the key in the secret to use with the specified parameter.

Example secret with certificate authority (CA) details

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1
  client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2
  client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t...

1
Specifies the TLS CA Certificate for authentication of the metrics endpoint. The value must be base-64 encoded.
2
Specifies the TLS certificates and key for TLS client authentication. The values must be base-64 encoded.

Example trigger authentication using a secret for CA details

kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: secret-triggerauthentication
  namespace: my-namespace 1
spec:
  secretTargetRef: 2
    - parameter: key 3
      name: my-secret 4
      key: client-key.pem 5
    - parameter: ca 6
      name: my-secret 7
      key: ca-cert.pem 8

1
Specifies the namespace of the object you want to scale.
2
Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint.
3
Specifies the type of authentication to use.
4
Specifies the name of the secret to use.
5
Specifies the key in the secret to use with the specified parameter.
6
Specifies the authentication parameter for a custom CA when connecting to the metrics endpoint.
7
Specifies the name of the secret to use.
8
Specifies the key in the secret to use with the specified parameter.

Example secret with a bearer token

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  bearerToken: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV" 1

1
Specifies a bearer token to use with bearer authentication. The value in a data stanza must be base-64 encoded.

Example trigger authentication with a bearer token

kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: token-triggerauthentication
  namespace: my-namespace 1
spec:
  secretTargetRef: 2
  - parameter: bearerToken 3
    name: my-secret 4
    key: bearerToken 5

1
Specifies the namespace of the object you want to scale.
2
Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint.
3
Specifies the type of authentication to use.
4
Specifies the name of the secret to use.
5
Specifies the key in the secret to use with the specified parameter.

Example trigger authentication with an environment variable

kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: env-var-triggerauthentication
  namespace: my-namespace 1
spec:
  env: 2
  - parameter: access_key 3
    name: ACCESS_KEY 4
    containerName: my-container 5

1
Specifies the namespace of the object you want to scale.
2
Specifies that this trigger authentication uses environment variables for authorization when connecting to the metrics endpoint.
3
Specify the parameter to set with this variable.
4
Specify the name of the environment variable.
5
Optional: Specify a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object.

Example trigger authentication with pod authentication providers

kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: pod-id-triggerauthentication
  namespace: my-namespace 1
spec:
  podIdentity: 2
    provider: aws-eks 3

1
Specifies the namespace of the object you want to scale.
2
Specifies that this trigger authentication uses a platform-native pod authentication when connecting to the metrics endpoint.
3
Specifies a pod identity. Supported values are none, azure, gcp, aws-eks, or aws-kiam. The default is none.
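
A ClusterTriggerAuthentication object works the same way as a TriggerAuthentication object, but it is cluster-scoped rather than namespaced. The following is a minimal sketch; it assumes an illustrative my-secret secret that contains user-name and password keys and that exists in the namespace the Custom Metrics Autoscaler Operator uses for cluster-scoped credentials:

Example cluster trigger authentication using a secret

kind: ClusterTriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: secret-cluster-triggerauthentication 1
spec:
  secretTargetRef:
  - parameter: user-name
    name: my-secret
    key: user-name
  - parameter: password
    name: my-secret
    key: password

1
Specifies the name of this cluster trigger authentication. Cluster trigger authentications are cluster-scoped, so no namespace is specified.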

Additional resources

3.5.1. Using trigger authentications

To use a trigger authentication or a cluster trigger authentication, you create the authentication by using a custom resource and then add a reference to it in a scaled object or scaled job.

Prerequisites

  • The Custom Metrics Autoscaler Operator must be installed.
  • If you are using a secret, the Secret object must exist, for example:

    Example secret

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-secret
    data:
      user-name: <base64_USER_NAME>
      password: <base64_USER_PASSWORD>

Procedure

  1. Create the TriggerAuthentication or ClusterTriggerAuthentication object.

    1. Create a YAML file that defines the object:

      Example trigger authentication with a secret

      kind: TriggerAuthentication
      apiVersion: keda.sh/v1alpha1
      metadata:
        name: prom-triggerauthentication
        namespace: my-namespace
      spec:
        secretTargetRef:
        - parameter: user-name
          name: my-secret
          key: user-name
        - parameter: password
          name: my-secret
          key: password

    2. Create the TriggerAuthentication object:

      $ oc create -f <filename>.yaml
  2. Create or edit a ScaledObject YAML file that uses the trigger authentication:

    1. Create a YAML file that defines the object:

      Example scaled object with a trigger authentication

      apiVersion: keda.sh/v1alpha1
      kind: ScaledObject
      metadata:
        name: scaledobject
        namespace: my-namespace
      spec:
        scaleTargetRef:
          name: example-deployment
        maxReplicaCount: 100
        minReplicaCount: 0
        pollingInterval: 30
        triggers:
        - type: prometheus
          metadata:
            serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
            namespace: kedatest # replace <NAMESPACE>
            metricName: http_requests_total
            threshold: '5'
            query: sum(rate(http_requests_total{job="test-app"}[1m]))
            authModes: "basic"
          authenticationRef:
            name: prom-triggerauthentication 1
            kind: TriggerAuthentication 2

      1
      Specify the name of your trigger authentication object.
      2
      Specify TriggerAuthentication. TriggerAuthentication is the default.

      Example scaled object with a cluster trigger authentication

      apiVersion: keda.sh/v1alpha1
      kind: ScaledObject
      metadata:
        name: scaledobject
        namespace: my-namespace
      spec:
        scaleTargetRef:
          name: example-deployment
        maxReplicaCount: 100
        minReplicaCount: 0
        pollingInterval: 30
        triggers:
        - type: prometheus
          metadata:
            serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
            namespace: kedatest # replace <NAMESPACE>
            metricName: http_requests_total
            threshold: '5'
            query: sum(rate(http_requests_total{job="test-app"}[1m]))
            authModes: "basic"
          authenticationRef:
            name: prom-cluster-triggerauthentication 1
            kind: ClusterTriggerAuthentication 2

      1
      Specify the name of your trigger authentication object.
      2
      Specify ClusterTriggerAuthentication.
    2. Create the scaled object by running the following command:

      $ oc apply -f <filename>

3.6. Pausing the custom metrics autoscaler for a scaled object

You can pause and restart the autoscaling of a workload, as needed.

For example, you might want to pause autoscaling before performing cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads.

3.6.1. Pausing a custom metrics autoscaler

You can pause the autoscaling of a scaled object by adding the autoscaling.keda.sh/paused-replicas annotation to the custom metrics autoscaler for that scaled object. The custom metrics autoscaler scales the replicas for that workload to the specified value and pauses autoscaling until the annotation is removed.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
    autoscaling.keda.sh/paused-replicas: "4"
# ...

Procedure

  1. Use the following command to edit the ScaledObject CR for your workload:

    $ oc edit ScaledObject scaledobject
  2. Add the autoscaling.keda.sh/paused-replicas annotation with any value:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      annotations:
        autoscaling.keda.sh/paused-replicas: "4" 1
      creationTimestamp: "2023-02-08T14:41:01Z"
      generation: 1
      name: scaledobject
      namespace: my-project
      resourceVersion: '65729'
      uid: f5aec682-acdf-4232-a783-58b5b82f5dd0
    1
    Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling.
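
Alternatively, you can add the annotation without opening an editor. The following command is a sketch; it assumes a ScaledObject named scaledobject in the my-project namespace:

$ oc annotate scaledobject scaledobject -n my-project autoscaling.keda.sh/paused-replicas="4"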

3.6.2. Restarting the custom metrics autoscaler for a scaled object

You can restart a paused custom metrics autoscaler by removing the autoscaling.keda.sh/paused-replicas annotation for that ScaledObject.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
    autoscaling.keda.sh/paused-replicas: "4"
# ...

Procedure

  1. Use the following command to edit the ScaledObject CR for your workload:

    $ oc edit ScaledObject scaledobject
  2. Remove the autoscaling.keda.sh/paused-replicas annotation.

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      annotations:
        autoscaling.keda.sh/paused-replicas: "4" 1
      creationTimestamp: "2023-02-08T14:41:01Z"
      generation: 1
      name: scaledobject
      namespace: my-project
      resourceVersion: '65729'
      uid: f5aec682-acdf-4232-a783-58b5b82f5dd0
    1
    Remove this annotation to restart a paused custom metrics autoscaler.
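
Alternatively, you can remove the annotation without opening an editor. The following command is a sketch; it assumes a ScaledObject named scaledobject in the my-project namespace, and the trailing hyphen removes the annotation:

$ oc annotate scaledobject scaledobject -n my-project autoscaling.keda.sh/paused-replicas-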

3.7. Gathering audit logs

You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities performed by individual users, administrators, or other system components that have affected the system.

For example, audit logs can help you understand where an autoscaling request is coming from. This is key information when backends are getting overloaded by autoscaling requests made by user applications and you need to determine which application is causing the problem.

3.7.1. Configuring audit logging

You can configure auditing for the Custom Metrics Autoscaler Operator by editing the KedaController custom resource. The logs are sent to an audit log file on a volume that is secured by using a persistent volume claim in the KedaController CR.
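
The persistent volume claim that you reference must already exist in the keda namespace. The following is a minimal sketch; the claim name, the requested size, and the reliance on the default storage class are illustrative:

Example persistent volume claim for the audit log

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-audit-log
  namespace: keda
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi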

Prerequisites

  • The Custom Metrics Autoscaler Operator must be installed.

Procedure

  1. Edit the KedaController custom resource to add the auditConfig stanza:

    kind: KedaController
    apiVersion: keda.sh/v1alpha1
    metadata:
      name: keda
      namespace: keda
    spec:
    # ...
      metricsServer:
    # ...
        auditConfig:
          logFormat: "json" 1
          logOutputVolumeClaim: "pvc-audit-log" 2
          policy:
            rules: 3
            - level: Metadata
            omitStages: "RequestReceived" 4
            omitManagedFields: false 5
          lifetime: 6
            maxAge: "2"
            maxBackup: "1"
            maxSize: "50"
    1
    Specifies the output format of the audit log, either legacy or json.
    2
    Specifies an existing persistent volume claim for storing the log data. All requests coming to the API server are logged to this persistent volume claim. If you leave this field empty, the log data is sent to stdout.
    3
    Specifies which events should be recorded and what data they should include:
    • None: Do not log events.
    • Metadata: Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. This is the default.
    • Request: Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests.
    • RequestResponse: Log event metadata, request text, and response text. This option does not apply for non-resource requests.
    4
    Specifies stages for which no event is created.
    5
    Specifies whether to omit the managed fields of the request and response bodies from being written to the API audit log, either true to omit the fields or false to include the fields.
    6
    Specifies the size and lifespan of the audit logs.
    • maxAge: The maximum number of days to retain audit log files, based on the timestamp encoded in their filename.
    • maxBackup: The maximum number of audit log files to retain. Set to 0 to retain all audit log files.
    • maxSize: The maximum size in megabytes of an audit log file before it gets rotated.

Verification

  1. View the audit log file directly:

    1. Obtain the name of the keda-metrics-apiserver-* pod:

      $ oc get pod -n keda

      Example output

      NAME                                                  READY   STATUS    RESTARTS   AGE
      custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv   1/1     Running   0          8m20s
      keda-metrics-apiserver-65c7cc44fd-rrl4r               1/1     Running   0          2m55s
      keda-operator-776cbb6768-zpj5b                        1/1     Running   0          2m55s

    2. View the log data by using a command similar to the following:

      $ oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1
      1
      Optional: You can use the grep command to specify the log level to display: Metadata, Request, RequestResponse.

      For example:

      $ oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata

      Example output

       ...
      {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"4c81d41b-3dab-4675-90ce-20b87ce24013","stage":"ResponseComplete","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.131.0.1"],"userAgent":"kube-probe/1.28","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-02-16T13:00:03.554567Z","stageTimestamp":"2023-02-16T13:00:03.555032Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
       ...

  2. Alternatively, you can view a specific log:

    1. Use a command similar to the following to log into the keda-metrics-apiserver-* pod:

      $ oc rsh pod/keda-metrics-apiserver-<hash> -n keda

      For example:

      $ oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n keda
    2. Change to the /var/audit-policy/ directory:

      sh-4.4$ cd /var/audit-policy/
    3. List the available logs:

      sh-4.4$ ls

      Example output

      log-2023.02.17-14:50  policy.yaml

    4. View the log, as needed:

      sh-4.4$ cat <log_name>/<pvc_name>|grep -i <log_level> 1
      1
      Optional: You can use the grep command to specify the log level to display: Metadata, Request, RequestResponse.

      For example:

      sh-4.4$ cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request

      Example output

       ...
      {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Request","auditID":"63e7f68c-04ec-4f4d-8749-bf1656572a41","stage":"ResponseComplete","requestURI":"/openapi/v2","verb":"get","user":{"username":"system:aggregator","groups":["system:authenticated"]},"sourceIPs":["10.128.0.1"],"responseStatus":{"metadata":{},"code":304},"requestReceivedTimestamp":"2023-02-17T13:12:55.035478Z","stageTimestamp":"2023-02-17T13:12:55.038346Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:discovery\" of ClusterRole \"system:discovery\" to Group \"system:authenticated\""}}
       ...

3.8. Gathering debugging data

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

To help troubleshoot your issue, provide the following information:

  • Data gathered using the must-gather tool.
  • The unique cluster ID.

You can use the must-gather tool to collect data about the Custom Metrics Autoscaler Operator and its components, including the following items:

  • The keda namespace and its child objects.
  • The Custom Metrics Autoscaler Operator installation objects.
  • The Custom Metrics Autoscaler Operator CRD objects.

3.8.1. Gathering debugging data

The following command runs the must-gather tool for the Custom Metrics Autoscaler Operator:

$ oc adm must-gather --image="$(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \
-n openshift-marketplace \
-o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')"
Note

The standard OpenShift Dedicated must-gather command, oc adm must-gather, does not collect Custom Metrics Autoscaler Operator data.

Prerequisites

  • You are logged in to OpenShift Dedicated as a user with the dedicated-admin role.
  • The OpenShift Dedicated CLI (oc) is installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Perform one of the following:

    • To get only the Custom Metrics Autoscaler Operator must-gather data, use the following command:

      $ oc adm must-gather --image="$(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \
      -n openshift-marketplace \
      -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')"

      The custom image for the must-gather command is pulled directly from the Operator package manifests, so that it works on any cluster where the Custom Metrics Autoscaler Operator is available.

    • To gather the default must-gather data in addition to the Custom Metrics Autoscaler Operator information:

      1. Use the following command to obtain the Custom Metrics Autoscaler Operator image and set it as an environment variable:

        $ IMAGE="$(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \
          -n openshift-marketplace \
          -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')"
      2. Use the oc adm must-gather command with the Custom Metrics Autoscaler Operator image:

        $ oc adm must-gather --image-stream=openshift/must-gather --image=${IMAGE}

    Example 3.1. Example must-gather output for the Custom Metrics Autoscaler

    └── keda
        ├── apps
        │   ├── daemonsets.yaml
        │   ├── deployments.yaml
        │   ├── replicasets.yaml
        │   └── statefulsets.yaml
        ├── apps.openshift.io
        │   └── deploymentconfigs.yaml
        ├── autoscaling
        │   └── horizontalpodautoscalers.yaml
        ├── batch
        │   ├── cronjobs.yaml
        │   └── jobs.yaml
        ├── build.openshift.io
        │   ├── buildconfigs.yaml
        │   └── builds.yaml
        ├── core
        │   ├── configmaps.yaml
        │   ├── endpoints.yaml
        │   ├── events.yaml
        │   ├── persistentvolumeclaims.yaml
        │   ├── pods.yaml
        │   ├── replicationcontrollers.yaml
        │   ├── secrets.yaml
        │   └── services.yaml
        ├── discovery.k8s.io
        │   └── endpointslices.yaml
        ├── image.openshift.io
        │   └── imagestreams.yaml
        ├── k8s.ovn.org
        │   ├── egressfirewalls.yaml
        │   └── egressqoses.yaml
        ├── keda.sh
        │   ├── kedacontrollers
        │   │   └── keda.yaml
        │   ├── scaledobjects
        │   │   └── example-scaledobject.yaml
        │   └── triggerauthentications
        │       └── example-triggerauthentication.yaml
        ├── monitoring.coreos.com
        │   └── servicemonitors.yaml
        ├── networking.k8s.io
        │   └── networkpolicies.yaml
        ├── keda.yaml
        ├── pods
        │   ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx
        │   │   ├── custom-metrics-autoscaler-operator
        │   │   │   └── custom-metrics-autoscaler-operator
        │   │   │       └── logs
        │   │   │           ├── current.log
        │   │   │           ├── previous.insecure.log
        │   │   │           └── previous.log
        │   │   └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml
        │   ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh
        │   │   └── custom-metrics-autoscaler-operator
        │   │       └── custom-metrics-autoscaler-operator
        │   │           └── logs
        │   ├── keda-metrics-apiserver-65c7cc44fd-6wq4g
        │   │   ├── keda-metrics-apiserver
        │   │   │   └── keda-metrics-apiserver
        │   │   │       └── logs
        │   │   │           ├── current.log
        │   │   │           ├── previous.insecure.log
        │   │   │           └── previous.log
        │   │   └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml
        │   └── keda-operator-776cbb6768-fb6m5
        │       ├── keda-operator
        │       │   └── keda-operator
        │       │       └── logs
        │       │           ├── current.log
        │       │           ├── previous.insecure.log
        │       │           └── previous.log
        │       └── keda-operator-776cbb6768-fb6m5.yaml
        ├── policy
        │   └── poddisruptionbudgets.yaml
        └── route.openshift.io
            └── routes.yaml
  3. Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1
    1
    Replace must-gather.local.5421342344627712289/ with the actual directory name.
  4. Attach the compressed file to your support case on the Red Hat Customer Portal.

3.9. Viewing Operator metrics

The Custom Metrics Autoscaler Operator exposes ready-to-use metrics that it pulls from the on-cluster monitoring component. You can query the metrics by using the Prometheus Query Language (PromQL) to analyze and diagnose issues. All metrics are reset when the controller pod restarts.

3.9.1. Accessing performance metrics

You can access the metrics and run queries by using the OpenShift Dedicated web console.

Procedure

  1. Select the Administrator perspective in the OpenShift Dedicated web console.
  2. Select Observe → Metrics.
  3. To create a custom query, add your PromQL query to the Expression field.
  4. To add multiple queries, select Add Query.

3.9.1.1. Provided Operator metrics

The Custom Metrics Autoscaler Operator exposes the following metrics, which you can view by using the OpenShift Dedicated web console.

Table 3.1. Custom Metrics Autoscaler Operator metrics

  • keda_scaler_activity: Whether the particular scaler is active or inactive. A value of 1 indicates the scaler is active; a value of 0 indicates the scaler is inactive.
  • keda_scaler_metrics_value: The current value for each scaler’s metric, which is used by the Horizontal Pod Autoscaler (HPA) in computing the target average.
  • keda_scaler_metrics_latency: The latency of retrieving the current metric from each scaler.
  • keda_scaler_errors: The number of errors that have occurred for each scaler.
  • keda_scaler_errors_total: The total number of errors encountered for all scalers.
  • keda_scaled_object_errors: The number of errors that have occurred for each scaled object.
  • keda_resource_totals: The total number of Custom Metrics Autoscaler custom resources in each namespace for each custom resource type.
  • keda_trigger_totals: The total number of triggers by trigger type.

Custom Metrics Autoscaler Admission webhook metrics

The Custom Metrics Autoscaler Admission webhook also exposes the following Prometheus metrics.

  • keda_scaled_object_validation_total: The number of scaled object validations.
  • keda_scaled_object_validation_errors: The number of validation errors.
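
For example, you can paste queries such as the following into the Expression field. These queries are sketches; adapt the metric names and label selectors, such as namespace, to what is exposed in your environment:

# Rate of scaler errors across all scalers over the last five minutes
sum(rate(keda_scaler_errors_total[5m]))

# Whether each scaler in a given namespace is currently active (1) or inactive (0)
keda_scaler_activity{namespace="my-namespace"}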

3.10. Understanding how to add custom metrics autoscalers

To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job.

You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload.
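
For comparison, the following ScaledJob sketch shows the equivalent structure for a job-based workload. The job template, image, and trigger values are illustrative; the detailed procedure in this section uses a ScaledObject:

Example scaled job

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: scaledjob
  namespace: my-namespace
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: worker
          image: images.my-company.example/worker:v1
        restartPolicy: Never
  maxReplicaCount: 100
  pollingInterval: 30
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      namespace: kedatest
      metricName: http_requests_total
      threshold: '5'
      query: sum(rate(http_requests_total{job="test-app"}[1m]))
      authModes: basic
    authenticationRef:
      name: prom-triggerauthentication
      kind: TriggerAuthentication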

3.10.1. Adding a custom metrics autoscaler to a workload

You can create a custom metrics autoscaler for a workload that is created by a Deployment, StatefulSet, or custom resource object.

Prerequisites

  • The Custom Metrics Autoscaler Operator must be installed.
  • If you use a custom metrics autoscaler for scaling based on CPU or memory:

    • Your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with CPU and Memory displayed under Usage.

      $ oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal

      Example output

      Name:         openshift-kube-scheduler-ip-10-0-135-131.ec2.internal
      Namespace:    openshift-kube-scheduler
      Labels:       <none>
      Annotations:  <none>
      API Version:  metrics.k8s.io/v1beta1
      Containers:
        Name:  wait-for-host-port
        Usage:
          Memory:  0
        Name:      scheduler
        Usage:
          Cpu:     8m
          Memory:  45440Ki
      Kind:        PodMetrics
      Metadata:
        Creation Timestamp:  2019-05-23T18:47:56Z
        Self Link:           /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal
      Timestamp:             2019-05-23T18:47:56Z
      Window:                1m0s
      Events:                <none>

    • The pods associated with the object you want to scale must include specified memory and CPU limits. For example:

      Example pod spec

      apiVersion: v1
      kind: Pod
      # ...
      spec:
        containers:
        - name: app
          image: images.my-company.example/app:v4
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
      # ...

Procedure

  1. Create a YAML file similar to the following. Only the name <2>, object name <4>, and object kind <5> are required:

    Example scaled object

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      annotations:
        autoscaling.keda.sh/paused-replicas: "0" 1
      name: scaledobject 2
      namespace: my-namespace
    spec:
      scaleTargetRef:
        apiVersion: apps/v1 3
        name: example-deployment 4
        kind: Deployment 5
        envSourceContainerName: .spec.template.spec.containers[0] 6
      cooldownPeriod:  200 7
      maxReplicaCount: 100 8
      minReplicaCount: 0 9
      metricsServer: 10
        auditConfig:
          logFormat: "json"
          logOutputVolumeClaim: "persistentVolumeClaimName"
          policy:
            rules:
            - level: Metadata
            omitStages: "RequestReceived"
            omitManagedFields: false
          lifetime:
            maxAge: "2"
            maxBackup: "1"
            maxSize: "50"
      fallback: 11
        failureThreshold: 3
        replicas: 6
      pollingInterval: 30 12
      advanced:
        restoreToOriginalReplicaCount: false 13
        horizontalPodAutoscalerConfig:
          name: keda-hpa-scale-down 14
          behavior: 15
            scaleDown:
              stabilizationWindowSeconds: 300
              policies:
              - type: Percent
                value: 100
                periodSeconds: 15
      triggers:
      - type: prometheus 16
        metadata:
          serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
          namespace: kedatest
          metricName: http_requests_total
          threshold: '5'
          query: sum(rate(http_requests_total{job="test-app"}[1m]))
          authModes: basic
        authenticationRef: 17
          name: prom-triggerauthentication
          kind: TriggerAuthentication

    1
    Optional: Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling, as described in the "Pausing the custom metrics autoscaler for a scaled object" section.
    2
    Specifies a name for this custom metrics autoscaler.
    3
    Optional: Specifies the API version of the target resource. The default is apps/v1.
    4
    Specifies the name of the object that you want to scale.
    5
    Specifies the kind as Deployment, StatefulSet or CustomResource.
    6
    Optional: Specifies the name of the container in the target resource, from which the custom metrics autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0].
    7
    Optional: Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0. The default is 300.
    8
    Optional: Specifies the maximum number of replicas when scaling up. The default is 100.
    9
    Optional: Specifies the minimum number of replicas when scaling down.
    10
    Optional: Specifies the parameters for audit logs, as described in the "Configuring audit logging" section.
    11
    Optional: Specifies the number of replicas to fall back to if a scaler fails to get metrics from the source for the number of times defined by the failureThreshold parameter. For more information on fallback behavior, see the KEDA documentation.
    12
    Optional: Specifies the interval in seconds to check each trigger on. The default is 30.
    13
    Optional: Specifies whether to scale back the target resource to the original replica count after the scaled object is deleted. The default is false, which keeps the replica count as it is when the scaled object is deleted.
    14
    Optional: Specifies a name for the horizontal pod autoscaler. The default is keda-hpa-{scaled-object-name}.
    15
    Optional: Specifies a scaling policy to use to control the rate to scale pods up or down, as described in the "Scaling policies" section.
    16
    Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. This example uses OpenShift Dedicated monitoring.
    17
    Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section.
    • Enter TriggerAuthentication to use a trigger authentication. This is the default.
    • Enter ClusterTriggerAuthentication to use a cluster trigger authentication.
  2. Create the custom metrics autoscaler by running the following command:

    $ oc create -f <filename>.yaml

Verification

  • View the command output to verify that the custom metrics autoscaler was created:

    $ oc get scaledobject <scaled_object_name>

    Example output

    NAME            SCALETARGETKIND      SCALETARGETNAME        MIN   MAX   TRIGGERS     AUTHENTICATION               READY   ACTIVE   FALLBACK   AGE
    scaledobject    apps/v1.Deployment   example-deployment     0     50    prometheus   prom-triggerauthentication   True    True     True       17s

    Note the following fields in the output:

    • TRIGGERS: Indicates the trigger, or scaler, that is being used.
    • AUTHENTICATION: Indicates the name of any trigger authentication being used.
    • READY: Indicates whether the scaled object is ready to start scaling:

      • If True, the scaled object is ready.
      • If False, the scaled object is not ready because of a problem in one or more of the objects you created.
    • ACTIVE: Indicates whether scaling is taking place:

      • If True, scaling is taking place.
      • If False, scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created.
    • FALLBACK: Indicates whether the custom metrics autoscaler is able to get metrics from the source:

      • If False, the custom metrics autoscaler is getting metrics.
      • If True, the custom metrics autoscaler is not getting metrics because there are no metrics or there is a problem in one or more of the objects you created.

3.10.2. Additional resources

3.11. Removing the Custom Metrics Autoscaler Operator

You can remove the custom metrics autoscaler from your OpenShift Dedicated cluster. After removing the Custom Metrics Autoscaler Operator, remove other components associated with the Operator to avoid potential issues.

Note

Delete the KedaController custom resource (CR) first. If you do not delete the KedaController CR, OpenShift Dedicated can hang when you delete the keda project. If you delete the Custom Metrics Autoscaler Operator before deleting the CR, you are not able to delete the CR.

3.11.1. Uninstalling the Custom Metrics Autoscaler Operator

Use the following procedure to remove the custom metrics autoscaler from your OpenShift Dedicated cluster.

Prerequisites

  • The Custom Metrics Autoscaler Operator must be installed.

Procedure

  1. In the OpenShift Dedicated web console, click Operators → Installed Operators.
  2. Switch to the keda project.
  3. Remove the KedaController custom resource.

    1. Find the CustomMetricsAutoscaler Operator and click the KedaController tab.
    2. Find the custom resource, and then click Delete KedaController.
    3. Click Uninstall.
  4. Remove the Custom Metrics Autoscaler Operator:

    1. Click Operators → Installed Operators.
    2. Find the CustomMetricsAutoscaler Operator, click the Options menu ⋮, and select Uninstall Operator.
    3. Click Uninstall.
  5. Optional: Use the OpenShift CLI to remove the custom metrics autoscaler components:

    1. Delete the custom metrics autoscaler CRDs:

      • clustertriggerauthentications.keda.sh
      • kedacontrollers.keda.sh
      • scaledjobs.keda.sh
      • scaledobjects.keda.sh
      • triggerauthentications.keda.sh
      $ oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh

      Deleting the CRDs removes the associated roles, cluster roles, and role bindings. However, there might be a few cluster roles that must be manually deleted.

    2. List any custom metrics autoscaler cluster roles:

      $ oc get clusterrole | grep keda.sh
    3. Delete the listed custom metrics autoscaler cluster roles. For example:

      $ oc delete clusterrole keda.sh-v1alpha1-admin
    4. List any custom metrics autoscaler cluster role bindings:

      $ oc get clusterrolebinding | grep keda.sh
    5. Delete the listed custom metrics autoscaler cluster role bindings. For example:

      $ oc delete clusterrolebinding keda.sh-v1alpha1-admin
  6. Delete the custom metrics autoscaler project:

    $ oc delete project keda
  7. Delete the Custom Metrics Autoscaler Operator:

    $ oc delete operator/openshift-custom-metrics-autoscaler-operator.keda