Chapter 3. Autoscaling
3.1. Autoscaling
Knative Serving provides automatic scaling, or autoscaling, for applications to match incoming demand. For example, if an application is receiving no traffic, and scale-to-zero is enabled, Knative Serving scales the application down to zero replicas. If scale-to-zero is disabled, the application is scaled down to the minimum number of replicas configured for applications on the cluster. Replicas can also be scaled up to meet demand if traffic to the application increases.
Autoscaling settings for Knative services can be global settings that are configured by cluster administrators (or dedicated administrators for Red Hat OpenShift Service on AWS and OpenShift Dedicated), or per-revision settings that are configured for individual services.
You can modify per-revision settings for your services by using the OpenShift Container Platform web console, by modifying the YAML file for your service, or by using the Knative (kn) CLI.
Any limits or targets that you set for a service are measured against a single instance of your application. For example, setting the target annotation to 50 configures the autoscaler to scale the application so that each revision handles 50 requests at a time.
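For example, the per-revision target described above can be set as an annotation on the revision template of a service. A minimal sketch, using the showcase service name that appears in the examples later in this chapter:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Soft target: the autoscaler aims for 50 concurrent requests per replica
        autoscaling.knative.dev/target: "50"
```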
3.2. Scale bounds
Scale bounds determine the minimum and maximum numbers of replicas that can serve an application at any given time. You can set scale bounds for an application to help prevent cold starts or control computing costs.
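The minimum and maximum bounds can be combined on a single service. A hedged sketch, using the annotation names shown in the examples that follow:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Keep at least one replica warm to avoid cold starts
        autoscaling.knative.dev/min-scale: "1"
        # Never run more than ten replicas, to control compute costs
        autoscaling.knative.dev/max-scale: "10"
```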
3.2.1. Minimum scale bounds
The min-scale annotation sets the minimum number of replicas that serve an application. If you do not enable scale to zero, the min-scale value defaults to 1.
The min-scale value defaults to 0 replicas when the following conditions apply:
- You do not set the min-scale annotation.
- You enable scaling to zero.
- You use the KPA class.
Example service spec with min-scale annotation
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"
...
3.2.1.1. Setting the min-scale annotation by using the Knative CLI
Using the Knative (kn) CLI to set the min-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-min flag to create or change the min-scale value for a service.
Prerequisites
- You have installed Knative Serving on the cluster.
- You have installed the Knative (kn) CLI.
Procedure
Set the minimum number of replicas for the service by using the --scale-min flag:

$ kn service create <service_name> --image <image_uri> --scale-min <integer>

Example command
$ kn service create showcase --image quay.io/openshift-knative/showcase --scale-min 2
3.2.2. Maximum scale bounds
The max-scale annotation determines the maximum number of replicas that can serve an application. If the max-scale annotation is not set, there is no upper limit for the number of replicas created.
Example service spec with max-scale annotation
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/max-scale: "10"
...
3.2.2.1. Setting the max-scale annotation by using the Knative CLI
Using the Knative (kn) CLI to set the max-scale annotation provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service command with the --scale-max flag to create or change the max-scale value for a service.
Prerequisites
- You have installed Knative Serving on the cluster.
- You have installed the Knative (kn) CLI.
Procedure
Set the maximum number of replicas for the service by using the --scale-max flag:

$ kn service create <service_name> --image <image_uri> --scale-max <integer>

Example command
$ kn service create showcase --image quay.io/openshift-knative/showcase --scale-max 10
3.3. Concurrency
Concurrency determines the number of simultaneous requests that can be processed by each replica of an application at any given time. Concurrency can be configured as a soft limit or a hard limit:
- A soft limit is a targeted requests limit, rather than a strictly enforced bound. For example, if there is a sudden burst of traffic, the soft limit target can be exceeded.
- A hard limit is a strictly enforced upper bound on requests. If concurrency reaches the hard limit, surplus requests are buffered and must wait until there is enough free capacity to execute them.
Important: Using a hard limit configuration is recommended only if there is a clear use case for it with your application. Specifying a low hard limit can negatively impact the throughput and latency of an application, and might cause cold starts.
Setting both a soft target and a hard limit means that the autoscaler targets the soft target number of concurrent requests, while the hard limit value caps the maximum number of requests per replica.
If the hard limit value is less than the soft limit value, the soft limit value is tuned down, because there is no need to target more requests than the number that can actually be handled.
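The combined behavior described above can be expressed in a single service spec. A minimal sketch: the soft target is set as an annotation and the hard limit as containerConcurrency, so the autoscaler aims for 80 concurrent requests per replica while never admitting more than 100:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Soft target: the autoscaler scales to keep about 80 concurrent requests per replica
        autoscaling.knative.dev/target: "80"
    spec:
      # Hard limit: at most 100 requests are admitted per replica; surplus requests are buffered
      containerConcurrency: 100
```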
3.3.1. Configuring a soft concurrency target
A soft limit sets a target for the number of requests rather than a strictly enforced bound. For example, during a sudden traffic burst, concurrency can exceed the soft limit. You can specify a soft concurrency target for your Knative service by setting the autoscaling.knative.dev/target annotation in the spec, or by using the kn service command with the correct flags.
Procedure
Optional: Set the autoscaling.knative.dev/target annotation for your Knative service in the spec of the Service custom resource:

Example service spec

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "200"

Optional: Use the kn service command to specify the --concurrency-target flag:

$ kn service create <service_name> --image <image_uri> --concurrency-target <integer>

Example command to create a service with a concurrency target of 50 requests
$ kn service create showcase --image quay.io/openshift-knative/showcase --concurrency-target 50
3.3.2. Configuring a hard concurrency limit
A hard concurrency limit sets a strict upper bound on the number of requests. When concurrency reaches this limit, the system buffers additional requests until capacity becomes available. You can specify a hard concurrency limit for your Knative service by modifying the containerConcurrency spec, or by using the kn service command with the correct flags.
Procedure
Optional: Set the containerConcurrency spec for your Knative service in the spec of the Service custom resource:

Example service spec

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    spec:
      containerConcurrency: 50

The default value is 0, which means that the system does not limit the number of simultaneous requests that can reach a single service replica. A value greater than 0 sets the exact number of requests that can reach a single service replica at one time. In this example, the system limits concurrency to 50 requests.

Optional: Use the kn service command to specify the --concurrency-limit flag:

$ kn service create <service_name> --image <image_uri> --concurrency-limit <integer>

Example command to create a service with a concurrency limit of 50 requests
$ kn service create showcase --image quay.io/openshift-knative/showcase --concurrency-limit 50
3.3.3. Concurrency target utilization
The target-utilization-percentage value specifies the percentage of the concurrency limit that the autoscaler actually targets. This is also known as specifying the hotness at which a replica runs, and it enables the autoscaler to scale up before the defined hard limit is reached.
For example, if the containerConcurrency value is set to 10, and the target-utilization-percentage value is set to 70 percent, the autoscaler creates a new replica when the average number of concurrent requests across all existing replicas reaches 7. Requests numbered 7 to 10 are still sent to the existing replicas, but additional replicas are started in anticipation of being required after the containerConcurrency value is reached.
Example service configured using the target-utilization-percentage annotation
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target-utilization-percentage: "70"
...
3.4. Scale-to-zero
Knative Serving provides automatic scaling, or autoscaling, for applications to match incoming demand.
3.4.1. Enabling scale-to-zero
You can use the enable-scale-to-zero spec to enable or disable scale-to-zero globally for applications on the cluster.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
- You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler.
Procedure
Change the enable-scale-to-zero spec in the KnativeServing custom resource (CR):

Example KnativeServing CR

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    autoscaler:
      enable-scale-to-zero: "false"

The enable-scale-to-zero spec accepts either true or false. When you set the value to true, the system enables scale-to-zero. When you set the value to false, the system scales applications down to the configured minimum scale bound. The default value is true.
3.4.2. Configuring the scale-to-zero grace period
Knative Serving provides automatic scaling down to zero pods for applications. You can use the scale-to-zero-grace-period spec to define an upper bound time limit that Knative waits for scale-to-zero machinery to be in place before the last replica of an application is removed.
Prerequisites
- You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.
- You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated.
- You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler.
Procedure
Modify the scale-to-zero-grace-period spec in the KnativeServing custom resource (CR):

Example KnativeServing CR

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    autoscaler:
      scale-to-zero-grace-period: "30s"

The grace period time in seconds. The default value is 30 seconds.