Chapter 21. Pod Autoscaling


21.1. Overview

A horizontal pod autoscaler, defined by a HorizontalPodAutoscaler object, specifies how the system should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to it.

Note

Horizontal pod autoscaling is supported starting in OpenShift Enterprise 3.1.1.

21.2. Requirements for Using Horizontal Pod Autoscalers

To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics.
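
If you are unsure whether metrics are available, you can check that the metrics pods are running. The following assumes the default deployment of cluster metrics into the openshift-infra project; your cluster may use a different project:

$ oc get pods -n openshift-infra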

21.3. Supported Metrics

The following metrics are supported by horizontal pod autoscalers:

Table 21.1. Metrics
Metric             Description

CPU Utilization    Percentage of the requested CPU

21.4. Autoscaling

You can create a horizontal pod autoscaler with the oc autoscale command and specify the minimum and maximum number of pods you want to run, as well as the CPU utilization your pods should target.

After a horizontal pod autoscaler is created, it begins querying Heapster for metrics on the pods. It may take one to two minutes before Heapster obtains the initial metrics.

Once metrics are available in Heapster, the horizontal pod autoscaler computes the ratio of the current metric utilization to the desired metric utilization and scales up or down accordingly. Scaling occurs at a regular interval, but newly collected metrics can take one to two minutes to reach Heapster.
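
For example, assuming the standard ratio-based calculation, if four pods are currently averaging 90% of their requested CPU against a target of 80%, the autoscaler computes ceil(4 × 90 / 80) = 5 and scales to five pods.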

For replication controllers, this scaling adjusts the replica count of the replication controller directly. For deployment configurations, it adjusts the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase.

21.5. Creating a Horizontal Pod Autoscaler

Use the oc autoscale command and specify at least the maximum number of pods you want to run at any given time. You can optionally specify the minimum number of pods and the average CPU utilization your pods should target; otherwise, those values are given defaults by the OpenShift Enterprise server.

For example:

$ oc autoscale dc/frontend --min 1 --max 10 --cpu-percent=80
deploymentconfig "frontend" autoscaled

The above example creates a horizontal pod autoscaler with the following definition:

Example 21.1. Horizontal Pod Autoscaler Object Definition

apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend  # 1
spec:
  scaleRef:
    kind: DeploymentConfig  # 2
    name: frontend  # 3
    apiVersion: v1  # 4
    subresource: scale
  minReplicas: 1  # 5
  maxReplicas: 10  # 6
  cpuUtilization:
    targetPercentage: 80  # 7
1. The name of this horizontal pod autoscaler object.
2. The kind of object to scale.
3. The name of the object to scale.
4. The API version of the object to scale.
5. The minimum number of replicas to which to scale down.
6. The maximum number of replicas to which to scale up.
7. The percentage of the requested CPU that each pod should ideally be using.
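
As an alternative to oc autoscale, you can save a definition like the one above to a file and create the object from it. The file name hpa.yaml below is only an example:

$ oc create -f hpa.yaml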

21.6. Viewing a Horizontal Pod Autoscaler

To view the status of a horizontal pod autoscaler:

$ oc get hpa/frontend
NAME              REFERENCE                                 TARGET    CURRENT   MINPODS        MAXPODS   AGE
frontend          DeploymentConfig/default/frontend/scale   80%       79%       1              10        8d

$ oc describe hpa/frontend
Name:                           frontend
Namespace:                      default
Labels:                         <none>
CreationTimestamp:              Mon, 26 Oct 2015 21:13:47 -0400
Reference:                      DeploymentConfig/default/frontend/scale
Target CPU utilization:         80%
Current CPU utilization:        79%
Min pods:                       1
Max pods:                       10
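
To watch the autoscaler as it reacts to load over time, you can add the watch flag to oc get (assuming your oc client supports the -w flag):

$ oc get hpa/frontend -w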