Auto Scaling for Instances
Configuring Auto Scaling in Red Hat OpenStack Platform
Abstract
Chapter 1. About This Guide
Red Hat is currently reviewing the information and procedures provided in this guide for this release.
This document is based on the Red Hat OpenStack Platform 12 document, available at https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/?version=12.
If you require assistance for the current Red Hat OpenStack Platform release, please contact Red Hat support.
Chapter 2. Configuring auto scaling for Compute instances
Automatically scale out your Compute instances in response to heavy system use. You can define rules that consider factors such as CPU or memory use, and configure Orchestration (heat) to add and remove instances automatically when they are needed.
2.1. Overview of auto scaling architecture
2.1.1. Orchestration
The core component that provides auto scaling is Orchestration (heat). Use Orchestration to define rules in human-readable YAML templates. Orchestration applies these rules to Telemetry data to evaluate system load and determine whether more instances must be added to the stack. When the load drops, Orchestration can automatically remove the unused instances again.
2.1.2. Telemetry
Telemetry monitors the performance of your Red Hat OpenStack Platform environment, collecting data on CPU, storage, and memory utilization for instances and physical hosts. Orchestration templates examine Telemetry data to assess whether any pre-defined action should start.
2.1.3. Key terms
- Stack
- A collection of resources that are necessary to operate an application. A stack can be as simple as a single instance and its resources, or as complex as multiple instances with all the resource dependencies that comprise a multi-tier application.
- Templates
- YAML scripts that define a series of tasks for heat to execute. It is preferable to use separate templates for distinct functions, for example:
- Template file: Defines the thresholds that Telemetry should respond to, and defines the auto scaling group.
- Environment file: Defines the build information for your environment: which flavor and image to use, how to configure the virtual network, and what software to install.
2.2. Example: Auto scaling based on CPU use
In this example, Orchestration examines Telemetry data and automatically increases the number of instances in response to high CPU use. Create a stack template and an environment template to define the rules and subsequent configuration. This example uses existing resources, such as networks, and uses names that might be different from those in your own environment.
The cpu_util metric was deprecated and removed from Red Hat OpenStack Platform.
Procedure
Create the environment template, describing the instance flavor, networking configuration, and image type. Save the template in the /home/<user>/stacks/example1/cirros.yaml file. Replace the <user> variable with a real user name.
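A minimal sketch of what /home/<user>/stacks/example1/cirros.yaml can contain is shown below. The default flavor, image, key pair, and network names are placeholder assumptions and must match resources that exist in your environment:

heat_template_version: 2016-10-14
description: Template to spawn a cirros instance.

parameters:
  metadata:
    # Passed in by the auto scaling group so alarms can identify this group's instances
    type: json
    default: {}
  image:
    type: string
    description: image used to create the instance
    default: cirros
  flavor:
    type: string
    description: instance flavor to be used
    default: m1.tiny
  key_name:
    type: string
    description: keypair to be used
    default: mykey
  network:
    type: string
    description: project network to attach the instance to
    default: private

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      metadata: { get_param: metadata }
      networks:
        - network: { get_param: network }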
Register the Orchestration resource in ~/stacks/example1/environment.yaml:

resource_registry:
  "OS::Nova::Server::Cirros": ~/stacks/example1/cirros.yaml

Create the stack template. Describe the CPU thresholds to watch for and how many instances to add. An instance group is also created that defines the minimum and maximum number of instances that can participate in this template.
Note: The cpu_util metric was deprecated and removed from Red Hat OpenStack Platform. To obtain the equivalent functionality, use the cumulative cpu metric and an archive policy that includes the rate:mean aggregation method, for example, ceilometer-high-rate and ceilometer-low-rate. You must convert the threshold value from % to ns to use the cpu metric for the CPU utilization alarm. The formula is: time_ns = 1,000,000,000 x {granularity} x {percentage_in_decimal}. For example, for a threshold of 80% with a granularity of 1s, the threshold is 1,000,000,000 x 1 x 0.8 = 800,000,000.0.

Save the following values in ~/stacks/example1/template.yaml:
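The following is a minimal sketch of such a stack template, using the OS::Heat::AutoScalingGroup, OS::Heat::ScalingPolicy, and OS::Aodh::GnocchiAggregationByResourcesAlarm resource types. The resource names scaleup_group, cpu_alarm_high, and cpu_alarm_low match the names used later in this example; the 300-second granularity is an assumption, and the thresholds are the 80% and 5% values converted to ns with the formula from the note above:

heat_template_version: 2016-10-14
description: Example auto scaling group, policies, and alarms

resources:
  scaleup_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 300
      desired_capacity: 1
      max_size: 3
      min_size: 1
      resource:
        type: OS::Nova::Server::Cirros
        properties:
          # Tag each instance so the alarm queries can find this group's metrics
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaleup_group }
      cooldown: 300
      scaling_adjustment: 1

  scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaleup_group }
      cooldown: 300
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale up if CPU > 80%
      metric: cpu
      aggregation_method: rate:mean
      granularity: 300
      evaluation_periods: 1
      # 80% of a 300 s period expressed in ns: 1,000,000,000 x 300 x 0.8
      threshold: 240000000000.0
      resource_type: instance
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scaleup_policy, alarm_url] }
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: { get_param: "OS::stack_id" }

  cpu_alarm_low:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale down if CPU < 5%
      metric: cpu
      aggregation_method: rate:mean
      granularity: 300
      evaluation_periods: 1
      # 5% of a 300 s period expressed in ns: 1,000,000,000 x 300 x 0.05
      threshold: 15000000000.0
      resource_type: instance
      comparison_operator: lt
      alarm_actions:
        - { get_attr: [scaledown_policy, alarm_url] }
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: { get_param: "OS::stack_id" }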
Enter the following command to build the environment and deploy the instance:
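For example, assuming the stack is named example and the templates are in ~/stacks/example1/:

$ openstack stack create -t ~/stacks/example1/template.yaml -e ~/stacks/example1/environment.yaml example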
Orchestration creates the stack and launches a defined minimum number of cirros instances, as defined in the min_size parameter of the scaleup_group definition. Verify that the instances were created successfully:
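For example, a basic check with the openstack client; the instance names depend on your stack and group names:

$ openstack server list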
Orchestration also creates two cpu alarms, which are used to trigger scale-up or scale-down events, as defined in cpu_alarm_high and cpu_alarm_low. Verify that the triggers exist:
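For example, list the alarms with the openstack client; both cpu_alarm_high and cpu_alarm_low should appear in the output:

$ openstack alarm list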
2.2.1. Testing auto scaling up instances
Orchestration can scale instances automatically based on the cpu_alarm_high threshold definition. When the CPU use reaches a value defined in the threshold parameter, another instance starts up to balance the load. The threshold value in the above template.yaml file is set to 80%.
Procedure
Log on to the instance and run several dd commands to generate the load:

$ ssh -i ~/mykey.pem cirros@192.168.122.8
$ sudo dd if=/dev/zero of=/dev/null &
$ sudo dd if=/dev/zero of=/dev/null &
$ sudo dd if=/dev/zero of=/dev/null &

You can expect to have 100% CPU use in the cirros instance. Verify that the alarm has triggered:
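For example, list the alarms again and check the state column:

$ openstack alarm list
# The state of cpu_alarm_high changes to "alarm" while CPU use stays above the threshold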
After approximately 60 seconds, Orchestration starts another instance and adds it into the group. To verify this, enter the following command:
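For example, list the instances again; a second instance should now appear in the output:

$ openstack server list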
After another short period of time, Orchestration auto scales again to three instances. The configuration is set to a maximum of three instances, so it cannot scale any higher. Use the following command to verify that three instances are running:
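The same listing confirms the final count:

$ openstack server list
# With max_size set to 3 in scaleup_group, no further instances are added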
2.2.2. Automatically scaling down instances
Orchestration can automatically scale down instances based on the cpu_alarm_low threshold. In this example, the instances scale down when CPU use is below 5%.
Procedure
Terminate the running dd processes and observe Orchestration begin to scale the instances down:

$ killall dd

When you stop the dd processes, the cpu_alarm_low event triggers. As a result, Orchestration begins to automatically scale down and remove the instances. Verify that the corresponding alarm has triggered:
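For example, list the alarms and check the state of cpu_alarm_low:

$ openstack alarm list
# cpu_alarm_low changes to the "alarm" state when CPU use drops below 5%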
After several minutes, Orchestration continues to reduce the number of instances until it reaches the minimum value defined in the min_size parameter of the scaleup_group definition. In this scenario, the min_size parameter is set to 1.
2.2.3. Troubleshooting the setup
If your environment is not working properly, you can look for errors in the log files and history records.
To view information on state transitions, you can list the stack event records:
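For example, assuming the stack is named example:

$ openstack stack event list example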
To read the alarm history log:
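For example, where <alarm-id> is a placeholder for the alarm UUID shown by openstack alarm list:

$ openstack alarm-history show <alarm-id>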
To see the records of scale-out or scale-down operations that heat collects for the existing stack, use the awk command to parse the heat-engine.log:

$ awk '/Stack UPDATE started/,/Stack CREATE completed successfully/ {print $0}' /var/log/containers/heat/heat-engine.log
To view aodh-related information, examine the evaluator.log:

$ grep -i alarm /var/log/containers/aodh/evaluator.log | grep -i transition