Auto Scaling for Compute
Configure Auto Scaling in Red Hat OpenStack Platform
Abstract
Chapter 1. Configure Auto Scaling for Compute
This guide describes how to automatically scale out your Compute instances in response to heavy system usage. By using pre-defined rules that consider factors such as CPU or memory usage, you can configure Orchestration (heat) to add and remove instances automatically as they are needed.
1.1. Architectural Overview
1.1.1. Orchestration
The core component that provides automatic scaling is Orchestration (heat). Orchestration allows you to define rules using human-readable YAML templates. These rules are applied to evaluate system load based on Telemetry data and determine whether more instances need to be added to the stack. Once the load has dropped, Orchestration can automatically remove the unused instances again.
1.1.2. Telemetry
Telemetry monitors the performance of your OpenStack environment, collecting data on CPU, storage, and memory utilization for instances and physical hosts. Orchestration templates examine Telemetry data to assess whether any pre-defined action should start.
1.1.3. Key Terms
- Stack - A stack represents all the resources necessary to operate an application. It can be as simple as a single instance and its resources, or as complex as multiple instances with all the resource dependencies that comprise a multi-tier application.
- Templates - YAML scripts that define a series of tasks for Heat to execute. For example, it is preferable to use separate templates for certain functions:
- Template File - This is where you define thresholds that Telemetry should respond to, and define the auto scaling group.
- Environment File - Defines the build information for your environment: which flavor and image to use, how the virtual network should be configured, and what software should be installed.
1.2. Example: Auto Scaling Based on CPU Usage
In this example, Orchestration examines Telemetry data, and automatically increases the number of instances in response to high CPU usage. A stack template and environment template are created to define the needed rules and subsequent configuration. This example makes use of existing resources (such as networks), and uses names that are likely to differ in your own environment.
Create the environment template, describing the instance flavor, networking configuration, and image type, and save it in the /home/<user>/stacks/example1/cirros.yaml file. Replace the <user> variable with a real user name.
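A minimal sketch of such a nested server template might look like the following; the image, flavor, key pair, and network names used here (cirros, m1.tiny, mykey, private) are assumptions and must be replaced with resources that exist in your environment:

heat_template_version: 2016-10-14
description: Minimal nested template that defines a single cirros server.

parameters:
  # Metadata is passed in by the auto scaling group so that Telemetry
  # alarms can identify the instances that belong to this stack.
  metadata:
    type: json
  image:
    type: string
    default: cirros
  flavor:
    type: string
    default: m1.tiny
  key_name:
    type: string
    default: mykey
  network:
    type: string
    default: private

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      metadata: { get_param: metadata }
      networks:
        - network: { get_param: network }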
Register the Orchestration resource in ~/stacks/example1/environment.yaml:

resource_registry:
  "OS::Nova::Server::Cirros": ~/stacks/example1/cirros.yaml

Create the stack template, describing the CPU thresholds to watch for and how many instances should be added. An instance group is also created, defining the minimum and maximum number of instances that can participate in this template.
Note: The granularity parameter must be set to match the granularity of the gnocchi cpu_util metric. For more information, refer to this solution article.

Save the stack template in ~/stacks/example1/template.yaml.
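A minimal sketch of such a stack template, assuming the gnocchi cpu_util metric and Aodh aggregation alarms, is shown below. The resource names (scaleup_group, scaleup_policy, scaledown_policy, cpu_alarm_high, cpu_alarm_low), the 80% and 5% thresholds, and the 300-second granularity and cooldown values are illustrative and should be adjusted to your environment:

heat_template_version: 2016-10-14
description: Example auto scaling group, scaling policies, and CPU alarms.

resources:
  scaleup_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 300
      desired_capacity: 1
      max_size: 3
      min_size: 1
      resource:
        type: OS::Nova::Server::Cirros
        properties:
          # Tag each instance so that the alarms below can query its CPU metrics.
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaleup_group }
      cooldown: 300
      scaling_adjustment: 1

  scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaleup_group }
      cooldown: 300
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale up if the mean CPU utilization exceeds 80%
      metric: cpu_util
      aggregation_method: mean
      granularity: 300
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      resource_type: instance
      alarm_actions:
        - { get_attr: [scaleup_policy, alarm_url] }
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: { get_param: "OS::stack_id" }

  cpu_alarm_low:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale down if the mean CPU utilization drops below 5%
      metric: cpu_util
      aggregation_method: mean
      granularity: 300
      evaluation_periods: 1
      threshold: 5
      comparison_operator: lt
      resource_type: instance
      alarm_actions:
        - { get_attr: [scaledown_policy, alarm_url] }
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: { get_param: "OS::stack_id" }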
Run the following OpenStack command to build the environment and deploy the instance:
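For example, assuming the file paths above and an arbitrary stack name of example:

$ cd ~/stacks/example1
$ openstack stack create -t template.yaml -e environment.yaml example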
Orchestration will create the stack and launch the minimum number of cirros instances, as defined in the min_size parameter of the scaleup_group definition. Verify that the instances were created successfully:
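For example, you can list the running instances with the nova list command (or openstack server list); a single cirros instance should be present initially:

$ nova list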
Orchestration also creates two CPU alarms, cpu_alarm_high and cpu_alarm_low, which are used to trigger scale-up or scale-down events. Verify that the triggers exist:
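For example, with the Telemetry alarming (aodh) client available, you can list the alarms and confirm that both appear:

$ openstack alarm list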
1.2.1. Test Automatic Scaling Up Instances
Orchestration can scale instances automatically based on the cpu_alarm_high threshold definition. Once the CPU utilization reaches a value defined in the threshold parameter, another instance is started to balance the load. The threshold value in the above template.yaml file is set to 80%.
Log in to the instance and run several dd commands to generate load:

$ ssh -i ~/mykey.pem cirros@192.168.122.8
$ sudo dd if=/dev/zero of=/dev/null &
$ sudo dd if=/dev/zero of=/dev/null &
$ sudo dd if=/dev/zero of=/dev/null &
Having run the dd commands, you can expect to have 100% CPU utilization in the cirros instance. Verify that the alarm has been triggered:
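For example, list the alarms again and check that the state of cpu_alarm_high has changed to alarm:

$ openstack alarm list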
After some time (approximately 60 seconds), Orchestration starts another instance and adds it to the group. You can verify this with the nova list command:
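For example, the group should now contain two instances:

$ nova list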
After another short period, you will observe that Orchestration has auto scaled again to three instances. The configuration allows a maximum of three instances (the max_size parameter of the scaleup_group definition), so it will not scale any higher. Again, you can verify this with the command mentioned above.
1.2.2. Automatically Scaling Down Instances
Orchestration can also automatically scale down instances based on the cpu_alarm_low threshold. In this example, the instances are scaled down once CPU utilization is below 5%.
Terminate the running dd processes and you will observe Orchestration begin to scale the instances back down:

$ killall dd
Stopping the dd processes causes the cpu_alarm_low event to trigger. As a result, Orchestration begins to automatically scale down and remove the instances. Verify that the corresponding alarm has been triggered.
After a few minutes, Orchestration gradually reduces the number of instances to the minimum value defined in the min_size parameter of the scaleup_group definition. In this scenario, the min_size parameter is set to 1.
1.2.3. Troubleshooting the setup
If your environment is not working properly, you can look for errors in the log files and history records.
To get information on state transitions, you can list the stack event records:
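For example, assuming the stack name example used in the deployment step above:

$ openstack stack event list example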
To read the alarm history log:
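For example, pass an alarm ID reported by openstack alarm list (shown here as a placeholder):

$ openstack alarm-history show <alarm_id>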
To see the records of scale-out or scale-down operations that heat collects for the existing stack, you can use awk to parse the heat-engine.log:

$ awk '/Stack UPDATE started/,/Stack CREATE completed successfully/ {print $0}' /var/log/heat/heat-engine.log
To see the aodh-related information, examine the evaluator.log:

$ grep -i alarm /var/log/aodh/evaluator.log | grep -i transition