Chapter 2. Configuring auto scaling for Compute instances
You can automatically scale out your Compute instances in response to heavy system use. You can use pre-defined rules that consider factors such as CPU or memory use, and configure Orchestration (heat) to add and remove instances automatically as they are needed.
2.1. Overview of auto scaling architecture
2.1.1. Orchestration
The core component that provides auto scaling is Orchestration (heat). Use Orchestration to define rules in human-readable YAML templates. Orchestration applies these rules to Telemetry data to evaluate system load and determine whether more instances need to be added to the stack. When the load drops, Orchestration can automatically remove the unused instances again.
2.1.2. Telemetry
Telemetry monitors the performance of your Red Hat OpenStack Platform environment, collecting data on CPU, storage, and memory utilization for instances and physical hosts. Orchestration templates examine Telemetry data to assess whether any pre-defined action should start.
2.1.3. Key terms
- Stack
- A collection of resources that are necessary to operate an application. A stack can be as simple as a single instance and its resources, or as complex as multiple instances with all the resource dependencies that comprise a multi-tier application.
- Templates
- YAML scripts that define a series of tasks for heat to execute. It is preferable to use separate templates for specific functions:
- Template file: Define thresholds that Telemetry should respond to, and define the auto scaling group.
- Environment file: Define the build information for your environment: which flavor and image to use, how to configure the virtual network, and what software to install.
2.2. Example: Auto scaling based on CPU use
In this example, Orchestration examines Telemetry data and automatically increases the number of instances in response to high CPU use. Create a stack template and an environment template to define the rules and the resulting configuration. This example uses existing resources, such as networks, and uses names that might differ from those in your own environment.
Procedure
Create the environment template, describing the instance flavor, networking configuration, and image type. Save the template in the /home/<user>/stacks/example1/cirros.yaml file. Replace the <user> variable with a real user name.

Register the Orchestration resource in ~/stacks/example1/environment.yaml:

resource_registry:
  "OS::Nova::Server::Cirros": ~/stacks/example1/cirros.yaml
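The cirros.yaml template itself was not reproduced on this page. The following is a minimal sketch of what such a template can contain; the parameter defaults (image, flavor, keypair, and network names) are assumptions for illustration and must match resources that exist in your environment:

```yaml
heat_template_version: 2016-10-14
description: Minimal sketch of a template that spawns a cirros instance.

parameters:
  metadata:
    type: json          # passed in by the auto scaling group
  image:
    type: string
    default: cirros     # assumed image name
  flavor:
    type: string
    default: m1.tiny    # assumed flavor name
  key_name:
    type: string
    default: mykey      # assumed keypair name
  network:
    type: string
    default: internal1  # assumed project network name

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      metadata: { get_param: metadata }
      networks:
        - network: { get_param: network }
```

The metadata parameter is important: the auto scaling group uses it to tag each server so that the Telemetry alarms can match only the instances that belong to this stack.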
Create the stack template. Describe the CPU thresholds to watch for and how many instances to add. The template also defines an instance group that sets the minimum and maximum number of instances that can participate in the stack.

Note: Set the granularity parameter according to the granularity of the Gnocchi cpu_util metric. For more information, see How to create aodh alarms while using gnocchi as ceilometer dispatcher.

Save the stack template in ~/stacks/example1/template.yaml.
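The template was not reproduced on this page; the following sketch shows the general shape such a template.yaml can take. The thresholds and group sizes follow the values described in this example (scale up above 80%, scale down below 5%, one to three instances), but the granularity value and the query are assumptions that you must adapt to your own Telemetry configuration:

```yaml
heat_template_version: 2016-10-14
description: Example auto scaling group for cirros servers (illustrative values).

resources:
  scaleup_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 300             # seconds to wait between scaling operations
      desired_capacity: 1
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server::Cirros   # mapped to cirros.yaml in environment.yaml
        properties:
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaleup_group }
      cooldown: 300
      scaling_adjustment: 1     # add one instance per scale-up event

  scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: scaleup_group }
      cooldown: 300
      scaling_adjustment: -1    # remove one instance per scale-down event

  cpu_alarm_high:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale up if average CPU use is above 80%
      metric: cpu_util
      aggregation_method: mean
      granularity: 300          # must match the Gnocchi cpu_util granularity
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      resource_type: instance
      # Match only the instances that belong to this scaling group.
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: { get_param: "OS::stack_id" }
      alarm_actions:
        - { get_attr: [scaleup_policy, alarm_url] }

  cpu_alarm_low:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale down if average CPU use is below 5%
      metric: cpu_util
      aggregation_method: mean
      granularity: 300
      evaluation_periods: 1
      threshold: 5
      comparison_operator: lt
      resource_type: instance
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: { get_param: "OS::stack_id" }
      alarm_actions:
        - { get_attr: [scaledown_policy, alarm_url] }
```

Each alarm signals its scaling policy through the policy's alarm_url attribute, and each policy adjusts the capacity of scaleup_group by one instance per event.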
Build the environment and deploy the instance with the openstack stack create command, supplying both the stack template and the environment file.

Orchestration creates the stack and launches the minimum number of cirros instances, as defined in the min_size parameter of the scaleup_group definition. Verify that the instances were created successfully by listing the servers in the project.

Orchestration also creates two CPU alarms that can trigger scale-up or scale-down events, as defined in cpu_alarm_high and cpu_alarm_low. Verify that the triggers exist by listing the alarms.
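The exact commands were elided from this page; a typical sequence, using a hypothetical stack name of example, looks like the following:

```console
$ openstack stack create -t template.yaml -e environment.yaml example
$ openstack server list
$ openstack alarm list
```

The -t and -e options pass the stack template and the environment file. openstack server list shows the cirros instances that the group launched, and openstack alarm list shows the cpu_alarm_high and cpu_alarm_low alarms together with their current state.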
2.2.1. Testing automatic scaling up instances
Orchestration can scale instances automatically based on the cpu_alarm_high threshold definition. When CPU use reaches the value defined in the threshold parameter, another instance starts up to balance the load. The threshold value in the template.yaml file above is set to 80%.
Procedure
Log in to the instance and run several dd commands to generate load:

$ ssh -i ~/mykey.pem cirros@192.168.122.8
$ sudo dd if=/dev/zero of=/dev/null &
$ sudo dd if=/dev/zero of=/dev/null &
$ sudo dd if=/dev/zero of=/dev/null &

You can expect close to 100% CPU utilization in the cirros instance. Verify that the alarm has triggered by checking the alarm state.
After approximately 60 seconds, Orchestration starts another instance and adds it to the group. To verify this, list the instances in the stack again.
After a short period of time, Orchestration scales up again, to three instances. Because the configuration sets a maximum of three instances, it cannot scale any higher. List the instances once more to verify that there are now three.
2.2.2. Automatically scaling down instances
Orchestration can automatically scale down instances based on the cpu_alarm_low
threshold. In this example, the instances scale down when CPU use drops below 5%.
Procedure
Terminate the running dd processes and observe Orchestration begin to scale the instances down:

$ killall dd

When you stop the dd processes, the cpu_alarm_low alarm triggers. As a result, Orchestration begins to automatically scale down and remove the instances. Verify that the corresponding alarm has triggered by checking the alarm state.

After several minutes, Orchestration gradually reduces the number of instances to the minimum value defined in the min_size parameter of the scaleup_group definition. In this scenario, the min_size parameter is set to 1.
2.2.3. Troubleshooting the setup
If your environment is not working properly, you can look for errors in the log files and history records.
To view information about state transitions, list the stack event records.

To read the alarm history log, query the history of the alarm in question.

To view the records of scale-out or scale-down operations that heat collects for the existing stack, use the awk command to parse the heat-engine.log file:
$ awk '/Stack UPDATE started/,/Stack CREATE completed successfully/ {print $0}' /var/log/heat/heat-engine.log
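To see what the awk range pattern selects, you can run the same command against a small fabricated log. The log lines below are illustrative, not real heat output:

```shell
# Fabricate a few lines resembling heat-engine.log entries.
cat > /tmp/heat-engine.log <<'EOF'
INFO heat.engine.stack Stack UPDATE started
INFO heat.engine.resource scaleup_group: state changed
INFO heat.engine.stack Stack CREATE completed successfully
INFO heat.engine.service unrelated message
EOF

# The range pattern /start/,/end/ prints every line from the first
# pattern match through the second, and skips everything outside it.
awk '/Stack UPDATE started/,/Stack CREATE completed successfully/ {print $0}' /tmp/heat-engine.log
```

Only the three lines between the UPDATE and CREATE markers are printed; the trailing unrelated message is outside the range and is skipped.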
To view aodh-related information, examine the evaluator.log file:
$ grep -i alarm /var/log/aodh/evaluator.log | grep -i transition
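You can try the same pipeline against a fabricated evaluator log to see how the two filters combine. The log lines below are illustrative, not real aodh output:

```shell
# Fabricate aodh evaluator log lines.
cat > /tmp/evaluator.log <<'EOF'
INFO aodh.evaluator alarm cpu_alarm_high transition to state alarm
INFO aodh.evaluator evaluating alarm cpu_alarm_low
INFO aodh.service unrelated message
EOF

# The first grep keeps lines mentioning alarms; the second narrows the
# result to state-transition messages only.
grep -i alarm /tmp/evaluator.log | grep -i transition
```

Only the cpu_alarm_high line survives both filters: the cpu_alarm_low line mentions an alarm but no transition, and the last line mentions neither.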