Chapter 3. Using the heat service for autoscaling


After you deploy the services required to provide autoscaling in the overcloud, you must configure the overcloud environment so that the Orchestration service (heat) can manage instances for autoscaling.

3.1. Creating the generic archive policy for autoscaling

After you deploy the services for autoscaling in the overcloud, create a generic archive policy that defines how the metrics used by the autoscaling alarms are aggregated and how long they are retained.

Procedure

  1. Log in to your environment as the stack user.
  2. For director environments, source the overcloudrc overcloud credentials file:

    $ source ~/overcloudrc
  3. Create the archive policy defined in $HOME/templates/autoscaling/parameters-autoscaling.yaml:

    $ openstack metric archive-policy create generic \
      --back-window 0 \
      --definition timespan:'4:00:00',granularity:'0:01:00',points:240 \
      --aggregation-method 'rate:mean' \
      --aggregation-method 'mean'
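
    The values in the definition are related: the number of points is the timespan divided by the granularity, so 4 hours of data retained at 1-minute granularity yields 240 aggregated points. As a quick sanity check, you can compute this in the shell before you create the policy:

    $ # 4:00:00 timespan at 0:01:00 granularity = 240 points
    $ echo $(( (4 * 60 * 60) / 60 ))
    240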

Verification

  • Verify that the archive policy was created:

    $ openstack metric archive-policy show generic
    +---------------------+--------------------------------------------------------+
    | Field               | Value                                                  |
    +---------------------+--------------------------------------------------------+
    | aggregation_methods | mean, rate:mean                                        |
    | back_window         | 0                                                      |
    | definition          | - timespan: 4:00:00, granularity: 0:01:00, points: 240 |
    | name                | generic                                                |
    +---------------------+--------------------------------------------------------+

3.2. Configuring a heat template for automatically scaling instances

You can configure an Orchestration service (heat) template to create the instances, and to configure the alarms that scale the instances up or down when they are triggered.

Note

This procedure uses example values that you must change to suit your environment.

Procedure

  1. Log in to your environment as the stack user and source the overcloud credentials file:

    $ source ~/overcloudrc
  2. Create a directory to hold the instance configuration for the autoscaling group:

    $ mkdir -p $HOME/templates/autoscaling/vnf/
  3. Create an instance configuration template, for example, $HOME/templates/autoscaling/vnf/instance.yaml.
  4. Add the following configuration to your instance.yaml file:

    $ cat <<EOF > $HOME/templates/autoscaling/vnf/instance.yaml
    heat_template_version: wallaby
    description: Template to control scaling of VNF instance
    
    parameters:
      metadata:
        type: json
      image:
        type: string
        description: image used to create instance
        default: fedora36
      flavor:
        type: string
        description: instance flavor to be used
        default: m1.small
      key_name:
        type: string
        description: keypair to be used
        default: default
      network:
        type: string
        description: project network to attach instance to
        default: private
      external_network:
        type: string
        description: network used for floating IPs
        default: public
    
    resources:
      vnf:
        type: OS::Nova::Server
        properties:
          flavor: {get_param: flavor}
          key_name: {get_param: key_name}
          image: { get_param: image }
          metadata: { get_param: metadata }
          networks:
            - port: { get_resource: port }
    
      port:
        type: OS::Neutron::Port
        properties:
          network: {get_param: network}
          security_groups:
            - basic
    
      floating_ip:
        type: OS::Neutron::FloatingIP
        properties:
          floating_network: {get_param: external_network }
    
      floating_ip_assoc:
        type: OS::Neutron::FloatingIPAssociation
        properties:
          floatingip_id: { get_resource: floating_ip }
          port_id: { get_resource: port }
    EOF
    • The parameters section defines the custom parameters for this new resource type. The defaults, such as the fedora36 image, the m1.small flavor, the default keypair, the private and public networks, and the basic security group, are example values; change them to match resources that exist in your environment.
    • The vnf entry in the resources section defines the custom sub-resource that the OS::Heat::AutoScalingGroup refers to through the resource registry, in this example as OS::Nova::Server::VNF.
  5. Create the resource to reference in the heat template:

    $ cat <<EOF > $HOME/templates/autoscaling/vnf/resources.yaml
    resource_registry:
      "OS::Nova::Server::VNF": $HOME/templates/autoscaling/vnf/instance.yaml
    EOF
  6. Create the deployment template for heat to control instance scaling:

    $ cat <<EOF > $HOME/templates/autoscaling/vnf/template.yaml
    heat_template_version: wallaby
    description:  Example auto scale group, policy and alarm
    resources:
      scaleup_group:
        type: OS::Heat::AutoScalingGroup
        properties:
          max_size: 3
          min_size: 1
          #desired_capacity: 1
          resource:
            type: OS::Nova::Server::VNF
            properties:
              metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
    
      scaleup_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: scaleup_group }
          cooldown: 60
          scaling_adjustment: 1
    
      scaledown_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: scaleup_group }
          cooldown: 60
          scaling_adjustment: -1
    
      cpu_alarm_high:
        type: OS::Aodh::GnocchiAggregationByResourcesAlarm
        properties:
          description: Scale up instance if CPU > 50%
          metric: cpu
          aggregation_method: rate:mean
          granularity: 60
          evaluation_periods: 3
          threshold: 60000000000.0
          resource_type: instance
          comparison_operator: gt
          alarm_actions:
            - str_replace:
                template: trust+url
                params:
                  url: {get_attr: [scaleup_policy, signal_url]}
          query:
            list_join:
              - ''
              - - {'=': {server_group: {get_param: "OS::stack_id"}}}
    
      cpu_alarm_low:
        type: OS::Aodh::GnocchiAggregationByResourcesAlarm
        properties:
          description: Scale down instance if CPU < 20%
          metric: cpu
          aggregation_method: rate:mean
          granularity: 60
          evaluation_periods: 3
          threshold: 24000000000.0
          resource_type: instance
          comparison_operator: lt
          alarm_actions:
            - str_replace:
                template: trust+url
                params:
                  url: {get_attr: [scaledown_policy, signal_url]}
          query:
            list_join:
              - ''
              - - {'=': {server_group: {get_param: "OS::stack_id"}}}
    
    outputs:
      scaleup_policy_signal_url:
        value: {get_attr: [scaleup_policy, alarm_url]}
    
      scaledown_policy_signal_url:
        value: {get_attr: [scaledown_policy, alarm_url]}
    EOF
    Note

    Outputs on the stack are informational and are not referenced in the ScalingPolicy or AutoScalingGroup. To view the outputs, use the openstack stack show <stack_name> command.
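
    Optionally, before you create the stack, you can check that the template and the resource registry parse correctly. This is a minimal check, assuming that the python-heatclient plugin for the openstack client is installed:

    $ openstack orchestration template validate \
      -t $HOME/templates/autoscaling/vnf/template.yaml \
      -e $HOME/templates/autoscaling/vnf/resources.yaml

    If validation fails, the command reports an error; otherwise it prints the parsed description and parameters of the template.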

3.3. Creating the stack deployment for autoscaling

Create the stack deployment for the worked VNF autoscaling example.

Procedure

  1. Log in to the undercloud host and source your overcloud administrator credentials file, for example, overcloudrc:

    (undercloud)$ source ~/overcloudrc
  2. Launch the ephemeral Heat process, which is required before you can use the openstack stack commands:

    (undercloud)$ openstack tripleo launch heat --heat-dir /home/stack/overcloud-deploy/overcloud/heat-launcher --restore-db
    (undercloud)$ export OS_CLOUD=heat
  3. Create the stack:

    $ openstack stack create \
      -t $HOME/templates/autoscaling/vnf/template.yaml \
      -e $HOME/templates/autoscaling/vnf/resources.yaml \
      vnf
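
    Stack creation is asynchronous. To follow the progress of the resources as they are created, you can list the stack events:

    $ openstack stack event list vnf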

Verification

  1. Verify that the stack was created successfully:

    $ openstack stack show vnf -c id -c stack_status
    +--------------+--------------------------------------+
    | Field        | Value                                |
    +--------------+--------------------------------------+
    | id           | cb082cbd-535e-4779-84b0-98925e103f5e |
    | stack_status | CREATE_COMPLETE                      |
    +--------------+--------------------------------------+
  2. Verify that the stack resources were created, including alarms, scaling policies, and the autoscaling group:

    $ export STACK_ID=$(openstack stack show vnf -c id -f value)
    $ openstack stack resource list $STACK_ID
    +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
    | resource_name    | physical_resource_id                 | resource_type                                | resource_status | updated_time         |
    +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
    | cpu_alarm_high   | d72d2e0d-1888-4f89-b888-02174c48e463 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    | scaleup_policy   | 1c4446b7242e479090bef4b8075df9d4     | OS::Heat::ScalingPolicy                      | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    | cpu_alarm_low    | b9c04ef4-8b57-4730-af03-1a71c3885914 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    | scaledown_policy | a5af7faf5a1344849c3425cb2c5f18db     | OS::Heat::ScalingPolicy                      | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    | scaleup_group    | 9609f208-6d50-4b8f-836e-b0222dc1e0b1 | OS::Heat::AutoScalingGroup                   | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
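
    You can also inspect a single resource in more detail. For example, showing the autoscaling group returns its attributes and its physical_resource_id, which for an autoscaling group is the ID of the nested stack that contains the group members:

    $ openstack stack resource show $STACK_ID scaleup_group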
  3. Verify that an instance was launched by the stack creation:

    $ openstack server list --long | grep $STACK_ID
    
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7 | vn-dvaxcqb-6bqh2qd2fpif-hicmkm5dzjug-vnf-ywrydc5wqjjc | ACTIVE | None       | Running     | private=192.168.100.61, 192.168.25.99 | fedora36   | a6aa7b11-1b99-4c62-a43b-d0b7c77f4b72 | m1.small    | 5cd46fec-50c2-43d5-89e8-ed3fa7660852 | nova              | host-80.localdomain | metering.server_group='cb082cbd-535e-4779-84b0-98925e103f5e' |
  4. Verify that the alarms were created for the stack:

    1. List the alarm IDs. The alarms might remain in the insufficient data state for a period of time. The minimum period of time is the polling interval of the data collection and the data storage granularity setting:

      $ openstack alarm list
      +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+
      | alarm_id                             | type                                       | name                            | state | severity | enabled |
      +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+
      | b9c04ef4-8b57-4730-af03-1a71c3885914 | gnocchi_aggregation_by_resources_threshold | vnf-cpu_alarm_low-pve5eal6ykst  | alarm | low      | True    |
      | d72d2e0d-1888-4f89-b888-02174c48e463 | gnocchi_aggregation_by_resources_threshold | vnf-cpu_alarm_high-5xx7qvfsurxe | ok    | low      | True    |
      +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+
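
      If an alarm remains in the insufficient data state for longer than expected, you can inspect its full definition, including the query and threshold that it evaluates, for example by using the cpu_alarm_high ID from the output above:

      $ openstack alarm show d72d2e0d-1888-4f89-b888-02174c48e463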
    2. List the resources for the stack and note the physical_resource_id values for the cpu_alarm_high and cpu_alarm_low resources:

      $ openstack stack resource list $STACK_ID
      +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
      | resource_name    | physical_resource_id                 | resource_type                                | resource_status | updated_time         |
      +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
      | cpu_alarm_high   | d72d2e0d-1888-4f89-b888-02174c48e463 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      | scaleup_policy   | 1c4446b7242e479090bef4b8075df9d4     | OS::Heat::ScalingPolicy                      | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      | cpu_alarm_low    | b9c04ef4-8b57-4730-af03-1a71c3885914 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      | scaledown_policy | a5af7faf5a1344849c3425cb2c5f18db     | OS::Heat::ScalingPolicy                      | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      | scaleup_group    | 9609f208-6d50-4b8f-836e-b0222dc1e0b1 | OS::Heat::AutoScalingGroup                   | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+

      The value of the physical_resource_id must match the alarm_id in the output of the openstack alarm list command.

  5. Verify that metric resources exist for the stack. Set the value of the server_group query to the stack ID:

    $ openstack metric resource search --sort-column launched_at -c id -c display_name -c launched_at -c deleted_at --type instance server_group="$STACK_ID"
    +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+
    | id                                   | display_name                                          | launched_at                      | deleted_at |
    +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7 | vn-dvaxcqb-6bqh2qd2fpif-hicmkm5dzjug-vnf-ywrydc5wqjjc | 2022-10-06T23:09:28.496566+00:00 | None       |
    +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+
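
    To see which metrics are stored for the instance, you can show the metric resource itself, using the instance ID from the output above:

    $ openstack metric resource show 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7

    The metrics field of the output lists the metrics attached to the instance, including the cpu metric that the alarms evaluate.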
  6. Verify that measurements exist for the instance resources created through the stack:

    $ openstack metric aggregates --resource-type instance --sort-column timestamp '(metric cpu rate:mean)' server_group="$STACK_ID"
    +----------------------------------------------------+---------------------------+-------------+---------------+
    | name                                               | timestamp                 | granularity |         value |
    +----------------------------------------------------+---------------------------+-------------+---------------+
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:11:00+00:00 |        60.0 | 69470000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:12:00+00:00 |        60.0 | 81060000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:13:00+00:00 |        60.0 | 82840000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:14:00+00:00 |        60.0 | 66660000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:15:00+00:00 |        60.0 |  7360000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:16:00+00:00 |        60.0 |  3150000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:17:00+00:00 |        60.0 |  2760000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:18:00+00:00 |        60.0 |  3470000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:19:00+00:00 |        60.0 |  2770000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:20:00+00:00 |        60.0 |  2700000000.0 |
    +----------------------------------------------------+---------------------------+-------------+---------------+
  7. Remove the ephemeral Heat process from the undercloud:

    (undercloud)$ openstack tripleo launch heat --kill