
Chapter 3. Using the heat service for autoscaling


After you deploy the services required to provide autoscaling in the overcloud, you must configure the overcloud environment so that the Orchestration service (heat) can manage instances for autoscaling.

Prerequisites

3.1. Creating the generic archive policy for autoscaling

After you deploy the services for autoscaling in the overcloud, create a generic archive policy that defines how the Time series database (gnocchi) service aggregates and retains the metrics that are used for autoscaling.

Prerequisites

Procedure

  1. Log in to your environment as the stack user.
  2. For standalone environments, set the OS_CLOUD environment variable:

    [stack@standalone ~]$ export OS_CLOUD=standalone
  3. For director environments, source the stackrc file:

    [stack@undercloud ~]$ source ~/stackrc
  4. Create the archive policy defined in $HOME/templates/autoscaling/parameters-autoscaling.yaml:

    $ openstack metric archive-policy create generic \
      --back-window 0 \
      --definition timespan:'4:00:00',granularity:'0:01:00',points:240 \
      --aggregation-method 'rate:mean' \
      --aggregation-method 'mean'

Verification

  • Verify that the archive policy was created:

    $ openstack metric archive-policy show generic
    +---------------------+--------------------------------------------------------+
    | Field               | Value                                                  |
    +---------------------+--------------------------------------------------------+
    | aggregation_methods | mean, rate:mean                                        |
    | back_window         | 0                                                      |
    | definition          | - timespan: 4:00:00, granularity: 0:01:00, points: 240 |
    | name                | generic                                                |
    +---------------------+--------------------------------------------------------+
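
The number of points in this policy follows from the timespan and granularity: 4 hours of retention at 1-minute granularity yields 4 × 60 = 240 aggregated points, and both the mean and rate:mean aggregation methods that the autoscaling alarms use are stored. Optionally, if you want new metrics that match a name pattern to use this policy automatically, you can create an archive policy rule. The following command is a sketch rather than part of the procedure; the rule name and metric pattern are example values:

    # Optional example: store new cpu* metrics with the generic policy
    $ openstack metric archive-policy-rule create autoscaling-rule \
      --archive-policy-name generic \
      --metric-pattern 'cpu*'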

3.2. Configuring a heat template for automatically scaling instances

You can configure an Orchestration service (heat) template to create the instances, and to define alarms that add or remove instances when they are triggered.

Note

This procedure uses example values that you must change to suit your environment.

Prerequisites

Procedure

  1. Log in to your environment as the stack user.

    $ source ~/stackrc
  2. Create a directory to hold the instance configuration for the autoscaling group:

    $ mkdir -p $HOME/templates/autoscaling/vnf/
  3. Create an instance configuration template, for example, $HOME/templates/autoscaling/vnf/instance.yaml.
  4. Add the following configuration to your instance.yaml file:

    $ cat <<EOF > $HOME/templates/autoscaling/vnf/instance.yaml
    heat_template_version: wallaby
    description: Template to control scaling of VNF instance
    
    parameters:
      metadata:
        type: json
      image:
        type: string
        description: image used to create instance
        default: fedora36
      flavor:
        type: string
        description: instance flavor to be used
        default: m1.small
      key_name:
        type: string
        description: keypair to be used
        default: default
      network:
        type: string
        description: project network to attach instance to
        default: private
      external_network:
        type: string
        description: network used for floating IPs
        default: public
    
    resources:
      vnf:
        type: OS::Nova::Server
        properties:
          flavor: {get_param: flavor}
          key_name: {get_param: key_name}
          image: { get_param: image }
          metadata: { get_param: metadata }
          networks:
            - port: { get_resource: port }
    
      port:
        type: OS::Neutron::Port
        properties:
          network: {get_param: network}
          security_groups:
            - basic
    
      floating_ip:
        type: OS::Neutron::FloatingIP
        properties:
          floating_network: {get_param: external_network }
    
      floating_ip_assoc:
        type: OS::Neutron::FloatingIPAssociation
        properties:
          floatingip_id: { get_resource: floating_ip }
          port_id: { get_resource: port }
    EOF
    • The parameters section defines the custom parameters for this new resource type, such as the image, flavor, key pair, and networks that each instance uses.
    • The vnf resource in the resources section defines the server that this custom resource type creates. The OS::Heat::AutoScalingGroup refers to this template through the registered resource type, for example, OS::Nova::Server::VNF.
  5. Create the resource to reference in the heat template:

    $ cat <<EOF > $HOME/templates/autoscaling/vnf/resources.yaml
    resource_registry:
      "OS::Nova::Server::VNF": $HOME/templates/autoscaling/vnf/instance.yaml
    EOF
  6. Create the deployment template for heat to control instance scaling:

    $ cat <<EOF > $HOME/templates/autoscaling/vnf/template.yaml
    heat_template_version: wallaby
    description:  Example auto scale group, policy and alarm
    resources:
      scaleup_group:
        type: OS::Heat::AutoScalingGroup
        properties:
          max_size: 3
          min_size: 1
          #desired_capacity: 1
          resource:
            type: OS::Nova::Server::VNF
            properties:
              metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
    
      scaleup_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: scaleup_group }
          cooldown: 60
          scaling_adjustment: 1
    
      scaledown_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: scaleup_group }
          cooldown: 60
          scaling_adjustment: -1
    
      cpu_alarm_high:
        type: OS::Aodh::GnocchiAggregationByResourcesAlarm
        properties:
          description: Scale up instance if CPU > 50%
          metric: cpu
          aggregation_method: rate:mean
          granularity: 60
          evaluation_periods: 3
          threshold: 60000000000.0
          resource_type: instance
          comparison_operator: gt
          alarm_actions:
            - str_replace:
                template: trust+url
                params:
                  url: {get_attr: [scaleup_policy, signal_url]}
          query:
            list_join:
              - ''
              - - {'=': {server_group: {get_param: "OS::stack_id"}}}
    
      cpu_alarm_low:
        type: OS::Aodh::GnocchiAggregationByResourcesAlarm
        properties:
          description: Scale down instance if CPU < 20%
          metric: cpu
          aggregation_method: rate:mean
          granularity: 60
          evaluation_periods: 3
          threshold: 24000000000.0
          resource_type: instance
          comparison_operator: lt
          alarm_actions:
            - str_replace:
                template: trust+url
                params:
                  url: {get_attr: [scaledown_policy, signal_url]}
          query:
            list_join:
              - ''
              - - {'=': {server_group: {get_param: "OS::stack_id"}}}
    
    outputs:
      scaleup_policy_signal_url:
        value: {get_attr: [scaleup_policy, alarm_url]}
    
      scaledown_policy_signal_url:
        value: {get_attr: [scaledown_policy, alarm_url]}
    EOF
    Note

    Outputs on the stack are informational and are not referenced in the ScalingPolicy or AutoScalingGroup. To view the outputs, use the openstack stack show <stack_name> command.
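
The alarm thresholds in this template are expressed in nanoseconds of CPU time per granularity period: the cpu metric is a cumulative counter of CPU time, and the rate:mean aggregation at a granularity of 60 reports approximately the CPU time consumed per 60-second interval. Assuming the two-vCPU m1.small flavor used in this example, full utilization corresponds to 2 × 60 × 1,000,000,000 = 120,000,000,000 ns per interval, so the scale-up threshold of 60000000000.0 represents about 50% CPU and the scale-down threshold of 24000000000.0 represents about 20% CPU. Adjust the thresholds if your flavor has a different vCPU count.

You can override the defaults defined in instance.yaml, such as the image, flavor, or key pair, without editing the template by supplying a heat environment file that sets parameter_defaults, which also apply to nested stacks such as the scaled instances. The following file is an optional sketch and is not part of this procedure; the values shown only repeat the existing defaults:

    $ cat <<EOF > $HOME/templates/autoscaling/vnf/parameters.yaml
    parameter_defaults:
      image: fedora36
      flavor: m1.small
      key_name: default
      network: private
      external_network: public
    EOF

If you create such a file, pass it to the deployment with an additional -e option when you create the stack in Section 3.4.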

3.3. Preparing the standalone deployment for autoscaling

To test the deployment of a stack for an autoscaled instance in a pre-production environment, you can deploy the stack in a standalone environment. Use this procedure to test the deployment with a standalone environment; in a production environment, the deployment commands are different.

Procedure

  1. Log in to your environment as the stack user.
  2. Set the OS_CLOUD environment variable:

    [stack@standalone ~]$ export OS_CLOUD=standalone
  3. Configure the cloud so that you can deploy a simulated VNF workload that uses the Fedora 36 cloud image with attached private and public network interfaces. The following values provide a working configuration for a standalone deployment:

    $ export GATEWAY=192.168.25.1
    $ export STANDALONE_HOST=192.168.25.2
    $ export PUBLIC_NETWORK_CIDR=192.168.25.0/24
    $ export PRIVATE_NETWORK_CIDR=192.168.100.0/24
    $ export PUBLIC_NET_START=192.168.25.3
    $ export PUBLIC_NET_END=192.168.25.254
    $ export DNS_SERVER=1.1.1.1
  4. Create the flavor:

    $ openstack flavor create --ram 2048 --disk 10 --vcpus 2 --public m1.small
  5. Download and import the Fedora 36 x86_64 cloud image:

    $ curl -L 'https://download.fedoraproject.org/pub/fedora/linux/releases/36/Cloud/x86_64/images/Fedora-Cloud-Base-36-1.5.x86_64.qcow2' -o $HOME/fedora36.qcow2
    $ openstack image create fedora36 --container-format bare --disk-format qcow2 --public --file $HOME/fedora36.qcow2
  6. Generate and import the public key:

    $ ssh-keygen -f $HOME/.ssh/id_rsa -q -N "" -t rsa -b 2048
    $ openstack keypair create --public-key $HOME/.ssh/id_rsa.pub default
  7. Create the basic security group that allows SSH, ICMP, and DNS protocols:

    $ openstack security group create basic
    $ openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
    $ openstack security group rule create --protocol icmp basic
    $ openstack security group rule create --protocol udp --dst-port 53:53 basic
  8. Create the external network (public):

    $ openstack network create --external --provider-physical-network datacentre --provider-network-type flat public
  9. Create the private network and the public and private subnets:

    $ openstack network create --internal private
    $ openstack subnet create public-net \
      --subnet-range $PUBLIC_NETWORK_CIDR \
      --no-dhcp \
      --gateway $GATEWAY \
      --allocation-pool start=$PUBLIC_NET_START,end=$PUBLIC_NET_END \
      --network public
    $ openstack subnet create private-net \
      --subnet-range $PRIVATE_NETWORK_CIDR \
      --network private
  10. Create the router:

    $ openstack router create vrouter
    $ openstack router set vrouter --external-gateway public
    $ openstack router add subnet vrouter private-net
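
Before you create the stack, you can optionally confirm that the networks, router, and flavor that you created exist. This check is a sketch and is not part of the documented procedure:

    $ openstack network list -c Name -c Subnets
    $ openstack router show vrouter -c status -c external_gateway_info
    $ openstack flavor show m1.small -c vcpus -c ram -c disk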

Additional resources

3.4. Creating the stack deployment for autoscaling

Create the stack deployment for the VNF autoscaling worked example.

Procedure

  1. Create the stack:

    $ openstack stack create \
      -t $HOME/templates/autoscaling/vnf/template.yaml \
      -e $HOME/templates/autoscaling/vnf/resources.yaml \
      vnf

Verification

  1. Verify that the stack was created successfully:

    $ openstack stack show vnf -c id -c stack_status
    +--------------+--------------------------------------+
    | Field        | Value                                |
    +--------------+--------------------------------------+
    | id           | cb082cbd-535e-4779-84b0-98925e103f5e |
    | stack_status | CREATE_COMPLETE                      |
    +--------------+--------------------------------------+
  2. Verify that the stack resources were created, including alarms, scaling policies, and the autoscaling group:

    $ export STACK_ID=$(openstack stack show vnf -c id -f value)
    $ openstack stack resource list $STACK_ID
    +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
    | resource_name    | physical_resource_id                 | resource_type                                | resource_status | updated_time         |
    +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
    | cpu_alarm_high   | d72d2e0d-1888-4f89-b888-02174c48e463 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    | scaleup_policy   | 1c4446b7242e479090bef4b8075df9d4     | OS::Heat::ScalingPolicy                      | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    | cpu_alarm_low    | b9c04ef4-8b57-4730-af03-1a71c3885914 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    | scaledown_policy | a5af7faf5a1344849c3425cb2c5f18db     | OS::Heat::ScalingPolicy                      | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    | scaleup_group    | 9609f208-6d50-4b8f-836e-b0222dc1e0b1 | OS::Heat::AutoScalingGroup                   | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
    +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
  3. Verify that an instance was launched by the stack creation:

    $ openstack server list --long | grep $STACK_ID
    
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7 | vn-dvaxcqb-6bqh2qd2fpif-hicmkm5dzjug-vnf-ywrydc5wqjjc | ACTIVE | None       | Running     | private=192.168.100.61, 192.168.25.99 | fedora36   | a6aa7b11-1b99-4c62-a43b-d0b7c77f4b72 | m1.small    | 5cd46fec-50c2-43d5-89e8-ed3fa7660852 | nova              | standalone-80.localdomain | metering.server_group='cb082cbd-535e-4779-84b0-98925e103f5e' |
  4. Verify that the alarms were created for the stack:

    1. List the alarm IDs. The alarms might remain in the insufficient data state for a period of time after stack creation. The minimum period is determined by the polling interval of the data collection and the granularity setting of the data storage:

      $ openstack alarm list
      +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+
      | alarm_id                             | type                                       | name                            | state | severity | enabled |
      +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+
      | b9c04ef4-8b57-4730-af03-1a71c3885914 | gnocchi_aggregation_by_resources_threshold | vnf-cpu_alarm_low-pve5eal6ykst  | alarm | low      | True    |
      | d72d2e0d-1888-4f89-b888-02174c48e463 | gnocchi_aggregation_by_resources_threshold | vnf-cpu_alarm_high-5xx7qvfsurxe | ok    | low      | True    |
      +--------------------------------------+--------------------------------------------+---------------------------------+-------+----------+---------+
    2. List the resources for the stack and note the physical_resource_id values for the cpu_alarm_high and cpu_alarm_low resources:

      $ openstack stack resource list $STACK_ID
      +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
      | resource_name    | physical_resource_id                 | resource_type                                | resource_status | updated_time         |
      +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+
      | cpu_alarm_high   | d72d2e0d-1888-4f89-b888-02174c48e463 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      | scaleup_policy   | 1c4446b7242e479090bef4b8075df9d4     | OS::Heat::ScalingPolicy                      | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      | cpu_alarm_low    | b9c04ef4-8b57-4730-af03-1a71c3885914 | OS::Aodh::GnocchiAggregationByResourcesAlarm | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      | scaledown_policy | a5af7faf5a1344849c3425cb2c5f18db     | OS::Heat::ScalingPolicy                      | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      | scaleup_group    | 9609f208-6d50-4b8f-836e-b0222dc1e0b1 | OS::Heat::AutoScalingGroup                   | CREATE_COMPLETE | 2022-10-06T23:08:37Z |
      +------------------+--------------------------------------+----------------------------------------------+-----------------+----------------------+

      The value of the physical_resource_id must match the alarm_id in the output of the openstack alarm list command.

  5. Verify that metric resources exist for the stack. Set the value of the server_group query to the stack ID:

    $ openstack metric resource search --sort-column launched_at -c id -c display_name -c launched_at -c deleted_at --type instance server_group="$STACK_ID"
    +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+
    | id                                   | display_name                                          | launched_at                      | deleted_at |
    +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7 | vn-dvaxcqb-6bqh2qd2fpif-hicmkm5dzjug-vnf-ywrydc5wqjjc | 2022-10-06T23:09:28.496566+00:00 | None       |
    +--------------------------------------+-------------------------------------------------------+----------------------------------+------------+
  6. Verify that measurements exist for the instance resources created through the stack:

    $ openstack metric aggregates --resource-type instance --sort-column timestamp '(metric cpu rate:mean)' server_group="$STACK_ID"
    +----------------------------------------------------+---------------------------+-------------+---------------+
    | name                                               | timestamp                 | granularity |         value |
    +----------------------------------------------------+---------------------------+-------------+---------------+
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:11:00+00:00 |        60.0 | 69470000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:12:00+00:00 |        60.0 | 81060000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:13:00+00:00 |        60.0 | 82840000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:14:00+00:00 |        60.0 | 66660000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:15:00+00:00 |        60.0 |  7360000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:16:00+00:00 |        60.0 |  3150000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:17:00+00:00 |        60.0 |  2760000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:18:00+00:00 |        60.0 |  3470000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:19:00+00:00 |        60.0 |  2770000000.0 |
    | 62e1b27c-8d9d-44a5-a0f0-80e7e6d437c7/cpu/rate:mean | 2022-10-06T23:20:00+00:00 |        60.0 |  2700000000.0 |
    +----------------------------------------------------+---------------------------+-------------+---------------+
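
To inspect an individual alarm, for example to confirm the threshold, aggregation method, and query that heat generated from the template, show the alarm by its ID. The following command is a sketch; the ID shown is the cpu_alarm_high ID from the example output above, so substitute the ID from your own alarm list. You can also review alarm state transitions with the openstack alarm-history show <alarm_id> command:

    $ openstack alarm show d72d2e0d-1888-4f89-b888-02174c48e463 \
      -c state -c threshold -c aggregation_method -c granularity -c query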