Chapter 4. Autoscaling by using the Orchestration service (heat)
After you deploy the services that autoscaling requires, you must configure your environment so that the Orchestration service (heat) can manage the autoscaled instances.
Prerequisites
- You have deployed a Red Hat OpenStack Services on OpenShift (RHOSO) environment.
- The Ceilometer, Autoscaling, MetricStorage, and Orchestration services are enabled on the control plane.
4.1. Configuring heat templates for autoscaling
You can configure Orchestration service (heat) templates to create the instances, and to configure alarms that create and scale instances when they are triggered. This procedure uses example values that might differ from the values in your environment.
Prerequisites
- You have deployed the cloud with the Autoscaling service.
Procedure
Access the openstackclient pod:

$ oc rsh openstackclient

Create a directory for your templates, for example /tmp/templates/:

$ mkdir -p /tmp/templates
Create a template for the instance configuration, for example /tmp/templates/instance.yaml, and add the following configuration to the instance.yaml file:

$ cat <<EOF > /tmp/templates/instance.yaml
heat_template_version: wallaby
description: Template to control scaling of VNF instance

parameters:
  metadata:
    type: json
  image:
    type: string
    description: image used to create instance
    default: cirros
  flavor:
    type: string
    description: instance flavor to be used
    default: m1.small
  network:
    type: string
    description: project network to attach instance to
    default: private
  external_network:
    type: string
    description: network used for floating IPs
    default: public

resources:
  vnf:
    type: OS::Nova::Server
    properties:
      flavor: {get_param: flavor}
      image: {get_param: image}
      metadata: {get_param: metadata}
      networks:
        - port: {get_resource: port}

  port:
    type: OS::Neutron::Port
    properties:
      network: {get_param: network}
      security_groups:
        - basic

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {get_param: external_network}

  floating_ip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: {get_resource: floating_ip}
      port_id: {get_resource: port}
EOF
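Optionally, you can check the instance template for errors before you reference it from the scaling templates. The following is a minimal sketch, not part of the official procedure, and assumes that the openstack CLI in the openstackclient pod is authenticated against the Orchestration API:

$ # Validate the nested instance template; heat reports malformed YAML or unknown resource types.
$ openstack orchestration template validate -t /tmp/templates/instance.yaml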
Create the resources to reference in the heat template:

$ cat <<EOF > /tmp/templates/resources.yaml
resource_registry:
  "OS::Nova::Server::VNF": /tmp/templates/instance.yaml
EOF
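The resources.yaml environment file registers the custom resource type OS::Nova::Server::VNF and maps it to the nested instance.yaml template, so pass the file with -e whenever you validate, create, or update a stack that uses this type. As a hedged example, assuming the client resolves the local file path in the resource_registry, you can confirm that the mapping resolves together with the nested template:

$ # Validating with -e confirms that the resource_registry entry points to a readable template.
$ openstack orchestration template validate -e /tmp/templates/resources.yaml -t /tmp/templates/instance.yaml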
Create the deployment template for the Orchestration service to control the scaling of instances:

$ cat <<EOF > /tmp/templates/autoscaling.yaml
heat_template_version: wallaby
description: Example auto scale group, policy and alarm
resources:
  autoscalinggroup:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 300
      desired_capacity: 1
      max_size: 3
      min_size: 1
      resource:
        # Configure the resource to be autoscaled
        type: OS::Nova::Server::VNF
        properties:
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: autoscalinggroup}
      cooldown: 300
      scaling_adjustment: 1

  scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: autoscalinggroup}
      cooldown: 300
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Aodh::PrometheusAlarm
    properties:
      description: Scale up if CPU > 80%
      threshold: 80  # 80%
      comparison_operator: gt
      alarm_actions:
        - str_replace:
            template: trust+url
            params:
              url: {get_attr: [scaleup_policy, signal_url]}
      query:
        str_replace:
          # The ceilometer_cpu metric is in ns. Divide the rate by 10000000 to get a percentage.
          # The time duration in [] should be higher than the ceilometer polling interval.
          # You can add {ceilometer polling interval} to {prometheus scrape interval} to calculate the value.
          # The default value is 150s.
          template: "(rate(ceilometer_cpu{server_group=~'stack_id'}[150s]))/10000000"
          params:
            stack_id: {get_param: OS::stack_id}

  cpu_alarm_low:
    type: OS::Aodh::PrometheusAlarm
    properties:
      description: Scale down if CPU < 20%
      threshold: 20  # 20%
      comparison_operator: lt
      alarm_actions:
        - str_replace:
            template: trust+url
            params:
              url: {get_attr: [scaledown_policy, signal_url]}
      query:
        str_replace:
          # The ceilometer_cpu metric is in ns. Divide the rate by 10000000 to get a percentage.
          # The time duration in [] should be higher than the ceilometer polling interval.
          # You can add {ceilometer polling interval} to {prometheus scrape interval} to calculate the value.
          # The default value is 150s.
          template: "(rate(ceilometer_cpu{server_group=~'stack_id'}[150s]))/10000000"
          params:
            stack_id: {get_param: OS::stack_id}

outputs:
  scaleup_policy_signal_url:
    value: {get_attr: [scaleup_policy, signal_url]}
  scaledown_policy_signal_url:
    value: {get_attr: [scaledown_policy, signal_url]}
EOF
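After the templates exist, you launch a stack from them so that heat creates the scaling group, the scaling policies, and the alarms. The following is a minimal sketch rather than the official procedure; the stack name vnf is an example, and the commands assume the openstack CLI in the openstackclient pod is authenticated against the cloud:

$ # Create the stack: -e registers the OS::Nova::Server::VNF mapping, -t supplies the scaling template.
$ openstack stack create -e /tmp/templates/resources.yaml -t /tmp/templates/autoscaling.yaml vnf

$ # Watch the stack until its status is CREATE_COMPLETE.
$ openstack stack show vnf -c stack_status

$ # Confirm that the stack created the cpu_alarm_high and cpu_alarm_low Prometheus alarms.
$ openstack alarm list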