Chapter 5. Configuring memory on Compute nodes

As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC).

Use the following features to tune your instances for optimal memory performance:

  • Overallocation: Tune the virtual RAM to physical RAM allocation ratio.
  • Swap: Tune the allocated swap size to handle memory overcommit.
  • Huge pages: Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages).
  • File-backed memory: Use to expand your Compute node memory capacity.
  • SEV: Use to enable your cloud users to create instances that use memory encryption.

5.1. Configuring memory for overallocation

When you use memory overcommit (NovaRAMAllocationRatio >= 1.0), you need to deploy your overcloud with enough swap space to support the allocation ratio.

Note

If your NovaRAMAllocationRatio parameter is set to < 1, follow the RHEL recommendations for swap size. For more information, see Recommended system swap space in the RHEL Managing Storage Devices guide.

Prerequisites

Procedure

  1. Copy the /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml file to your environment file directory:

    $ cp /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml /home/stack/templates/enable-swap.yaml
  2. Configure the swap size by adding the following parameters to your enable-swap.yaml file:

    parameter_defaults:
      swap_size_megabytes: <swap size in MB>
      swap_path: <full path to location of swap, default: /swap>
  3. Add the enable-swap.yaml environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
      -e [your environment files] \
      -e /home/stack/templates/enable-swap.yaml
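
After the deployment completes, you can confirm that the swap space is active on a Compute node. The following commands are a minimal verification sketch, assuming the default swap_path of /swap:

    $ swapon --show
    $ free -m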

5.2. Calculating reserved host memory on Compute nodes

To determine the total amount of RAM to reserve for host processes, you need to allocate enough memory for each of the following:

  • The resources that run on the host, for example, OSD consumes 3 GB of memory.
  • The emulator overhead required to host instances.
  • The hypervisor for each instance.

After you calculate the additional demands on memory, use the following formula to help you determine the amount of memory to reserve for host processes on each node:

NovaReservedHostMemory = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + (resourcen * resource_ram))
  • Replace vm_no with the number of instances.
  • Replace avg_instance_size with the average amount of memory each instance can use.
  • Replace overhead with the hypervisor overhead required for each instance.
  • Replace resource1, and each additional resource up to resourcen, with the number of resources of that type on the node.
  • Replace resource_ram with the amount of RAM each resource of this type requires.
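
For example, the following shell arithmetic applies the formula to a hypothetical node with 128 GB of RAM that hosts 40 instances of 2 GB each with 512 MB of hypervisor overhead per instance, plus 3 Ceph OSDs that consume 3 GB each. All values are illustrative assumptions; substitute the figures for your own environment:

    total_RAM=131072          # 128 GB of physical RAM, in MB
    vm_no=40                  # number of instances
    avg_instance_size=2048    # average instance memory, in MB
    overhead=512              # hypervisor overhead per instance, in MB
    osd_count=3               # example resource type: Ceph OSDs
    osd_ram=3072              # RAM per OSD, in MB
    echo $(( total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (osd_count * osd_ram) ) ))

In this example, the result is 19456, so you set NovaReservedHostMemory to 19456 MB.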

5.3. Calculating swap size

The allocated swap size must be large enough to handle any memory overcommit. You can use the following formulas to calculate the swap size your node requires:

  • overcommit_ratio = NovaRAMAllocationRatio - 1
  • Minimum swap size (MB) = (total_RAM * overcommit_ratio) + RHEL_min_swap
  • Recommended (maximum) swap size (MB) = total_RAM * (overcommit_ratio + percentage_of_RAM_to_use_for_swap)

The percentage_of_RAM_to_use_for_swap variable creates a buffer to account for QEMU overhead and any other resources consumed by the operating system or host services.

For instance, to use 25% of the available RAM for swap, with 64GB total RAM, and NovaRAMAllocationRatio set to 1:

  • Recommended (maximum) swap size = 64000 MB * (0 + 0.25) = 16000 MB
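
You can also script the calculation. The following sketch applies the formulas from this section with a hypothetical NovaRAMAllocationRatio of 1.5; the RHEL_min_swap value of 4096 MB is an assumption, so use the value that the RHEL guidance recommends for your system:

    total_RAM=64000       # total RAM, in MB
    ratio=1.5             # hypothetical NovaRAMAllocationRatio
    rhel_min_swap=4096    # assumed RHEL_min_swap value, in MB
    swap_pct=0.25         # percentage_of_RAM_to_use_for_swap
    overcommit_ratio=$(echo "$ratio - 1" | bc -l)
    echo "Minimum swap (MB):     $(echo "$total_RAM * $overcommit_ratio + $rhel_min_swap" | bc -l)"
    echo "Recommended swap (MB): $(echo "$total_RAM * ($overcommit_ratio + $swap_pct)" | bc -l)"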

For information about how to calculate the NovaReservedHostMemory value, see Calculating reserved host memory on Compute nodes.

For information about how to determine the RHEL_min_swap value, see Recommended system swap space in the RHEL Managing Storage Devices guide.

5.4. Configuring huge pages on Compute nodes

As a cloud administrator, you can configure Compute nodes to enable instances to request huge pages.

Note

Configuring huge pages creates an implicit NUMA topology on the instance even if a NUMA topology is not requested.

Procedure

  1. Open your Compute environment file.
  2. Configure the amount of huge page memory to reserve on each NUMA node for processes that are not instances:

    parameter_defaults:
      ComputeParameters:
        NovaReservedHugePages: ["node:0,size:1GB,count:1","node:1,size:1GB,count:1"]
    • Replace the size value for each node with the size of the allocated huge page. Set to one of the following valid values:

      • 2048 (for 2MB)
      • 1GB
    • Replace the count value for each node with the number of huge pages used by OVS per NUMA node. For example, for 4096 of socket memory used by Open vSwitch, set this to 2.
  3. Configure huge pages on the Compute nodes:

    parameter_defaults:
      ComputeParameters:
        ...
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32"
    Note

    If you configure multiple huge page sizes, you must also mount the huge page folders during first boot. For more information, see Mounting multiple huge page folders during first boot.

  4. Optional: To allow instances to allocate 1GB huge pages, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags, to include pdpe1gb:

    parameter_defaults:
      ComputeParameters:
        NovaLibvirtCPUMode: 'custom'
        NovaLibvirtCPUModels: 'Haswell-noTSX'
        NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb'
    Note
    • You do not need to configure CPU feature flags to allow instances to request only 2 MB huge pages.
    • You can only allocate 1G huge pages to an instance if the host supports 1G huge page allocation.
    • You only need to set NovaLibvirtCPUModelExtraFlags to pdpe1gb when NovaLibvirtCPUMode is set to host-model or custom.
    • If the host supports pdpe1gb and host-passthrough is used as the NovaLibvirtCPUMode, then you do not need to set pdpe1gb in NovaLibvirtCPUModelExtraFlags. The pdpe1gb flag is only included in the Opteron_G4 and Opteron_G5 CPU models; it is not included in any of the Intel CPU models supported by QEMU.
    • To mitigate for CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws.
  5. To avoid loss of performance after applying Meltdown protection, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags, to include +pcid:

    parameter_defaults:
      ComputeParameters:
        NovaLibvirtCPUMode: 'custom'
        NovaLibvirtCPUModels: 'Haswell-noTSX'
        NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb, +pcid'
  6. Add NUMATopologyFilter to the NovaSchedulerEnabledFilters parameter, if not already present.
  7. Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
      -e [your environment files]  \
      -e /home/stack/templates/<compute_environment_file>.yaml
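
After the Compute nodes reboot with the new kernel arguments, you can check that the huge pages were allocated. The following commands are a minimal verification sketch that uses standard kernel interfaces on a Compute node; confirm that the KernelArgs values appear in the kernel command line and that the expected number of 1 GB pages is reported for each NUMA node:

    $ cat /proc/cmdline
    $ grep Huge /proc/meminfo
    $ cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages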

5.4.1. Creating a huge pages flavor for instances

To enable your cloud users to create instances that use huge pages, you can create a flavor with the hw:mem_page_size extra spec key for launching instances.

Prerequisites

Procedure

  1. Create a flavor for instances that require huge pages:

    $ openstack flavor create --ram <size_mb> --disk <size_gb> \
     --vcpus <no_reserved_vcpus> huge_pages
  2. To request huge pages, set the hw:mem_page_size property of the flavor to the required size:

    $ openstack flavor set huge_pages --property hw:mem_page_size=<page_size>
    • Replace <page_size> with one of the following valid values:

      • large: Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
      • small: (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
      • any: Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver.
      • <pagesize>: Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.
  3. To verify the flavor creates an instance with huge pages, use your new flavor to launch an instance:

    $ openstack server create --flavor huge_pages \
     --image <image> huge_pages_instance

    The Compute scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, then the request will fail with a NoValidHost error.
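
    Optionally, if you have access to the Compute node that hosts the instance, you can confirm that libvirt backs the instance memory with huge pages by checking the instance XML for a memoryBacking element. This is a sketch only; run virsh from within the libvirt container for your release, for example nova_virtqemud or nova_libvirt, and replace <instance_name> with the libvirt instance name:

    $ virsh dumpxml <instance_name> | grep -A 3 memoryBacking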

5.4.2. Mounting multiple huge page folders during first boot

You can configure the Compute service (nova) to handle multiple page sizes as part of the first boot process. The first boot process adds the heat template configuration to all nodes the first time you boot the nodes. Subsequent inclusions of these templates, such as when you update the overcloud stack, do not run these scripts.

Procedure

  1. Create a first boot template file, hugepages.yaml, that runs a script to create the mounts for the huge page folders. You can use the OS::Heat::MultipartMime resource type to send the configuration script:

    heat_template_version: <version>
    
    description: >
      Huge pages configuration
    
    resources:
      userdata:
        type: OS::Heat::MultipartMime
        properties:
          parts:
          - config: {get_resource: hugepages_config}
    
      hugepages_config:
        type: OS::Heat::SoftwareConfig
        properties:
          config: |
            #!/bin/bash
            hostname | grep -qiE 'co?mp' || exit 0
            systemctl mask dev-hugepages.mount || true
            for pagesize in 2M 1G;do
              if ! [ -d "/dev/hugepages${pagesize}" ]; then
                mkdir -p "/dev/hugepages${pagesize}"
                cat << EOF > /etc/systemd/system/dev-hugepages${pagesize}.mount
            [Unit]
            Description=${pagesize} Huge Pages File System
            Documentation=https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
            Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
            DefaultDependencies=no
            Before=sysinit.target
            ConditionPathExists=/sys/kernel/mm/hugepages
            ConditionCapability=CAP_SYS_ADMIN
            ConditionVirtualization=!private-users
    
            [Mount]
            What=hugetlbfs
            Where=/dev/hugepages${pagesize}
            Type=hugetlbfs
            Options=pagesize=${pagesize}
    
            [Install]
            WantedBy = sysinit.target
            EOF
              fi
            done
            systemctl daemon-reload
            for pagesize in 2M 1G;do
              systemctl enable --now dev-hugepages${pagesize}.mount
            done
    
    outputs:
      OS::stack_id:
        value: {get_resource: userdata}

    The config script in this template performs the following tasks:

    1. Filters the hosts on which to create the mounts for the huge page folders, by matching hostnames against the pattern 'co?mp'. You can update the grep pattern to target specific Compute nodes as required.
    2. Masks the default dev-hugepages.mount systemd unit file to enable new mounts to be created using the page size.
    3. Ensures that the folders are created first.
    4. Creates systemd mount units for each pagesize.
    5. Runs systemd daemon-reload after the first loop, to include the newly created unit files.
    6. Enables each mount for 2M and 1G pagesizes. You can update this loop to include additional pagesizes, as required.
  2. Optional: The /dev folder is automatically bind mounted to the nova_compute and nova_libvirt containers. If you have used a different destination for the huge page mounts, then you need to pass the mounts to the nova_compute and nova_libvirt containers:

    parameter_defaults:
      NovaComputeOptVolumes:
        - /opt/dev:/opt/dev
      NovaLibvirtOptVolumes:
        - /opt/dev:/opt/dev
  3. Register your heat template as the OS::TripleO::NodeUserData resource type in your ~/templates/firstboot.yaml environment file:

    resource_registry:
      OS::TripleO::NodeUserData: ./hugepages.yaml
    Important

    You can register the NodeUserData resource to only one heat template. Subsequent registrations override the heat template that is used.

  4. Add your first boot environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
      -e [your environment files] \
      -e /home/stack/templates/firstboot.yaml \
      ...
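
After the Compute nodes boot for the first time with this configuration, you can confirm that the additional huge page mounts were created. The following commands are a minimal verification sketch for the 2M and 1G mounts that the example script creates:

    $ systemctl status dev-hugepages2M.mount dev-hugepages1G.mount
    $ mount | grep hugetlbfs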

5.5. Configuring Compute nodes to use file-backed memory for instances

You can use file-backed memory to expand your Compute node memory capacity, by allocating files within the libvirt memory backing directory as instance memory. You can configure the amount of host disk that is available for instance memory, and the location on the disk of the instance memory files.

The Compute service reports the capacity configured for file-backed memory to the Placement service as the total system memory capacity. This allows the Compute node to host more instances than would normally fit within the system memory.

To use file-backed memory for instances, you must enable file-backed memory on the Compute node.

Limitations

  • You cannot live migrate instances between Compute nodes that have file-backed memory enabled and Compute nodes that do not have file-backed memory enabled.
  • File-backed memory is not compatible with huge pages. Instances that use huge pages cannot start on a Compute node with file-backed memory enabled. Use host aggregates to ensure that instances that use huge pages are not placed on Compute nodes with file-backed memory enabled.
  • File-backed memory is not compatible with memory overcommit.
  • You cannot reserve memory for host processes using NovaReservedHostMemory. When file-backed memory is in use, reserved memory corresponds to disk space not set aside for file-backed memory. File-backed memory is reported to the Placement service as the total system memory, with RAM used as cache memory.

Prerequisites

  • NovaRAMAllocationRatio must be set to "1.0" on the node and any host aggregate the node is added to.
  • NovaReservedHostMemory must be set to "0".

Procedure

  1. Open your Compute environment file.
  2. Configure the amount of host disk space, in MiB, to make available for instance RAM, by adding the following parameter to your Compute environment file:

    parameter_defaults:
      NovaLibvirtFileBackedMemory: 102400
  3. Optional: To configure the directory to store the memory backing files, set the QemuMemoryBackingDir parameter in your Compute environment file. If not set, the memory backing directory defaults to /var/lib/libvirt/qemu/ram/.

    Note

    You must locate your backing store in a directory at or above the default directory location, /var/lib/libvirt/qemu/ram/.

    You can also change the host disk for the backing store. For more information, see Changing the memory backing directory host disk.

  4. Save the updates to your Compute environment file.
  5. Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
      -e [your environment files] \
      -e /home/stack/templates/<compute_environment_file>.yaml
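
After deployment, you can confirm the configuration on a Compute node. The following commands are a sketch that assumes NovaLibvirtFileBackedMemory is rendered as the [libvirt]/file_backed_memory option in nova.conf and that the Compute service runs in the nova_compute container:

    $ sudo podman exec nova_compute grep file_backed_memory /etc/nova/nova.conf
    $ df -h /var/lib/libvirt/qemu/ram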

5.5.1. Changing the memory backing directory host disk

You can move the memory backing directory from the default primary disk location to an alternative disk.

Procedure

  1. Create a file system on the alternative backing device. For example, enter the following command to create an ext4 filesystem on /dev/sdb:

    # mkfs.ext4 /dev/sdb
  2. Mount the backing device. For example, enter the following command to mount /dev/sdb on the default libvirt memory backing directory:

    # mount /dev/sdb /var/lib/libvirt/qemu/ram
    Note

    The mount point must match the value of the QemuMemoryBackingDir parameter.
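
    Optionally, to make the mount persistent across reboots, you can add a matching entry to /etc/fstab and confirm the available space. The following lines are a sketch only; adjust the device and mount options for your environment:

    # echo '/dev/sdb /var/lib/libvirt/qemu/ram ext4 defaults 0 0' >> /etc/fstab
    # df -h /var/lib/libvirt/qemu/ram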

5.6. Configuring AMD SEV Compute nodes to provide memory encryption for instances

As a cloud administrator, you can provide cloud users the ability to create instances that run on SEV-capable Compute nodes with memory encryption enabled.

This feature is available from the 2nd Gen AMD EPYC™ 7002 Series ("Rome") onward.

To enable your cloud users to create instances that use memory encryption, you must perform the following tasks:

  1. Designate the AMD SEV Compute nodes for memory encryption.
  2. Configure the Compute nodes for memory encryption.
  3. Deploy the overcloud.
  4. Create a flavor or image for launching instances with memory encryption.
Tip

If the AMD SEV hardware is limited, you can also configure a host aggregate to optimize scheduling on the AMD SEV Compute nodes. To schedule only instances that request memory encryption on the AMD SEV Compute nodes, create a host aggregate of the Compute nodes that have the AMD SEV hardware, and configure the Compute scheduler to place only instances that request memory encryption on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates.

5.6.1. Secure Encrypted Virtualization (SEV)

Secure Encrypted Virtualization (SEV), provided by AMD, protects the data in DRAM that a running virtual machine instance is using. SEV encrypts the memory of each instance with a unique key.

SEV increases security when you use non-volatile memory technology (NVDIMM), because an NVDIMM chip can be physically removed from a system with the data intact, similar to a hard drive. Without encryption, any stored information such as sensitive data, passwords, or secret keys can be compromised.

For more information, see the AMD Secure Encrypted Virtualization (SEV) documentation.

Limitations of instances with memory encryption

  • You cannot live migrate, or suspend and resume instances with memory encryption.
  • You cannot use PCI passthrough to directly access devices on instances with memory encryption.
  • You cannot use virtio-blk as the boot disk of instances with memory encryption with Red Hat Enterprise Linux (RHEL) kernels earlier than kernel-4.18.0-115.el8 (RHEL-8.1.0).

    Note

    You can use virtio-scsi or SATA as the boot disk, or virtio-blk for non-boot disks.

  • The operating system that runs in an encrypted instance must provide SEV support. For more information, see the Red Hat Knowledgebase solution Enabling AMD Secure Encrypted Virtualization in RHEL 8.
  • Machines that support SEV have a limited number of slots in their memory controller for storing encryption keys. Each running instance with encrypted memory consumes one of these slots. Therefore, the number of instances with memory encryption that can run concurrently is limited to the number of slots in the memory controller. For example, on 1st Gen AMD EPYC™ 7001 Series ("Naples") the limit is 16, and on 2nd Gen AMD EPYC™ 7002 Series ("Rome") the limit is 255.
  • Instances with memory encryption pin pages in RAM. The Compute service cannot swap these pages, therefore you cannot overcommit memory on a Compute node that hosts instances with memory encryption.
  • You cannot use memory encryption with instances that have multiple NUMA nodes.

5.6.2. Designating AMD SEV Compute nodes for memory encryption

To designate AMD SEV Compute nodes for instances that use memory encryption, you must create a new role file to configure the AMD SEV role, and configure the bare metal nodes with an AMD SEV resource class to use to tag the Compute nodes for memory encryption.

Note

The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc file:

    [stack@director ~]$ source ~/stackrc
  3. Generate a new roles data file that includes the ComputeAMDSEV role, along with any other roles that you need for the overcloud. The following example generates the roles data file roles_data_amd_sev.yaml, which includes the roles Controller and ComputeAMDSEV:

    (undercloud)$ openstack overcloud roles \
     generate -o /home/stack/templates/roles_data_amd_sev.yaml \
     Compute:ComputeAMDSEV Controller
  4. Open roles_data_amd_sev.yaml and edit or add the following parameters and sections:

    Section/Parameter            Current value                      New value
    Role comment                 Role: Compute                      Role: ComputeAMDSEV
    Role name                    name: Compute                      name: ComputeAMDSEV
    description                  Basic Compute Node role            AMD SEV Compute Node role
    HostnameFormatDefault        %stackname%-novacompute-%index%    %stackname%-novacomputeamdsev-%index%
    deprecated_nic_config_name   compute.yaml                       compute-amd-sev.yaml

  5. Register the AMD SEV Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml. For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
  6. Inspect the node hardware:

    (undercloud)$ openstack overcloud node introspect \
     --all-manageable --provide

    For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide.

  7. Tag each bare metal node that you want to designate for memory encryption with a custom AMD SEV resource class:

    (undercloud)$ openstack baremetal node set \
     --resource-class baremetal.AMD-SEV <node>

    Replace <node> with the name or ID of the bare metal node.

  8. Add the ComputeAMDSEV role to your node definition file, overcloud-baremetal-deploy.yaml, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes:

    - name: Controller
      count: 3
    - name: Compute
      count: 3
    - name: ComputeAMDSEV
      count: 1
      defaults:
        resource_class: baremetal.AMD-SEV
        network_config:
          template: /home/stack/templates/nic-config/myRoleTopology.j2

    You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used.

    For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes. For an example node definition file, see Example node definition file.

  9. Run the provisioning command to provision the new nodes for your role:

    (undercloud)$ openstack overcloud node provision \
    --stack <stack> \
    [--network-config \]
    --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
    /home/stack/templates/overcloud-baremetal-deploy.yaml
    • Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
    • Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used.
  10. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

    (undercloud)$ watch openstack baremetal node list
  11. If you did not run the provisioning command with the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files:

    parameter_defaults:
       ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
       ComputeAMDSEVNetworkConfigTemplate: /home/stack/templates/nic-configs/<amd_sev_net_top>.j2
       ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2

    Replace <amd_sev_net_top> with the name of the file that contains the network topology of the ComputeAMDSEV role, for example, compute.yaml to use the default network topology.
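
    Before you continue, you can confirm that the bare-metal nodes that you tagged earlier carry the custom AMD SEV resource class. The following command is a minimal verification sketch:

    (undercloud)$ openstack baremetal node show <node> -f value -c resource_class

    Replace <node> with the name or ID of the bare metal node.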

5.6.3. Configuring AMD SEV Compute nodes for memory encryption

To enable your cloud users to create instances that use memory encryption, you must configure the Compute nodes that have the AMD SEV hardware.

Note

From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts. The number of devices that can attach to a PCIe port is fewer than for instances running on previous versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware.

Prerequisites

  • Your deployment must include a Compute node that runs on AMD hardware capable of supporting SEV, such as an AMD EPYC CPU. You can use the following command to determine if your deployment is SEV-capable:

    $ lscpu | grep sev

Procedure

  1. Open your Compute environment file.
  2. Optional: Add the following configuration to your Compute environment file to specify the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently:

    parameter_defaults:
      ComputeAMDSEVExtraConfig:
        nova::config::nova_config:
          libvirt/num_memory_encrypted_guests:
            value: 15
    Note

    The default value of the libvirt/num_memory_encrypted_guests parameter is none. If you do not set a custom value, the AMD SEV Compute nodes do not impose a limit on the number of memory-encrypted instances that the nodes can host concurrently. Instead, the hardware determines the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently, which might cause some memory-encrypted instances to fail to launch.

  3. Optional: To specify that all x86_64 images use the q35 machine type by default, add the following configuration to your Compute environment file:

    parameter_defaults:
      ComputeAMDSEVParameters:
        NovaHWMachineType: x86_64=q35

    If you specify this parameter value, you do not need to set the hw_machine_type property to q35 on every AMD SEV instance image.

  4. To ensure that the AMD SEV Compute nodes reserve enough memory for host-level services to function, add 16MB for each potential AMD SEV instance:

    parameter_defaults:
      ComputeAMDSEVParameters:
        ...
        NovaReservedHostMemory: <libvirt/num_memory_encrypted_guests * 16>
  5. Configure the kernel parameters for the AMD SEV Compute nodes:

    parameter_defaults:
      ComputeAMDSEVParameters:
        ...
        KernelArgs: "hugepagesz=1GB hugepages=32 default_hugepagesz=1GB mem_encrypt=on kvm_amd.sev=1"
    Note

    When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs.

  6. Save the updates to your Compute environment file.
  7. Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -r /home/stack/templates/roles_data_amd_sev.yaml \
     -e /home/stack/templates/network-environment.yaml \
     -e /home/stack/templates/<compute_environment_file>.yaml \
     -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
     -e /home/stack/templates/node-info.yaml
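
After the AMD SEV Compute nodes reboot, you can confirm that memory encryption is available at the host level. The following commands are a minimal verification sketch; the sysfs parameter is provided by the kvm_amd kernel module, and a value of 1 or Y indicates that SEV is enabled:

    $ cat /proc/cmdline
    $ cat /sys/module/kvm_amd/parameters/sev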

5.6.4. Creating an image for memory encryption

When the overcloud contains AMD SEV Compute nodes, you can create an AMD SEV instance image that your cloud users can use to launch instances that have memory encryption.

Note

From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts. The number of devices that can attach to a PCIe port is fewer than for instances running on previous versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware.

Procedure

  1. Create a new image for memory encryption:

    (overcloud)$ openstack image create ...  \
     --property hw_firmware_type=uefi amd-sev-image
    Note

    If you use an existing image, the image must have the hw_firmware_type property set to uefi.

  2. Optional: Add the property hw_mem_encryption=True to the image to enable AMD SEV memory encryption on the image:

    (overcloud)$ openstack image set  \
     --property hw_mem_encryption=True amd-sev-image
    Tip

    You can enable memory encryption on the flavor. For more information, see Creating a flavor for memory encryption.

  3. Optional: Set the machine type to q35, if not already set in the Compute node configuration:

    (overcloud)$ openstack image set  \
     --property hw_machine_type=q35 amd-sev-image
  4. Optional: To schedule memory-encrypted instances on a SEV-capable host aggregate, add the following trait to the image extra specs:

    (overcloud)$ openstack image set  \
     --property trait:HW_CPU_X86_AMD_SEV=required amd-sev-image
    Tip

    You can also specify this trait on the flavor. For more information, see Creating a flavor for memory encryption.

5.6.5. Creating a flavor for memory encryption

When the overcloud contains AMD SEV Compute nodes, you can create one or more AMD SEV flavors that your cloud users can use to launch instances that have memory encryption.

Note

An AMD SEV flavor is necessary only when the hw_mem_encryption property is not set on an image.

Procedure

  1. Create a flavor for memory encryption:

    (overcloud)$ openstack flavor create --vcpus 1 --ram 512 --disk 2  \
     --property hw:mem_encryption=True m1.small-amd-sev
  2. To schedule memory-encrypted instances on a SEV-capable host aggregate, add the following trait to the flavor extra specs:

    (overcloud)$ openstack flavor set  \
     --property trait:HW_CPU_X86_AMD_SEV=required m1.small-amd-sev

5.6.6. Launching an instance with memory encryption

To verify that you can launch instances on an AMD SEV Compute node with memory encryption enabled, use a memory encryption flavor or image to create an instance.

Procedure

  1. Create an instance by using an AMD SEV flavor or image. The following example creates an instance by using the flavor created in Creating a flavor for memory encryption and the image created in Creating an image for memory encryption:

    (overcloud)$ openstack server create --flavor m1.small-amd-sev \
     --image amd-sev-image amd-sev-instance
  2. Log in to the instance as a cloud user.
  3. To verify that the instance uses memory encryption, enter the following command from the instance:

    $ dmesg | grep -i sev
    AMD Secure Encrypted Virtualization (SEV) active