Chapter 5. Configuring memory on Compute nodes
As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC).
Use the following features to tune your instances for optimal memory performance:
- Overallocation: Tune the virtual RAM to physical RAM allocation ratio.
- Swap: Tune the allocated swap size to handle memory overcommit.
- Huge pages: Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages).
- File-backed memory: Use to expand your Compute node memory capacity.
- SEV: Use to enable your cloud users to create instances that use memory encryption.
5.1. Configuring memory for overallocation
When you use memory overcommit (NovaRAMAllocationRatio >= 1.0), you need to deploy your overcloud with enough swap space to support the allocation ratio.
If your NovaRAMAllocationRatio parameter is set to < 1, follow the RHEL recommendations for swap size. For more information, see Recommended system swap space in the RHEL Managing Storage Devices guide.
Prerequisites
- You have calculated the swap size your node requires. For more information, see Calculating swap size.
Procedure
- Copy the /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml file to your environment file directory:

    $ cp /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml /home/stack/templates/enable-swap.yaml

- Configure the swap size by adding the following parameters to your enable-swap.yaml file:

    parameter_defaults:
      swap_size_megabytes: <swap size in MB>
      swap_path: <full path to location of swap, default: /swap>

- Add the enable-swap.yaml environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -e /home/stack/templates/enable-swap.yaml
5.2. Calculating reserved host memory on Compute nodes
To determine the total amount of RAM to reserve for host processes, you need to allocate enough memory for each of the following:
- The resources that run on the host, for example, OSD consumes 3 GB of memory.
- The emulator overhead required to host instances.
- The hypervisor for each instance.
After you calculate the additional demands on memory, use the following formula to help you determine the amount of memory to reserve for host processes on each node:
NovaReservedHostMemory = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + ... + (resourcen * resource_ram) )

- Replace vm_no with the number of instances.
- Replace avg_instance_size with the average amount of memory each instance can use.
- Replace overhead with the hypervisor overhead required for each instance.
- Replace resource1, and all resources up to resourcen, with the number of each resource type on the node.
- Replace resource_ram with the amount of RAM each resource of this type requires.
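As a worked example of the formula above, the following short script uses hypothetical numbers (a node with 256 GB of RAM, 40 instances averaging 2 GB each with 512 MB of hypervisor overhead, and 10 Ceph OSDs at 3 GB each). The values are illustrative only, not recommendations:

```python
# Hypothetical worked example of the NovaReservedHostMemory formula.
# All input values are illustrative, not sizing recommendations.
total_ram = 256 * 1024          # total node RAM, in MB
vm_no = 40                      # number of instances
avg_instance_size = 2 * 1024    # average memory per instance, in MB
overhead = 512                  # hypervisor overhead per instance, in MB
osd_count = 10                  # example resource type: Ceph OSDs on the node
osd_ram = 3 * 1024              # RAM consumed per OSD, in MB

reserved = total_ram - ((vm_no * (avg_instance_size + overhead)) + (osd_count * osd_ram))
print(reserved)  # -> 129024 (MB)
```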
5.3. Calculating swap size
The allocated swap size must be large enough to handle any memory overcommit. You can use the following formulas to calculate the swap size your node requires:
- overcommit_ratio = NovaRAMAllocationRatio - 1
- Minimum swap size (MB) = (total_RAM * overcommit_ratio) + RHEL_min_swap
- Recommended (maximum) swap size (MB) = total_RAM * (overcommit_ratio + percentage_of_RAM_to_use_for_swap)
The percentage_of_RAM_to_use_for_swap variable creates a buffer to account for QEMU overhead and any other resources consumed by the operating system or host services.
For example, to use 25% of the available RAM for swap, with 64 GB (64000 MB) total RAM and NovaRAMAllocationRatio set to 1:
- Recommended (maximum) swap size = 64000 MB * (0 + 0.25) = 16000 MB
For information about how to calculate the NovaReservedHostMemory value, see Calculating reserved host memory on Compute nodes.
For information about how to determine the RHEL_min_swap value, see Recommended system swap space in the RHEL Managing Storage Devices guide.
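The two formulas can be checked with a short script. This sketch reproduces the example above (64000 MB of RAM, NovaRAMAllocationRatio of 1, 25% of RAM for swap); the RHEL_min_swap value of 4096 MB is a hypothetical placeholder for the value you look up in the RHEL swap recommendations:

```python
# Swap-size calculation sketch; rhel_min_swap is a hypothetical
# placeholder taken from the RHEL recommended swap space table.
nova_ram_allocation_ratio = 1.0
total_ram = 64000                 # MB
rhel_min_swap = 4096              # MB, illustrative only
pct_ram_for_swap = 0.25           # buffer for QEMU overhead and host services

overcommit_ratio = nova_ram_allocation_ratio - 1
minimum_swap = (total_ram * overcommit_ratio) + rhel_min_swap
recommended_swap = total_ram * (overcommit_ratio + pct_ram_for_swap)

print(minimum_swap, recommended_swap)  # -> 4096.0 16000.0
```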
5.4. Configuring huge pages on Compute nodes
As a cloud administrator, you can configure Compute nodes to enable instances to request huge pages.
Configuring huge pages creates an implicit NUMA topology on the instance even if a NUMA topology is not requested.
Procedure
- Open your Compute environment file.
- Configure the amount of huge page memory to reserve on each NUMA node for processes that are not instances:

    parameter_defaults:
      ComputeParameters:
        NovaReservedHugePages: ["node:0,size:1GB,count:1","node:1,size:1GB,count:1"]

  - Replace the size value for each node with the size of the allocated huge page. Set to one of the following valid values:
    - 2048 (for 2 MB)
    - 1GB
  - Replace the count value for each node with the number of huge pages used by OVS per NUMA node. For example, for 4096 of socket memory used by Open vSwitch, set this to 2.

- Configure huge pages on the Compute nodes:

    parameter_defaults:
      ComputeParameters:
        ...
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32"

  Note: If you configure multiple huge page sizes, you must also mount the huge page folders during first boot. For more information, see Mounting multiple huge page folders during first boot.
- Optional: To allow instances to allocate 1GB huge pages, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags, to include pdpe1gb:

    parameter_defaults:
      ComputeParameters:
        NovaLibvirtCPUMode: 'custom'
        NovaLibvirtCPUModels: 'Haswell-noTSX'
        NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb'

  Note:
  - CPU feature flags do not need to be configured to allow instances to request only 2 MB huge pages.
  - You can only allocate 1G huge pages to an instance if the host supports 1G huge page allocation.
  - You only need to set NovaLibvirtCPUModelExtraFlags to pdpe1gb when NovaLibvirtCPUMode is set to host-model or custom.
  - If the host supports pdpe1gb, and host-passthrough is used as the NovaLibvirtCPUMode, then you do not need to set pdpe1gb as a NovaLibvirtCPUModelExtraFlags. The pdpe1gb flag is only included in Opteron_G4 and Opteron_G5 CPU models; it is not included in any of the Intel CPU models supported by QEMU.
  - To mitigate CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws.

- To avoid loss of performance after applying Meltdown protection, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags, to include +pcid:

    parameter_defaults:
      ComputeParameters:
        NovaLibvirtCPUMode: 'custom'
        NovaLibvirtCPUModels: 'Haswell-noTSX'
        NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb, +pcid'

  Tip: For more information, see Reducing the performance impact of Meltdown CVE fixes for OpenStack guests with "PCID" CPU feature flag.
- Add NUMATopologyFilter to the NovaSchedulerEnabledFilters parameter, if not already present.
- Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -e /home/stack/templates/<compute_environment_file>.yaml
5.4.1. Creating a huge pages flavor for instances
To enable your cloud users to create instances that use huge pages, you can create a flavor with the hw:mem_page_size extra spec key for launching instances.
Prerequisites
- The Compute node is configured for huge pages. For more information, see Configuring huge pages on Compute nodes.
Procedure
- Create a flavor for instances that require huge pages:

    $ openstack flavor create --ram <size_mb> --disk <size_gb> \
     --vcpus <no_reserved_vcpus> huge_pages

- To request huge pages, set the hw:mem_page_size property of the flavor to the required size:

    $ openstack flavor set huge_pages --property hw:mem_page_size=<page_size>

  Replace <page_size> with one of the following valid values:

  - large: Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
  - small: (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
  - any: Selects the page size by using the hw_mem_page_size property set on the image. If the image does not specify a page size, selects the largest available page size, as determined by the libvirt driver.
  - <pagesize>: Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.

- To verify the flavor creates an instance with huge pages, use your new flavor to launch an instance:

    $ openstack server create --flavor huge_pages \
     --image <image> huge_pages_instance

  The Compute scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, the request fails with a NoValidHost error.
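The explicit hw:mem_page_size values above mix bare integers (KB) with unit suffixes. The helper below is not part of the OpenStack CLI or API; it is a hypothetical sketch of how an explicit page size string normalizes to KB, which can help when comparing a flavor's setting against the huge page sizes configured on the host:

```python
# Hypothetical helper: normalize an explicit hw:mem_page_size value to KB.
# Bare integers are already in KB; suffixed values use standard units.
def page_size_kb(value: str) -> int:
    units = {"KB": 1, "MB": 1024, "GB": 1024 * 1024}
    value = value.strip().upper()
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value)  # no suffix: already KB

# The four explicit examples from the list above:
print([page_size_kb(v) for v in ("4KB", "2MB", "2048", "1GB")])
# -> [4, 2048, 2048, 1048576]
```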
5.4.2. Mounting multiple huge page folders during first boot
You can configure the Compute service (nova) to handle multiple page sizes as part of the first boot process. The first boot process adds the heat template configuration to all nodes the first time you boot the nodes. Subsequent inclusion of these templates, such as updating the overcloud stack, does not run these scripts.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:

    [stack@director ~]$ source ~/stackrc

- Open your Compute environment file.
- Set the KernelArgs parameter to create huge pages of different sizes:

    parameter_defaults:
      ComputeParameters:
        ...
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=4096"

- Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -e /home/stack/templates/<compute_environment_file>.yaml

- Create a first boot ansible playbook file, hugepages.yaml, that runs a script to create the mount unit files for the huge page folders:

    - name: Overcloud Node Huge Pages
      hosts: allovercloud
      any_errors_fatal: true
      gather_facts: false
      pre_tasks:
        - name: Assert custom_hugepage_pagesizes
          ansible.builtin.assert:
            that:
              - custom_hugepage_pagesizes is defined
              - custom_hugepage_pagesizes | type_debug == "list"
              - custom_hugepage_pagesizes | length > 0
        - name: Wait for provisioned nodes to boot
          wait_for_connection:
            timeout: 600
            delay: 10
      tasks:
        - name: Mask the default dev-hugepages.mount systemd unit
          become: true
          ansible.builtin.systemd_service:
            name: dev-hugepages.mount
            masked: true
        - name: Create hugepages folder in /dev
          become: true
          ansible.builtin.file:
            path: "/dev/hugepages{{ item }}"
            state: directory
            owner: root
            group: hugetlbfs
            mode: '0755'
          loop: "{{ custom_hugepage_pagesizes }}"
        - name: Create Huge Page systemd mount unit file
          become: true
          ansible.builtin.copy:
            dest: "/etc/systemd/system/dev-hugepages{{ item }}.mount"
            content: |
              [Unit]
              Description={{ item }} Huge Pages File System
              Documentation=https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
              Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
              DefaultDependencies=no
              Before=sysinit.target
              ConditionPathExists=/sys/kernel/mm/hugepages
              ConditionCapability=CAP_SYS_ADMIN
              ConditionVirtualization=!private-users

              [Mount]
              What=hugetlbfs
              Where=/dev/hugepages{{ item }}
              Type=hugetlbfs
              Options=pagesize={{ item }}

              [Install]
              WantedBy=sysinit.target
            owner: root
            group: root
            mode: '0644'
          loop: "{{ custom_hugepage_pagesizes }}"
        - name: Enable Huge Page systemd mounts
          become: true
          ansible.builtin.systemd_service:
            name: "dev-hugepages{{ item }}.mount"
            enabled: true
            daemon_reload: true
          loop: "{{ custom_hugepage_pagesizes }}"

  The ansible playbook performs the following tasks:
  - Masks the default dev-hugepages.mount mount unit file to enable new mounts to be created using the page size.
  - Ensures that the folders are created.
  - Creates systemd unit files for each page size.
  - Enables each mount unit file for the page sizes defined in the custom_hugepage_pagesizes variable.

- Optional: The /dev folder is automatically bind-mounted to the nova_compute and nova_libvirt containers. If you have used a different destination for the huge page mounts, you must pass the mounts to the nova_compute and nova_libvirt containers:

    parameter_defaults:
      NovaComputeOptVolumes:
        - /opt/dev:/opt/dev
      NovaLibvirtOptVolumes:
        - /opt/dev:/opt/dev

- Register your ansible playbook in the baremetal deployment definition file, overcloud-baremetal-deploy.yaml:

    - name: ComputeA
      ...
      ansible_playbooks:
        - playbook: /home/stack/playbooks/hugepages.yaml
          extra_vars:
            custom_hugepage_pagesizes:
              - '2M'
              - '1G'
    - name: ComputeB
      ...
      ansible_playbooks:
        - playbook: /home/stack/playbooks/hugepages.yaml
          extra_vars:
            custom_hugepage_pagesizes:
              - '2M'
              - '1G'

- Provision your nodes:

    (undercloud)$ openstack overcloud node provision \
     [--templates <templates_directory> \]
     --stack <stack> \
     --network-config \
     --output <deployment_file> \
     /home/stack/templates/<node_definition_file>

  - Optional: Include the --templates option to use your own templates instead of the default templates located in /usr/share/openstack-tripleo-heat-templates. Replace <templates_directory> with the path to the directory that contains your templates.
  - Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
  - Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command.
  - Replace <node_definition_file> with the name of your node definition file.
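As a sanity check of the KernelArgs example in this section, you can compute the total RAM that the preallocated huge pages consume, to confirm the node has enough memory left for the host and for instances that use normal pages:

```python
# Total RAM preallocated by:
# "default_hugepagesz=1GB hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=4096"
MIB = 1
GIB = 1024  # MiB per GiB

pools = [
    (1 * GIB, 32),    # 32 pages of 1 GiB
    (2 * MIB, 4096),  # 4096 pages of 2 MiB
]
total_mib = sum(size * count for size, count in pools)
print(total_mib // 1024, "GiB")  # -> 40 GiB
```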
5.5. Configuring Compute nodes to use file-backed memory for instances
You can use file-backed memory to expand your Compute node memory capacity, by allocating files within the libvirt memory backing directory as instance memory. You can configure the amount of host disk that is available for instance memory, and the location on the disk of the instance memory files.
The Compute service reports the capacity configured for file-backed memory to the Placement service as the total system memory capacity. This allows the Compute node to host more instances than would normally fit within the system memory.
To use file-backed memory for instances, you must enable file-backed memory on the Compute node.
Limitations
- You cannot live migrate instances between Compute nodes that have file-backed memory enabled and Compute nodes that do not have file-backed memory enabled.
- File-backed memory is not compatible with huge pages. Instances that use huge pages cannot start on a Compute node with file-backed memory enabled. Use host aggregates to ensure that instances that use huge pages are not placed on Compute nodes with file-backed memory enabled.
- File-backed memory is not compatible with memory overcommit.
- You cannot reserve memory for host processes using NovaReservedHostMemory. When file-backed memory is in use, reserved memory corresponds to disk space not set aside for file-backed memory. File-backed memory is reported to the Placement service as the total system memory, with RAM used as cache memory.
Prerequisites
- NovaRAMAllocationRatio must be set to "1.0" on the node and on any host aggregate the node is added to.
- NovaReservedHostMemory must be set to "0".
Procedure
- Open your Compute environment file.
- Configure the amount of host disk space, in MiB, to make available for instance RAM, by adding the following parameter to your Compute environment file:

    parameter_defaults:
      NovaLibvirtFileBackedMemory: 102400

- Optional: To configure the directory to store the memory backing files, set the QemuMemoryBackingDir parameter in your Compute environment file. If not set, the memory backing directory defaults to /var/lib/libvirt/qemu/ram/.

  Note: You must locate your backing store in a directory at or above the default directory location, /var/lib/libvirt/qemu/ram/. You can also change the host disk for the backing store. For more information, see Changing the memory backing directory host disk.

- When the Compute service is not configured to use Red Hat Ceph Storage as nova's storage back end, and the memory backing path defined by QemuMemoryBackingDir is on the same block device as /var/lib/nova, you must reserve the same amount of host disk space as you allocated in NovaLibvirtFileBackedMemory, for example:

    parameter_defaults:
      ComputeExtraConfig:
        nova::compute::reserved_host_disk: 102400

- Save the updates to your Compute environment file.
- Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -e /home/stack/templates/<compute_environment_file>.yaml
5.5.1. Changing the memory backing directory host disk
You can move the memory backing directory from the default primary disk location to an alternative disk.
Procedure
- Create a file system on the alternative backing device. For example, enter the following command to create an ext4 file system on /dev/sdb:

    # mkfs.ext4 /dev/sdb

- Mount the backing device. For example, enter the following command to mount /dev/sdb on the default libvirt memory backing directory:

    # mount /dev/sdb /var/lib/libvirt/qemu/ram

  Note: The mount point must match the value of the QemuMemoryBackingDir parameter.
5.6. Configuring AMD SEV Compute nodes to provide memory encryption for instances
As a cloud administrator, you can provide cloud users the ability to create instances that run on SEV-capable Compute nodes with memory encryption enabled.
This feature is available from the 2nd Gen AMD EPYC™ 7002 Series ("Rome") onwards.
To enable your cloud users to create instances that use memory encryption, you must perform the following tasks:
- Designate the AMD SEV Compute nodes for memory encryption.
- Configure the Compute nodes for memory encryption.
- Deploy the overcloud.
- Create a flavor or image for launching instances with memory encryption.
If the AMD SEV hardware is limited, you can also configure a host aggregate to optimize scheduling on the AMD SEV Compute nodes. To schedule only instances that request memory encryption on the AMD SEV Compute nodes, create a host aggregate of the Compute nodes that have the AMD SEV hardware, and configure the Compute scheduler to place only instances that request memory encryption on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates.
5.6.1. Secure Encrypted Virtualization (SEV)
Secure Encrypted Virtualization (SEV), provided by AMD, protects the data in DRAM that a running virtual machine instance is using. SEV encrypts the memory of each instance with a unique key.
SEV increases security when you use non-volatile memory technology (NVDIMM), because an NVDIMM chip can be physically removed from a system with the data intact, similar to a hard drive. Without encryption, any stored information such as sensitive data, passwords, or secret keys can be compromised.
For more information, see the AMD Secure Encrypted Virtualization (SEV) documentation.
Limitations of instances with memory encryption
- You cannot live migrate, or suspend and resume instances with memory encryption.
- You cannot use PCI passthrough to directly access devices on instances with memory encryption.
- You cannot use virtio-blk as the boot disk of instances with memory encryption with Red Hat Enterprise Linux (RHEL) kernels earlier than kernel-4.18.0-115.el8 (RHEL-8.1.0).

  Note: You can use virtio-scsi or SATA as the boot disk, or virtio-blk for non-boot disks.

- The operating system that runs in an encrypted instance must provide SEV support. For more information, see the Red Hat Knowledgebase solution Enabling AMD Secure Encrypted Virtualization in RHEL 8.
- Machines that support SEV have a limited number of slots in their memory controller for storing encryption keys. Each running instance with encrypted memory consumes one of these slots. Therefore, the number of instances with memory encryption that can run concurrently is limited to the number of slots in the memory controller. For example, on 1st Gen AMD EPYC™ 7001 Series ("Naples") the limit is 16, and on 2nd Gen AMD EPYC™ 7002 Series ("Rome") the limit is 255.
- Instances with memory encryption pin pages in RAM. The Compute service cannot swap these pages, therefore you cannot overcommit memory on a Compute node that hosts instances with memory encryption.
- You cannot use memory encryption with instances that have multiple NUMA nodes.
5.6.2. Designating AMD SEV Compute nodes for memory encryption
To designate AMD SEV Compute nodes for instances that use memory encryption, you must create a new role file to configure the AMD SEV role, and configure the bare metal nodes with an AMD SEV resource class to use to tag the Compute nodes for memory encryption.
The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:

    [stack@director ~]$ source ~/stackrc

- Generate a new roles data file that includes the ComputeAMDSEV role, along with any other roles that you need for the overcloud. The following example generates the roles data file roles_data_amd_sev.yaml, which includes the roles Controller and ComputeAMDSEV:

    (undercloud)$ openstack overcloud roles \
     generate -o /home/stack/templates/roles_data_amd_sev.yaml \
     Compute:ComputeAMDSEV Controller

- Open roles_data_amd_sev.yaml and edit or add the following parameters and sections:

    Section/Parameter            Current value                      New value
    Role comment                 Role: Compute                      Role: ComputeAMDSEV
    Role name                    name: Compute                      name: ComputeAMDSEV
    description                  Basic Compute Node role            AMD SEV Compute Node role
    HostnameFormatDefault        %stackname%-novacompute-%index%    %stackname%-novacomputeamdsev-%index%
    deprecated_nic_config_name   compute.yaml                       compute-amd-sev.yaml
- Register the AMD SEV Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml. For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
- Inspect the node hardware:

    (undercloud)$ openstack overcloud node introspect \
     --all-manageable --provide

  For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide.

- Tag each bare metal node that you want to designate for memory encryption with a custom AMD SEV resource class:

    (undercloud)$ openstack baremetal node set \
     --resource-class baremetal.AMD-SEV <node>

  Replace <node> with the name or ID of the bare metal node.

- Add the ComputeAMDSEV role to your node definition file, overcloud-baremetal-deploy.yaml, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes:

    - name: Controller
      count: 3
    - name: Compute
      count: 3
    - name: ComputeAMDSEV
      count: 1
      defaults:
        resource_class: baremetal.AMD-SEV
        network_config:
          template: /home/stack/templates/nic-config/myRoleTopology.j2

  You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used.

  For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes. For an example node definition file, see Example node definition file.
- Run the provisioning command to provision the new nodes for your role:

    (undercloud)$ openstack overcloud node provision \
     --stack <stack> \
     [--network-config \]
     --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
     /home/stack/templates/overcloud-baremetal-deploy.yaml

  - Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
  - Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used.

- Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

    (undercloud)$ watch openstack baremetal node list

- If you did not run the provisioning command with the --network-config option, configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files:

    parameter_defaults:
      ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
      ComputeAMDSEVNetworkConfigTemplate: /home/stack/templates/nic-configs/<amd_sev_net_top>.j2
      ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2

  Replace <amd_sev_net_top> with the name of the file that contains the network topology of the ComputeAMDSEV role, for example, compute.yaml to use the default network topology.
5.6.3. Configuring AMD SEV Compute nodes for memory encryption
To enable your cloud users to create instances that use memory encryption, you must configure the Compute nodes that have the AMD SEV hardware.
From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts. The number of devices that can attach to a PCIe port is fewer than instances running on previous versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware.
Prerequisites
Your deployment must include a Compute node that runs on AMD hardware capable of supporting SEV, such as an AMD EPYC CPU. You can use the following command to determine if your deployment is SEV-capable:
$ lscpu | grep sev
Procedure
- Open your Compute environment file.
- Optional: Add the following configuration to your Compute environment file to specify the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently:

    parameter_defaults:
      ComputeAMDSEVExtraConfig:
        nova::config::nova_config:
          libvirt/num_memory_encrypted_guests:
            value: 15

  Note: The default value of the libvirt/num_memory_encrypted_guests parameter is none. If you do not set a custom value, the AMD SEV Compute nodes do not impose a limit on the number of memory-encrypted instances that the nodes can host concurrently. Instead, the hardware determines the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently, which might cause some memory-encrypted instances to fail to launch.

- Optional: To specify that all x86_64 images use the q35 machine type by default, add the following configuration to your Compute environment file:

    parameter_defaults:
      ComputeAMDSEVParameters:
        NovaHWMachineType: x86_64=q35

  If you specify this parameter value, you do not need to set the hw_machine_type property to q35 on every AMD SEV instance image.

- To ensure that the AMD SEV Compute nodes reserve enough memory for host-level services to function, add 16 MB for each potential AMD SEV instance:

    parameter_defaults:
      ComputeAMDSEVParameters:
        ...
        NovaReservedHostMemory: <libvirt/num_memory_encrypted_guests * 16>

- Configure the kernel parameters for the AMD SEV Compute nodes:

    parameter_defaults:
      ComputeAMDSEVParameters:
        ...
        KernelArgs: "hugepagesz=1GB hugepages=32 default_hugepagesz=1GB kvm_amd.sev=1"

  Note: When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs.

- Save the updates to your Compute environment file.
- Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -r /home/stack/templates/roles_data_amd_sev.yaml \
     -e /home/stack/templates/network-environment.yaml \
     -e /home/stack/templates/<compute_environment_file>.yaml \
     -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
     -e /home/stack/templates/node-info.yaml
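The NovaReservedHostMemory guidance in this section (add 16 MB per potential memory-encrypted instance) can be sanity-checked with a short calculation. The base reservation of 4096 MB here is a hypothetical starting point for a node, not a recommendation:

```python
# Hypothetical: extra host memory to reserve for AMD SEV instances.
base_reserved_mb = 4096              # illustrative existing reservation, in MB
num_memory_encrypted_guests = 15     # matches the example value above
sev_overhead_mb = 16                 # added per potential SEV instance

nova_reserved_host_memory = base_reserved_mb + num_memory_encrypted_guests * sev_overhead_mb
print(nova_reserved_host_memory)  # -> 4336 (MB)
```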
5.6.4. Creating an image for memory encryption
When the overcloud contains AMD SEV Compute nodes, you can create an AMD SEV instance image that your cloud users can use to launch instances that have memory encryption.
From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts. The number of devices that can attach to a PCIe port is fewer than instances running on previous versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware.
Procedure
- Create a new image for memory encryption:

    (overcloud)$ openstack image create ... \
     --property hw_firmware_type=uefi amd-sev-image

  Note: If you use an existing image, the image must have the hw_firmware_type property set to uefi.

- Optional: Add the property hw_mem_encryption=True to the image to enable AMD SEV memory encryption on the image:

    (overcloud)$ openstack image set \
     --property hw_mem_encryption=True amd-sev-image

  Tip: You can enable memory encryption on the flavor. For more information, see Creating a flavor for memory encryption.

- Optional: Set the machine type to q35, if not already set in the Compute node configuration:

    (overcloud)$ openstack image set \
     --property hw_machine_type=q35 amd-sev-image

- Optional: To schedule memory-encrypted instances on a SEV-capable host aggregate, add the following trait to the image extra specs:

    (overcloud)$ openstack image set \
     --property trait:HW_CPU_X86_AMD_SEV=required amd-sev-image

  Tip: You can also specify this trait on the flavor. For more information, see Creating a flavor for memory encryption.
5.6.5. Creating a flavor for memory encryption
When the overcloud contains AMD SEV Compute nodes, you can create one or more AMD SEV flavors that your cloud users can use to launch instances that have memory encryption.
An AMD SEV flavor is necessary only when the hw_mem_encryption property is not set on an image.
Procedure
- Create a flavor for memory encryption:

    (overcloud)$ openstack flavor create --vcpus 1 --ram 512 --disk 2 \
     --property hw:mem_encryption=True m1.small-amd-sev

- To schedule memory-encrypted instances on a SEV-capable host aggregate, add the following trait to the flavor extra specs:

    (overcloud)$ openstack flavor set \
     --property trait:HW_CPU_X86_AMD_SEV=required m1.small-amd-sev
5.6.6. Launching an instance with memory encryption
To verify that you can launch instances on an AMD SEV Compute node with memory encryption enabled, use a memory encryption flavor or image to create an instance.
Procedure
- Create an instance by using an AMD SEV flavor or image. The following example creates an instance by using the flavor created in Creating a flavor for memory encryption and the image created in Creating an image for memory encryption:

    (overcloud)$ openstack server create --flavor m1.small-amd-sev \
     --image amd-sev-image amd-sev-instance

- Log in to the instance as a cloud user.
- To verify that the instance uses memory encryption, enter the following command from the instance:

    $ dmesg | grep -i sev
    AMD Secure Encrypted Virtualization (SEV) active