Chapter 5. Configuring memory on Compute nodes
As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC).
Use the following features to tune your instances:
- Overallocation: Tune the virtual RAM to physical RAM allocation ratio.
- Swap: Tune the allocated swap size to handle memory overcommit.
- Huge pages: Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages).
- File-backed memory: Use to expand your Compute node memory capacity.
- SEV: Use to enable your cloud users to create instances that use memory encryption.
5.1. Configuring memory for overallocation
When implementing memory overcommit, where ram_allocation_ratio >= 1.0, you must deploy the system with sufficient available swap space to support the configured allocation ratio.
If your ram_allocation_ratio parameter is set to < 1, follow the RHEL guidance for swap size. For more information, see Recommended system swap space in RHEL Managing Storage Devices.
Prerequisites
- You have calculated the swap size your node requires. For more information, see Calculating swap size.
Procedure
1. Open the `OpenStackDataPlaneNodeSet` CR definition file for the node set you want to update, for example, `my_data_plane_node_set.yaml`.
2. Add the required configuration or modify the existing configuration under `ansibleVars`:

   ```yaml
   apiVersion: dataplane.openstack.org/v1beta1
   kind: OpenStackDataPlaneNodeSet
   metadata:
     name: my-data-plane-node-set
   spec:
     ...
     nodeTemplate:
       ...
       ansible:
         ansibleVars:
           edpm_bootstrap_swap_size_megabytes: 1024
           edpm_bootstrap_swap_path: /swap
           edpm_bootstrap_swap_partition_enabled: false
           edpm_bootstrap_swap_partition_label: swap1
     ...
   ```

3. Save the `OpenStackDataPlaneNodeSet` CR definition file.
4. Apply the updated `OpenStackDataPlaneNodeSet` CR configuration:

   ```console
   $ oc apply -f my_data_plane_node_set.yaml -n openstack
   ```

5. Verify that the data plane resource has been updated:

   ```console
   $ oc get openstackdataplanenodeset
   ```

   Sample output:

   ```console
   NAME                     STATUS   MESSAGE
   my-data-plane-node-set   False    Deployment not started
   ```

6. Create a file on your workstation to define the `OpenStackDataPlaneDeployment` CR, for example, `my_data_plane_deploy.yaml`:

   ```yaml
   apiVersion: dataplane.openstack.org/v1beta1
   kind: OpenStackDataPlaneDeployment
   metadata:
     name: my-data-plane-deploy
   ```

   Tip: Give the definition file and the `OpenStackDataPlaneDeployment` CR a unique and descriptive name that indicates the purpose of the modified node set.

7. Add the `OpenStackDataPlaneNodeSet` CR that you modified:

   ```yaml
   spec:
     nodeSets:
     - my-data-plane-node-set
   ```

8. Save the `OpenStackDataPlaneDeployment` CR deployment file.
9. Deploy the modified `OpenStackDataPlaneNodeSet` CR:

   ```console
   $ oc create -f my_data_plane_deploy.yaml -n openstack
   ```

   You can view the Ansible logs while the deployment executes:

   ```console
   $ oc get pod -l app=openstackansibleee -n openstack -w
   $ oc logs -l app=openstackansibleee -n openstack -f \
       --max-log-requests 10
   ```

10. Verify that the modified `OpenStackDataPlaneNodeSet` CR is deployed:

    ```console
    $ oc get openstackdataplanedeployment -n openstack
    ```

    Sample output:

    ```console
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     Setup Complete
    ```

11. Repeat the `oc get` command until you see the `NodeSet Ready` message:

    ```console
    $ oc get openstackdataplanenodeset -n openstack
    ```

    Sample output:

    ```console
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready
    ```

    For information on the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
5.2. Calculating reserved host memory on Compute nodes
To determine the total amount of RAM to reserve for host processes, allocate enough memory for each of the following: the resources that run on the host, for example, a Ceph Object Storage Daemon (OSD) consumes 3 GB of memory; the emulator overhead required to host instances; and the hypervisor for each instance.
You can use the following formula to calculate the amount of memory to reserve for host processes on each node:
reserved_host_memory_mb = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + (resourceN * resource_ram))
- Replace `vm_no` with the number of instances.
- Replace `avg_instance_size` with the average amount of memory each instance can use.
- Replace `overhead` with the hypervisor overhead required for each instance.
- Replace `resource1` and all resources up to `<resourceN>` with the number of a resource type on the node.
- Replace `resource_ram` with the amount of RAM each resource of this type requires.
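As an illustration, the formula above can be evaluated with hypothetical numbers. All figures below (node size, instance count, per-instance overhead) are assumptions chosen for the sketch, not sizing recommendations; only the 3 GB per Ceph OSD figure comes from the text above.

```python
# Illustrative sketch of the reserved_host_memory_mb formula above.
# All values are hypothetical assumptions, not recommendations.
total_ram_mb = 131072          # a 128 GB node (assumed)
vm_no = 20                     # expected number of instances (assumed)
avg_instance_size_mb = 4096    # average memory per instance (assumed)
overhead_mb = 512              # hypervisor/emulator overhead per instance (assumed)
ceph_osds = 3                  # resources that run on the host (assumed count) ...
ceph_osd_ram_mb = 3072         # ... each consuming ~3 GB, as noted above

reserved_host_memory_mb = total_ram_mb - (
    (vm_no * (avg_instance_size_mb + overhead_mb))
    + (ceph_osds * ceph_osd_ram_mb)
)
print(reserved_host_memory_mb)
```

With these inputs, the instances account for 92160 MB and the OSDs for 9216 MB, leaving 29696 MB of the node's RAM.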
If this host will run workloads with a guest NUMA topology, for example, instances with CPU pinning, huge pages, or an explicit NUMA topology specified in the flavor, you must use the reserved_huge_pages configuration option to reserve the memory per NUMA node as 4096 pages.
5.3. Calculating swap size
Calculate the appropriate swap size required for your node to handle memory overcommit effectively.
Use the following formulas to calculate the swap size your node requires:
- overcommit_ratio = `ram_allocation_ratio` - 1
- Minimum swap size (MB) = (total_RAM * overcommit_ratio) + RHEL_min_swap
- Maximum swap size (MB) = total_RAM * (overcommit_ratio + percentage_of_RAM_to_use_for_swap)
The percentage_of_RAM_to_use_for_swap variable creates a buffer to account for QEMU overhead and any other resources consumed by the operating system or host services.
For example, to use 25% of the available RAM for swap, with 64 GB total RAM and ram_allocation_ratio set to 1:
- Recommended (maximum) swap size = 64000 MB * (0 + 0.25) = 16000 MB
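The worked example above can be checked with a short calculation. The `rhel_min_swap_mb` value below is a placeholder assumption; take the real figure from the RHEL swap guidance for your node size.

```python
# Sketch of the swap-size formulas above, using the worked example:
# 64000 MB total RAM, ram_allocation_ratio = 1, 25% of RAM as swap buffer.
total_ram_mb = 64000
ram_allocation_ratio = 1.0
pct_ram_for_swap = 0.25
rhel_min_swap_mb = 4096        # placeholder; see the RHEL guidance for the real value

overcommit_ratio = ram_allocation_ratio - 1                      # = 0
min_swap_mb = (total_ram_mb * overcommit_ratio) + rhel_min_swap_mb
max_swap_mb = total_ram_mb * (overcommit_ratio + pct_ram_for_swap)
print(int(max_swap_mb))
```

With no overcommit (ratio of 1), the minimum collapses to the RHEL baseline and the maximum is the 25% buffer: 16000 MB, matching the recommendation above.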
For information about how to determine the RHEL_min_swap value, see Recommended system swap space in the RHEL Managing Storage Devices guide.
5.4. Configuring huge pages on Compute nodes
As a cloud administrator, you can configure Compute nodes to enable instances to request and use huge pages.
Configuring huge pages creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts.
Prerequisites
- The `oc` command line tool is installed on your workstation.
- You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with `cluster-admin` privileges.
- You have selected the `OpenStackDataPlaneNodeSet` CR that defines the nodes that can enable instances to request huge pages. For more information about creating an `OpenStackDataPlaneNodeSet` CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.
Procedure
1. Create or update the `ConfigMap` CR named `nova-extra-config.yaml` and set the values of the parameters under `[default]` and `[libvirt]`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: nova-extra-config
     namespace: openstack
   data:
     28-nova-huge-pages.conf: |
       [default]
       reserved_huge_pages = node:0,size:2048,count:64
       reserved_huge_pages = node:1,size:1GB,count:1
       [libvirt]
       cpu_mode = custom
       cpu_models = Haswell-noTSX
       cpu_model_extra_flags = vmx, pdpe1gb, +pcid
   ```

   `28-nova-huge-pages.conf` is the name of the new Compute configuration file. The `nova-operator` generates the default configuration file with the name `01-nova.conf`. Do not use the default name, because it would override the infrastructure configuration, such as the `transport_url`. The `nova-compute` service applies every file under `/etc/nova/nova.conf.d/` in lexicographical order, so configurations defined in later files override the same configurations defined in earlier files.

   Note:
   - You do not need to configure CPU feature flags to allow instances to request only 2 MB huge pages.
   - You can allocate 1 GB huge pages to an instance only if the host supports 1 GB huge page allocation.
   - You only need to set `cpu_model_extra_flags` to `pdpe1gb` when `cpu_mode` is set to `host-model` or `custom`.
   - If the host supports `pdpe1gb`, and `host-passthrough` is used as the `cpu_mode`, then you do not need to set `pdpe1gb` as a `cpu_model_extra_flags`.

   Note: The `pdpe1gb` flag is only included in the Opteron_G4 and Opteron_G5 CPU models; it is not included in any of the Intel CPU models supported by QEMU. To mitigate CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws.

   For more information about creating `ConfigMap` objects, see Creating and using config maps.

2. Create a new `OpenStackDataPlaneDeployment` CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named `compute_huge_pages_deploy.yaml` on your workstation:

   ```yaml
   apiVersion: dataplane.openstack.org/v1beta1
   kind: OpenStackDataPlaneDeployment
   metadata:
     name: openstack-edpm-huge-pages
   ```

3. In `compute_huge_pages_deploy.yaml`, specify `nodeSets` to include all the `OpenStackDataPlaneNodeSet` CRs you want to deploy. Ensure that you include the `OpenStackDataPlaneNodeSet` CR that you selected as a prerequisite. That `OpenStackDataPlaneNodeSet` CR defines the nodes you want to designate for huge pages.

   Warning: You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

   Warning: If your deployment has more than one node set, changes to the `nova-extra-config.yaml` `ConfigMap` might directly affect more than one node set. To check whether a node set uses the `nova-extra-config` `ConfigMap`, and is therefore affected by the reconfiguration, complete the following steps:

   1. Navigate to the services list of the node set and find the name of the `DataPlaneService` that points to nova.
   2. Ensure that the value of the `edpmServiceType` field of the `DataPlaneService` is set to `nova`.
   3. If the `dataSources` list of the `DataPlaneService` contains a `configMapRef` named `nova-extra-config`, then this node set uses this `ConfigMap` and is therefore affected by the configuration changes in this `ConfigMap`. If some of the affected node sets should not be reconfigured, you must create a new `DataPlaneService` that points to a separate `ConfigMap` for these node sets.

   ```yaml
   apiVersion: dataplane.openstack.org/v1beta1
   kind: OpenStackDataPlaneDeployment
   metadata:
     name: openstack-edpm-huge-pages
   spec:
     nodeSets:
     - openstack-edpm
     - compute-huge-pages
     - ...
     - <nodeSet_name>
   ```

   Replace `<nodeSet_name>` with the names of the `OpenStackDataPlaneNodeSet` CRs that you want to include in your data plane deployment.

4. Save the `compute_huge_pages_deploy.yaml` deployment file.
5. Deploy the data plane:

   ```console
   $ oc create -f compute_huge_pages_deploy.yaml
   ```

6. Verify that the data plane is deployed:

   ```console
   $ oc get openstackdataplanenodeset
   NAME                 STATUS   MESSAGE
   compute-huge-pages   True     Deployed
   ```

7. Access the remote shell for `openstackclient` and verify that the deployed Compute nodes are visible on the control plane:

   ```console
   $ oc rsh -n openstack openstackclient
   $ openstack hypervisor list
   ```
5.4.1. Creating a huge pages flavor for instances
To enable your cloud users to create instances that use huge pages, you can create a flavor with the hw:mem_page_size extra spec key for launching instances.
To execute `openstack` client commands on the cloud, you must specify the name of the cloud detailed in your `clouds.yaml` file. You can specify the name of the cloud by using one of the following methods:

- Use the `--os-cloud` option with each command:

  ```console
  $ openstack flavor list --os-cloud <cloud_name>
  ```

  Use this option if you access more than one cloud.

- Create an environment variable for the cloud name in your `bashrc` file:

  ```console
  export OS_CLOUD=<cloud_name>
  ```
Prerequisites
- The Compute node is configured for huge pages. For more information, see Configuring huge pages on Compute nodes.
Procedure
1. Create a flavor for instances that require huge pages:

   ```console
   $ openstack flavor create --ram <size_mb> --disk <size_gb> \
     --vcpus <num_reserved_vcpus> huge_pages
   ```

2. To request huge pages, set the `hw:mem_page_size` property of the flavor to the required size:

   ```console
   $ openstack --os-compute-api=2.86 flavor set huge_pages --property hw:mem_page_size=<page_size>
   ```

   Replace `<page_size>` with one of the following valid values:

   - `large`: Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
   - `small`: (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
   - `any`: Selects the page size by using the `hw_mem_page_size` set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver.
   - `<pagesize>`: Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.

3. To verify that the flavor creates an instance with huge pages, use your new flavor to launch an instance:

   ```console
   $ openstack server create --flavor huge_pages \
     --image <image> huge_pages_instance
   ```

   The Compute scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, the request fails with a `NoValidHost` error.
5.4.2. Mounting multiple huge page folders during first boot
You can configure the Compute service (nova) to support multiple huge page sizes during the first boot process.
Procedure
1. Open the `OpenStackDataPlaneNodeSet` CR definition file for the node set you want to update, for example, `my_data_plane_node_set.yaml`.
2. Add the required configuration or modify the existing configuration in the `edpm_default_mounts` template under `ansibleVars`:

   ```yaml
   apiVersion: dataplane.openstack.org/v1beta1
   kind: OpenStackDataPlaneNodeSet
   metadata:
     name: my-data-plane-node-set
   spec:
     ...
     nodeTemplate:
       ...
       ansible:
         ansibleVars:
           edpm_default_mounts:
           - name: hugepages1G
             path: /dev/hugepages1G
             opts: pagesize=1G
             fstype: hugetlbfs
             group: hugetlbfs
           - name: hugepages2M
             path: /dev/hugepages2M
             opts: pagesize=2M
             fstype: hugetlbfs
             group: hugetlbfs
     ...
   ```

3. Save the `OpenStackDataPlaneNodeSet` CR definition file.
4. Apply the updated `OpenStackDataPlaneNodeSet` CR configuration:

   ```console
   $ oc apply -f my_data_plane_node_set.yaml
   ```

5. Verify that the data plane resource has been updated:

   ```console
   $ oc get openstackdataplanenodeset
   ```

   Sample output:

   ```console
   NAME                     STATUS   MESSAGE
   my-data-plane-node-set   False    Deployment not started
   ```

6. Create a file on your workstation to define the `OpenStackDataPlaneDeployment` CR, for example, `my_data_plane_deploy.yaml`:

   ```yaml
   apiVersion: dataplane.openstack.org/v1beta1
   kind: OpenStackDataPlaneDeployment
   metadata:
     name: my-data-plane-deploy
   ```

   Tip: Give the definition file and the `OpenStackDataPlaneDeployment` CR a unique and descriptive name that indicates the purpose of the modified node set.

7. Add the `OpenStackDataPlaneNodeSet` CR that you modified:

   ```yaml
   spec:
     nodeSets:
     - my-data-plane-node-set
   ```

8. Save the `OpenStackDataPlaneDeployment` CR deployment file.
9. Deploy the modified `OpenStackDataPlaneNodeSet` CR:

   ```console
   $ oc create -f my_data_plane_deploy.yaml -n openstack
   ```

   To view the Ansible logs while the deployment executes, enter the following commands:

   ```console
   $ oc get pod -l app=openstackansibleee -n openstack -w
   $ oc logs -l app=openstackansibleee -n openstack -f \
       --max-log-requests 10
   ```

10. Verify that the modified `OpenStackDataPlaneNodeSet` CR is deployed:

    ```console
    $ oc get openstackdataplanedeployment -n openstack
    ```

    Sample output:

    ```console
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     Setup Complete
    ```

11. Repeat the `oc get` command until you see the `NodeSet Ready` message:

    ```console
    $ oc get openstackdataplanenodeset -n openstack
    ```

    Sample output:

    ```console
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready
    ```

    For information on the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
5.5. Configuring Compute nodes to use file-backed memory for instances
You can use file-backed memory to expand your Compute node memory capacity by allocating files in the libvirt memory backing directory as instance memory. You can configure the amount of the host disk that is available for instance memory, and the location on the disk of the instance memory files.
The Compute (nova) service reports the capacity configured for file-backed memory to the Placement service as the total system memory capacity. This allows the Compute node to host more instances than would normally fit within the system memory.
To use file-backed memory for instances, you must enable file-backed memory on the Compute node.
There are limitations to using file-backed memory for instances:
- You cannot live migrate instances between Compute nodes that have file-backed memory enabled and Compute nodes that do not have file-backed memory enabled.
- File-backed memory is not compatible with huge pages. Instances that use huge pages cannot start on a Compute node with file-backed memory enabled. Use host aggregates to ensure that instances that use huge pages are not placed on Compute nodes with file-backed memory enabled.
- File-backed memory is not compatible with memory overcommit.
- You cannot reserve memory for host processes by using `reserved_host_memory_mb`. When file-backed memory is in use, reserved memory corresponds to disk space not set aside for file-backed memory. File-backed memory is reported to the Placement service as the total system memory, with RAM used as cache memory.
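To make the capacity reporting described above concrete, here is a small illustrative calculation. The node sizes are assumptions chosen for the sketch; the behavior shown (the configured file-backed capacity is reported as the total memory) follows the description above.

```python
# Illustrative only: with file-backed memory enabled, the Compute service
# reports the configured file-backed capacity to Placement as the total
# system memory, regardless of the physical RAM installed.
file_backed_memory_mib = 1048576   # [libvirt] file_backed_memory value, in MiB (1 TiB)
physical_ram_gib = 256             # hypothetical physical RAM on the node

reported_total_gib = file_backed_memory_mib // 1024
print(reported_total_gib)          # reported capacity in GiB, despite 256 GiB of RAM
```

This is why memory overcommit is incompatible with file-backed memory: the reported capacity is already decoupled from RAM, so `ram_allocation_ratio` must stay at 1.0.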
Prerequisites
- `ram_allocation_ratio` must be set to "1.0" on the node and on any host aggregate the node is added to.
- `reserved_host_memory_mb` must be set to "0".
- The `oc` command line tool is installed on your workstation.
- You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with `cluster-admin` privileges.
- You have selected the `OpenStackDataPlaneNodeSet` CR that defines which nodes use file-backed memory for instances. For more information about creating an `OpenStackDataPlaneNodeSet` CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.
Procedure
1. Create or update the `ConfigMap` CR named `nova-extra-config.yaml` and set the values of the parameters under `[libvirt]`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: nova-extra-config
     namespace: openstack
   data:
     30-nova-file-backed-memory.conf: |
       [libvirt]
       file_backed_memory = 1048576
   ```

   For more information about creating `ConfigMap` objects, see Creating and using config maps.

2. Optional: To configure the directory to store the memory backing files, set the `memory_backing_dir` parameter. The default memory backing directory is `/var/lib/libvirt/qemu/ram/`:

   ```ini
   [libvirt]
   file_backed_memory = 1048576
   memory_backing_dir = <new_directory_location>
   ```

   Replace `<new_directory_location>` with the location of the memory backing directory.

   Note: You must locate your backing store in a directory at or above the default directory location, `/var/lib/libvirt/qemu/ram/`. You can also change the host disk for the backing store. For more information, see Changing the memory backing directory host disk.

3. Create a new `OpenStackDataPlaneDeployment` CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named `compute_file_backed_memory_deploy.yaml` on your workstation:

   ```yaml
   apiVersion: dataplane.openstack.org/v1beta1
   kind: OpenStackDataPlaneDeployment
   metadata:
     name: compute-file-backed-memory
   ```

4. In `compute_file_backed_memory_deploy.yaml`, specify `nodeSets` to include all the `OpenStackDataPlaneNodeSet` CRs you want to deploy. Ensure that you include the `OpenStackDataPlaneNodeSet` CR that you selected as a prerequisite. That `OpenStackDataPlaneNodeSet` CR defines the nodes you want to designate for file-backed memory.

   Warning: You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

   Warning: If your deployment has more than one node set, changes to the `nova-extra-config.yaml` `ConfigMap` might directly affect more than one node set, depending on how the node sets and the `DataPlaneServices` are configured. To check whether a node set uses the `nova-extra-config` `ConfigMap`, and is therefore affected by the reconfiguration, complete the following steps:

   1. Check the services list of the node set and find the name of the `DataPlaneService` that points to nova.
   2. Ensure that the value of the `edpmServiceType` field of the `DataPlaneService` is set to `nova`.
   3. If the `dataSources` list of the `DataPlaneService` contains a `configMapRef` named `nova-extra-config`, then this node set uses this `ConfigMap` and is therefore affected by the configuration changes in this `ConfigMap`. If some of the affected node sets should not be reconfigured, you must create a new `DataPlaneService` that points to a separate `ConfigMap` for these node sets.

   ```yaml
   apiVersion: dataplane.openstack.org/v1beta1
   kind: OpenStackDataPlaneDeployment
   metadata:
     name: compute-file-backed-memory
   spec:
     nodeSets:
     - openstack-edpm
     - compute-file-backed-memory
     - ...
     - <nodeSet_name>
   ```

   Replace `<nodeSet_name>` with the names of the `OpenStackDataPlaneNodeSet` CRs that you want to include in your data plane deployment.

5. Save the `compute_file_backed_memory_deploy.yaml` deployment file.
6. Deploy the data plane:

   ```console
   $ oc create -f compute_file_backed_memory_deploy.yaml
   ```

7. Verify that the data plane is deployed:

   ```console
   $ oc get openstackdataplanenodeset
   NAME                         STATUS   MESSAGE
   compute-file-backed-memory   True     Deployed
   ```

8. Access the remote shell for `openstackclient` and verify that the deployed Compute nodes are visible on the control plane:

   ```console
   $ oc rsh -n openstack openstackclient
   $ openstack hypervisor list
   ```
5.5.1. Changing the memory backing directory host disk
You can move the memory backing directory from the default primary disk location to an alternative disk.
Procedure
1. Create a file system on the alternative backing device. For example, enter the following command to create an `ext4` file system on `/dev/sdb`:

   ```console
   # mkfs.ext4 /dev/sdb
   ```

2. Mount the backing device. For example, enter the following command to mount `/dev/sdb` on the default libvirt memory backing directory:

   ```console
   # mount /dev/sdb /var/lib/libvirt/qemu/ram
   ```

   Note: The mount point must match the value of the `memory_backing_dir` parameter.
5.6. Configuring AMD SEV Compute nodes to provide memory encryption for instances
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Secure Encrypted Virtualization (SEV) hardware, provided by AMD, protects the data in DRAM that a running virtual machine instance is using. SEV encrypts the memory of each instance with a unique key.
As a cloud administrator, you can provide cloud users the ability to create instances that run on SEV-capable Compute nodes with memory encryption enabled.
This feature is available to use from the 2nd Gen AMD EPYC™ 7002 Series ("Rome").
To enable your cloud users to create instances that use memory encryption, you must perform the following tasks:
- Designate the AMD SEV Compute nodes for memory encryption.
- Configure the Compute nodes for memory encryption.
- Deploy the data plane.
- Create a flavor or image for launching instances with memory encryption.
If the AMD SEV hardware is limited, you can also configure a host aggregate to optimize scheduling on the AMD SEV Compute nodes. To schedule only instances that request memory encryption on the AMD SEV Compute nodes, create a host aggregate of the Compute nodes that have the AMD SEV hardware, and configure the Compute scheduler to place only instances that request memory encryption on the host aggregate.
For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates.
5.6.1. Secure Encrypted Virtualization (SEV)
Secure Encrypted Virtualization (SEV), provided by AMD, protects data by encrypting a running instance’s memory by using a unique key.
SEV increases security when you use non-volatile memory technology (NVDIMM), because an NVDIMM chip can be physically removed from a system with the data intact, similar to a hard drive. Without encryption, any stored information such as sensitive data, passwords, or secret keys can be compromised.
For more information, see the AMD Secure Encrypted Virtualization (SEV) documentation.
There are limitations of instances with memory encryption:
- You cannot live migrate, or suspend and resume instances with memory encryption.
- You cannot use PCI passthrough to directly access devices on instances with memory encryption.
- You cannot use `virtio-blk` as the boot disk of instances with memory encryption with Red Hat Enterprise Linux (RHEL) kernels earlier than kernel-4.18.0-115.el8 (RHEL 8.1.0). Note: You can use `virtio-scsi` or `SATA` as the boot disk, or `virtio-blk` for non-boot disks.
- The operating system that runs in an encrypted instance must provide SEV support. For more information, see the Red Hat Knowledgebase solution Enabling AMD Secure Encrypted Virtualization in RHEL 8.
- Machines that support SEV have a limited number of slots in their memory controller for storing encryption keys. Each running instance with encrypted memory consumes one of these slots. Therefore, the number of instances with memory encryption that can run concurrently is limited to the number of slots in the memory controller. For example, on 1st Gen AMD EPYC™ 7001 Series ("Naples") the limit is 16, and on 2nd Gen AMD EPYC™ 7002 Series ("Rome") the limit is 255.
- Instances with memory encryption pin pages in RAM. The Compute service cannot swap these pages, therefore you cannot overcommit memory on a Compute node that hosts instances with memory encryption.
- You cannot use memory encryption with instances that have multiple NUMA nodes.
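The key-slot limit described in the list above can be expressed as a simple admission check. This is a conceptual sketch, not the actual Compute scheduler logic; the slot counts come from the text above.

```python
# Conceptual sketch (not actual Compute scheduler code): a host can accept a
# new memory-encrypted instance only while free key slots remain in its
# memory controller.
def can_launch_encrypted(running_encrypted: int, slot_limit: int) -> bool:
    """Return True if one more encrypted instance fits within the slot limit."""
    return running_encrypted < slot_limit

naples_slots = 16    # 1st Gen AMD EPYC 7001 ("Naples"), per the limits above
rome_slots = 255     # 2nd Gen AMD EPYC 7002 ("Rome")

print(can_launch_encrypted(16, naples_slots))  # Naples: all 16 slots in use
print(can_launch_encrypted(16, rome_slots))    # Rome: slots still available
```

This is also why the `num_memory_encrypted_guests` option described later is useful: it lets you impose a limit below the hardware maximum so launches fail at scheduling time rather than at the hypervisor.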
5.6.2. Designating AMD SEV Compute nodes for memory encryption
To designate AMD SEV Compute nodes for instances that use memory encryption, you must create a new node set to configure the AMD SEV role, and configure the bare metal nodes with an AMD SEV resource class to use to tag the Compute nodes for memory encryption.
The following procedure applies to new data plane nodes that have not yet been provisioned. To assign a resource class to an existing node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling data plane nodes.
Procedure
- See Configuring a node set for a feature or workload in Customizing the Red Hat OpenStack Services on OpenShift deployment.
5.6.3. Configuring AMD SEV Compute nodes for memory encryption
To enable your cloud users to create instances that use memory encryption, you must configure the Compute nodes that have the AMD SEV hardware.
You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.
Prerequisites
- The `oc` command line tool is installed on your workstation.
- You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with `cluster-admin` privileges.
- You have selected the `OpenStackDataPlaneNodeSet` CR that defines the nodes that you want to configure for memory encryption. For more information about creating an `OpenStackDataPlaneNodeSet` CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Your deployment must include a Compute node that runs on AMD hardware that is capable of supporting SEV, such as an AMD EPYC CPU. You can use the following command to determine if your deployment is SEV-capable:

  ```console
  $ lscpu | grep sev
  ```
Procedure
Create or update the ConfigMap CR named
nova-extra-config.yamland set the values of the parameters under[libvirt]:apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 30-nova-amd-sev.conf: | [libvirt] num_memory_encrypted_guests = 15NoteThe default value of the
libvirt/num_memory_encrypted_guestsparameter is none. If you do not set a custom value, the AMD SEV Compute nodes do not impose a limit on the number of memory-encrypted instances that the nodes can host concurrently. Instead, the hardware determines the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently, which might cause some memory-encrypted instances to fail to launch.NoteThe Q35 machine type is the default machine type and is required for SEV.
-
To configure the kernel parameters for the AMD SEV Compute nodes, open the
OpenStackDataPlaneNodeSetCR definition file for the node set you want to update, for example,my_data_plane_node_set.yaml. Add the required network configuration or modify the existing configuration in
edpm_kernel_argsunderansibleVars:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: my-data-plane-node-set spec: ... nodeTemplate: ... ansible: ansibleVars: edpm_kernel_args: "kvm_amd.sev=1"Optional: To encrypt host memory, you can add
mem_encrypt=ontoedpm_kernel_args:WarningEnsure that your device driver supports memory encryption.
edpm_kernel_args: "kvm_amd.sev=1 mem_encrypt=on"Create a new
OpenStackDataPlaneDeploymentCR to configure the services on the data plane nodes and deploy the data plane, and save it to a file namedcompute_amd_sev_deploy.yamlon your workstation:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-amd-sevIn the
compute_amd_sev_deploy.yaml, specifynodeSetsto include all theOpenStackDataPlaneNodeSetCRs you want to deploy. Ensure that you include theOpenStackDataPlaneNodeSetCR that you selected as a prerequisite. ThatOpenStackDataPlaneNodeSetCR defines the nodes you want to designate for memory encryption.WarningIf your deployment has more than one node set, changes to the
nova-extra-config.yamlConfigMap might directly affect more than one node set, depending on how the NodeSets and the DataPlaneServices are configured. To check if a node set uses thenova-extra-config.yamlConfigMap and therefore will be affected by the reconfiguration, complete the following steps:- Check the services list of the node set and find the name of the DataPlaneService that points to nova.
Ensure that the value of the
edpmServiceTypefield of the DataPlaneService is set tonova.If the dataSources list of the DataPlaneService contains a configMapRef named
nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets.
```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-edpm-amd-sev
spec:
  nodeSets:
    - openstack-edpm
    - compute-amd-sev
    - my-data-plane-node-set
    - ...
    - <nodeSet_name>
```

- Replace `<nodeSet_name>` with the names of the `OpenStackDataPlaneNodeSet` CRs that you want to include in your data plane deployment.
- Save the `compute_amd_sev_deploy.yaml` deployment file.

Deploy the data plane:

```console
$ oc create -f compute_amd_sev_deploy.yaml
```

Verify that the data plane is deployed:

```console
$ oc get openstackdataplanenodeset
NAME            STATUS   MESSAGE
openstack-edpm  True     Deployed
```

Access the remote shell for `openstackclient` and verify that the deployed Compute nodes are visible on the control plane:

```console
$ oc rsh -n openstack openstackclient
$ openstack hypervisor list
```
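The verification step reads the STATUS column of the `oc get openstackdataplanenodeset` table. A minimal sketch of checking that output programmatically (the column layout is assumed from the example above; this helper is illustrative, not part of the product):

```python
# Hypothetical helper: parse the tabular output of
# `oc get openstackdataplanenodeset` and report node sets not yet deployed.
def undeployed(oc_output):
    """Return names of node sets whose STATUS column is not 'True'."""
    lines = oc_output.strip().splitlines()
    result = []
    for line in lines[1:]:  # skip the NAME/STATUS/MESSAGE header row
        fields = line.split()
        name, status = fields[0], fields[1]
        if status != "True":
            result.append(name)
    return result

# Sample output while one node set is still deploying.
sample = """\
NAME            STATUS   MESSAGE
openstack-edpm  True     Deployed
compute-amd-sev False    Deploying
"""
print(undeployed(sample))  # ['compute-amd-sev']
```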
5.6.4. Creating an image for memory encryption
When the data plane contains AMD SEV Compute nodes, you can create an AMD SEV instance image that your cloud users can use to launch instances that have memory encryption.
To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods:

- Use the `--os-cloud` option with each command:

  ```console
  $ openstack flavor list --os-cloud <cloud_name>
  ```

  Use this option if you access more than one cloud.

- Create an environment variable for the cloud name in your `bashrc` file:

  `export OS_CLOUD=<cloud_name>`
Prerequisites
- The administrator has created a project for you, and they have provided you with a `clouds.yaml` file for you to access the cloud.
- You have installed the `python-openstackclient` package.
Procedure
Create a new image for memory encryption:
```console
$ openstack image create ... \
  --property hw_firmware_type=uefi amd-sev-image
```

Note: If you use an existing image, the image must have the `hw_firmware_type` property set to `uefi`.

Add the property `hw_mem_encryption=True` to the image to enable AMD SEV memory encryption on the image:

```console
$ openstack image set \
  --property hw_mem_encryption=True amd-sev-image
```

Tip: You can enable memory encryption on the flavor. For more information, see Creating a flavor for memory encryption.
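The two properties set in this procedure work together: an image is usable for SEV instances when it boots with UEFI firmware and requests memory encryption. A small illustrative sketch of that validation (the property names come from the procedure above; the function itself is a hypothetical helper, not an OpenStack API):

```python
# Illustrative check: an image supports AMD SEV instances only when it
# boots with UEFI firmware and requests memory encryption.
def sev_ready(image_properties):
    return (image_properties.get("hw_firmware_type") == "uefi"
            and str(image_properties.get("hw_mem_encryption")) == "True")

print(sev_ready({"hw_firmware_type": "uefi", "hw_mem_encryption": "True"}))  # True
print(sev_ready({"hw_firmware_type": "bios"}))                               # False
```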
5.6.5. Creating a flavor for memory encryption
When the data plane contains AMD SEV Compute nodes, you can create one or more AMD SEV flavors that your cloud users can use to launch instances that have memory encryption.
Prerequisites
- You have the `oc` command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Note: An AMD SEV flavor is necessary only when the `hw_mem_encryption` property is not set on an image.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
```console
$ oc rsh -n openstack openstackclient
```

Create a flavor for memory encryption:

```console
$ openstack flavor create --vcpus 1 --ram 512 --disk 2 \
  --property hw:mem_encryption=True m1.small-amd-sev
```

Exit the openstackclient pod:
$ exit
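As the note above states, memory encryption can be requested from either side: the `hw_mem_encryption` image property or the `hw:mem_encryption` flavor extra spec. The relationship can be sketched as follows (an illustrative helper, not Nova's actual scheduling code):

```python
# Illustrative sketch: an instance requests memory encryption when either
# the image property hw_mem_encryption or the flavor extra spec
# hw:mem_encryption is set to True.
def wants_mem_encryption(image_props, flavor_extra_specs):
    return (str(image_props.get("hw_mem_encryption")) == "True"
            or str(flavor_extra_specs.get("hw:mem_encryption")) == "True")

print(wants_mem_encryption({}, {"hw:mem_encryption": "True"}))  # True
print(wants_mem_encryption({"hw_mem_encryption": "True"}, {}))  # True
print(wants_mem_encryption({}, {}))                             # False
```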
5.6.6. Launching an instance with memory encryption
To verify that you can launch instances on an AMD SEV Compute node with memory encryption enabled, use a memory encryption flavor or image to create an instance.
To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods:

- Use the `--os-cloud` option with each command:

  ```console
  $ openstack flavor list --os-cloud <cloud_name>
  ```

  Use this option if you access more than one cloud.

- Create an environment variable for the cloud name in your `bashrc` file:

  `export OS_CLOUD=<cloud_name>`
Prerequisites
- The administrator has created a project for you, and they have provided you with a `clouds.yaml` file for you to access the cloud.
- You have installed the `python-openstackclient` package.
Procedure
Create an instance by using an AMD SEV flavor or image. The following example creates an instance by using the flavor created in Creating a flavor for memory encryption and the image created in Creating an image for memory encryption:
```console
$ openstack server create --flavor m1.small-amd-sev \
  --image amd-sev-image amd-sev-instance
```

- Log in to the instance as a cloud user.
To verify that the instance uses memory encryption, enter the following command from the instance:
```console
$ dmesg | grep -i sev
AMD Secure Encrypted Virtualization (SEV) active
```
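The same verification can be scripted: scan the kernel log text for the SEV-active message shown above. This sketch only matches the line; the sample strings are placeholders:

```python
import re

# Illustrative check: scan dmesg output for the line confirming that
# SEV is active inside the guest.
def sev_active(dmesg_output):
    return re.search(r"AMD Secure Encrypted Virtualization \(SEV\) active",
                     dmesg_output) is not None

print(sev_active("AMD Secure Encrypted Virtualization (SEV) active"))  # True
print(sev_active("no relevant kernel messages"))                       # False
```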