Chapter 5. Configuring memory on Compute nodes


As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC).

Use the following features to tune your instances:

  • Overallocation: Tune the virtual RAM to physical RAM allocation ratio.
  • Swap: Tune the allocated swap size to handle memory overcommit.
  • Huge pages: Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages).
  • File-backed memory: Use to expand your Compute node memory capacity.
  • SEV: Use to enable your cloud users to create instances that use memory encryption.

5.1. Configuring memory for overallocation

When implementing memory overcommit, where ram_allocation_ratio >= 1.0, you must deploy the system with sufficient available swap space to support the configured allocation ratio.

Note

If your ram_allocation_ratio parameter is set to < 1, follow the RHEL guidance for swap size. For more information, see Recommended system swap space in RHEL Managing Storage Devices.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
  2. Add the required configuration or modify the existing configuration under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
      ...
      nodeTemplate:
        ...
        ansible:
          ansibleVars:
             edpm_bootstrap_swap_size_megabytes: 1024
             edpm_bootstrap_swap_path: /swap
             edpm_bootstrap_swap_partition_enabled: false
             edpm_bootstrap_swap_partition_label: swap1
             ...
  3. Save the OpenStackDataPlaneNodeSet CR definition file.
  4. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml -n openstack
  5. Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset

    Sample output:

    NAME                     STATUS MESSAGE
    my-data-plane-node-set   False  Deployment not started
  6. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.

  7. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - my-data-plane-node-set
  8. Save the OpenStackDataPlaneDeployment CR deployment file.
  9. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack
  10. You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  11. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    Example:

    $ oc get openstackdataplanedeployment -n openstack

    Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     Setup Complete
  12. Repeat the oc get command until you see the NodeSet Ready message:

    Example:

    $ oc get openstackdataplanenodeset -n openstack

    Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

    For information on the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

5.2. Calculating reserved host memory on Compute nodes

To determine the total amount of RAM to reserve for host processes, allocate enough memory for each of the following: the resources that run on the host, for example, a Ceph Object Storage Daemon (OSD) consumes 3 GB of memory; the emulator overhead required to host instances; and the hypervisor overhead for each instance.

You can use the following formula to calculate the amount of memory to reserve for host processes on each node:

reserved_host_memory_mb = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + ... + (resourceN * resource_ram))
  • Replace vm_no with the number of instances.
  • Replace avg_instance_size with the average amount of memory each instance can use.
  • Replace overhead with the hypervisor overhead required for each instance.
  • Replace resource1 through resourceN with the number of each resource type on the node.
  • Replace resource_ram with the amount of RAM each resource of this type requires.
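
As a worked illustration of this formula (all node counts and sizes below are assumed example values, not recommendations from this guide):

```python
# Illustrative calculation of reserved_host_memory_mb; all input values
# are example assumptions, not recommendations.
def reserved_host_memory_mb(total_ram_mb, vm_no, avg_instance_size_mb,
                            overhead_mb, resources):
    """resources: list of (count, ram_per_resource_mb) tuples."""
    instance_mem = vm_no * (avg_instance_size_mb + overhead_mb)
    resource_mem = sum(count * ram for count, ram in resources)
    return total_ram_mb - (instance_mem + resource_mem)

# Example: a 256 GB node hosting 40 instances of 4 GB each with 100 MB
# of hypervisor overhead per instance, plus 3 Ceph OSDs at 3 GB (3072 MB) each.
reserved = reserved_host_memory_mb(
    total_ram_mb=262144,
    vm_no=40,
    avg_instance_size_mb=4096,
    overhead_mb=100,
    resources=[(3, 3072)],
)
print(reserved)  # 85088
```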
Note

If this host runs workloads with a guest NUMA topology, for example, instances with CPU pinning, huge pages, or an explicit NUMA topology specified in the flavor, you must instead use the reserved_huge_pages configuration option to reserve memory on each NUMA node, including memory for the normal 4 KB page size.


5.3. Calculating swap size

Calculate the appropriate swap size required for your node to handle memory overcommit effectively.

Use the following formulas to calculate the swap size your node requires:

  • overcommit_ratio = ram_allocation_ratio - 1
  • Minimum swap size (MB) = (total_RAM * overcommit_ratio) + RHEL_min_swap
  • Maximum swap size (MB) = total_RAM * (overcommit_ratio + percentage_of_RAM_to_use_for_swap)

The percentage_of_RAM_to_use_for_swap variable creates a buffer to account for QEMU overhead and any other resources consumed by the operating system or host services.

For instance, to use 25% of the available RAM for swap, with 64GB total RAM, and ram_allocation_ratio set to 1:

  • Recommended (maximum) swap size = 64000 MB * (0 + 0.25) = 16000 MB

For information about how to determine the RHEL_min_swap value, see Recommended system swap space in the RHEL Managing Storage Devices guide.
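
The formulas above can be sketched in a short calculation. The rhel_min_swap_mb value of 4096 MB used here is an assumed placeholder for illustration; determine the actual value from the RHEL swap-space guidance for your node:

```python
# Swap sizing per the formulas in this section; rhel_min_swap_mb is an
# assumed placeholder -- consult the RHEL swap-space guidance for your node.
def swap_sizes_mb(total_ram_mb, ram_allocation_ratio, pct_ram_for_swap,
                  rhel_min_swap_mb):
    overcommit_ratio = ram_allocation_ratio - 1
    minimum = total_ram_mb * overcommit_ratio + rhel_min_swap_mb
    maximum = total_ram_mb * (overcommit_ratio + pct_ram_for_swap)
    return minimum, maximum

# The worked example from this section: 64 GB of RAM,
# ram_allocation_ratio = 1, and 25% of RAM used for swap.
minimum, maximum = swap_sizes_mb(64000, 1.0, 0.25, rhel_min_swap_mb=4096)
print(minimum, maximum)  # 4096.0 16000.0
```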

5.4. Configuring huge pages on Compute nodes

As a cloud administrator, you can configure Compute nodes to enable instances to request and use huge pages.

Note

Configuring huge pages creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts.

Prerequisites

  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes that can enable instances to request huge pages. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.

Procedure

  1. Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [default] and [libvirt]:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       28-nova-huge-pages.conf: |
          [default]
          reserved_huge_pages = node:0,size:2048,count:64
          reserved_huge_pages = node:1,size:1GB,count:1
          [libvirt]
          cpu_mode = custom
          cpu_models = Haswell-noTSX
          cpu_model_extra_flags = vmx, pdpe1gb, +pcid
    • 28-nova-huge-pages.conf is the name of the new Compute configuration file. The nova-operator generates the default configuration file with the name 01-nova.conf. Do not use the default name, because it would override the infrastructure configuration, such as the transport_url. The nova-compute service applies every file under /etc/nova/nova.conf.d/ in lexicographical order; therefore, configurations defined in later files override the same configurations defined in earlier files.

      Note
      • You do not need to configure CPU feature flags to allow instances to request 2 MB huge pages.
      • You can allocate 1 GB huge pages to an instance only if the host supports 1 GB huge page allocation.
      • You need to set cpu_model_extra_flags to pdpe1gb only when cpu_mode is set to host-model or custom.
      • If the host supports pdpe1gb, and host-passthrough is used as the cpu_mode, then you do not need to set pdpe1gb as a cpu_model_extra_flags.

        Note

        The pdpe1gb flag is only included in the Opteron_G4 and Opteron_G5 CPU models; it is not included in any of the Intel CPU models supported by QEMU. To mitigate CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws.

      For more information about creating ConfigMap objects, see Creating and using config maps.

  2. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_huge_pages_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: openstack-edpm-huge-pages
  3. In the compute_huge_pages_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for huge pages.

    Warning

    You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set. To check if a node set uses the nova-extra-config.yaml ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the DataPlaneService that points to nova.
    2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

      If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: openstack-edpm-huge-pages
    spec:
       nodeSets:
        - openstack-edpm
        - compute-huge-pages
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  4. Save the compute_huge_pages_deploy.yaml deployment file.
  5. Deploy the data plane:

    $ oc create -f compute_huge_pages_deploy.yaml
  6. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    
    NAME               STATUS MESSAGE
    compute-huge-pages True   Deployed
  7. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    
    $ openstack hypervisor list

5.4.1. Creating a huge pages flavor for instances

To enable your cloud users to create instances that use huge pages, you can create a flavor with the hw:mem_page_size extra spec key for launching instances.

Note

To execute openstack client commands on the cloud you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods:

  • Use the --os-cloud option with each command:

    $ openstack flavor list --os-cloud <cloud_name>

    Use this option if you access more than one cloud.

  • Create an environment variable for the cloud name in your bashrc file:

    `export OS_CLOUD=<cloud_name>`

Procedure

  1. Create a flavor for instances that require huge pages:

    $ openstack flavor create --ram <size_mb> --disk <size_gb> \
     --vcpus <num_reserved_vcpus> huge_pages
  2. To request huge pages, set the hw:mem_page_size property of the flavor to the required size:

    $ openstack --os-compute-api-version 2.86 flavor set huge_pages --property hw:mem_page_size=<page_size>
    • Replace <page_size> with one of the following valid values:

      • large: Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
      • small: (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 KB (normal pages).
      • any: Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver.
      • <pagesize>: Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.
  3. To verify the flavor creates an instance with huge pages, use your new flavor to launch an instance:

    $ openstack server create --flavor huge_pages \
     --image <image> huge_pages_instance

    The Compute scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, then the request will fail with a NoValidHost error.
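
The explicit page-size convention described above (a bare integer is interpreted as KB; standard suffixes are also accepted) can be illustrated with a small normalization helper. This is an illustrative sketch, not code from the Compute service:

```python
# Illustrative normalization of explicit hw:mem_page_size values to KB.
# A bare integer is already in KB; standard suffixes (KB, MB, GB) are
# converted. This is a sketch, not Compute service code.
UNITS_KB = {"KB": 1, "MB": 1024, "GB": 1024 * 1024}

def page_size_kb(value):
    value = value.strip().upper()
    for suffix, factor in UNITS_KB.items():
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value)  # bare integer is already in KB

for spec in ("4KB", "2MB", "2048", "1GB"):
    print(spec, "->", page_size_kb(spec), "KB")
```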

5.4.2. Mounting multiple huge page folders during first boot

You can configure the Compute service (nova) to support multiple huge page sizes during the first boot process.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
  2. Add the required configuration or modify the existing configuration in the edpm_default_mounts template under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
      ...
      nodeTemplate:
        ...
        ansible:
          ansibleVars:
            edpm_default_mounts:
                    - name: hugepages1G
                      path: /dev/hugepages1G
                      opts: pagesize=1G
                      fstype: hugetlbfs
                      group: hugetlbfs
                    - name: hugepages2M
                      path: /dev/hugepages2M
                      opts: pagesize=2M
                      fstype: hugetlbfs
                      group: hugetlbfs
             ...
  3. Save the OpenStackDataPlaneNodeSet CR definition file.
  4. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml -n openstack
  5. Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset

    Sample output:

    NAME                     STATUS MESSAGE
    my-data-plane-node-set   False  Deployment not started
  6. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.

  7. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - my-data-plane-node-set
  8. Save the OpenStackDataPlaneDeployment CR deployment file.
  9. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack
  10. To view the Ansible logs while the deployment executes, enter the following command:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  11. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    Example:

    $ oc get openstackdataplanedeployment -n openstack

    Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     Setup Complete
  12. Repeat the oc get command until you see the NodeSet Ready message:

    Example:

    $ oc get openstackdataplanenodeset -n openstack

    Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

    For information on the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

5.5. Configuring Compute nodes to use file-backed memory for instances

You can use file-backed memory to expand your Compute node memory capacity by allocating files in the libvirt memory backing directory as instance memory. You can configure the amount of the host disk that is available for instance memory, and the location on the disk of the instance memory files.

The Compute (nova) service reports the capacity configured for file-backed memory to the Placement service as the total system memory capacity. This allows the Compute node to host more instances than would normally fit within the system memory.

To use file-backed memory for instances, you must enable file-backed memory on the Compute node.

There are limitations to using file-backed memory for instances:

  • You cannot live migrate instances between Compute nodes that have file-backed memory enabled and Compute nodes that do not have file-backed memory enabled.
  • File-backed memory is not compatible with huge pages. Instances that use huge pages cannot start on a Compute node with file-backed memory enabled. Use host aggregates to ensure that instances that use huge pages are not placed on Compute nodes with file-backed memory enabled.
  • File-backed memory is not compatible with memory overcommit.
  • You cannot reserve memory for host processes using reserved_host_memory_mb. When file-backed memory is in use, reserved memory corresponds to disk space not set aside for file-backed memory. File-backed memory is reported to the Placement service as the total system memory, with RAM used as cache memory.

Prerequisites

  • ram_allocation_ratio must be set to "1.0" on the node and any host aggregate the node is added to.
  • reserved_host_memory_mb must be set to "0".
  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet CR that defines which nodes use file-backed memory for instances. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.

Procedure

  1. Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [libvirt]:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       30-nova-file-backed-memory.conf: |
          [libvirt]
          file_backed_memory = 1048576

    For more information about creating ConfigMap objects, see Creating and using config maps.

  2. Optional: To configure the directory to store the memory backing files, set the memory_backing_dir parameter. The default memory backing directory is /var/lib/libvirt/qemu/ram/:

    [libvirt]
    file_backed_memory = 1048576
    memory_backing_dir = <new_directory_location>
    • Replace <new_directory_location> with the location of the memory backing directory.

      Note

      You must locate your backing store in a directory at or above the default directory location, /var/lib/libvirt/qemu/ram/. You can also change the host disk for the backing store. For more information, see Changing the memory backing directory host disk.

  3. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_file_backed_memory_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-file-backed-memory
  4. In the compute_file_backed_memory_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for file-backed memory.

    Warning

    You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the DataPlaneService that points to nova.
    2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

      If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-file-backed-memory
    spec:
       nodeSets:
         - openstack-edpm
         - compute-file-backed-memory
         - ...
         - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  5. Save the compute_file_backed_memory_deploy.yaml deployment file.
  6. Deploy the data plane:

    $ oc create -f compute_file_backed_memory_deploy.yaml
  7. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    
    NAME                       STATUS MESSAGE
    compute-file-backed-memory True   Deployed
  8. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    
    $ openstack hypervisor list
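
The file_backed_memory value is expressed in MiB, so the value 1048576 used in this procedure corresponds to 1 TiB of file-backed capacity. A minimal sketch of the conversion (the helper name is illustrative):

```python
# file_backed_memory is expressed in MiB; convert a desired capacity in
# GiB to the MiB value used in the [libvirt] configuration.
def file_backed_memory_mib(capacity_gib):
    return capacity_gib * 1024

print(file_backed_memory_mib(1024))  # 1048576 MiB == 1 TiB, as configured above
```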

5.5.1. Changing the memory backing directory host disk

You can move the memory backing directory from the default primary disk location to an alternative disk.

Procedure

  1. Create a file system on the alternative backing device. For example, enter the following command to create an ext4 filesystem on /dev/sdb:

    # mkfs.ext4 /dev/sdb
  2. Mount the backing device. For example, enter the following command to mount /dev/sdb on the default libvirt memory backing directory:

    # mount /dev/sdb /var/lib/libvirt/qemu/ram
    Note

    The mount point must match the value of the memory_backing_dir parameter.

5.6. Configuring AMD SEV Compute nodes to provide memory encryption for instances

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Secure Encrypted Virtualization (SEV) hardware, provided by AMD, protects the data in DRAM that a running virtual machine instance is using. SEV encrypts the memory of each instance with a unique key.

As a cloud administrator, you can provide cloud users the ability to create instances that run on SEV-capable Compute nodes with memory encryption enabled.

This feature is available from the 2nd Gen AMD EPYC™ 7002 Series ("Rome") onward.

To enable your cloud users to create instances that use memory encryption, you must perform the following tasks:

  1. Designate the AMD SEV Compute nodes for memory encryption.
  2. Configure the Compute nodes for memory encryption.
  3. Deploy the data plane.
  4. Create a flavor or image for launching instances with memory encryption.
Tip

If the AMD SEV hardware is limited, you can also configure a host aggregate to optimize scheduling on the AMD SEV Compute nodes. To schedule only instances that request memory encryption on the AMD SEV Compute nodes, create a host aggregate of the Compute nodes that have the AMD SEV hardware, and configure the Compute scheduler to place only instances that request memory encryption on the host aggregate.

For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates.

5.6.1. Secure Encrypted Virtualization (SEV)

Secure Encrypted Virtualization (SEV), provided by AMD, protects data by encrypting a running instance’s memory by using a unique key.

SEV increases security when you use non-volatile memory technology (NVDIMM), because an NVDIMM chip can be physically removed from a system with the data intact, similar to a hard drive. Without encryption, any stored information such as sensitive data, passwords, or secret keys can be compromised.

For more information, see the AMD Secure Encrypted Virtualization (SEV) documentation.

There are limitations of instances with memory encryption:

  • You cannot live migrate, or suspend and resume instances with memory encryption.
  • You cannot use PCI passthrough to directly access devices on instances with memory encryption.
  • You cannot use virtio-blk as the boot disk of instances with memory encryption with Red Hat Enterprise Linux (RHEL) kernels earlier than kernel-4.18.0-115.el8 (RHEL-8.1.0).

    Note

    You can use virtio-scsi or SATA as the boot disk, or virtio-blk for non-boot disks.

  • The operating system that runs in an encrypted instance must provide SEV support. For more information, see the Red Hat Knowledgebase solution Enabling AMD Secure Encrypted Virtualization in RHEL 8.
  • Machines that support SEV have a limited number of slots in their memory controller for storing encryption keys. Each running instance with encrypted memory consumes one of these slots. Therefore, the number of instances with memory encryption that can run concurrently is limited to the number of slots in the memory controller. For example, on 1st Gen AMD EPYC™ 7001 Series ("Naples") the limit is 16, and on 2nd Gen AMD EPYC™ 7002 Series ("Rome") the limit is 255.
  • Instances with memory encryption pin pages in RAM. The Compute service cannot swap these pages, therefore you cannot overcommit memory on a Compute node that hosts instances with memory encryption.
  • You cannot use memory encryption with instances that have multiple NUMA nodes.

5.6.2. Designating AMD SEV Compute nodes for memory encryption

To designate AMD SEV Compute nodes for instances that use memory encryption, you must create a new node set to configure the AMD SEV role, and configure the bare metal nodes with an AMD SEV resource class to use to tag the Compute nodes for memory encryption.

Note

The following procedure applies to new data plane nodes that have not yet been provisioned. To assign a resource class to an existing node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling data plane nodes.

Procedure

5.6.3. Configuring AMD SEV Compute nodes for memory encryption

To enable your cloud users to create instances that use memory encryption, you must configure the Compute nodes that have the AMD SEV hardware.

Warning

You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

Prerequisites

  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes that you want to configure for memory encryption. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.
  • Your deployment must include a Compute node that runs on AMD hardware that is capable of supporting SEV, such as an AMD EPYC CPU. You can use the following command to determine if your deployment is SEV-capable:

    $ lscpu | grep sev

Procedure

  1. Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [libvirt]:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       30-nova-amd-sev.conf: |
         [libvirt]
         num_memory_encrypted_guests = 15
    Note

    The default value of the libvirt/num_memory_encrypted_guests parameter is none. If you do not set a custom value, the AMD SEV Compute nodes do not impose a limit on the number of memory-encrypted instances that the nodes can host concurrently. Instead, the hardware determines the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently, which might cause some memory-encrypted instances to fail to launch.

    Note

    The Q35 machine type is the default machine type and is required for SEV.

  2. To configure the kernel parameters for the AMD SEV Compute nodes, open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
  3. Add the required network configuration or modify the existing configuration in edpm_kernel_args under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
       name: my-data-plane-node-set
    spec:
       ...
       nodeTemplate:
         ...
         ansible:
           ansibleVars:
             edpm_kernel_args: "kvm_amd.sev=1"
  4. Optional: To encrypt host memory, you can add mem_encrypt=on to edpm_kernel_args:

    Warning

    Ensure that your device driver supports memory encryption.

          edpm_kernel_args: "kvm_amd.sev=1 mem_encrypt=on"
  5. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_amd_sev_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: openstack-edpm-amd-sev
  6. In the compute_amd_sev_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for memory encryption.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the NodeSets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config.yaml ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the DataPlaneService that points to nova.
    2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

      If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets.
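
      For reference, a DataPlaneService that these checks would identify as affected resembles the following sketch (the field values shown are assumptions drawn from the checks above):

      ```yaml
      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneService
      metadata:
        name: nova
      spec:
        edpmServiceType: nova          # check 2: must be set to nova
        dataSources:
          - configMapRef:
              name: nova-extra-config  # presence here means the node set is affected
      ```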

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: openstack-edpm-amd-sev
    spec:
       nodeSets:
         - openstack-edpm
         - compute-amd-sev
         - my-data-plane-node-set
         - ...
         - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  7. Save the compute_amd_sev_deploy.yaml deployment file.
  8. Deploy the data plane:

    $ oc create -f compute_amd_sev_deploy.yaml
  9. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    NAME           STATUS MESSAGE
    openstack-edpm True   Deployed
  10. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list

5.6.4. Creating an image for memory encryption

When the data plane contains AMD SEV Compute nodes, you can create an AMD SEV instance image that your cloud users can use to launch instances that have memory encryption.

Note

To execute openstack client commands on the cloud, you must specify the name of the cloud that is detailed in your clouds.yaml file. You can specify the cloud name by using one of the following methods:

  • Use the --os-cloud option with each command:

    $ openstack flavor list --os-cloud <cloud_name>

    Use this option if you access more than one cloud.

  • Create an environment variable for the cloud name in your bashrc file:

    $ export OS_CLOUD=<cloud_name>
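
    The <cloud_name> value in either method corresponds to a top-level entry in your clouds.yaml file, which resembles the following sketch (all values are placeholders, not values from your deployment):

    ```yaml
    clouds:
      <cloud_name>:
        auth:
          auth_url: <keystone_endpoint_url>
          username: <username>
          password: <password>
          project_name: <project_name>
          user_domain_name: <user_domain>
          project_domain_name: <project_domain>
        region_name: <region>
    ```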

Prerequisites

  • The administrator has created a project for you and provided you with a clouds.yaml file to access the cloud.
  • You have installed the python-openstackclient package.

Procedure

  1. Create a new image for memory encryption:

     $ openstack image create ...  \
     --property hw_firmware_type=uefi amd-sev-image
    Note

    If you use an existing image, the image must have the hw_firmware_type property set to uefi.

  2. Add the property hw_mem_encryption=True to the image to enable AMD SEV memory encryption on the image:

     $ openstack image set  \
     --property hw_mem_encryption=True amd-sev-image
    Tip

    You can enable memory encryption on the flavor. For more information, see Creating a flavor for memory encryption.

5.6.5. Creating a flavor for memory encryption

When the data plane contains AMD SEV Compute nodes, you can create one or more AMD SEV flavors that your cloud users can use to launch instances that have memory encryption.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Note

An AMD SEV flavor is necessary only when the hw_mem_encryption property is not set on an image.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create a flavor for memory encryption:

    $ openstack flavor create --vcpus 1 --ram 512 --disk 2  \
    --property hw:mem_encryption=True m1.small-amd-sev
  3. Exit the openstackclient pod:

    $ exit

5.6.6. Launching an instance with memory encryption

To verify that you can launch instances on an AMD SEV Compute node with memory encryption enabled, use a memory encryption flavor or image to create an instance.

Note

To execute openstack client commands on the cloud, you must specify the name of the cloud that is detailed in your clouds.yaml file. You can specify the cloud name by using one of the following methods:

  • Use the --os-cloud option with each command:

    $ openstack flavor list --os-cloud <cloud_name>

    Use this option if you access more than one cloud.

  • Create an environment variable for the cloud name in your bashrc file:

    $ export OS_CLOUD=<cloud_name>

Prerequisites

  • The administrator has created a project for you and provided you with a clouds.yaml file to access the cloud.
  • You have installed the python-openstackclient package.

Procedure

  1. Create an instance by using an AMD SEV flavor or image. The following example creates an instance by using the flavor created in Creating a flavor for memory encryption and the image created in Creating an image for memory encryption:

     $ openstack server create --flavor m1.small-amd-sev \
     --image amd-sev-image amd-sev-instance
  2. Log in to the instance as a cloud user.
  3. To verify that the instance uses memory encryption, enter the following command from the instance:

    $ dmesg | grep -i sev
    AMD Secure Encrypted Virtualization (SEV) active