
Chapter 11. Configuring NVDIMM Compute nodes to provide persistent memory for instances


A non-volatile dual in-line memory module (NVDIMM) is a technology that provides DRAM with persistent memory (PMEM). Standard computer memory loses its data after a loss of electrical power. The NVDIMM maintains its data even after a loss of electrical power. Instances that use PMEM can provide applications with the ability to load large contiguous segments of memory that persist the application data across power cycles. This is useful for high performance computing (HPC), which requires huge amounts of memory.

As a cloud administrator, you can make the PMEM available to instances as virtual PMEM (vPMEM) by creating and configuring PMEM namespaces on Compute nodes that have NVDIMM hardware. Cloud users can then create instances that request vPMEM when they need the instance content to be retained after it is shut down.

To enable your cloud users to create instances that use PMEM, you must complete the following procedures:

  1. Designate Compute nodes for PMEM.
  2. Configure PMEM on the Compute nodes that have the NVDIMM hardware.
  3. Deploy the overcloud.
  4. Create PMEM flavors for launching instances that have vPMEM.
Tip

If the NVDIMM hardware is limited, you can also configure a host aggregate to optimize scheduling on the PMEM Compute nodes. To schedule only instances that request vPMEM on the PMEM Compute nodes, create a host aggregate of the Compute nodes that have the NVDIMM hardware, and configure the Compute scheduler to place only PMEM instances on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates.
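For example, the following is a minimal sketch of one way to do this, using the AggregateInstanceExtraSpecsFilter scheduler filter rather than the trait-based isolation described in the linked guide. The aggregate name and metadata key are assumptions, the host name follows the default %stackname%-novacomputepmem-%index% format, and small_pmem_flavor is the PMEM flavor that you create later in this chapter:

    (overcloud)$ openstack aggregate create pmem_aggregate
    (overcloud)$ openstack aggregate add host pmem_aggregate overcloud-novacomputepmem-0
    (overcloud)$ openstack aggregate set --property pmem=true pmem_aggregate
    (overcloud)$ openstack flavor set \
     --property aggregate_instance_extra_specs:pmem=true small_pmem_flavor

The AggregateInstanceExtraSpecsFilter filter must be enabled in the Compute scheduler for the aggregate metadata to take effect.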

Prerequisites

  • Your Compute node has persistent memory hardware, such as Intel® Optane™ DC Persistent Memory.
  • You have configured back-end NVDIMM regions on the PMEM hardware device to create PMEM namespaces. You can use the ipmctl tool provided by Intel to configure the PMEM hardware, as shown in the sketch after this list.
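    For example, the following is a minimal sketch of configuring the device for App Direct mode with the ipmctl tool, run directly on the NVDIMM Compute node. The goal settings shown here are an assumption; choose the values that match your hardware and capacity plan:

    $ sudo ipmctl create -goal PersistentMemoryType=AppDirect
    $ sudo reboot
    $ sudo ipmctl show -region

    The new goal takes effect only after the reboot. Use ipmctl show -region to confirm that the AppDirect regions exist before you configure the PMEM namespaces as described in this chapter.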

Limitations when using PMEM devices

  • You cannot cold migrate, live migrate, resize, or suspend and resume instances that use vPMEM.
  • Only instances running RHEL 8 can use vPMEM.
  • When you rebuild a vPMEM instance, the persistent memory namespaces are removed to restore the initial state of the instance.
  • When you resize an instance by using a new flavor, the content of the original virtual persistent memory is not copied to the new virtual persistent memory.
  • Virtual persistent memory hotplugging is not supported.
  • When you create a snapshot of a vPMEM instance, the virtual persistent memory is not included.

11.1. Designating Compute nodes for PMEM

To designate Compute nodes for PMEM workloads, you must create a new role file to configure the PMEM role, and configure a new overcloud flavor and resource class for PMEM to use to tag the NVDIMM Compute nodes.

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc file:

    [stack@director ~]$ source ~/stackrc
  3. Generate a new roles data file named roles_data_pmem.yaml that includes the Controller, Compute, and ComputePMEM roles:

    (undercloud)$ openstack overcloud roles \
     generate -o /home/stack/templates/roles_data_pmem.yaml \
     Compute:ComputePMEM Compute Controller
  4. Open roles_data_pmem.yaml and edit or add the following parameters and sections:

    Section/Parameter             Current value                      New value
    --------------------------    -------------------------------    ----------------------------------
    Role comment                  Role: Compute                      Role: ComputePMEM
    Role name                     name: Compute                      name: ComputePMEM
    description                   Basic Compute Node role            PMEM Compute Node role
    HostnameFormatDefault         %stackname%-novacompute-%index%    %stackname%-novacomputepmem-%index%
    deprecated_nic_config_name    compute.yaml                       compute-pmem.yaml
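    For reference, after these edits the ComputePMEM entry in roles_data_pmem.yaml might begin as shown in the following sketch. The generated file contains additional fields, such as CountDefault, networks, and the ServicesDefault list, which you leave unchanged:

    ###############################################################################
    # Role: ComputePMEM                                                           #
    ###############################################################################
    - name: ComputePMEM
      description: |
        PMEM Compute Node role
      HostnameFormatDefault: '%stackname%-novacomputepmem-%index%'
      deprecated_nic_config_name: 'compute-pmem.yaml'
      # The remaining keys from the generated Compute role, including
      # ServicesDefault, stay as generated.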

  5. Register the NVDIMM Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml. For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide.
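    For example, the following is a minimal, hypothetical node.json entry for one NVDIMM node. All values shown are placeholders, the exact fields depend on your power management driver, and some deployments also set cpu, memory, disk, and arch:

    {
      "nodes": [
        {
          "name": "pmem-node-0",
          "pm_type": "ipmi",
          "pm_user": "admin",
          "pm_password": "<ipmi_password>",
          "pm_addr": "192.168.24.100",
          "ports": [
            {"address": "aa:bb:cc:dd:ee:ff"}
          ]
        }
      ]
    }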
  6. Inspect the node hardware:

    (undercloud)$ openstack overcloud node introspect --all-manageable --provide

    For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide.

  7. Create the compute-pmem overcloud flavor for PMEM Compute nodes:

    (undercloud)$ openstack flavor create --id auto \
     --ram <ram_size_mb> --disk <disk_size_gb> \
     --vcpus <no_vcpus> compute-pmem
    • Replace <ram_size_mb> with the RAM of the bare metal node, in MB.
    • Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB.
    • Replace <no_vcpus> with the number of CPUs on the bare metal node.

      Note

      These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.

  8. Retrieve a list of your nodes to identify their UUIDs:

    (undercloud)$ openstack baremetal node list
  9. Tag each bare metal node that you want to designate for PMEM workloads with a custom PMEM resource class:

    (undercloud)$ openstack baremetal node set \
     --resource-class baremetal.PMEM <node>

    Replace <node> with the ID of the bare metal node.

  10. Associate the compute-pmem flavor with the custom PMEM resource class:

    (undercloud)$ openstack flavor set \
     --property resources:CUSTOM_BAREMETAL_PMEM=1 \
      compute-pmem

    To determine the name of the custom resource class that corresponds to the resource class of a Bare Metal service node, convert the resource class to uppercase, replace all punctuation with underscores, and prefix the name with CUSTOM_. For example, the resource class baremetal.PMEM becomes CUSTOM_BAREMETAL_PMEM.

    Note

    A flavor can request only one instance of a bare metal resource class.

  11. Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances:

    (undercloud)$ openstack flavor set \
     --property resources:VCPU=0 --property resources:MEMORY_MB=0 \
     --property resources:DISK_GB=0 compute-pmem
  12. Add the following parameters to the node-info.yaml file to specify the number of PMEM Compute nodes, and the flavor to use for the PMEM-designated Compute nodes:

    parameter_defaults:
      OvercloudComputePMEMFlavor: compute-pmem
      ComputePMEMCount: 3 # Set to the number of Compute nodes that have NVDIMM devices
  13. To verify that the role was created, enter the following command:

    (undercloud)$ openstack overcloud profiles list

11.2. Configuring a PMEM Compute node

To enable your cloud users to create instances that use vPMEM, you must configure the Compute nodes that have the NVDIMM hardware.

Procedure

  1. Create a new Compute environment file for configuring NVDIMM Compute nodes, for example, env_pmem.yaml.
  2. To partition the NVDIMM regions into PMEM namespaces that the instances can use, add the NovaPMEMNamespaces role-specific parameter to the PMEM role in your Compute environment file, and set the value using the following format:

    <size>:<namespace_name>[,<size>:<namespace_name>]

    Use the following suffixes to represent the size:

    • "k" or "K" for KiB
    • "m" or "M" for MiB
    • "G" or "G" for GiB
    • "t" or "T" for TiB

      For example, the following configuration creates four namespaces, three of size 6 GiB, and one of size 100 GiB:

      parameter_defaults:
        ComputePMEMParameters:
          NovaPMEMNamespaces: "6G:ns0,6G:ns1,6G:ns2,100G:ns3"
  3. To map the PMEM namespaces to labels that can be used in a flavor, add the NovaPMEMMappings role-specific parameter to the PMEM role in your Compute environment file, and set the value using the following format:

    <label>:<namespace_name>[|<namespace_name>][,<label>:<namespace_name>[|<namespace_name>]]

    For example, the following configuration maps the three 6 GiB namespaces to the label "6GB", and the 100 GiB namespace to the label "LARGE":

    parameter_defaults:
      ComputePMEMParameters:
        NovaPMEMNamespaces: "6G:ns0,6G:ns1,6G:ns2,100G:ns3"
        NovaPMEMMappings: "6GB:ns0|ns1|ns2,LARGE:ns3"
  4. Save the updates to your Compute environment file.
  5. Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -r /home/stack/templates/roles_data_pmem.yaml \
     -e /home/stack/templates/node-info.yaml \
     -e [your environment files] \
     -e /home/stack/templates/env_pmem.yaml
  6. Create and configure the flavors that your cloud users can use to launch instances that have vPMEM. The following example creates a flavor that requests a small PMEM device, 6GB, as mapped in Step 3:

    (overcloud)$ openstack flavor create --vcpus 1 --ram 512 --disk 2  \
     --property hw:pmem='6GB' small_pmem_flavor
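    Similarly, you can create a flavor that requests the larger device mapped to the LARGE label in step 3. The flavor name and the vCPU, RAM, and disk values in this sketch are assumptions:

    (overcloud)$ openstack flavor create --vcpus 4 --ram 8192 --disk 20 \
     --property hw:pmem='LARGE' large_pmem_flavor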

Verification

  1. Create an instance using one of the PMEM flavors:

    (overcloud)$ openstack flavor list
    (overcloud)$ openstack server create --flavor small_pmem_flavor \
     --image rhel8 pmem_instance
  2. Log in to the instance as a cloud user. For more information, see Connecting to an instance.
  3. Verify that the PMEM device is attached to the instance:

    $ sudo fdisk -l /dev/pmem0

    The instance has vPMEM if fdisk reports an NVDIMM device. To confirm that the device is usable, you can format and mount it, as shown in the sketch that follows.
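    The following is a minimal sketch of exercising the device from inside the instance, assuming that it appears as /dev/pmem0 and that the guest image provides the xfsprogs package. The dax mount option enables direct access where the guest kernel supports it; you can omit it and use the device as a regular block device:

    $ sudo mkfs.xfs /dev/pmem0
    $ sudo mkdir /mnt/pmem
    $ sudo mount -o dax /dev/pmem0 /mnt/pmem
    $ echo "written before reboot" | sudo tee /mnt/pmem/test.txt

    Data written to /mnt/pmem persists across power cycles of the instance, but not across a rebuild, as described in the limitations at the start of this chapter.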
