Chapter 6. Configuring Compute service storage


You create an instance from a base image, which the Compute (nova) service copies from the Image (glance) service, and caches locally on the Compute nodes. This base image contains the instance disk, which is the back end for the instance.

You can configure the Compute service to store ephemeral instance disk data locally on the host Compute node or remotely on either an NFS share or Ceph cluster. Alternatively, you can configure the Compute service to store instance disk data in persistent storage in a volume provided by the Block Storage (cinder) service.

You can configure image caching for your environment, and configure the performance and security of the instance disks. When the Image service uses Red Hat Ceph RADOS Block Device (RBD) as the back end and the Compute service uses local file-based ephemeral storage, then the Compute service can download images directly from the RBD image repository without using the Image service API.

6.1. Configuring the maximum number of storage devices to attach to one instance

By default, you can attach an unlimited number of storage devices to a single instance. Attaching a large number of disk devices to an instance can degrade performance on the instance. You can tune the maximum number of devices that can be attached to an instance based on the boundaries of what your environment can support. The number of storage disks supported by an instance depends on the bus that the disk uses. For example, the IDE disk bus is limited to 4 attached devices. You can attach a maximum of 500 disk devices to instances with machine type Q35.

Note

Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by adding the [libvirt] parameter num_pcie_ports to the nova-extra-config ConfigMap CR. Fewer devices can attach to an instance that uses PCIe ports than to instances that run on previous machine types. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware in Performing storage operations.
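As a sketch of the note above, the num_pcie_ports setting can be delivered through the same nova-extra-config ConfigMap pattern that this procedure uses for other [libvirt] and [compute] options. The file key 30-nova-pcie-ports.conf and the port count 24 are illustrative assumptions, not fixed values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-extra-config
  namespace: openstack
data:
  # The key name and the port count below are illustrative assumptions.
  30-nova-pcie-ports.conf: |
    [libvirt]
    num_pcie_ports = 24
```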

Warning
  • Changing the value of the [compute] parameter max_disk_devices_to_attach in the nova-extra-config ConfigMap CR on a Compute node with active instances can cause rebuilds to fail if the maximum number is lower than the number of devices already attached to instances. For example, if instance A has 26 devices attached and you change max_disk_devices_to_attach to 20, a request to rebuild instance A will fail.
  • During cold migration, the configured maximum number of storage devices is enforced only on the source for the instance that you want to migrate. The destination is not checked before the move. This means that if Compute node A has 26 attached disk devices, and Compute node B has a configured maximum of 20 attached disk devices, a cold migration of an instance with 26 attached devices from Compute node A to Compute node B succeeds. However, a subsequent request to rebuild the instance in Compute node B fails because 26 devices are already attached which exceeds the configured maximum of 20.
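The rebuild failure mode described in the first warning can be checked before you lower the limit. The following self-contained sketch uses hard-coded example device counts; in a real environment you would gather the per-instance counts yourself, for example with the openstack server volume list command:

```shell
# Before lowering max_disk_devices_to_attach, confirm that no active
# instance already has more devices attached than the new limit.
# The counts below are illustrative stand-ins for real per-instance data.
new_limit=20
attached_counts="26 4 12"
for count in $attached_counts; do
  if [ "$count" -gt "$new_limit" ]; then
    echo "an instance with $count attached devices exceeds the new limit of $new_limit; a rebuild of that instance would fail"
  fi
done
```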
Note

The configured maximum number of storage devices is not enforced on shelved offloaded instances, as they have no Compute node.

Prerequisites

  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes on which you want to configure the maximum number of storage devices that can attach to one instance. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.

Procedure

  1. Create or update the ConfigMap CR named nova-extra-config and set the value of the max_disk_devices_to_attach parameter under [compute]:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       31-nova-max-storage-devices.conf: |
          [compute]
          max_disk_devices_to_attach = <max_device_limit>

    For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.

  2. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane. Save the CR to a file named compute_storage_devices_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-storage-devices

    For more information about creating an OpenStackDataPlaneDeployment CR, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.

  3. In the compute_storage_devices_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite, which defines the nodes on which you want to configure the maximum number of storage devices that can attach to one instance.

    Warning

    You cannot reconfigure a subset of the nodes within a node set. If you need to do this, you must scale the node set down, and create a new node set from the previously removed nodes.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check whether a node set uses the nova-extra-config ConfigMap and is therefore affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the DataPlaneService that points to nova.
    2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

      If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets and use that custom service in the required node sets instead.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-storage-devices
    spec:
       nodeSets:
         - openstack-edpm
         - compute-storage-devices
         - ...
         - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
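The ConfigMap check described in the warning above can be rehearsed offline against a saved manifest. The manifest below is a minimal illustrative stand-in for real output of oc get openstackdataplaneservice nova -o yaml, using only the field names that appear in this procedure:

```shell
# Minimal stand-in for a saved DataPlaneService manifest; the field
# values are illustrative, not taken from a live cluster.
cat > nova-service.yaml <<'EOF'
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova
spec:
  edpmServiceType: nova
  dataSources:
    - configMapRef:
        name: nova-extra-config
EOF

# A node set whose services list includes this DataPlaneService is
# affected by changes to the nova-extra-config ConfigMap.
if grep -A1 'configMapRef:' nova-service.yaml | grep -q 'name: nova-extra-config'; then
  echo "node sets that use this service are affected by nova-extra-config changes"
fi
```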
  4. Save the compute_storage_devices_deploy.yaml deployment file.
  5. Deploy the data plane:

    $ oc create -f compute_storage_devices_deploy.yaml
  6. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    
    NAME                      STATUS   MESSAGE
    compute-storage-devices   True     Deployed
  7. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    
    $ openstack hypervisor list

6.2. Configuring shared instance storage

By default, when you launch an instance, the instance disk is stored as a file in the instance directory, /var/lib/nova/instances. You can configure an NFS storage back end for the Compute service to store these instance files on shared NFS storage. To configure shared instance storage on NFS, you enable NFS on the data plane and set the location of the NFS share.

Prerequisites

  • You are using NFSv4 or later. Red Hat OpenStack Services on OpenShift (RHOSO) does not support earlier versions of NFS. For more information, see the Red Hat Knowledgebase solution RHOS NFSv4-Only Support Notes.
  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
  2. Add the required configuration or modify the existing configuration under ansibleVars:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: my-data-plane-node-set
    spec:
    ...
      nodeTemplate:
        ansible:
          ansibleVars:
            edpm_bootstrap_command: |
              dnf -y install conntrack-tools
            edpm_extra_mounts:
            - fstype: nfs4
              opts: context=system_u:object_r:nfs_t:s0
              path: /var/lib/nova/instances
              src: <nfs_path>
    • Replace <nfs_path> with the path to the NFS. For example, 192.168.122.1:/home/nfs/nova.
  3. Optional: When NFS back-end storage is enabled, the default SELinux mount context for NFS storage is context=system_u:object_r:nfs_t:s0. Add the following parameter to amend the mount options for the NFS instance file storage mount point:

    [compute]
    nfs_mount_options = 'context=system_u:object_r:nfs_t:s0,<additional_nfs_mount_options>'
    • Replace <additional_nfs_mount_options> with a comma-separated list of the mount options you want to use for NFS instance file storage. For more information on the available mount options, see the mount man page:

      $ man 8 mount
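Following the nova-extra-config ConfigMap pattern used elsewhere in this chapter, the amended mount options could be delivered as a fragment like the following. The file key 32-nova-nfs-mount.conf and the extra noatime option are illustrative assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-extra-config
  namespace: openstack
data:
  # The key name and the noatime option are illustrative assumptions.
  32-nova-nfs-mount.conf: |
    [compute]
    nfs_mount_options = 'context=system_u:object_r:nfs_t:s0,noatime'
```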
  4. Save the OpenStackDataPlaneNodeSet CR definition file.
  5. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml -n openstack
  6. Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset

    Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   False    Deployment not started
  7. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.

  8. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
      - my-data-plane-node-set
  9. Save the OpenStackDataPlaneDeployment CR deployment file.
  10. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack
  11. Optional: View the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  12. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    Example:

    $ oc get openstackdataplanedeployment -n openstack

    Sample output:

    NAME                   STATUS   MESSAGE
    my-data-plane-deploy   True     Setup Complete
  13. Repeat the oc get command until you see the NodeSet Ready message:

    Example

    $ oc get openstackdataplanenodeset -n openstack

    Sample output:

    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

    For information on the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

6.3. Configuring image downloads directly from Red Hat Ceph RADOS Block Device (RBD)

When the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end, and the Compute service (nova) uses local file-based ephemeral storage, you can configure the Compute service to download images directly from the RBD image repository without using the Image service API. This reduces the time it takes to download an image to the Compute node image cache at instance boot time, which improves instance launch time.

Prerequisites

  • The Image service back end is a Red Hat Ceph RADOS Block Device (RBD).
  • The Compute service is using a local file-based ephemeral store for the image cache and instance disks.
  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes on which you want to configure direct image downloads from RBD. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.

Procedure

  1. Create or update the ConfigMap CR named nova-extra-config and set the values of the parameters under [glance] to specify the Image service RBD back end, and the maximum length of time that the Compute service waits to connect to the Image service RBD back end, in seconds:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       33-nova-image-rbd.conf: |
         [glance]
         enable_rbd_download=True
         rbd_user=openstack
         rbd_pool=images
         rbd_ceph_conf=/etc/ceph/ceph.conf
         rbd_connect_timeout=5

    For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.

  2. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_image_rbd_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-image-rbd
  3. In the compute_image_rbd_deploy.yaml CR, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite, which defines the nodes on which you want to configure direct image downloads from RBD.

    Warning

    You cannot reconfigure a subset of the nodes within a node set. If you need to do this, you must scale the node set down and create a new node set from the previously removed nodes.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check whether a node set uses the nova-extra-config ConfigMap and is therefore affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the DataPlaneService that points to nova.
    2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

      If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets and use that custom service in the required node sets instead.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-image-rbd
    spec:
       nodeSets:
         - openstack-edpm
         - compute-image-rbd
         - ...
         - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  4. Save the compute_image_rbd_deploy.yaml deployment file.
  5. Deploy the data plane:

    $ oc create -f compute_image_rbd_deploy.yaml
  6. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    
    NAME                STATUS   MESSAGE
    compute-image-rbd   True     Deployed
  7. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    
    $ openstack hypervisor list