
Chapter 10. Enabling PCI device tracking in the Placement service


Beyond tracking standard and complex resources like CPUs, RAM, and vGPUs, the Placement service enables the tracking and reservation of PCI devices.

You can reserve PCI devices in the Placement service so that even though they are configured to be available to the Compute service, the Placement service does not make them available for use. Reserving a device makes the device available to an external tool that can perform some maintenance operations or configuration of the device.

Warning

If you enable PCI device tracking and later disable it, the nova-compute service fails to start.

Note

When PCI tracking in the Placement service is enabled, you cannot configure pci.alias with a repeated alias name that is associated with multiple alias specifications.
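For example, with PCI tracking in Placement enabled, an alias configuration like the following hypothetical one is not allowed, because the alias name a1 maps to two different device specifications (the vendor and product IDs are illustrative):

```ini
[pci]
# Not allowed with PCI tracking in Placement: the same alias name "a1"
# is defined twice with different device specifications.
alias = { "vendor_id": "8086", "product_id": "10c9", "name": "a1" }
alias = { "vendor_id": "8086", "product_id": "1563", "name": "a1" }
```

Use a distinct alias name for each device specification instead.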

Important

This feature is supported only for PCI devices consumed through nova flavors. PCI devices intended to be consumed through neutron ports, and that therefore have physical_network defined in the device spec, are not tracked by this feature, but they can be used alongside it.
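As an illustration, in the following hypothetical [pci] device_spec configuration, the first entry has no physical_network and so is tracked in the Placement service and consumable through flavors, while the second entry defines physical_network for consumption through neutron ports and is therefore not tracked, but can coexist with the feature (all IDs and the network name are illustrative):

```ini
[pci]
# Tracked in Placement; consumable through nova flavors.
device_spec = { "vendor_id": "8086", "product_id": "10c9" }
# Not tracked (physical_network is set); consumable through neutron ports.
device_spec = { "vendor_id": "8086", "product_id": "1563", "physical_network": "physnet0" }
```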

10.1. Prerequisites

10.2. Configuring PCI device tracking for node sets

You must configure PCI device tracking on all node sets that use PCI passthrough.

Warning

You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged in to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes for which you want to enable PCI device tracking. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.

Procedure

  1. Create or update the ConfigMap CR named nova-extra-config and set the value of the [pci] report_in_placement parameter:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nova-extra-config
      namespace: openstack
    data:
      37-nova-pci-placement.conf: |
        [pci]
        report_in_placement = true

    For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.

  2. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane. Save it to a file named compute_pci_tracking_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: compute-pci-tracking

    For more information about creating an OpenStackDataPlaneDeployment CR, see Deploying the data plane in Deploying Red Hat OpenStack Services on OpenShift.

  3. In the compute_pci_tracking_deploy.yaml file, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs that you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes that you want to designate for PCI device tracking.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check whether a node set uses the nova-extra-config ConfigMap, and is therefore affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the DataPlaneService that points to nova.
    2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

      If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then the node set uses this ConfigMap and is affected by configuration changes in it. If some of the affected node sets should not be reconfigured, create a new DataPlaneService that points to a separate ConfigMap for those node sets.
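      The fields to check might look like the following abbreviated excerpt of an OpenStackDataPlaneService CR; the field names match the steps above, but the excerpt is illustrative rather than a complete CR:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova
spec:
  # A node set whose services list includes this DataPlaneService is
  # affected if both conditions below hold.
  edpmServiceType: nova
  dataSources:
    - configMapRef:
        name: nova-extra-config
```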

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: compute-pci-tracking
    spec:
      nodeSets:
        - openstack-edpm
        - compute-pci-alias
        - compute-pci-tracking
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  4. Save the compute_pci_tracking_deploy.yaml deployment file.
  5. Deploy the data plane:

    $ oc create -f compute_pci_tracking_deploy.yaml
  6. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    NAME                   STATUS   MESSAGE
    compute-pci-tracking   True     Deployed
  7. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack resource provider list

10.3. Enabling PCI device tracking on the control plane

To enable PCI device tracking, you must update the service configuration in the OpenStackControlPlane CR file, openstack_control_plane.yaml, and apply your update to the control plane.

Prerequisites

  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the following [filter_scheduler] pci_in_placement configuration to the nova service configuration:

    nova:
      template:
        apiServiceTemplate:
          customServiceConfig: |
            [filter_scheduler]
            pci_in_placement = true
        cellTemplates:
          cell0:
            conductorServiceTemplate:
              customServiceConfig: |
                [filter_scheduler]
                pci_in_placement = true
          cell1:
            conductorServiceTemplate:
              customServiceConfig: |
                [filter_scheduler]
                pci_in_placement = true
        schedulerServiceTemplate:
          customServiceConfig: |
            [filter_scheduler]
            pci_in_placement = true
    Note

    If there are multiple cells, you must apply the [filter_scheduler] pci_in_placement configuration to each cell.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of your cells:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

10.4. Reserving a PCI device in the Placement service

To remove a PCI device from use by the Compute scheduler service, reserve the device in the Placement service by using an openstack command. While the device is reserved, it remains configured in the Compute service (nova), but the Compute service cannot allocate it to virtual machines (VMs).

You might need to remove a device for various reasons, for example, to repair it or to perform maintenance on it. After the maintenance is complete, you can perform the reverse operation to unreserve the device.

Prerequisites

Procedure

  1. To reserve a specific device, for example, a device with PCI address 0000:09:00.0 on compute1, first retrieve the UUID of the device resource provider (RP). The device RP name is a combination of the hostname of the Compute node and the PCI address of the device, for example, compute1_0000:09:00.0:

    $ openstack resource provider list --name compute1_0000:09:00.0
    +--------------------------------------+-----------------------+------------+--------------------------------------+--------------------------------------+
    | uuid                                 | name                  | generation | root_provider_uuid                   | parent_provider_uuid                 |
    +--------------------------------------+-----------------------+------------+--------------------------------------+--------------------------------------+
    | d3d0f3d7-8376-487f-8849-e43027c31582 | compute1_0000:09:00.0 |          2 | e909b54b-4cea-49f9-bfcb-17c833db51d1 | e909b54b-4cea-49f9-bfcb-17c833db51d1 |
    +--------------------------------------+-----------------------+------------+--------------------------------------+--------------------------------------+
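    Because the RP name is derived mechanically from the hostname and the PCI address, you can construct it before querying Placement. A minimal Python sketch, with an illustrative helper name:

```python
def pci_rp_name(hostname: str, pci_address: str) -> str:
    """Build the Placement resource provider (RP) name for a PCI device:
    the Compute node hostname and the PCI address joined by an underscore."""
    return f"{hostname}_{pci_address}"

# The example device above: PCI address 0000:09:00.0 on compute1.
print(pci_rp_name("compute1", "0000:09:00.0"))  # compute1_0000:09:00.0
```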
  2. Use the UUID of the device RP to check the current inventory of the RP. In this example, the UUID is d3d0f3d7-8376-487f-8849-e43027c31582:

    $ openstack resource provider inventory list d3d0f3d7-8376-487f-8849-e43027c31582
    +----------------------+------------------+----------+----------+----------+-----------+-------+------+
    | resource_class       | allocation_ratio | min_unit | max_unit | reserved | step_size | total | used |
    +----------------------+------------------+----------+----------+----------+-----------+-------+------+
    | CUSTOM_PCI_8086_10C9 |              1.0 |        1 |        1 |        0 |         1 |     1 |    0 |
    +----------------------+------------------+----------+----------+----------+-----------+-------+------+
    Note

    When you reserve a device, the Compute scheduler service is prevented from using it in future scheduling. However, the device might still be in use by a pre-existing VM. If the value of the used column in the inventory list output is set to 0, then the device is not in use by a pre-existing VM.
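    The resource class in the inventory encodes the device's PCI vendor and product IDs. A sketch of the naming convention, assuming the CUSTOM_PCI_<vendor>_<product> pattern visible in the inventory above:

```python
def pci_resource_class(vendor_id: str, product_id: str) -> str:
    """Derive the custom Placement resource class for a PCI device from its
    PCI vendor and product IDs; hexadecimal digits are upper-cased."""
    return f"CUSTOM_PCI_{vendor_id.upper()}_{product_id.upper()}"

# The example device above: vendor ID 8086, product ID 10c9.
print(pci_resource_class("8086", "10c9"))  # CUSTOM_PCI_8086_10C9
```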

  3. To reserve the device, set the value of reserved to 1:

    $ openstack resource provider inventory set d3d0f3d7-8376-487f-8849-e43027c31582 --amend --resource CUSTOM_PCI_8086_10C9:reserved=1
    +----------------------+------------------+----------+----------+----------+-----------+-------+
    | resource_class       | allocation_ratio | min_unit | max_unit | reserved | step_size | total |
    +----------------------+------------------+----------+----------+----------+-----------+-------+
    | CUSTOM_PCI_8086_10C9 |              1.0 |        1 |        1 |        1 |         1 |     1 |
    +----------------------+------------------+----------+----------+----------+-----------+-------+
  4. To unreserve the device and make it available to the Compute scheduler service again, set the value of reserved to 0:

    $ openstack resource provider inventory set d3d0f3d7-8376-487f-8849-e43027c31582 --amend --resource CUSTOM_PCI_8086_10C9:reserved=0
    +----------------------+------------------+----------+----------+----------+-----------+-------+
    | resource_class       | allocation_ratio | min_unit | max_unit | reserved | step_size | total |
    +----------------------+------------------+----------+----------+----------+-----------+-------+
    | CUSTOM_PCI_8086_10C9 |              1.0 |        1 |        1 |        0 |         1 |     1 |
    +----------------------+------------------+----------+----------+----------+-----------+-------+