Chapter 9. Configuring PCI passthrough


You can use PCI passthrough to attach a physical PCI device, such as a graphics card or a network device, to an instance. If you use PCI passthrough for a device, the instance reserves exclusive access to the device for performing tasks, and the device is not available to the host.

Important

The Compute service (nova) does not support single networks that span multiple provider networks. When a network contains multiple physical networks, the Compute service uses only the first physical network. Therefore, if you are using routed provider networks, you must use the same physical_network name across all the Compute nodes.

If you use routed provider networks with VLAN or flat networks, you must use the same physical_network name for all segments. You then create multiple segments for the network and map the segments to the appropriate subnets.

To enable your cloud users to create instances with PCI devices attached, you must complete the following tasks:

  1. Designate Compute nodes to use for PCI passthrough.
  2. Configure PCI passthrough on the Compute nodes that have the required PCI devices.
  3. Deploy the data plane.
  4. Create a flavor for launching instances with PCI devices attached.

9.1. Prerequisites

  • The Compute nodes have the required PCI devices.
  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.

9.2. PCI passthrough device type field

The Compute service (nova) categorizes PCI devices into one of three types, depending on the capabilities the devices report.

You can set the PCI device device_type field to one of the following values:

type-PF
The device supports SR-IOV and is the parent or root device. Specify this device type to pass through a device that supports SR-IOV in its entirety.
type-VF
The device is a child device of a device that supports SR-IOV.
type-PCI
The device does not support SR-IOV. This is the default device type if the device_type field is not set.
Note

The device_spec configuration on the Compute nodes and the alias configuration of the Compute services on the control plane must use the same device_type when referring to the same device.

  • Do not use the devname parameter when configuring PCI passthrough, as the device name of a NIC can change. Instead, use vendor_id and product_id because they are more stable, or use the PCI device address of the NIC.
  • Use the address parameter or product_id to pass through a specific Physical Function (PF). If you have multiple PFs of the same product_id, then the Compute service uses any of those devices when an alias with the same product_id is requested in the flavor. The address parameter is always unique.
  • To pass through all the Virtual Functions (VFs), specify only the product_id and vendor_id of the VFs that you want to use for PCI passthrough. You must also specify the address of the VF if you are using SR-IOV for NIC partitioning and you are running OVS on a VF.
  • To pass through only the VFs for a PF but not the PF itself, you can use the address parameter to specify the PCI address of the PF and product_id to specify the product ID of the VF.
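The guidelines above can be sketched as [pci] device_spec entries. The following is a minimal illustration, not a definitive configuration: the vendor ID 8086 and PF product ID 1572 follow the examples used elsewhere in this chapter, while the VF product ID 154c and the address 0000:06:00.0 are hypothetical values that you replace with the IDs and address that lspci reports for your own device:

```ini
[pci]
# Pass through one specific PF by its PCI address (addresses are always unique).
device_spec = {"vendor_id": "8086", "product_id": "1572", "address": "0000:06:00.0"}
# Pass through all VFs that report this vendor and product ID pair.
device_spec = {"vendor_id": "8086", "product_id": "154c"}
# Pass through only the VFs behind the PF at 0000:06:00.0, but not the PF itself:
# address selects the PF, product_id selects the VF.
device_spec = {"address": "0000:06:00.0", "product_id": "154c"}
```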

    Configuring the PCI device address parameter
    The address parameter specifies the PCI address of the device. You can set the value of the address parameter by using either a string or a dictionary mapping.
    String format

    If you specify the address using a string, you can include wildcards (*) as shown in the following example:

    device_spec = {"address": "*:0a:00.*", "physical_network": "physnet1"}
    Dictionary format

    If you specify the address using the dictionary format, you can include regular expression syntax, as shown in the following example:

    [pci]
    device_spec = {"address": {"domain": ".*", "bus": "02", "slot": "01", "function": "[0-2]"}, "physical_network": "net1"}

    The Compute service restricts the configuration of address fields to the following maximum values:

    Address field    Maximum value
    -------------    -------------
    domain           0xFFFF
    bus              0xFF
    slot             0x1F
    function         0x7

The Compute service supports PCI devices with a 16-bit address domain. The Compute service ignores PCI devices with a 32-bit address domain.
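As a quick sanity check of these limits, the following bash sketch parses a PCI address (domain:bus:slot.function) and compares each field against the maximum values in the table above. The address 0000:02:01.0 is a hypothetical example:

```shell
# Parse a PCI address and validate each field against the Compute
# service maximums: domain 0xFFFF, bus 0xFF, slot 0x1F, function 0x7.
addr="0000:02:01.0"               # hypothetical example address
domain=$((16#${addr%%:*}))        # "0000" -> 0
rest=${addr#*:}
bus=$((16#${rest%%:*}))           # "02" -> 2
slotfn=${rest#*:}
slot=$((16#${slotfn%%.*}))        # "01" -> 1
func=$((16#${slotfn##*.}))        # "0" -> 0
if [ "$domain" -le 65535 ] && [ "$bus" -le 255 ] && \
   [ "$slot" -le 31 ] && [ "$func" -le 7 ]; then
  result="valid"
else
  result="out of range"
fi
echo "$addr is $result"
```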

You can optionally specify a default NUMA affinity policy for PCI passthrough devices by adding numa_policy to the configuration. For example:

alias = {"name":"a1", "product_id":"1572", "vendor_id": "8086", "device_type": "type-PF", "numa_policy": "preferred"}

You can choose one of four values for the numa_policy.

Table 9.1. Flavor metadata for PCI NUMA affinity policy

required

The Compute service creates an instance that requests a PCI device only when at least one of the NUMA nodes of the instance has affinity with the PCI device. This option provides the best performance.

preferred

The Compute service attempts a best effort selection of PCI devices based on NUMA affinity. If this is not possible, then the Compute service schedules the instance on a NUMA node that has no affinity with the PCI device.

legacy

(Default) The Compute service creates instances that request a PCI device in one of the following cases:

  • The PCI device has affinity with at least one of the NUMA nodes.
  • The PCI devices do not provide information about their NUMA affinities.

socket

The Compute service creates an instance that requests a PCI device only when at least one of the instance NUMA nodes has affinity with a NUMA node in the same host socket as the PCI device. For example, the following host architecture has two sockets, each socket has two NUMA nodes, and a PCI device is connected to one of the nodes in one of the sockets.

[Figure: NUMA node affinity with a NUMA node in the same host socket as the PCI device]

The Compute service can pin an instance with two NUMA nodes and the socket PCI NUMA affinity policy only to the following combinations of host nodes because they all have at least one instance NUMA node pinned to the PCI device’s socket:

  • node 0 and node 1
  • node 0 and node 2
  • node 0 and node 3
  • node 1 and node 2
  • node 1 and node 3

The only combination of host nodes that the instance cannot be pinned to is node 2 and node 3, as neither of those nodes are on the same socket as the PCI device. If the other nodes are consumed by other instances and only nodes 2 and 3 are available, the instance does not boot.

9.3. Configuring the control plane for PCI passthrough

To enable your cloud users to create instances with PCI devices attached, start by configuring the control plane. Configure the alias field with the correct product ID, vendor ID, and device type of the PCI devices to pass through.

Prerequisites

  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes that you can configure PCI passthrough on. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.
  • The PCIPassthroughFilter and NUMATopologyFilter filters are enabled. These filters are enabled by default. You can verify if they have been changed by checking the OpenStackControlPlane CR:

    oc exec nova-scheduler-0 -- grep "enabled_filters" /etc/nova/nova.conf.d/ -R

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the customServiceConfig field to the nova template to specify the PCI alias for the PCI devices on the Compute nodes:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      nova:
        apiOverride:
          route: {}
        template:
          secret: osp-secret
          apiServiceTemplate:
            replicas: 3
            customServiceConfig: |
              [pci]
              alias = {"name":"a1", "product_id":"<prod_id>", "vendor_id": "<vendor_id>", "device_type": "<device_type>"}
    • Replace <prod_id> with the product ID for the PCI device, for example, 1572.
    • Replace <vendor_id> with the vendor ID for the PCI device, for example, 8086.
    • Replace <device_type> with the type of PCI device, for example, type-PF.

      Note

      You can find the product ID and vendor ID by using the lspci -nn command on a system with the PCI device installed. For more information about configuring the device_type field, see PCI passthrough device type field.
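To illustrate where the IDs appear, the following sketch extracts the bracketed [vendor:product] pair from a line of lspci -nn output. The sample line is hypothetical; run lspci -nn on the Compute node and use the pair that your own device reports:

```shell
# A hypothetical line of `lspci -nn` output for an Intel X710 NIC.
line='06:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 [8086:1572] (rev 01)'

# The vendor and product IDs are the bracketed [xxxx:xxxx] pair.
ids=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
vendor_id=${ids%%:*}    # value before the colon
product_id=${ids##*:}   # value after the colon
echo "vendor_id=$vendor_id product_id=$product_id"
```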

  3. Optional: To set a default NUMA affinity policy for PCI passthrough devices, add numa_policy to the configuration:

              [pci]
              alias = {"name":"a1", "product_id":"<prod_id>", "vendor_id": "<vendor_id>", "device_type": "<device_type>", "numa_policy": "<pci_numa_policy>"}
    • Replace <prod_id> with the product ID for the PCI device, for example, 1572.
    • Replace <vendor_id> with the vendor ID for the PCI device, for example, 8086.
    • Replace <device_type> with the type of PCI device, for example, type-PF.
    • Replace <pci_numa_policy> with a value of required, socket, preferred, or legacy. For more information, see Guidelines for configuring Nova PCI passthrough.
  4. Update the control plane:

    oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    Example output:

    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  6. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of your cells:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

9.4. Creating an OpenStackDataPlaneNodeSet CR for PCI passthrough

To enable your cloud users to create instances with PCI devices attached, you must create an OpenStackDataPlaneNodeSet custom resource (CR) that groups and configures the Compute nodes that have the PCI devices to use for PCI passthrough.

Note

This procedure applies to new data plane nodes that have not yet been provisioned. To configure or reconfigure PCI devices on a data plane node that has already been provisioned, you must use the scale-down procedure to unprovision the node, then use the scale-up procedure to reprovision the node with the PCI device configuration. For more information, see Scaling data plane nodes in Maintaining the Red Hat OpenStack Services on OpenShift deployment.

Warning

You cannot reconfigure a subset of the nodes within a node set. If you need to do this, you must scale the node set down, and create a new node set from the previously removed nodes.

Prerequisites

Procedure

  1. Create a copy of the PCI alias on the Compute nodes, which is required for instance migration and resize operations. To specify the PCI alias for the devices on the PCI passthrough Compute node, create or update the ConfigMap CR named nova-extra-config and set the value of the [pci] alias parameter:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       32-nova-pci-alias.conf: |
         [pci]
         alias = {"name":"a1", "product_id":"1572", "vendor_id": "8086", "device_type": "type-PF", "numa_policy": "preferred"}

    For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.

    Note

    The Compute node aliases must be identical to the aliases configured on the control plane. Therefore, if you added numa_policy to the apiServiceTemplate customServiceConfig in the OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, you must also add it to the PCI alias in nova-extra-config.

  2. Under the alias parameter, set the device_spec parameter to allow the Compute service (nova) to access your PCI device:

    alias = {"name":"a1", "product_id":"1572", "vendor_id": "8086", "device_type": "type-PF", "numa_policy": "preferred"}
    device_spec = {"vendor_id":"8086", "product_id":"1572", "address": "0000:06:"}
    Note

    Ensure that you use the vendor ID and product ID specific to your PCI device.

  3. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_pci_alias_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-pci-alias
  4. In the compute_pci_alias_deploy.yaml CR, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs that you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes that you want to configure:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-pci-alias
    spec:
       nodeSets:
         - openstack-edpm
         - compute-pci-alias
         - ...
         - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.

      Warning

      If your deployment has more than one node set, changes to the nova-extra-config ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap, and is therefore affected by the reconfiguration, complete the following steps:

      1. Check the services list of the node set and find the name of the DataPlaneService that points to nova.
      2. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.

      If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets.

  5. Save the compute_pci_alias_deploy.yaml deployment file.
  6. Deploy the data plane:

    $ oc create -f compute_pci_alias_deploy.yaml
  7. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    
    NAME              STATUS MESSAGE
    compute-pci-alias True   Deployed
  8. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    
    $ openstack hypervisor list
  9. To enable IOMMU in the server BIOS of the Compute nodes to support PCI passthrough, open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml.
  10. Add the required configuration, or modify the existing configuration, in my_data_plane_node_set.yaml. Place the configuration under ansibleVars. The following example enables an Intel IOMMU:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
        name: my-data-plane-node-set
    spec:
        …
        nodeTemplate:
            …
            ansible:
                ansibleVars:
                    edpm_kernel_args: "default_hugepagesz=1GB hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt tsx=off isolcpus=2-11,14-23 vfio-pci.ids=<pci_device_id> rd.driver.pre=vfio-pci"
    • Replace <pci_device_id> with the PCI device ID for the GPU you are using, for example, 10de:1eb8. Ensure that you use the device ID specific to the GPU.

      Note

      When you first add the edpm_kernel_args parameter to the configuration of a node set, the data plane nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each deployment.
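After the node reboots, you can confirm that the kernel arguments took effect by inspecting /proc/cmdline on the Compute node. The following sketch checks a cmdline string for the IOMMU settings; a hypothetical sample string is used here instead of the live file:

```shell
# On a Compute node you would read the live value:
#   cmdline=$(cat /proc/cmdline)
# A hypothetical sample value is used here for illustration.
cmdline="BOOT_IMAGE=/vmlinuz root=/dev/vda1 ro intel_iommu=on iommu=pt vfio-pci.ids=8086:1572 rd.driver.pre=vfio-pci"

missing=0
for arg in intel_iommu=on iommu=pt; do
  case " $cmdline " in
    *" $arg "*) echo "$arg: present" ;;
    *)          echo "$arg: MISSING"; missing=1 ;;
  esac
done
```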

  11. Save the OpenStackDataPlaneNodeSet CR definition file.
  12. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f my_data_plane_node_set.yaml -n openstack
  13. Verify that the data plane resource has been updated:

    $ oc get openstackdataplanenodeset
    Sample output:
    NAME                     STATUS MESSAGE
    my-data-plane-node-set   False  Deployment not started
  14. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: my-data-plane-deploy
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set.

  15. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - my-data-plane-node-set
  16. Save the OpenStackDataPlaneDeployment CR deployment file.
  17. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f my_data_plane_deploy.yaml -n openstack
  18. You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -n openstack -w
    $ oc logs -l app=openstackansibleee -n openstack -f \
    --max-log-requests 10
  19. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    Sample output
    NAME                   STATUS   MESSAGE
    my-data-plane-deploy   True     Setup Complete
  20. Repeat the oc get command until you see the NodeSet Ready message:

    $ oc get openstackdataplanenodeset -n openstack
    Sample output:
    NAME                     STATUS   MESSAGE
    my-data-plane-node-set   True     NodeSet Ready

    For more information on the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  21. Create and configure the flavors that your cloud users can use to request the PCI devices. The following example requests two devices, each with a vendor ID of 8086 and a product ID of 1572, by using the a1 alias defined in the PCI alias configuration:

    $ openstack --os-compute-api=2.86 flavor set \
     --property "pci_passthrough:alias"="a1:2" device_passthrough
  22. Optional: To override the default NUMA affinity policy for PCI passthrough devices, you can add the NUMA affinity policy property key to the flavor or the image:

    • To override the default NUMA affinity policy by using the flavor, add the hw:pci_numa_affinity_policy property key:

      $ openstack --os-compute-api=2.86 flavor set \
       --property "hw:pci_numa_affinity_policy"="required" \
       device_passthrough

      For more information about the valid values for hw:pci_numa_affinity_policy, see Flavor metadata.

    • To override the default NUMA affinity policy by using the image, add the hw_pci_numa_affinity_policy property key:

      $ openstack image set \
       --property hw_pci_numa_affinity_policy=required \
       device_passthrough_image
      Note

      If you set the NUMA affinity policy on both the image and the flavor, the property values must match. The flavor setting takes precedence over the image and default settings. Therefore, the configuration of the NUMA affinity policy on the image only takes effect if the property is not set on the flavor.

Verification

To verify that PCI passthrough is working, you must instruct an OpenStack user to create an instance with an attached PCI device, and then log directly into the instance to see that the PCI device is accessible. You can provide the following instructions:

  1. Create an instance with a PCI passthrough device:

    $ openstack server create --flavor device_passthrough \
    --image <image> --wait test-pci
  2. Log in to the instance as a cloud user. For more information, see Connecting to an instance in Creating and managing instances.
  3. To verify that the PCI device is accessible from the instance, enter the following command from the instance:

    $ lspci -nn | grep <device_name>

9.6. Configuring One Time Use devices

The Compute service (nova) supports marking devices as One Time Use (OTU) to reserve them for a single use by a single instance.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes on which you want to configure One Time Use PCI devices. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide.
  • You have configured PCI device tracking in the Placement service. For more information, see Enabling PCI device tracking in the Placement service.

Procedure

  1. Create or update the ConfigMap custom resource (CR) named nova-extra-config in the nova-extra-config.yaml file.
  2. Add or edit the device_spec of the device you want to tag as an OTU device by adding the one_time_use tag to it.

    The following is an example of device_spec with this tag added:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       32-nova-pci-alias.conf: |
          [pci]
          alias = {"name":"a1", "product_id":"1572", "vendor_id": "8086", "device_type": "type-PF", "numa_policy": "preferred"}
          device_spec = {"vendor_id":"8086", "product_id":"1572", "address": "0000:06:", "one_time_use": true}
    Note

    The device_spec configuration option can be defined multiple times, and Red Hat OpenStack Services on OpenShift (RHOSO) merges each of these definitions into a single list of device_spec values. This means that a device_spec value cannot be overwritten by subsequent device_spec definitions. When you configure a device to be an OTU device, the one_time_use tag must be defined in the configuration file that originally defined the device_spec.

    For example, Creating an OpenStackDataPlaneNodeSet CR for PCI passthrough describes how to enable cloud users to create instances with PCI devices attached. Typically, you add the one_time_use tag to the device_spec at that stage.

    For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.

  3. Save the nova-extra-config.yaml file.
  4. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane. Save the CR to a file named compute_otu_devices_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-otu-devices
  5. In the compute_otu_devices_deploy.yaml file, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs that you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes on which you want to configure OTU devices.

    Warning

    You cannot reconfigure a subset of the nodes within a node set. If you need to do this, you must scale the node set down, and create a new node set from the previously removed nodes.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap, and is therefore affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the DataPlaneService that points to nova. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.
    2. If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets and use that custom service in the required node sets instead.
    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
       name: compute-otu-devices
    spec:
       nodeSets:
         - openstack-edpm
         - ...
         - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  6. Save the compute_otu_devices_deploy.yaml deployment file.
  7. Deploy the data plane:

    $ oc create -f compute_otu_devices_deploy.yaml
  8. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    
    NAME                STATUS MESSAGE
    compute-otu-devices True   Deployed
  9. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    
    $ openstack hypervisor list

9.7. Removing One Time Use device reservation

Devices in a One Time Use (OTU) reserved state cannot be allocated to another instance until the reserved state is cleared. Devices reserved as OTU devices have the HW_PCI_ONE_TIME_USE trait. You can use this trait to find and clear the reserved state.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Determine the devices that have the HW_PCI_ONE_TIME_USE trait:

    $ openstack resource provider list --required HW_PCI_ONE_TIME_USE

    The following is an example output for this command:

    $ openstack resource provider list --required HW_PCI_ONE_TIME_USE
    +--------------------------------------+--------------------+------------+--------------------------------------+--------------------------------------+
    | uuid                                 | name               | generation | root_provider_uuid                   | parent_provider_uuid                 |
    +--------------------------------------+--------------------+------------+--------------------------------------+--------------------------------------+
    | b9e67d7d-43db-49c7-8ce8-803cad08e656 | compute-01:00:01.0 |         39 | 2ee402e8-c5c6-4586-9ac7-58e7594d27d1 | 2ee402e8-c5c6-4586-9ac7-58e7594d27d1 |
    +--------------------------------------+--------------------+------------+--------------------------------------+--------------------------------------+
  2. For each device in the list, perform the following tasks:

    1. Confirm that the value of the reserved attribute is 1 and the value of the used attribute is 0:

      $ openstack resource provider inventory list <device_uuid>
      • Replace <device_uuid> with the UUID of the device.

        The following is an example output for this command:

        $ openstack resource provider inventory list b9e67d7d-43db-49c7-8ce8-803cad08e656
        +----------------------+------------------+----------+----------+----------+-----------+-------+------+
        | resource_class       | allocation_ratio | min_unit | max_unit | reserved | step_size | total | used |
        +----------------------+------------------+----------+----------+----------+-----------+-------+------+
        | CUSTOM_PCI_1B36_0100 |              1.0 |        1 |        1 |        1 |         1 |     1 |    0 |
        +----------------------+------------------+----------+----------+----------+-----------+-------+------+
        Important

        Do not clear the reserved state of the device if the value of the used attribute is not 0.

    2. Set the value of the reserved attribute to 0:

      $ openstack resource provider inventory set --amend \
          --resource <device_resource_class>:reserved=0 \
          <device_uuid>
      • Replace <device_resource_class> with the resource_class of the device.
      • Replace <device_uuid> with the UUID of the device.