Chapter 1. Overview of accelerators


If you work with large data sets, you can use accelerators to optimize the performance of your data science models in OpenShift AI. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in OpenShift AI to assist your data scientists in the following tasks:

  • Natural language processing (NLP)
  • Inference
  • Training deep neural networks
  • Data cleansing and data processing

You can use the following accelerators with OpenShift AI:

  • NVIDIA graphics processing units (GPUs)

    • To run compute-heavy workloads in your models, you can enable NVIDIA GPUs in OpenShift AI.
    • To enable NVIDIA GPUs on OpenShift, you must install the NVIDIA GPU Operator.
  • AMD graphics processing units (GPUs)

    • Use the AMD GPU Operator to enable AMD GPUs for workloads such as AI/ML training and inference.
    • To enable AMD GPUs on OpenShift, you must install the AMD GPU Operator and its required dependencies.
    • After you install the AMD GPU Operator, you can use the ROCm workbench images to streamline AI/ML workflows on AMD GPUs.
  • Intel Gaudi AI accelerators

    • Intel provides hardware accelerators intended for deep learning workloads.
    • Before you can enable Intel Gaudi AI accelerators in OpenShift AI, you must install the necessary dependencies. Also, the version of the Intel Gaudi AI Operator that you install must match the version of the corresponding workbench image in your deployment.
    • A workbench image for Intel Gaudi accelerators is not included in OpenShift AI by default. Instead, you must create and configure a custom workbench to enable Intel Gaudi AI support.
    • You can enable Intel Gaudi AI accelerators on-premises or on AWS DL1 compute instances.
  • IBM Spyre accelerator

    • The IBM Spyre Operator integrates Spyre accelerators directly into OpenShift AI workflows. Before you enable IBM Spyre accelerators in OpenShift AI, you must install the necessary dependencies.
    • Ensure that IBM Spyre accelerators are present on one or more cluster nodes.
    • Install the IBM Spyre Operator. For more information, see the IBM Spyre Operator catalog entry.
    • For more information on configuring IBM Spyre accelerators for production-ready deployments, contact IBM support.
Before you can use an accelerator in OpenShift AI, you must enable GPU support. This includes installing the Node Feature Discovery Operator and the corresponding GPU Operator. For more information, see Installing the Node Feature Discovery Operator.
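The operator installation step above can be sketched as Operator Lifecycle Manager (OLM) Subscription manifests. This is a minimal sketch, not a definitive install procedure: the channel, package, and catalog source values shown (stable, nfd, gpu-operator-certified, redhat-operators, certified-operators) are typical defaults that you should verify against the OperatorHub entries in your own cluster, and the target namespaces and matching OperatorGroup objects must already exist.

```yaml
# Sketch: install the Node Feature Discovery Operator via OLM.
# Channel and catalog values are assumptions; verify in OperatorHub.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  channel: stable
  name: nfd
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
# Sketch: install the NVIDIA GPU Operator from the certified catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gpu-operator-certified
  namespace: nvidia-gpu-operator
spec:
  channel: stable
  name: gpu-operator-certified
  source: certified-operators
  sourceNamespace: openshift-marketplace
```

You can apply these manifests with `oc apply -f <file>`; after the Subscriptions resolve, confirm that the ClusterServiceVersions reach the Succeeded phase before enabling the accelerator in OpenShift AI.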

In addition, your OpenShift instance must contain an associated hardware profile or accelerator profile. For accelerators that are new to your deployment, you must configure a hardware profile or accelerator profile for that accelerator. You can create a hardware profile from the Settings → Hardware profiles page on the OpenShift AI dashboard. If your deployment contains existing accelerators with previously configured accelerator profiles, the corresponding profiles are created automatically after you upgrade to the latest version of OpenShift AI.
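As an alternative to the dashboard, a profile can be defined declaratively as a custom resource. The following is a sketch, assuming the dashboard.opendatahub.io/v1 API group and the redhat-ods-applications namespace used by default installations; the identifier must match the extended resource name that the accelerator's device plugin advertises on the node (for example nvidia.com/gpu, amd.com/gpu, or habana.ai/gaudi).

```yaml
# Sketch of an accelerator profile CR for NVIDIA GPUs.
# Namespace and toleration values are assumptions; adjust for your cluster.
apiVersion: dashboard.opendatahub.io/v1
kind: AcceleratorProfile
metadata:
  name: nvidia-gpu
  namespace: redhat-ods-applications
spec:
  displayName: NVIDIA GPU
  enabled: true
  identifier: nvidia.com/gpu        # resource name exposed by the device plugin
  tolerations:
    - effect: NoSchedule
      key: nvidia.com/gpu
      operator: Exists
```

The toleration lets workbench and model-serving pods schedule onto GPU nodes that are tainted to reserve them for accelerated workloads.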

Important

By default, hardware profiles are hidden in the dashboard navigation menu and user interface, while accelerator profiles remain visible. In addition, user interface components associated with the deprecated accelerator profiles functionality are still displayed. To show the Settings → Hardware profiles option in the dashboard navigation menu, along with the user interface components associated with hardware profiles, set the disableHardwareProfiles value to false in the OdhDashboardConfig custom resource (CR) in OpenShift. For more information about setting dashboard configuration options, see Customizing the dashboard.
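As an illustration, the relevant fragment of the OdhDashboardConfig CR might look like the following. This is a sketch that assumes the default CR name odh-dashboard-config in the redhat-ods-applications namespace and that the flag sits under spec.dashboardConfig; inspect the CR in your deployment to confirm the actual structure before editing it.

```yaml
# Sketch: enable hardware profile UI components in the dashboard.
apiVersion: opendatahub.io/v1alpha
kind: OdhDashboardConfig
metadata:
  name: odh-dashboard-config          # default name; verify in your cluster
  namespace: redhat-ods-applications
spec:
  dashboardConfig:
    disableHardwareProfiles: false    # false = show hardware profiles
```

You can apply an equivalent change in place with `oc edit odhdashboardconfig odh-dashboard-config -n redhat-ods-applications`.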
