Chapter 1. Overview of accelerators


If you work with large data sets, you can use accelerators to optimize the performance of your data science models in OpenShift AI. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in OpenShift AI to assist your data scientists in the following tasks:

  • Natural language processing (NLP)
  • Inference
  • Training deep neural networks
  • Data cleansing and data processing

You can use the following accelerators with OpenShift AI:

  • NVIDIA graphics processing units (GPUs)

    • To run compute-heavy workloads in your models, you can enable NVIDIA GPUs in OpenShift AI.
    • To enable NVIDIA GPUs on OpenShift, you must install the NVIDIA GPU Operator. For a sketch of how a workload requests an enabled GPU, see the example pod specification after this list.
  • AMD graphics processing units (GPUs)

    • Use the AMD GPU Operator to enable AMD GPUs for workloads such as AI/ML training and inference.
    • To enable AMD GPUs on OpenShift, you must install the AMD GPU Operator.
    • After the AMD GPU Operator is installed, you can use the ROCm workbench images to streamline AI/ML workflows on AMD GPUs.
  • Intel Gaudi AI accelerators

    • Intel provides hardware accelerators intended for deep learning workloads.
    • Before you can enable Intel Gaudi AI accelerators in OpenShift AI, you must install the necessary dependencies. Also, the version of the Intel Gaudi AI Operator that you install must match the version of the corresponding workbench image in your deployment.
    • A workbench image for Intel Gaudi accelerators is not included in OpenShift AI by default. Instead, you must create and configure a custom notebook to enable Intel Gaudi AI support.
    • You can enable Intel Gaudi AI accelerators on-premises or with AWS DL1 compute nodes on an AWS instance.

Before you can use an accelerator in OpenShift AI, you must enable GPU support. This includes installing the Node Feature Discovery operator and the NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs. In addition, your OpenShift instance must contain an associated hardware profile. For accelerators that are new to your deployment, you must configure a hardware profile for the accelerator in context. You can create a hardware profile from the Settings → Hardware profiles page on the OpenShift AI dashboard. If your deployment contains existing accelerators with profiles that were already configured, a hardware profile is automatically created for them after you upgrade to the latest version of OpenShift AI. A minimal operator subscription sketch, a node verification command, and the example pod specification follow below.
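
The following manifest is a minimal sketch of the operator installation step described above, assuming the Node Feature Discovery Operator is installed through OperatorHub with the Operator Lifecycle Manager. The namespace, channel, package, and catalog source names shown here are assumptions that can vary by OpenShift version; confirm them against the OperatorHub entry in your cluster before applying anything.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-nfd                      # assumed target namespace
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: nfd
      namespace: openshift-nfd
    spec:
      targetNamespaces:
      - openshift-nfd
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: nfd
      namespace: openshift-nfd
    spec:
      channel: stable                          # assumed channel; varies by release
      name: nfd                                # assumed package name in the catalog
      source: redhat-operators
      sourceNamespace: openshift-marketplace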
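
After the relevant operator is installed, the accelerator is typically advertised as an extended resource on the node. As a quick check, assuming the standard device plugin resource names, you can inspect a node (replace <node-name> with an actual node name):

    oc describe node <node-name> | grep -E 'nvidia.com/gpu|amd.com/gpu|habana.ai/gaudi'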
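
The example pod specification referenced in the list above is a minimal sketch of how a workload requests an enabled accelerator. The pod name, image, and command are placeholders; the nvidia.com/gpu resource name applies to NVIDIA GPUs, while AMD GPUs and Intel Gaudi accelerators are typically exposed as amd.com/gpu and habana.ai/gaudi.

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-smoke-test                          # placeholder name
    spec:
      restartPolicy: Never
      containers:
      - name: cuda-check
        image: nvcr.io/nvidia/cuda:12.4.1-base-ubi9 # placeholder image
        command: ["nvidia-smi"]                     # prints the GPUs visible to the container
        resources:
          limits:
            nvidia.com/gpu: 1                       # request one NVIDIA GPU
            # for AMD GPUs: amd.com/gpu; for Intel Gaudi: habana.ai/gaudi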