Chapter 1. Overview of accelerators


If you work with large data sets, you can use accelerators to optimize the performance of your data science models in OpenShift AI. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in OpenShift AI to assist your data scientists in the following tasks:

  • Natural language processing (NLP)
  • Inference
  • Training deep neural networks
  • Data cleansing and data processing

You can use the following accelerators with OpenShift AI:

  • NVIDIA graphics processing units (GPUs)

    • To run compute-heavy workloads in your models, you can enable NVIDIA GPUs in OpenShift AI (see the sketch after this list for a quick way to verify which accelerator resources your nodes advertise).
    • To enable NVIDIA GPUs on OpenShift, you must install the NVIDIA GPU Operator.
  • AMD graphics processing units (GPUs)

    • Use the AMD GPU Operator to enable AMD GPUs for workloads such as AI/ML training and inference.
    • To enable AMD GPUs on OpenShift, you must install the AMD GPU Operator and complete the required configuration.
    • After the AMD GPU Operator is installed, you can use the ROCm workbench images to streamline AI/ML workflows on AMD GPUs.
  • Intel Gaudi AI accelerators

    • Intel provides hardware accelerators intended for deep learning workloads.
    • Before you can enable Intel Gaudi AI accelerators in OpenShift AI, you must install the necessary dependencies. Also, the version of the Intel Gaudi AI Operator that you install must match the version of the corresponding workbench image in your deployment.
    • A workbench image for Intel Gaudi accelerators is not included in OpenShift AI by default. Instead, you must create and configure a custom workbench to enable Intel Gaudi AI support.
    • You can enable Intel Gaudi AI accelerators on-premises or with AWS DL1 compute nodes in an AWS environment.
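
A minimal sketch of one way to confirm that an accelerator is available to schedule against: after the relevant Operator and its device plugin are running, each accelerator node advertises an extended resource that workloads can request. The following Python example uses the kubernetes client to list those resources per node. The resource names (nvidia.com/gpu, amd.com/gpu, habana.ai/gaudi) reflect the typical NVIDIA, AMD, and Intel Gaudi device plugins and are assumptions, not values defined by this document.

    # A minimal sketch: report the accelerator resources that each node advertises.
    # Assumes you are logged in to the cluster (for example, with `oc login`) and
    # have the `kubernetes` Python package installed.
    from kubernetes import client, config

    # Extended resource names typically advertised by the NVIDIA, AMD, and
    # Intel Gaudi (Habana) device plugins; adjust these if your plugins differ.
    ACCELERATOR_RESOURCES = ["nvidia.com/gpu", "amd.com/gpu", "habana.ai/gaudi"]

    def report_accelerators():
        config.load_kube_config()  # use config.load_incluster_config() inside a pod
        for node in client.CoreV1Api().list_node().items:
            allocatable = node.status.allocatable or {}
            found = {name: allocatable[name]
                     for name in ACCELERATOR_RESOURCES if name in allocatable}
            print(node.metadata.name, found or "no accelerator resources advertised")

    if __name__ == "__main__":
        report_accelerators()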

Before you can use an accelerator in OpenShift AI, you must enable GPU support. This includes installing the Node Feature Discovery operator and the NVIDIA GPU Operator. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs.

In addition, your OpenShift instance must contain an associated hardware profile. For accelerators that are new to your deployment, you must configure a hardware profile for that accelerator. You can create a hardware profile from the Settings → Hardware profiles page on the OpenShift AI dashboard. If your deployment already contains accelerators with associated profiles configured, a hardware profile is created automatically after you upgrade to the latest version of OpenShift AI.
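
Once an accelerator is enabled and a hardware profile exists, a workload consumes it by requesting the corresponding extended resource. The following sketch submits a throwaway pod that requests one NVIDIA GPU; the namespace and container image are placeholders, not part of OpenShift AI, and the resource name changes to amd.com/gpu or habana.ai/gaudi for the other accelerator types.

    # A minimal sketch, not a supported OpenShift AI workflow: submit a pod that
    # requests one GPU through the extended resource advertised by the device plugin.
    from kubernetes import client, config

    def submit_gpu_smoke_test(namespace="my-data-science-project"):  # placeholder namespace
        config.load_kube_config()
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="cuda-check",
                        image="nvcr.io/nvidia/cuda:12.4.1-base-ubi9",  # placeholder image
                        command=["nvidia-smi"],
                        # One NVIDIA GPU; use "amd.com/gpu" or "habana.ai/gaudi"
                        # for AMD or Intel Gaudi nodes.
                        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
                    )
                ],
            ),
        )
        client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)

    if __name__ == "__main__":
        submit_gpu_smoke_test()

If the pod schedules and the command prints the device, the Operator and device plugin are working; if the pod stays Pending, that usually means no node is currently advertising the requested resource.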