Chapter 1. Installing optional RPM packages


When you install MicroShift, you can add optional RPM packages to help manage your deployments. Examples of optional RPMs include those designed to expand your network, add and manage Operators, and manage applications. Use the following procedures to add the packages that you need.

1.1. Installing the GitOps Argo CD manifests from an RPM package

You can use a lightweight version of Red Hat OpenShift GitOps with MicroShift to help manage your applications by installing the microshift-gitops RPM package. You can consistently configure and deploy Kubernetes-based infrastructure and applications across node and development lifecycles by using the declarative GitOps engine. The microshift-gitops RPM package includes the necessary manifests to run core Argo CD.

Important

The Argo CD CLI is not available on MicroShift. This process installs basic GitOps functions.

Prerequisites

  • You installed MicroShift version 4.14 or later.
  • You configured an additional 250 MB of RAM.

Procedure

  1. Enable the GitOps repository with the subscription manager by running the following command:

    $ sudo subscription-manager repos --enable=gitops-1.16-for-rhel-9-$(uname -m)-rpms
  2. Install the MicroShift GitOps package by running the following command:

    $ sudo dnf install -y microshift-gitops
  3. To deploy Argo CD pods, restart MicroShift by running the following command:

    $ sudo systemctl restart microshift

Verification

  • You can verify that your pods are running properly by entering the following command:

    $ oc get pods -n openshift-gitops

    Example output

    NAME                                  READY   STATUS    RESTARTS   AGE
    argocd-application-controller-0       1/1     Running   0          4m11s
    argocd-redis-56844446bc-dzmhf         1/1     Running   0          4m12s
    argocd-repo-server-57b4f896cf-7qk8l   1/1     Running   0          4m12s
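With the core Argo CD manifests running, you typically point Argo CD at a Git repository by creating an Application resource. The following is a minimal sketch, not a supported configuration: the repository URL, path, and destination namespace are placeholders to replace with your own values.

```shell
# Minimal sketch of an Argo CD Application resource.
# The repoURL, path, and destination namespace are placeholders.
cat > /tmp/demo-application.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://example.com/your-org/your-repo.git
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated: {}
EOF

# On a running MicroShift node, apply it with:
#   oc apply -f /tmp/demo-application.yaml
cat /tmp/demo-application.yaml
```

Because the Argo CD CLI is not available on MicroShift, resources like this are managed directly with oc or kubectl.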

1.2. Installing the multiple networks plugin

You can install the MicroShift Multus Container Network Interface (CNI) plugin alongside a new MicroShift installation. If you want to attach additional networks to a pod for high-performance network configurations, install the microshift-multus RPM package.

Important

The MicroShift Multus CNI plugin manifests are included in the MicroShift binary. To enable multiple networks, you can either set the value in the MicroShift config.yaml file to Enabled, or use the configuration snippet in the microshift-multus RPM. Uninstalling the MicroShift Multus CNI is not supported in either case.
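The config.yaml setting mentioned in the note takes roughly the following shape. This sketch writes the snippet to a temporary file for illustration only; on a real node the setting belongs in /etc/microshift/config.yaml, and the exact key is worth confirming against the configuration reference for your MicroShift version.

```shell
# Sketch of the configuration snippet that enables multiple networks.
# Written to /tmp for illustration; on a node this belongs in
# /etc/microshift/config.yaml (verify the key for your version).
cat > /tmp/multus-config.yaml <<'EOF'
network:
  multus:
    status: Enabled
EOF
cat /tmp/multus-config.yaml
```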

Procedure

  • Install the Multus RPM package by running the following command:

    $ sudo dnf install microshift-multus
    Tip

    If you create your custom resources (CRs) while you are completing your installation of MicroShift, you can avoid restarting the service to apply them.

Next steps

  • Continue with your new MicroShift installation, including any add-ons.
  • Create the custom resources (CRs) needed for your MicroShift Multus CNI plugin.
  • Configure other networking CNIs as needed.
  • After you have finished installing all of the RPMs that you want to include, start the MicroShift service. The MicroShift Multus CNI plugin is automatically deployed.
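The CRs mentioned in the next steps are NetworkAttachmentDefinition objects. The following bridge-type definition is a minimal sketch; the attachment name, bridge name, and subnet are placeholders to adapt to your network design.

```shell
# Sketch of a bridge-type NetworkAttachmentDefinition.
# The name, bridge device, and subnet are placeholders.
cat > /tmp/bridge-nad.yaml <<'EOF'
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf
spec:
  config: '{
    "cniVersion": "0.4.0",
    "type": "bridge",
    "bridge": "br-test",
    "ipam": {
      "type": "host-local",
      "ranges": [[{"subnet": "10.10.0.0/24"}]]
    }
  }'
EOF

# On the node, apply it with: oc apply -f /tmp/bridge-nad.yaml
# A pod then attaches to it with the annotation:
#   k8s.v1.cni.cncf.io/networks: bridge-conf
cat /tmp/bridge-nad.yaml
```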

1.3. Installing the Operator Lifecycle Manager (OLM) from an RPM package

When you install MicroShift, the Operator Lifecycle Manager (OLM) package is not installed by default. You can install the OLM on your MicroShift instance by using an RPM package. OLM helps you install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running in each MicroShift node.

Procedure

  1. Install the OLM package by running the following command:

    $ sudo dnf install microshift-olm
  2. To apply the manifest from the package to an active node, run the following command:

    $ sudo systemctl restart microshift
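After the restart, you typically add a catalog source before OLM can install Operators, because catalogs are not included with the microshift-olm package. The following CatalogSource is a sketch; the index image and its version tag are assumptions to match to your release.

```shell
# Sketch of a CatalogSource for the Red Hat Operators index.
# The index image tag (v4.18) is an assumption; match it to your release.
cat > /tmp/redhat-operators-catalog.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.redhat.io/redhat/redhat-operator-index:v4.18
  displayName: Red Hat Operators
EOF

# On the node, apply it with: oc apply -f /tmp/redhat-operators-catalog.yaml
# Then check the OLM pods: oc get pods -n openshift-operator-lifecycle-manager
cat /tmp/redhat-operators-catalog.yaml
```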

1.4. Installing and enabling MicroShift Observability

You can install MicroShift Observability at any time, including during the initial MicroShift installation. Observability collects and transmits system data for monitoring and analysis, such as performance and usage metrics and error reporting.

Procedure

  1. Install the microshift-observability RPM by entering the following command:

    $ sudo dnf install microshift-observability
  2. Enable the microshift-observability system service by entering the following command:

    $ sudo systemctl enable microshift-observability
  3. Start the microshift-observability system service by entering the following command:

    $ sudo systemctl start microshift-observability
  4. If you installed MicroShift Observability after the initial MicroShift installation, restart MicroShift by entering the following command:

    $ sudo systemctl restart microshift

The installation is successful if there is no output after you start the microshift-observability system service.

1.5. Installing the Red Hat OpenShift AI RPM

To use AI models in MicroShift deployments, install the Red Hat OpenShift AI (Red Hat OpenShift AI Self-Managed) RPM with a new MicroShift installation. You can also install the RPM on an existing MicroShift instance if you restart the system.

Note

The microshift-ai-model-serving RPM contains manifests that deploy KServe, with raw deployment mode enabled, and ServingRuntime objects in the redhat-ods-applications namespace.

Important

Red Hat OpenShift AI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • The system requirements for installing MicroShift have been met.
  • You have root user access to your machine.
  • The OpenShift CLI (oc) is installed.
  • You configured your LVM volume group (VG) with the capacity needed for the persistent volumes (PVs) of your workload.
  • You have the RAM and disk space required for your AI model.
  • You configured the required accelerators, hardware, operating system, and MicroShift to provide the resources your model needs.
  • Your AI model is ready to use.

Procedure

  1. Install the MicroShift AI-model-serving RPM package by running the following command:

    $ sudo dnf install microshift-ai-model-serving
  2. As a root user, restart the MicroShift service by entering the following command:

    $ sudo systemctl restart microshift
  3. Optional: Install the release information package by running the following command:

    $ sudo dnf install microshift-ai-model-serving-release-info
    Note

    The microshift-ai-model-serving-release-info RPM contains a JSON file with image references useful for offline procedures or deploying a copy of a ServingRuntime Custom Resource to your namespace during a bootc image build.

Verification

  • Verify that the kserve pod is running in the redhat-ods-applications namespace by entering the following command:

    $ oc get pods -n redhat-ods-applications

    Example output

    NAME                                        READY   STATUS    RESTARTS   AGE
    kserve-controller-manager-7fc9fc688-kttmm   1/1     Running   0          1h

Next steps

  • Create a namespace for your AI model.
  • Package your model into an OCI image.
  • Configure a model-serving runtime.
  • Verify that your model is ready for inferencing.
  • Make requests against the model server.
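Once a model is packaged into an OCI image, serving it generally means creating an InferenceService that references that image. The following is a minimal sketch under the assumption that KServe raw deployment mode is in use; the namespace, model format, runtime name, and image reference are all placeholders for your own environment.

```shell
# Sketch of an InferenceService referencing a model packaged as an OCI image.
# The namespace, model format, runtime, and storageUri are placeholders.
cat > /tmp/inference-service.yaml <<'EOF'
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model
  namespace: ai-demo
spec:
  predictor:
    model:
      modelFormat:
        name: onnx
      runtime: example-serving-runtime
      storageUri: oci://registry.example.com/models/my-model:latest
EOF

# On the node, apply it with: oc apply -f /tmp/inference-service.yaml
cat /tmp/inference-service.yaml
```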
© 2025 Red Hat