Chapter 1. Installing optional RPM packages
You can install optional RPM packages with MicroShift to provide additional cluster and application services.
1.1. Installing optional packages
When you install MicroShift, optional RPM packages can be added. Examples of optional RPMs include those designed to expand your network, add and manage operators, and manage applications. Use the following procedures to add the packages that you need.
1.1.1. Installing the GitOps Argo CD manifests from an RPM package
You can use a lightweight version of OpenShift GitOps with MicroShift to help manage your applications by installing the microshift-gitops RPM package. The microshift-gitops RPM package includes the necessary manifests to run core Argo CD.
The Argo CD CLI is not available on MicroShift. This process installs basic GitOps functions.
Prerequisites
- You installed MicroShift version 4.14 or later.
- You have an additional 250 MB of RAM available.
Procedure
Enable the GitOps repository with the subscription manager by running the following command:
$ sudo subscription-manager repos --enable=gitops-1.16-for-rhel-9-$(uname -m)-rpms
Install the MicroShift GitOps package by running the following command:
$ sudo dnf install -y microshift-gitops
To deploy Argo CD pods, restart MicroShift by running the following command:
$ sudo systemctl restart microshift
Verification
You can verify that your pods are running properly by entering the following command:
$ oc get pods -n openshift-gitops
Example output
NAME                                  READY   STATUS    RESTARTS   AGE
argocd-application-controller-0       1/1     Running   0          4m11s
argocd-redis-56844446bc-dzmhf         1/1     Running   0          4m12s
argocd-repo-server-57b4f896cf-7qk8l   1/1     Running   0          4m12s
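With the Argo CD pods running, you can hand an application over to GitOps management by creating an Argo CD Application resource in the openshift-gitops namespace. The following is a minimal sketch only; the application name, repository URL, path, and destination namespace are placeholder values that you must replace with your own, and the destination namespace must already exist:
$ oc apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app                                    # placeholder application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://example.com/your/gitops/repo.git  # placeholder Git repository
    targetRevision: main
    path: manifests                                    # placeholder path in the repository
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app                             # placeholder target namespace
  syncPolicy:
    automated: {}
EOF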
1.1.2. Installing the multiple networks plugin
You can install the MicroShift Multus Container Network Interface (CNI) plugin alongside a new MicroShift installation. If you want to attach additional networks to a pod for high-performance network configurations, install the microshift-multus RPM package.
The MicroShift Multus CNI plugin manifests are included in the MicroShift binary. To enable multiple networks, you can either set the value in the MicroShift config.yaml file to Enabled, or use the configuration snippet in the microshift-multus RPM. Uninstalling the MicroShift Multus CNI is not supported in either case.
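If you choose the config.yaml route rather than the snippet shipped in the RPM, the relevant stanza looks similar to the following sketch. The key names shown here are an assumption; confirm them against the networking documentation for your MicroShift version.
# Excerpt from /etc/microshift/config.yaml (key names are an assumption; verify for your version)
network:
  multus:
    status: Enabled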
Procedure
Install the Multus RPM package by running the following command:
$ sudo dnf install microshift-multus
Tip: If you create your custom resources (CRs) while you are completing your installation of MicroShift, you can avoid restarting the service to apply them.
Next steps
- Continue with your new MicroShift installation, including any add-ons.
- Create the custom resources (CRs) needed for your MicroShift Multus CNI plugin, as in the sketch after this list.
- Configure other networking CNIs as needed.
- After you have finished installing all of the RPMs that you want to include, start the MicroShift service. The MicroShift Multus CNI plugin is automatically deployed.
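The following is a minimal sketch of a NetworkAttachmentDefinition CR of the kind referenced in this list, using the bridge CNI type. The resource name, namespace, bridge name, and IPAM subnet are placeholder values, not values mandated by MicroShift; adapt them to your network plan, and apply the CR once MicroShift is running, or include it with your other manifests:
$ oc apply -f - <<EOF
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf            # placeholder name
  namespace: default           # placeholder namespace
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "type": "bridge",
      "bridge": "br-example",
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.10.0.0/24"}]]
      }
    }
EOF
A pod then requests the attachment by adding the k8s.v1.cni.cncf.io/networks: bridge-conf annotation to its metadata.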
1.1.3. Installing the Operator Lifecycle Manager (OLM) from an RPM package
When you install MicroShift, the Operator Lifecycle Manager (OLM) package is not installed by default. You can install the OLM on your MicroShift instance using an RPM package.
Procedure
Install the OLM package by running the following command:
$ sudo dnf install microshift-olm
To apply the manifest from the package to an active cluster, run the following command:
$ sudo systemctl restart microshift
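The package does not define a verification step, but after the restart you can typically check that the OLM pods are running. The openshift-operator-lifecycle-manager namespace used here is the conventional OLM namespace and is an assumption; adjust it if your release uses a different one:
$ oc get pods -n openshift-operator-lifecycle-manager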
1.1.4. Installing and enabling MicroShift Observability
You can install MicroShift Observability at any time, including during the initial MicroShift installation.
Procedure
Install the microshift-observability RPM by entering the following command:
$ sudo dnf install microshift-observability
Enable the microshift-observability system service by entering the following command:
$ sudo systemctl enable microshift-observability
Start the microshift-observability system service by entering the following command:
$ sudo systemctl start microshift-observability
Restart MicroShift after the initial installation by entering the following command:
$ sudo systemctl restart microshift
The installation is successful if there is no output after you start the microshift-observability system service.
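You can also confirm the result by querying the service state directly; an active (running) status indicates that the service started:
$ sudo systemctl status microshift-observability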
1.1.5. Installing the Red Hat OpenShift AI RPM
To use AI models in MicroShift deployments, use this procedure to install the Red Hat OpenShift AI (Red Hat OpenShift AI Self-Managed) RPM with a new MicroShift installation. You can also install the RPM on an existing MicroShift instance if you restart the system.
Red Hat OpenShift AI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- The system requirements for installing MicroShift have been met.
- You have root user access to your machine.
- The OpenShift CLI (oc) is installed.
- You configured your LVM VG with the capacity needed for the PVs of your workload.
- You have the RAM and disk space required for your AI model.
- You configured the required accelerators, hardware, operating system, and MicroShift to provide the resources your model needs.
- Your AI model is ready to use.
The microshift-ai-model-serving RPM contains manifests that deploy kserve, with the raw deployment mode enabled, and ServingRuntimes objects in the redhat-ods-applications namespace.
Procedure
Install the MicroShift AI-model-serving RPM package by running the following command:
$ sudo dnf install microshift-ai-model-serving
As a root user, restart the MicroShift service by entering the following command:
$ sudo systemctl restart microshift
Optional: Install the release information package by running the following command:
$ sudo dnf install microshift-ai-model-serving-release-info
The release information package contains a JSON file with image references that are useful for offline procedures or for deploying a copy of a ServingRuntime to your namespace during a bootc image build.
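Because the path of the JSON file can change between releases, it is not reproduced here; you can locate it by listing the files that the package installed:
$ rpm -ql microshift-ai-model-serving-release-info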
Verification
Verify that the kserve pod is running in the redhat-ods-applications namespace by entering the following command:
$ oc get pods -n redhat-ods-applications
Example output
NAME                                        READY   STATUS    RESTARTS   AGE
kserve-controller-manager-7fc9fc688-kttmm   1/1     Running   0          1h
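You can also list the ServingRuntimes objects that the RPM manifests created in the same namespace; the runtime names vary by release:
$ oc get servingruntimes -n redhat-ods-applications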
Next steps
- Create a namespace for your AI model.
- Package your model into an OCI image.
- Configure a model-serving runtime.
- Verify that your model is ready for inferencing.
- Make requests against the model server.