Part I. Installing Local Storage Operator

Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices.

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Type local storage in the Filter by keyword box to find the Local Storage Operator in the list of operators, and then select it.
  4. Set the following options on the Install Operator page:

    1. Update channel as stable.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-local-storage.
    4. Approval Strategy as Automatic.
  5. Click Install.
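
If you prefer to install from the command line, the following is a minimal sketch of an equivalent Subscription. It assumes the openshift-local-storage namespace and an OperatorGroup targeting it already exist, and that the operator is published in the redhat-operators catalog source; adjust these to match your cluster.

$ oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: stable                    # the Update channel chosen above
  installPlanApproval: Automatic     # the Approval Strategy chosen above
  name: local-storage-operator
  source: redhat-operators           # assumes the default Red Hat catalog source
  sourceNamespace: openshift-marketplace
EOF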

Verification steps

  • Verify that the Local Storage Operator shows a green tick indicating successful installation.
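
You can also confirm the installation from the command line; the operator's ClusterServiceVersion should report the Succeeded phase (the namespace assumes the recommended default):

$ oc get csv -n openshift-local-storage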

1. Installing Red Hat OpenShift Data Foundation Operator

You can install Red Hat OpenShift Data Foundation Operator by using the Red Hat OpenShift Container Platform Operator Hub.

For information about the hardware and software requirements, see Planning your deployment.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
  • You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
Important
  • When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
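
For example, a minimal sequence when the namespace does not exist yet (the annotation value is intentionally left empty):

$ oc create namespace openshift-storage
$ oc annotate namespace openshift-storage openshift.io/node-selector=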

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
  4. Click Install.
  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.15.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
  6. Select Approval Strategy as Automatic or Manual.

    If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

    If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

  7. Ensure that the Enable option is selected for the Console plugin.
  8. Click Install.
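
If you prefer the CLI, the following is a minimal sketch of an equivalent Subscription. It assumes the openshift-storage namespace and an OperatorGroup targeting it already exist, and that the operator is published in the redhat-operators catalog source; the installPlanApproval field corresponds to the Approval Strategy chosen in step 6.

$ oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.15               # the Update Channel chosen above
  installPlanApproval: Automatic     # or Manual, per your Approval Strategy
  name: odf-operator
  source: redhat-operators           # assumes the default Red Hat catalog source
  sourceNamespace: openshift-marketplace
EOF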

Verification steps

  • Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
  • After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.

    • In the Web Console, navigate to Storage and verify that Data Foundation is available.
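
You can also verify the installation from the command line; the ClusterServiceVersion should report the Succeeded phase, and the console plugin list should include odf-console (commands assume the default install namespace):

$ oc get csv -n openshift-storage
$ oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'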

2. Creating standalone Multicloud Object Gateway on IBM Z

You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation.

Prerequisites

  • Ensure that the OpenShift Data Foundation Operator is installed.
  • (For deploying using local storage devices only) Ensure that the Local Storage Operator is installed.

To identify storage devices on each node, see Finding available storage devices.
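
One quick way to inspect the block devices on a node from the command line is a debug pod; the node name here is illustrative:

$ oc debug node/worker-0 -- chroot /host lsblk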

Procedure

  1. Log into the OpenShift Web Console.
  2. In the openshift-local-storage namespace, click Operators → Installed Operators to view the installed operators.
  3. Click the Local Storage installed operator.
  4. On the Operator Details page, click the Local Volume link.
  5. Click Create Local Volume.
  6. Click YAML view to configure the Local Volume.
  7. Define a LocalVolume custom resource for filesystem PVs using the following YAML.

    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: localblock
      namespace: openshift-local-storage
    spec:
      logLevel: Normal
      managementState: Managed
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker-0
                  - worker-1
                  - worker-2
      storageClassDevices:
        - devicePaths:
            - /dev/sda
          storageClassName: localblock
          volumeMode: Filesystem

    The above definition selects the sda local device from the worker-0, worker-1, and worker-2 nodes. The localblock storage class is created, and persistent volumes are provisioned from sda.

    Important

    Specify appropriate values for nodeSelector as per your environment. The device name must be the same on all the worker nodes. You can also specify more than one devicePath, as shown in the sketch below.
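
    For example, a storageClassDevices entry that selects two devices per node might look like the following; the second device name is illustrative:

    storageClassDevices:
      - devicePaths:
          - /dev/sda
          - /dev/sdb
        storageClassName: localblock
        volumeMode: Filesystem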

  8. Click Create.
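
    You can confirm from the command line that the storage class exists and that persistent volumes are provisioned from the selected devices, for example:

    $ oc get storageclass localblock
    $ oc get pv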
  9. In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.

    Ensure that the Project selected is openshift-storage.

  10. Click the OpenShift Data Foundation operator and then click Create StorageSystem.
  11. In the Backing storage page, select Multicloud Object Gateway for Deployment type.
  12. Select the Use an existing StorageClass option for Backing storage type.

    1. Select the Storage Class that you used while installing LocalVolume.
  13. Click Next.
  14. Optional: In the Security page, select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.

    1. From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
    2. Select an Authentication Method.

      Using Token authentication method
      • Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
      • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

        • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
        • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
        • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
        • Click Save and skip to step iv.
      Using Kubernetes authentication method
      • Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
      • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

        • Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation.
        • Optional: Enter TLS Server Name and Authentication Path if applicable.
        • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
        • Click Save and skip to step iv.
    3. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:

      1. Enter a unique Connection Name for the Key Management service within the project.
      2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:

        • Address: 123.34.3.2
        • Port: 5696
      3. Upload the Client Certificate, CA certificate, and Client Private Key.
      4. If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above.
      5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
    4. Select a Network.
    5. Click Next.
  15. In the Review and create page, review the configuration details:

    To modify any configuration settings, click Back.

  16. Click Create StorageSystem.
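
After you click Create StorageSystem, you can watch the deployment progress from the command line, for example (resource names assume the default deployment):

$ oc get storagesystem -n openshift-storage
$ oc get pods -n openshift-storage -w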

Verification steps

Verifying that the OpenShift Data Foundation cluster is healthy
  1. In the OpenShift Web Console, click Storage → Data Foundation.
  2. Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem.

    1. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
    2. In the Details card, verify that the MCG information is displayed.
Verifying the state of the pods
  1. Click Workloads → Pods from the OpenShift Web Console.
  2. Select openshift-storage from the Project drop-down list and verify that the following pods are in the Running state.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

    Component and corresponding pods:

    OpenShift Data Foundation Operator

    • ocs-operator-* (1 pod on any storage node)
    • ocs-metrics-exporter-* (1 pod on any storage node)
    • odf-operator-controller-manager-* (1 pod on any storage node)
    • odf-console-* (1 pod on any storage node)
    • csi-addons-controller-manager-* (1 pod on any storage node)

    Rook-ceph Operator

    • rook-ceph-operator-* (1 pod on any storage node)

    Multicloud Object Gateway

    • noobaa-operator-* (1 pod on any storage node)
    • noobaa-core-* (1 pod on any storage node)
    • noobaa-db-pg-* (1 pod on any storage node)
    • noobaa-endpoint-* (1 pod on any storage node)
    • noobaa-default-backing-store-noobaa-pod-* (1 pod on any storage node)
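
    The same check from the command line, assuming the default openshift-storage project:

    $ oc get pods -n openshift-storage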