
Chapter 3. Deploy using local storage devices


Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications.

Use this section to deploy OpenShift Data Foundation on VMware where OpenShift Container Platform is already installed.

Also, ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the next steps.

3.1. Installing Local Storage Operator

Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it.
  4. Set the following options on the Install Operator page:

    1. Update channel as either 4.12 or stable.
    2. Installation mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-local-storage.
    4. Update approval as Automatic.
  5. Click Install.

Verification steps

  • Verify that the Local Storage Operator shows a green tick indicating successful installation.
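If you prefer the command line, you can also confirm the installation with oc. This is a minimal sketch; the exact ClusterServiceVersion name in the output depends on the installed operator version:

$ oc get csv -n openshift-local-storage
$ oc get pods -n openshift-local-storage

The ClusterServiceVersion (CSV) should report the Succeeded phase, and the local-storage-operator pod should be Running.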

3.2. Installing Red Hat OpenShift Data Foundation Operator

You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
  • You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk; the deployment requires three disks (PVs) in total. However, one PV eventually remains unused by default. This is expected behavior.
  • For additional resource requirements, see the Planning your deployment guide.
Important
  • When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case):

    $ oc annotate namespace openshift-storage openshift.io/node-selector=
  • Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.
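The following commands are a sketch of that approach, assuming a node named <node-name>; the label and taint keys follow the convention described in that section:

$ oc label nodes <node-name> cluster.ocs.openshift.io/openshift-storage=""
$ oc adm taint nodes <node-name> node.ocs.openshift.io/storage="true":NoSchedule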

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
  4. Click Install.
  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.12.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
    4. Select Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

    5. Ensure that the Enable option is selected for the Console plugin.
    6. Click Install.

Verification steps

  • After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console in this pop-up to reflect the console changes.
  • In the Web Console:

    • Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
    • Navigate to Storage and verify that the Data Foundation dashboard is available.
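You can also check the operator status from the CLI; a minimal sketch (the CSV name varies with the installed version):

$ oc get csv -n openshift-storage

The odf-operator ClusterServiceVersion should report the Succeeded phase.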

3.3. Creating Multus networks [Technology Preview]

OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster.

You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached.

You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each of the NetworkAttachmentDefinition defines how that interface is created.

OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address.

Important

Multus support is a Technology Preview feature that is only supported and has been tested on bare metal and VMware deployments. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information, see Technology Preview Features Support Scope.

3.3.1. Creating network attachment definitions

To use Multus, you must have a working cluster with the correct networking configuration before you create the Storage Cluster; see Recommended network configuration and requirements for a Multus configuration. You select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation, which is why you must create the NAD before you create the Storage Cluster.

As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster).

The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public-cluster
  namespace: openshift-storage
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens2",
      "mode": "bridge",
      "ipam": {
          "type": "whereabouts",
          "range": "192.168.1.0/24"
      }
  }'
Note

All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster).
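A sketch of how you might confirm that the interface exists with the same name on each node, using oc debug (the node and interface names here are illustrative):

$ oc debug node/<node-name> -- chroot /host ip link show ens2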

The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks: public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting Object Storage Device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public
  namespace: openshift-storage
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens2",
      "mode": "bridge",
      "ipam": {
          "type": "whereabouts",
          "range": "192.168.1.0/24"
      }
  }'

The following is an example NetworkAttachmentDefinition for the cluster network (replication traffic):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-cluster
  namespace: openshift-storage
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens3",
      "mode": "bridge",
      "ipam": {
          "type": "whereabouts",
          "range": "192.168.2.0/24"
      }
  }'
Note

All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public, and ens3 for ocs-cluster).
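Assuming you save the definitions above to files such as ocs-public.yaml and ocs-cluster.yaml (the file names are illustrative), you can create and verify the NADs from the CLI:

$ oc apply -f ocs-public.yaml
$ oc apply -f ocs-cluster.yaml
$ oc get network-attachment-definitions -n openshift-storage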

3.4. Creating OpenShift Data Foundation cluster on VMware vSphere

VMware vSphere supports the following three types of local storage:

  • Virtual machine disk (VMDK)
  • Raw device mapping (RDM)
  • VMDirectPath I/O

Prerequisites

  • Ensure that you have installed the Local Storage Operator and the Red Hat OpenShift Data Foundation Operator as described in the preceding sections.

Procedure

  1. In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.

    Ensure that the Project selected is openshift-storage.

  2. Click on the OpenShift Data Foundation operator and then click Create StorageSystem.
  3. In the Backing storage page, perform the following:

    1. Select Full Deployment for the Deployment type option.
    2. Select the Create a new StorageClass using the local storage devices option.
    3. Click Next.

      Note

      You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator.

  4. In the Create local volume set page, provide the following information:

    1. Enter a name for the LocalVolumeSet and the StorageClass.

      By default, the local volume set name appears for the storage class name. You can change the name.

    2. Select one of the following:

      • Disks on all nodes to use the available disks that match the selected filters on all nodes.
      • Disks on selected nodes to use the available disks that match the selected filters only on selected nodes.

        Important
        • The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones.

          For information about flexible scaling, see the knowledgebase article Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled.

        • The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later.
        • If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed.

          For minimum starting node requirements, see Resource requirements section in Planning guide.

    3. From the available list of Disk Type, select SSD/NVMe.
    4. Expand the Advanced section and set the following options:

      Volume Mode

      Block is selected by default.

      Device Type

      Select one or more device type from the dropdown list.

      Disk Size

      Set a minimum size of 100 GB and, optionally, the maximum size for the devices to be included.

      Maximum Disks Limit

      This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.

    5. Click Next.

      A pop-up to confirm the creation of the LocalVolumeSet is displayed. An example of the resulting LocalVolumeSet resource is shown after this procedure.

    6. Click Yes to continue.
  5. In the Capacity and nodes page, configure the following:

    1. Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This value takes some time to appear. The Selected nodes list shows the nodes based on the storage class.
    2. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.
    3. Click Next.
  6. Optional: In the Security and network page, configure the following based on your requirement:

    1. To enable encryption, select Enable data encryption for block and file storage.
    2. Select one of the following Encryption level options:

      • Cluster-wide encryption to encrypt the entire cluster (block and file).
      • StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class.
    3. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.

      1. From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
      2. Select an Authentication Method.

        Using Token authentication method
        • Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
        • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

          • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
          • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
          • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
          • Click Save and skip to step iv.
        Using Kubernetes authentication method
        • Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
        • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

          • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
          • Optional: Enter TLS Server Name and Authentication Path if applicable.
          • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
          • Click Save and skip to step iv.
      3. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:

        1. Enter a unique Connection Name for the Key Management service within the project.
        2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:

          • Address: 123.34.3.2
          • Port: 5696
        3. Upload the Client Certificate, CA certificate, and Client Private Key.
        4. If StorageClass encryption is enabled, enter the Unique Identifier, generated above, to be used for encryption and decryption.
        5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
      4. Select a Network.
    4. Select one of the following:

      • Select Default (SDN) if you are using a single network.
      • Select Custom (Multus) if you are using multiple network interfaces.

        1. Select a Public Network Interface from the dropdown.
        2. Select a Cluster Network Interface from the dropdown.

          Note

          If you are using only one additional network interface, select the single NetworkAttachmentDefinition, that is, ocs-public-cluster, for the Public Network Interface and leave the Cluster Network Interface blank.

    5. Click Next.
  7. In the Review and create page, review the configuration details.

    • To modify any configuration settings, click Back to go back to the previous configuration page.
  8. Click Create StorageSystem.
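For reference, the local volume set that you define in step 4 is stored as a LocalVolumeSet resource in the openshift-local-storage namespace, which you can inspect with oc get localvolumeset -n openshift-local-storage -o yaml. The following is a minimal sketch of such a resource; the name, storage class name, and device filter values are illustrative assumptions, not values the wizard is guaranteed to produce:

apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: localblock                  # illustrative name
  namespace: openshift-local-storage
spec:
  storageClassName: localblock      # StorageClass to be consumed by OpenShift Data Foundation
  volumeMode: Block                 # Block is the default volume mode
  deviceInclusionSpec:
    deviceTypes:
      - disk
    minSize: 100Gi                  # matches the 100 GB minimum disk size above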

Verification steps

  • To verify the final Status of the installed storage cluster:

    1. In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
    2. Verify that Status of StorageCluster is Ready and has a green tick mark next to it.
  • To verify whether flexible scaling is enabled on your storage cluster, perform the following steps (in arbiter mode, flexible scaling is disabled):

    1. In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
    2. In the YAML tab, search for the flexibleScaling key in the spec section and the failureDomain key in the status section. If flexibleScaling is set to true and failureDomain is set to host, the flexible scaling feature is enabled:

      spec:
        flexibleScaling: true
      […]
      status:
        failureDomain: host
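The same check can be run from the CLI. This sketch assumes the default StorageCluster name ocs-storagecluster:

$ oc get storagecluster ocs-storagecluster -n openshift-storage \
    -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'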

Additional resources

  • To expand the capacity of the initial cluster, see the Scaling Storage guide.