Chapter 2. Deploy OpenShift Data Foundation using local storage devices


You can deploy OpenShift Data Foundation on bare metal infrastructure where OpenShift Container Platform is already installed.

Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway.

Perform the following steps to deploy OpenShift Data Foundation:

2.1. Installing Local Storage Operator

Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it.
  4. Set the following options on the Install Operator page:

    1. Update channel as stable.
    2. Installation mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-local-storage.
    4. Update approval as Automatic.
  5. Click Install.

Verification steps

  • Verify that the Local Storage Operator shows a green tick indicating successful installation.

2.2. Installing Red Hat OpenShift Data Foundation Operator

You can install the Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
  • You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster.
  • For additional resource requirements, see the Planning your deployment guide.
Important
  • When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace. Create the openshift-storage namespace first if it does not already exist; an example command is shown after this list:

    $ oc annotate namespace openshift-storage openshift.io/node-selector=
  • Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.
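
  If the openshift-storage namespace does not already exist, create it before running the annotate command above. A minimal sketch using the standard oc command:

    $ oc create namespace openshift-storage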

Procedure

  1. Log in to the OpenShift Web Console.
  2. Click Operators → OperatorHub.
  3. Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
  4. Click Install.
  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.17.
    2. Installation Mode as A specific namespace on the cluster.
    3. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
    4. Select Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

    5. Ensure that the Enable option is selected for the Console plugin.
    6. Click Install.

Verification steps

  • After the operator is successfully installed, a pop-up with the message Web console update is available appears in the user interface. Click Refresh web console in this pop-up for the console changes to take effect.
  • In the Web Console:

    • Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
    • Navigate to Storage and verify that the Data Foundation dashboard is available.

2.3. Enabling cluster-wide encryption with the Token authentication using KMS

You can enable the key value backend path and policy in the vault for token authentication.

Prerequisites

  • Administrator access to the vault.
  • A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
  • Carefully select a unique path name as the backend path that follows the naming convention, because you cannot change it later.

Procedure

  1. Enable the Key/Value (KV) backend path in the vault.

    For Vault KV secret engine API, version 1:

    $ vault secrets enable -path=odf kv

    For Vault KV secret engine API, version 2:

    $ vault secrets enable -path=odf kv-v2
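
    Optionally, confirm that the backend path is mounted. A quick check, assuming the standard Vault CLI:

    $ vault secrets list -detailed | grep odf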
  2. Create a policy that restricts users to performing write or delete operations on the secret:

    echo '
    path "odf/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "sys/mounts" {
      capabilities = ["read"]
    }' | vault policy write odf -
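
    Optionally, review the policy that was written. A quick check, assuming the standard Vault CLI:

    $ vault policy read odf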
  3. Create a token that matches the above policy:

    $ vault token create -policy=odf -format json
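
    The JSON output contains the client token under auth.client_token. A minimal sketch to extract only the token, assuming the jq utility is installed:

    $ vault token create -policy=odf -format=json | jq -r '.auth.client_token'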

2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method

You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS).

Prerequisites

  • Administrator access to Vault.
  • A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
  • The OpenShift Data Foundation operator must be installed from the Operator Hub.
  • Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later.

Procedure

  1. Create a service account:

    $ oc -n openshift-storage create serviceaccount <serviceaccount_name>

    where <serviceaccount_name> specifies the name of the service account.

    For example:

    $ oc -n openshift-storage create serviceaccount odf-vault-auth
  2. Create a clusterrolebinding for the service account:

    $ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:<serviceaccount_name>

    For example:

    $ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth
  3. Create a secret for the service account token and CA certificate.

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: odf-vault-auth-token
      namespace: openshift-storage
      annotations:
        kubernetes.io/service-account.name: <serviceaccount_name>
    type: kubernetes.io/service-account-token
    data: {}
    EOF

    where <serviceaccount_name> is the name of the service account created in the earlier step.

  4. Get the token and the CA certificate from the secret.

    $ SA_JWT_TOKEN=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['token']}" | base64 --decode; echo)
    $ SA_CA_CRT=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
  5. Retrieve the OCP cluster endpoint.

    $ OCP_HOST=$(oc config view --minify --flatten -o jsonpath="{.clusters[0].cluster.server}")
  6. Fetch the service account issuer:

    $ oc proxy &
    $ proxy_pid=$!
    $ issuer="$( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)"
    $ kill $proxy_pid
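
    Alternatively, you might be able to read the issuer directly from the cluster Authentication resource; this optional check can return an empty string when the default issuer is in use:

    $ oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}{"\n"}'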
  7. Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault:

    $ vault auth enable kubernetes
    $ vault write auth/kubernetes/config \
              token_reviewer_jwt="$SA_JWT_TOKEN" \
              kubernetes_host="$OCP_HOST" \
              kubernetes_ca_cert="$SA_CA_CRT" \
              issuer="$issuer"
    Important

    To configure the Kubernetes authentication method in Vault when the issuer is empty, omit the issuer parameter:

    $ vault write auth/kubernetes/config \
              token_reviewer_jwt="$SA_JWT_TOKEN" \
              kubernetes_host="$OCP_HOST" \
              kubernetes_ca_cert="$SA_CA_CRT"
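
    Optionally, confirm the configuration stored in Vault. A quick check, assuming the standard Vault CLI:

    $ vault read auth/kubernetes/config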
  8. Enable the Key/Value (KV) backend path in Vault.

    For Vault KV secret engine API, version 1:

    $ vault secrets enable -path=odf kv

    For Vault KV secret engine API, version 2:

    $ vault secrets enable -path=odf kv-v2
  9. Create a policy that restricts users to performing write or delete operations on the secret:

    echo '
    path "odf/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "sys/mounts" {
      capabilities = ["read"]
    }' | vault policy write odf -
  10. Generate the roles:

    $ vault write auth/kubernetes/role/odf-rook-ceph-op \
            bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
            bound_service_account_namespaces=openshift-storage \
            policies=odf \
            ttl=1440h

    The role odf-rook-ceph-op is used later when you configure the KMS connection details during creation of the storage system.

    $ vault write auth/kubernetes/role/odf-rook-ceph-osd \
            bound_service_account_names=rook-ceph-osd \
            bound_service_account_namespaces=openshift-storage \
            policies=odf \
            ttl=1440h
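
    Optionally, verify the roles that were created. A quick check, assuming the standard Vault CLI:

    $ vault read auth/kubernetes/role/odf-rook-ceph-op
    $ vault read auth/kubernetes/role/odf-rook-ceph-osd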

2.4.1. Enabling key rotation when using KMS

Common security practices require periodic rotation of encryption keys. Use this procedure to enable key rotation when using KMS.

To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to a Namespace, StorageClass, or PersistentVolumeClaim (in that order of precedence).

<value> can be @hourly, @daily, @weekly, @monthly, or @yearly. If <value> is empty, the default is @weekly. The examples below use @weekly.

Important

Key rotation is only supported for RBD backed volumes.

Annotating Namespace

$ oc get namespace default
NAME      STATUS   AGE
default   Active   5d2h
$ oc annotate namespace default "keyrotation.csiaddons.openshift.io/schedule=@weekly"
namespace/default annotated

Annotating StorageClass

$ oc get storageclass rbd-sc
NAME       PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rbd-sc     rbd.csi.ceph.com   Delete          Immediate           true                   5d2h
$ oc annotate storageclass rbd-sc "keyrotation.csiaddons.openshift.io/schedule=@weekly"
storageclass.storage.k8s.io/rbd-sc annotated

Annotating PersistentVolumeClaims

$ oc get pvc data-pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-pvc  Bound    pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74   1Gi        RWO            default           20h
$ oc annotate pvc data-pvc "keyrotation.csiaddons.openshift.io/schedule=@weekly"
persistentvolumeclaim/data-pvc annotated
$ oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io
NAME                    SCHEDULE    SUSPEND   ACTIVE   LASTSCHEDULE   AGE
data-pvc-1642663516   @weekly                                     3s
$ oc annotate pvc data-pvc "keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *" --overwrite=true
persistentvolumeclaim/data-pvc annotated
$ oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io
NAME                  SCHEDULE    SUSPEND   ACTIVE   LASTSCHEDULE   AGE
data-pvc-1642664617   */1 * * * *                                   3s

2.5. Creating OpenShift Data Foundation cluster on bare metal

Prerequisites

Procedure

  1. In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.

    Ensure that the Project selected is openshift-storage.

  2. Click on the OpenShift Data Foundation operator, and then click Create StorageSystem.
  3. In the Backing storage page, perform the following:

    1. Select Full Deployment for the Deployment type option.
    2. Select the Create a new StorageClass using the local storage devices option.
    3. Optional: Select Use Ceph RBD as the default StorageClass. This avoids having to manually annotate a StorageClass.
    4. Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL server [Technology preview].

      This provides a high availability solution for the Multicloud Object Gateway, where the default PostgreSQL pod is otherwise a single point of failure.

      Important

      OpenShift Data Foundation ships PostgreSQL images maintained by Red Hat, which are used to store metadata for the Multicloud Object Gateway. This PostgreSQL usage is at the application level.

      As a result, OpenShift Data Foundation does not perform database-level optimizations or in-depth insights.

      If customers have their own PostgreSQL that is well-maintained and optimized, we recommend using it. OpenShift Data Foundation supports external PostgreSQL instances.

      Any PostgreSQL-related issues requiring code changes or deep technical analysis may need to be addressed upstream. This could result in longer resolution times.

      1. Provide the following connection details:

        • Username
        • Password
        • Server name and Port
        • Database name
      2. Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server.
    5. Click Next.

      Important

      You are prompted to install the Local Storage Operator if it is not already installed. Click Install, and follow the procedure as described in Installing Local Storage Operator.

  4. In the Create local volume set page, provide the following information:

    1. Enter a name for the LocalVolumeSet and the StorageClass.

      The local volume set name appears as the default value for the storage class name. You can change the name.

    2. Select one of the following:

      • Disks on all nodes

        Uses the available disks that match the selected filters on all the nodes.

      • Disks on selected nodes

        Uses the available disks that match the selected filters only on the selected nodes.

        Important
        • The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones.

          For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled.

        • The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later.
        • If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed.

          For minimum starting node requirements, see the Resource requirements section in the Planning guide.

    3. From the available list of Disk Type, select SSD/NVMe.
    4. Expand the Advanced section and set the following options:

      Volume Mode

      Block is selected as the default value.

      Device Type

      Select one or more device types from the dropdown list.

      Disk Size

      Set a minimum size of 100 GB and, optionally, the maximum available size for the devices that need to be included.

      Maximum Disks Limit

      This indicates the maximum number of Persistent Volumes (PVs) that you can create on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.

    5. Click Next.

      A pop-up to confirm the creation of LocalVolumeSet is displayed.

    6. Click Yes to continue.
  5. In the Capacity and nodes page, configure the following:

    1. Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This might take a few minutes to appear. The Selected nodes list shows the nodes based on the storage class.
    2. In the Configure performance section, select one of the following performance profiles:

      • Lean

        Use this in a resource-constrained environment where the available resources are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.

      • Balanced (default)

        Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.

      • Performance

        Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.

        Note

        You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab.

        Important

        Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures.

        For more information about resource requirements, see Resource requirement for performance profiles.

    3. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.
    4. Click Next.
  6. Optional: In the Security and network page, configure the following based on your requirement:

    1. To enable encryption, select Enable data encryption for block and file storage.
    2. Select one or both of the following encryption levels:

      • Cluster-wide encryption

        Encrypts the entire cluster (block and file).

      • StorageClass encryption

        Creates encrypted persistent volumes (block only) using an encryption-enabled storage class.

    3. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.

      1. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details:

        • Vault

          1. Select an Authentication Method.

            • Using Token authentication method

              • Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
              • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

                • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
                • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
                • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
              • Click Save.
            • Using Kubernetes authentication method

              • Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
              • Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

                • Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
                • Optional: Enter TLS Server Name and Authentication Path if applicable.
                • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
              • Click Save.
        • Thales CipherTrust Manager (using KMIP)

          1. Enter a unique Connection Name for the Key Management service within the project.
          2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:

            • Address: 123.34.3.2
            • Port: 5696
          3. Upload the Client Certificate, CA certificate, and Client Private Key.
          4. If StorageClass encryption is enabled, enter the Unique Identifier, generated above, to be used for encryption and decryption.
          5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
      2. Select a Network.
    4. Select one of the following:

      • Default (SDN)

        If you are using a single network.

      • Custom (Multus)

        If you are using multiple network interfaces.

        1. Select a Public Network Interface from the dropdown.
        2. Select a Cluster Network Interface from the dropdown.

          Note

          If you are using only one additional network interface, select the single NetworkAttachmentDefinition, that is, ocs-public-cluster, for the Public Network Interface and leave the Cluster Network Interface blank.

    5. Click Next.
  7. In the Data Protection page, if you are configuring a Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next.
  8. In the Review and create page, review the configuration details.

    To modify any configuration settings, click Back to go back to the previous configuration page.

  9. Click Create StorageSystem.
Note

When your deployment has five or more nodes, racks, or rooms, and there are five or more failure domains present in the deployment, you can configure the Ceph monitor count based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitors. You can use the Configure option in the alert to configure the Ceph monitor count. For more information, see Resolving low Ceph monitor count alert.
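
If you prefer the command line, you can check the current number of Ceph monitor pods; this optional check assumes the default rook-ceph pod labels:

$ oc get pods -n openshift-storage -l app=rook-ceph-mon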

Verification steps

  • To verify the final Status of the installed storage cluster:

    1. In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System.
    2. Click ocs-storagecluster-storagesystem → Resources.
    3. Verify that the Status of the StorageCluster is Ready and has a green tick mark next to it.
  • To verify if the flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled):

    1. In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System.
    2. Click ocs-storagecluster-storagesystem → Resources → ocs-storagecluster.
    3. In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled (a command-line alternative is shown after this procedure):

      spec:
        flexibleScaling: true
      […]
      status:
        failureDomain: host
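
      As a command-line alternative, you can read the same fields directly from the StorageCluster resource; a minimal sketch:

      $ oc get storagecluster ocs-storagecluster -n openshift-storage \
        -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'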

Additional resources

  • To expand the capacity of the initial cluster, see the Scaling Storage guide.

2.6. Verifying OpenShift Data Foundation deployment

To verify that OpenShift Data Foundation is deployed correctly:

2.6.1. Verifying the state of the pods

Procedure

  1. Click Workloads → Pods from the OpenShift Web Console.
  2. Select openshift-storage from the Project drop-down list.

    Note

    If the Show default projects option is disabled, use the toggle button to list all the default projects.

    For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table:

  3. Set the filter for Running and Completed pods to verify that the following pods are in Running or Completed state (you can also compare against the command-line listing shown after the table):

Component

Corresponding pods

OpenShift Data Foundation Operator

  • ocs-operator-* (1 pod on any storage node)
  • ocs-metrics-exporter-* (1 pod on any storage node)
  • odf-operator-controller-manager-* (1 pod on any storage node)
  • odf-console-* (1 pod on any storage node)
  • csi-addons-controller-manager-* (1 pod on any storage node)

Rook-ceph Operator

rook-ceph-operator-*

(1 pod on any storage node)

Multicloud Object Gateway

  • noobaa-operator-* (1 pod on any storage node)
  • noobaa-core-* (1 pod on any storage node)
  • noobaa-db-pg-* (1 pod on any storage node)
  • noobaa-endpoint-* (1 pod on any storage node)

MON

rook-ceph-mon-*

(3 pods distributed across storage nodes)

MGR

rook-ceph-mgr-*

(1 pod on any storage node)

MDS

rook-ceph-mds-ocs-storagecluster-cephfilesystem-*

(2 pods distributed across storage nodes)

RGW

rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node)

CSI

  • cephfs

    • csi-cephfsplugin-* (1 pod on each storage node)
    • csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes)
  • rbd

    • csi-rbdplugin-* (1 pod on each storage node)
    • csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes)

rook-ceph-crashcollector

rook-ceph-crashcollector-*

(1 pod on each storage node)

OSD

  • rook-ceph-osd-* (1 pod for each device)
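
You can also list the pods from the command line and compare against the table above; a minimal sketch:

$ oc get pods -n openshift-storage -o wide

The -o wide output includes the node that each pod is scheduled on, which helps confirm the per-node distribution described in the table.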

2.6.2. Verifying the OpenShift Data Foundation cluster is healthy

Procedure

  1. In the OpenShift Web Console, click Storage → Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
  3. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick.
  4. In the Details card, verify that the cluster information is displayed.

For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.

2.6.3. Verifying the Multicloud Object Gateway is healthy

Procedure

  1. In the OpenShift Web Console, click Storage → Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.

    1. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
    2. In the Details card, verify that the MCG information is displayed.

For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation.

Important

The Multicloud Object Gateway has only a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC is corrupted and cannot be recovered, the applicative data residing on the Multicloud Object Gateway can be lost entirely. Because of this, Red Hat recommends taking regular backups of the NooBaa DB PVC. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article.

2.6.4. Verifying that the specific storage classes exist

Procedure

  1. Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
  2. Verify that the following storage classes are created as part of the OpenShift Data Foundation cluster creation (you can also confirm them from the command line, as shown after this list):

    • ocs-storagecluster-ceph-rbd
    • ocs-storagecluster-cephfs
    • openshift-storage.noobaa.io
    • ocs-storagecluster-ceph-rgw
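
    You can also confirm that the storage classes exist from the command line; a minimal sketch:

    $ oc get storageclass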

2.6.5. Verifying the Multus networking

To determine if Multus is working in your cluster, verify the Multus networking.

Procedure

Based on your Network configuration choices, the OpenShift Data Foundation operator will do one of the following:

  • If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster will happen on this network. Additionally, the cluster will be self-configured to also use this network for the replication and rebalancing traffic between OSDs.
  • If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic will be on the public network, while the cluster network will carry the replication and rebalancing traffic between OSDs.

To verify the network configuration is correct, complete the following:

In the OpenShift console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources → ocs-storagecluster.

In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic.

Sample output:

[..]
spec:
  [..]
  network:
    ipFamily: IPv4
    provider: multus
    selectors:
      cluster: openshift-storage/ocs-cluster
      public: openshift-storage/ocs-public
  [..]

To verify the network configuration is correct using the command line interface, run the following commands:

$ oc get storagecluster ocs-storagecluster \
-n openshift-storage \
-o=jsonpath='{.spec.network}{"\n"}'

Sample output:

{"ipFamily":"IPv4","provider":"multus","selectors":{"cluster":"openshift-storage/ocs-cluster","public":"openshift-storage/ocs-public"}}

Confirm that the OSD pods are using the correct network

In the openshift-storage namespace, use one of the OSD pods to verify that the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic.

Note

Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network.

$ oc get -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}{"\n"}'

Sample output:

[{
    "name": "openshift-sdn",
    "interface": "eth0",
    "ips": [
        "10.129.2.30"
    ],
    "default": true,
    "dns": {}
},{
    "name": "openshift-storage/ocs-cluster",
    "interface": "net1",
    "ips": [
        "192.168.2.1"
    ],
    "mac": "e2:04:c6:81:52:f1",
    "dns": {}
},{
    "name": "openshift-storage/ocs-public",
    "interface": "net2",
    "ips": [
        "192.168.1.1"
    ],
    "mac": "ee:a0:b6:a4:07:94",
    "dns": {}
}]

To confirm that the OSD pods are using the correct network from the command-line interface, run the following command (requires the jq utility):

$ oc get -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}{"\n"}' | jq -r '.[].name'

Sample output:

openshift-sdn
openshift-storage/ocs-cluster
openshift-storage/ocs-public