Chapter 4. Verifying OpenShift Data Foundation deployment
Use this section to verify that OpenShift Data Foundation is deployed correctly.
4.1. Verifying the state of the pods
Procedure
- Click Workloads → Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list.
  Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
  For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 4.1, “Pods corresponding to OpenShift Data Foundation cluster”.
- Click the Running and Completed tabs to verify that the following pods are in Running and Completed state:

Table 4.1. Pods corresponding to OpenShift Data Foundation cluster

OpenShift Data Foundation Operator
  - ocs-operator-* (1 pod on any worker node)
  - ocs-metrics-exporter-* (1 pod on any worker node)
  - odf-operator-controller-manager-* (1 pod on any worker node)
  - odf-console-* (1 pod on any worker node)

Rook-ceph Operator
  - rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
  - noobaa-operator-* (1 pod on any worker node)
  - noobaa-core-* (1 pod on any storage node)
  - noobaa-db-pg-* (1 pod on any storage node)
  - noobaa-endpoint-* (1 pod on any storage node)

MON
  - rook-ceph-mon-* (3 pods distributed across storage nodes)

MGR
  - rook-ceph-mgr-* (1 pod on any storage node)

MDS
  - rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes)

RGW
  - rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node)

CSI
  - cephfs
    - csi-cephfsplugin-* (1 pod on each worker node)
    - csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes)
  - rbd
    - csi-rbdplugin-* (1 pod on each worker node)
    - csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes)

rook-ceph-crashcollector
  - rook-ceph-crashcollector-* (1 pod on each storage node)

OSD
  - rook-ceph-osd-* (1 pod for each device)
  - rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
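If you prefer the command line, you can cross-check the same information by listing the pods in the openshift-storage namespace. This is an optional sketch; the exact pod names and counts depend on your node and device layout, as described in Table 4.1.

$ oc get pods -n openshift-storage

To list only the pods that are not in Running or Succeeded (Completed) state, a field selector can be used; on a healthy cluster this command should return no pods:

$ oc get pods -n openshift-storage --field-selector=status.phase!=Running,status.phase!=Succeeded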
4.2. Verifying the OpenShift Data Foundation cluster is healthy
Procedure
- In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
- In the Status card of the Block and File tab, verify that Storage Cluster has a green tick.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
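As an optional command-line cross-check, you can inspect the StorageCluster and CephCluster resources directly. The exact output columns vary by version, but the StorageCluster phase should be Ready and the Ceph cluster health should be HEALTH_OK:

$ oc get storagecluster -n openshift-storage
$ oc get cephcluster -n openshift-storage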
4.3. Verifying the Multicloud Object Gateway is healthy
Procedure
- In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation.
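You can also verify the Multicloud Object Gateway from the command line by querying the NooBaa custom resource; on a healthy cluster its phase should report Ready:

$ oc get noobaa -n openshift-storage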
4.4. Verifying that the OpenShift Data Foundation specific storage classes exist
Procedure
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
- Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
  - ocs-storagecluster-ceph-rbd
  - ocs-storagecluster-cephfs
  - openshift-storage.noobaa.io
  - ocs-storagecluster-ceph-rgw
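The same verification can be done from the command line by listing the storage classes and filtering for the names above:

$ oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage.noobaa.io'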
4.5. Verifying the Multus networking
To determine if Multus is working in your cluster, verify the Multus networking.
Procedure
Based on your network configuration choices, the OpenShift Data Foundation operator does one of the following:
- If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster) was selected for the Public Network Interface, the traffic between the application pods and the OpenShift Data Foundation cluster happens on this network. Additionally, the cluster is self-configured to also use this network for the replication and rebalancing traffic between OSDs.
- If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, client storage traffic uses the public network, while the replication and rebalancing traffic between OSDs uses the cluster network. A sample NetworkAttachmentDefinition is sketched after this list.
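For reference, a NetworkAttachmentDefinition selected for the Public Network Interface generally looks similar to the sketch below. This is an illustrative example only: the macvlan master interface (ens2) and the Whereabouts address range are placeholders that must match your environment. You can list the definitions that exist in your cluster with:

$ oc get network-attachment-definitions -n openshift-storage

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public
  namespace: openshift-storage
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens2",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.1.0/24"
      }
    }'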
To verify that the network configuration is correct, complete the following:
In the OpenShift console, navigate to Installed Operators and open the ocs-storagecluster resource. In the YAML tab, search for network in the spec section and ensure that the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic.
Sample output:

[..]
spec:
  [..]
  network:
    ipFamily: IPv4
    provider: multus
    selectors:
      cluster: openshift-storage/ocs-cluster
      public: openshift-storage/ocs-public
[..]
To verify the network configuration is correct using the command line interface, run the following command:

$ oc get storagecluster ocs-storagecluster \
    -n openshift-storage \
    -o=jsonpath='{.spec.network}{"\n"}'
Sample output:
{"ipFamily":"IPv4","provider":"multus","selectors":{"cluster":"openshift-storage/ocs-cluster","public":"openshift-storage/ocs-public"}}
Confirm that the OSD pods are using the correct network
In the openshift-storage namespace, use one of the OSD pods to verify that the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic.
Only the OSD pods connect to both the Multus public and cluster networks if both are created. All other OCS pods connect to the Multus public network only.
$ oc get -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}{"\n"}'
Sample output:
[{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.129.2.30" ], "default": true, "dns": {} },{ "name": "openshift-storage/ocs-cluster", "interface": "net1", "ips": [ "192.168.2.1" ], "mac": "e2:04:c6:81:52:f1", "dns": {} },{ "name": "openshift-storage/ocs-public", "interface": "net2", "ips": [ "192.168.1.1" ], "mac": "ee:a0:b6:a4:07:94", "dns": {} }]
To confirm the OSD pods are using correct network using the command line interface, run the following command (requires the jq utility):
$ oc get -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}{"\n"}' | jq -r '.[].name'
Sample output:
openshift-sdn
openshift-storage/ocs-cluster
openshift-storage/ocs-public
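As a complementary spot check, you can run the same query against a pod that is not an OSD, for example one of the MON pods. With the configuration shown above, the output should list only openshift-sdn and openshift-storage/ocs-public, confirming that non-OSD pods attach to the Multus public network only. A minimal sketch, again assuming the jq utility is available:

$ oc get -n openshift-storage $(oc get pods -n openshift-storage -o name -l app=rook-ceph-mon | head -n 1) -o=jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}{"\n"}' | jq -r '.[].name'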