5.2. Attaching storage for a new set of devices
Procedure
- In the OpenShift Web console, navigate to the Storage → Data Foundation → Storage Systems tab.
- Click the Action menu next to the desired storage system and select Attach Storage.
- From LSO StorageClass, select the newly created local storage class.
- Select Enable encryption on device set to enable encryption.
  Note: Enabling or disabling OSD encryption with this option overrides the cluster-wide encryption setting for these new OSD storage devices.
- Select the Volume type.
- Enter a name for the pool.
- Select the Data protection policy. 3-way Replication is recommended.
- Select the Compression option to optimize storage efficiency by enabling data compression within replicas.
- Select the Reclaim Policy.
- Select the Volume binding mode.
- Enter the New User Storage Class name.
- Select the Enable Encryption option to generate an encryption key for each persistent volume created with the storage class.
- Click Attach Storage.
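The wizard above adds a new storageDeviceSet to the StorageCluster resource. A minimal, hedged way to confirm this from the CLI; the device-set name `localvolume2` and the jsonpath field are assumptions taken from this section's example output and may differ in your cluster:

```shell
# In a live cluster the device-set names would come from (assumed field path):
#   oc -n openshift-storage get storagecluster ocs-storagecluster \
#     -o jsonpath='{.spec.storageDeviceSets[*].name}'

# Reads space-separated device-set names on stdin and succeeds if the
# given name is among them.
has_device_set() {
  tr ' ' '\n' | grep -qx -- "$1"
}
```

For example, `oc ... -o jsonpath='{.spec.storageDeviceSets[*].name}' | has_device_set localvolume2` succeeds once the new device set is present.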
Verification steps
Verify that the PVs and PVCs are in the Bound state. For example:

$ oc get pv | grep localvolume2
local-pv-14c0b1d    130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-0kp29f   localvolume2   <unset>   31m
local-pv-41d0d077   130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-2vwk54   localvolume2   <unset>   30m
local-pv-6c57a345   130Gi   RWO   Delete   Bound   openshift-storage/localvolume2-0-data-1255ts   localvolume2   <unset>   28m

$ oc get pvc | grep localvolume2
localvolume2-0-data-0kp29f   Bound   local-pv-14c0b1d    130Gi   RWO   localvolume2   <unset>   19m
localvolume2-0-data-1255ts   Bound   local-pv-6c57a345   130Gi   RWO   localvolume2   <unset>   19m
localvolume2-0-data-2vwk54   Bound   local-pv-41d0d077   130Gi   RWO   localvolume2   <unset>   19m

Verify that the new OSDs are successfully created and that all OSDs are running.
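The Bound and Running checks can also be scripted. A minimal sketch, assuming the column layouts from the example output in this section (STATUS is column 5 of `oc get pv`, column 2 of `oc get pvc`, and READY/STATUS are columns 2 and 3 of `oc get pods`):

```shell
# Counts rows whose STATUS column (passed as $1) is not "Bound".
# Feed it headerless output, e.g.:
#   oc get pv --no-headers | grep localvolume2 | count_unbound 5
#   oc -n openshift-storage get pvc --no-headers | grep localvolume2 | count_unbound 2
count_unbound() {
  awk -v c="$1" '$c != "Bound" { n++ } END { print n + 0 }'
}

# Counts OSD pods that are not fully ready (READY != 2/2 or STATUS != Running):
#   oc -n openshift-storage get pods -l app=rook-ceph-osd --no-headers | count_unready
count_unready() {
  awk '$2 != "2/2" || $3 != "Running" { n++ } END { print n + 0 }'
}
```

Both helpers print 0 when everything is healthy, which makes them easy to use in an automated check.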
For example:

$ oc -n openshift-storage get pods -l app=rook-ceph-osd
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-osd-0-7899b89478-bmg2l   2/2     Running   2          16h
rook-ceph-osd-1-6d7df8dbfc-z29jx   2/2     Running   2          16h
rook-ceph-osd-2-66b9dc8cd7-vd77d   2/2     Running   2          16h
rook-ceph-osd-3-79b59f44d7-cb9m7   2/2     Running   0          15m
rook-ceph-osd-4-766bcd8646-z4zvz   2/2     Running   0          15m
rook-ceph-osd-5-845554d78-qqcm4    2/2     Running   0          15m

Verify that the storagecluster is in the Ready state and that the Ceph cluster health is OK. For example:

$ oc -n openshift-storage get storagecluster
NAME                 AGE    PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   2d2h   Ready              2025-02-11T12:03:37Z   4.18.0

$ oc -n openshift-storage get cephcluster
NAME                             DATADIRHOSTPATH   MONCOUNT   AGE    PHASE   MESSAGE                        HEALTH      EXTERNAL   FSID
ocs-storagecluster-cephcluster   /var/lib/rook     3          2d2h   Ready   Cluster created successfully   HEALTH_OK              8ec4e898-5386-42c7-b01b-169fe8f08ba4

In the Ceph cluster, check the Ceph OSD tree to see that the new device class is spread correctly and that the OSDs are up. For example:
$ oc rsh -n openshift-storage $(oc get pods -o wide -n openshift-storage | grep tool | awk '{print $1}') ceph osd tree
ID  CLASS         WEIGHT   TYPE NAME          STATUS  REWEIGHT  PRI-AFF
-1                0.67406  root default
-7                0.22469      host compute-0
 3  localvolume2  0.12700          osd.3          up   1.00000  1.00000
 0  ssd           0.09769          osd.0          up   1.00000  1.00000
-3                0.22469      host compute-1
 4  localvolume2  0.12700          osd.4          up   1.00000  1.00000
 2  ssd           0.09769          osd.2          up   1.00000  1.00000
-5                0.22469      host compute-2
 5  localvolume2  0.12700          osd.5          up   1.00000  1.00000
 1  ssd           0.09769          osd.1          up   1.00000  1.00000

Verify that the user storageclass has been created.
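The cluster-health and OSD-tree checks above can be scripted as well. A hedged sketch: the jsonpath field paths are assumptions based on the status columns shown in this section, and the `ceph osd tree` column layout follows the example output:

```shell
# Succeeds when the StorageCluster phase is Ready and the Ceph health is
# HEALTH_OK. In a live cluster the two arguments would come from
# (assumed field paths):
#   phase=$(oc -n openshift-storage get storagecluster ocs-storagecluster \
#             -o jsonpath='{.status.phase}')
#   health=$(oc -n openshift-storage get cephcluster ocs-storagecluster-cephcluster \
#             -o jsonpath='{.status.ceph.health}')
cluster_ok() {
  [ "$1" = "Ready" ] && [ "$2" = "HEALTH_OK" ]
}

# From "ceph osd tree" output on stdin, counts OSDs of the given device
# class that are not "up" (columns: ID CLASS WEIGHT TYPE NAME STATUS ...):
#   oc rsh ... ceph osd tree | osds_not_up localvolume2
osds_not_up() {
  awk -v cls="$1" '$2 == cls && $4 ~ /^osd\./ && $5 != "up" { n++ } END { print n + 0 }'
}
```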
For example:

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  16h
localvolume2                  kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  63m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   16h
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  16h
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   16h
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  16h
ssd2                          openshift-storage.rbd.csi.ceph.com      Delete          WaitForFirstConsumer   true                   16m
thin-csi                      csi.vsphere.vmware.com                  Delete          WaitForFirstConsumer   true                   17h
thin-csi-odf                  csi.vsphere.vmware.com                  Delete          WaitForFirstConsumer   true                   16h
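To fold the storage-class check into a script, a hedged helper; the class name `ssd2` and its provisioner are taken from the example output in this section and will differ if you chose another New User Storage Class name:

```shell
# Succeeds when the given storage class exists with the expected
# provisioner. Feed it headerless output, e.g.:
#   oc get sc --no-headers | sc_has_provisioner ssd2 openshift-storage.rbd.csi.ceph.com
sc_has_provisioner() {
  awk -v name="$1" -v prov="$2" '$1 == name && $2 == prov { found = 1 } END { exit !found }'
}
```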