3.4. Enabling metadata in RBD and CephFS volumes
You can set the persistent volume claim (PVC), persistent volume (PV), and namespace names in RADOS Block Device (RBD) and CephFS volumes for monitoring purposes. This enables you to read the RBD and CephFS metadata to identify the mapping between the OpenShift Container Platform resources and the RBD and CephFS volumes.
To enable the RADOS Block Device (RBD) and CephFS volume metadata feature, you need to set the CSI_ENABLE_METADATA variable in the rook-ceph-operator-config configmap. By default, this feature is disabled. If you enable the feature after upgrading from a previous release, the existing PVCs will not contain the metadata. Also, when you enable the metadata feature, the PVCs that were created before enabling it will not have the metadata.
Prerequisites
- Ensure that the ocs_operator is installed and the storagecluster is created for the operator. Ensure that the storagecluster is in Ready state.

  ```
  $ oc get storagecluster
  NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
  ocs-storagecluster   57m   Ready              2022-08-30T06:52:58Z   4.12.0
  ```
Procedure
Edit the rook-ceph operator ConfigMap to set CSI_ENABLE_METADATA to true.

```
$ oc patch cm rook-ceph-operator-config -n openshift-storage -p $'data:\n "CSI_ENABLE_METADATA": "true"'
configmap/rook-ceph-operator-config patched
```
Wait for the respective CSI CephFS plugin provisioner pods and CSI RBD plugin pods to reach the Running state.

Note: Ensure that the setmetadata variable is automatically set after the metadata feature is enabled. This variable should not be available when the metadata feature is disabled.

```
$ oc get pods | grep csi
csi-cephfsplugin-b8d6c                          2/2   Running   0   56m
csi-cephfsplugin-bnbg9                          2/2   Running   0   56m
csi-cephfsplugin-kqdw4                          2/2   Running   0   56m
csi-cephfsplugin-provisioner-7dcd78bb9b-q6dxb   5/5   Running   0   56m
csi-cephfsplugin-provisioner-7dcd78bb9b-zc4q5   5/5   Running   0   56m
csi-rbdplugin-776dl                             3/3   Running   0   56m
csi-rbdplugin-ffl52                             3/3   Running   0   56m
csi-rbdplugin-jx9mz                             3/3   Running   0   56m
csi-rbdplugin-provisioner-5f6d766b6c-694fx      6/6   Running   0   56m
csi-rbdplugin-provisioner-5f6d766b6c-vzv45      6/6   Running   0   56m
```
Verification steps
Verify the metadata for RBD PVCs:

Create a PVC.

```
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```

Check the status of the PVC.

```
$ oc get pvc | grep rbd-pvc
rbd-pvc   Bound   pvc-30628fa8-2966-499c-832d-a6a3a8ebc594   1Gi   RWO   ocs-storagecluster-ceph-rbd   32s
```

Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI).

For information about how to access the Red Hat Ceph Storage CLI, see How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment.

```
[sh-4.x]$ rbd ls ocs-storagecluster-cephblockpool
csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012
csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012

[sh-4.x]$ rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012
There are 4 metadata on this image:

Key                                Value
csi.ceph.com/cluster/name          6cd7a18d-7363-4830-ad5c-f7b96927f026
csi.storage.k8s.io/pv/name         pvc-30628fa8-2966-499c-832d-a6a3a8ebc594
csi.storage.k8s.io/pvc/name        rbd-pvc
csi.storage.k8s.io/pvc/namespace   openshift-storage
```
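These four keys are all that is needed to resolve a Ceph image back to its OpenShift Container Platform objects. The following is a minimal Python sketch, not part of the product, that takes the key/value pairs shown above (here hard-coded; in practice you would collect them from the `rbd image-meta ls` output) and returns the namespace, PVC, and PV they encode:

```python
# CSI metadata as listed on the RBD image in the example above.
image_meta = {
    "csi.ceph.com/cluster/name": "6cd7a18d-7363-4830-ad5c-f7b96927f026",
    "csi.storage.k8s.io/pv/name": "pvc-30628fa8-2966-499c-832d-a6a3a8ebc594",
    "csi.storage.k8s.io/pvc/name": "rbd-pvc",
    "csi.storage.k8s.io/pvc/namespace": "openshift-storage",
}

def openshift_mapping(meta):
    """Return the namespace/PVC/PV identity encoded in the CSI image metadata."""
    return {
        "namespace": meta["csi.storage.k8s.io/pvc/namespace"],
        "pvc": meta["csi.storage.k8s.io/pvc/name"],
        "pv": meta["csi.storage.k8s.io/pv/name"],
    }

print(openshift_mapping(image_meta))
```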
Verify the metadata for RBD clones:

Create a clone.

```
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  dataSource:
    name: rbd-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```

Check the status of the clone.

```
$ oc get pvc | grep rbd-pvc
rbd-pvc         Bound   pvc-30628fa8-2966-499c-832d-a6a3a8ebc594   1Gi   RWO   ocs-storagecluster-ceph-rbd   15m
rbd-pvc-clone   Bound   pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0   1Gi   RWO   ocs-storagecluster-ceph-rbd   52s
```

Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI).

For information about how to access the Red Hat Ceph Storage CLI, see How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment.

```
[sh-4.x]$ rbd ls ocs-storagecluster-cephblockpool
csi-vol-063b982d-2845-11ed-94bd-0a580a830012
csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp
csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012
csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012

[sh-4.x]$ rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-063b982d-2845-11ed-94bd-0a580a830012
There are 4 metadata on this image:

Key                                Value
csi.ceph.com/cluster/name          6cd7a18d-7363-4830-ad5c-f7b96927f026
csi.storage.k8s.io/pv/name         pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0
csi.storage.k8s.io/pvc/name        rbd-pvc-clone
csi.storage.k8s.io/pvc/namespace   openshift-storage
```
Verify the metadata for RBD snapshots:

Create a snapshot.

```
$ cat <<EOF | oc create -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: rbd-pvc
EOF
volumesnapshot.snapshot.storage.k8s.io/rbd-pvc-snapshot created
```

Check the status of the snapshot.

```
$ oc get volumesnapshot
NAME               READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                            SNAPSHOTCONTENT                                    CREATIONTIME   AGE
rbd-pvc-snapshot   true         rbd-pvc                             1Gi           ocs-storagecluster-rbdplugin-snapclass   snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f   17s            18s
```

Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI).

For information about how to access the Red Hat Ceph Storage CLI, see How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment.

```
[sh-4.x]$ rbd ls ocs-storagecluster-cephblockpool
csi-snap-a1e24408-2848-11ed-94bd-0a580a830012
csi-vol-063b982d-2845-11ed-94bd-0a580a830012
csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp
csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012
csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012

[sh-4.x]$ rbd image-meta ls ocs-storagecluster-cephblockpool/csi-snap-a1e24408-2848-11ed-94bd-0a580a830012
There are 4 metadata on this image:

Key                                             Value
csi.ceph.com/cluster/name                       6cd7a18d-7363-4830-ad5c-f7b96927f026
csi.storage.k8s.io/volumesnapshot/name          rbd-pvc-snapshot
csi.storage.k8s.io/volumesnapshot/namespace     openshift-storage
csi.storage.k8s.io/volumesnapshotcontent/name   snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f
```
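Note that a snapshot image carries volumesnapshot keys instead of the pv/pvc keys shown for PVC-backed images. A small Python sketch, a hypothetical helper rather than anything shipped with the product, that distinguishes the two kinds of metadata by the keys present:

```python
def classify(meta):
    """Return 'snapshot' or 'volume' based on which CSI metadata keys are present."""
    if "csi.storage.k8s.io/volumesnapshot/name" in meta:
        return "snapshot"
    if "csi.storage.k8s.io/pvc/name" in meta:
        return "volume"
    return "unknown"

# Metadata from the csi-snap image in the example output above.
snap_meta = {
    "csi.ceph.com/cluster/name": "6cd7a18d-7363-4830-ad5c-f7b96927f026",
    "csi.storage.k8s.io/volumesnapshot/name": "rbd-pvc-snapshot",
    "csi.storage.k8s.io/volumesnapshot/namespace": "openshift-storage",
    "csi.storage.k8s.io/volumesnapshotcontent/name": "snapcontent-b992b782-7174-4101-8fe3-e6e478eb2c8f",
}

print(classify(snap_meta))  # snapshot
```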
Verify the metadata for RBD restores:

Restore a volume snapshot.

```
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
persistentvolumeclaim/rbd-pvc-restore created
```

Check the status of the restored volume snapshot.

```
$ oc get pvc | grep rbd
db-noobaa-db-pg-0   Bound   pvc-615e2027-78cd-4ea2-a341-fdedd50c5208   50Gi   RWO   ocs-storagecluster-ceph-rbd   51m
rbd-pvc             Bound   pvc-30628fa8-2966-499c-832d-a6a3a8ebc594   1Gi    RWO   ocs-storagecluster-ceph-rbd   47m
rbd-pvc-clone       Bound   pvc-0d72afda-f433-4d46-a7f1-a5fcb3d766e0   1Gi    RWO   ocs-storagecluster-ceph-rbd   32m
rbd-pvc-restore     Bound   pvc-f900e19b-3924-485c-bb47-01b84c559034   1Gi    RWO   ocs-storagecluster-ceph-rbd   111s
```

Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI).

For information about how to access the Red Hat Ceph Storage CLI, see How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment.

```
[sh-4.x]$ rbd ls ocs-storagecluster-cephblockpool
csi-snap-a1e24408-2848-11ed-94bd-0a580a830012
csi-vol-063b982d-2845-11ed-94bd-0a580a830012
csi-vol-063b982d-2845-11ed-94bd-0a580a830012-temp
csi-vol-5f6e0737-2849-11ed-94bd-0a580a830012
csi-vol-7d67bfad-2842-11ed-94bd-0a580a830012
csi-vol-ed5ce27b-2842-11ed-94bd-0a580a830012

[sh-4.x]$ rbd image-meta ls ocs-storagecluster-cephblockpool/csi-vol-5f6e0737-2849-11ed-94bd-0a580a830012
There are 4 metadata on this image:

Key                                Value
csi.ceph.com/cluster/name          6cd7a18d-7363-4830-ad5c-f7b96927f026
csi.storage.k8s.io/pv/name         pvc-f900e19b-3924-485c-bb47-01b84c559034
csi.storage.k8s.io/pvc/name        rbd-pvc-restore
csi.storage.k8s.io/pvc/namespace   openshift-storage
```
Verify the metadata for CephFS PVCs:

Create a PVC.

```
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```

Check the status of the PVC.

```
$ oc get pvc | grep cephfs
cephfs-pvc   Bound   pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9   1Gi   RWO   ocs-storagecluster-cephfs   11s
```

Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI).

For information about how to access the Red Hat Ceph Storage CLI, see How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment.

```
$ ceph fs volume ls
[
    {
        "name": "ocs-storagecluster-cephfilesystem"
    }
]

$ ceph fs subvolumegroup ls ocs-storagecluster-cephfilesystem
[
    {
        "name": "csi"
    }
]

$ ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi
[
    {
        "name": "csi-vol-25266061-284c-11ed-95e0-0a580a810215"
    }
]

$ ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 --group_name=csi --format=json
{
    "csi.ceph.com/cluster/name": "6cd7a18d-7363-4830-ad5c-f7b96927f026",
    "csi.storage.k8s.io/pv/name": "pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9",
    "csi.storage.k8s.io/pvc/name": "cephfs-pvc",
    "csi.storage.k8s.io/pvc/namespace": "openshift-storage"
}
```
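Because the CephFS subvolume metadata is available as JSON with `--format=json`, it is easy to consume programmatically. A short Python sketch, for illustration only, that parses output like the example above (here inlined as a string; in practice you would capture it from the CLI or the toolbox pod) and prints the PVC the subvolume belongs to:

```python
import json

# JSON output of `ceph fs subvolume metadata ls ... --format=json`,
# copied from the example above.
raw = """{
    "csi.ceph.com/cluster/name": "6cd7a18d-7363-4830-ad5c-f7b96927f026",
    "csi.storage.k8s.io/pv/name": "pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9",
    "csi.storage.k8s.io/pvc/name": "cephfs-pvc",
    "csi.storage.k8s.io/pvc/namespace": "openshift-storage"
}"""

meta = json.loads(raw)
pvc_ref = f"{meta['csi.storage.k8s.io/pvc/namespace']}/{meta['csi.storage.k8s.io/pvc/name']}"
print(pvc_ref)  # openshift-storage/cephfs-pvc
```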
Verify the metadata for CephFS clones:

Create a clone.

```
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-clone
spec:
  storageClassName: ocs-storagecluster-cephfs
  dataSource:
    name: cephfs-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
persistentvolumeclaim/cephfs-pvc-clone created
```

Check the status of the clone.

```
$ oc get pvc | grep cephfs
cephfs-pvc         Bound   pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9   1Gi   RWO   ocs-storagecluster-cephfs   9m5s
cephfs-pvc-clone   Bound   pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce   1Gi   RWX   ocs-storagecluster-cephfs   20s
```

Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI).

For information about how to access the Red Hat Ceph Storage CLI, see How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment.

```
[rook@rook-ceph-tools-c99fd8dfc-6sdbg /]$ ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi
[
    {
        "name": "csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215"
    },
    {
        "name": "csi-vol-25266061-284c-11ed-95e0-0a580a810215"
    }
]

[rook@rook-ceph-tools-c99fd8dfc-6sdbg /]$ ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215 --group_name=csi --format=json
{
    "csi.ceph.com/cluster/name": "6cd7a18d-7363-4830-ad5c-f7b96927f026",
    "csi.storage.k8s.io/pv/name": "pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce",
    "csi.storage.k8s.io/pvc/name": "cephfs-pvc-clone",
    "csi.storage.k8s.io/pvc/namespace": "openshift-storage"
}
```
Verify the metadata for CephFS volume snapshots:

Create a volume snapshot.

```
$ cat <<EOF | oc create -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: cephfs-pvc-snapshot
spec:
  volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass
  source:
    persistentVolumeClaimName: cephfs-pvc
EOF
volumesnapshot.snapshot.storage.k8s.io/cephfs-pvc-snapshot created
```

Check the status of the volume snapshot.

```
$ oc get volumesnapshot
NAME                  READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                               SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot   true         cephfs-pvc                           1Gi           ocs-storagecluster-cephfsplugin-snapclass   snapcontent-f0f17463-d13b-4e13-b44e-6340bbb3bee0   9s             9s
```

Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI).

For information about how to access the Red Hat Ceph Storage CLI, see How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment.

```
$ ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 --group_name csi
[
    {
        "name": "csi-snap-06336f4e-284e-11ed-95e0-0a580a810215"
    }
]

$ ceph fs subvolume snapshot metadata ls ocs-storagecluster-cephfilesystem csi-vol-25266061-284c-11ed-95e0-0a580a810215 csi-snap-06336f4e-284e-11ed-95e0-0a580a810215 --group_name=csi --format=json
{
    "csi.ceph.com/cluster/name": "6cd7a18d-7363-4830-ad5c-f7b96927f026",
    "csi.storage.k8s.io/volumesnapshot/name": "cephfs-pvc-snapshot",
    "csi.storage.k8s.io/volumesnapshot/namespace": "openshift-storage",
    "csi.storage.k8s.io/volumesnapshotcontent/name": "snapcontent-f0f17463-d13b-4e13-b44e-6340bbb3bee0"
}
```
Verify the metadata for CephFS restores:

Restore a volume snapshot.

```
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-restore
spec:
  storageClassName: ocs-storagecluster-cephfs
  dataSource:
    name: cephfs-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
persistentvolumeclaim/cephfs-pvc-restore created
```

Check the status of the restored volume snapshot.

```
$ oc get pvc | grep cephfs
cephfs-pvc           Bound   pvc-4151128c-86f0-468b-b6e7-5fdfb51ba1b9   1Gi   RWO   ocs-storagecluster-cephfs   29m
cephfs-pvc-clone     Bound   pvc-3d4c4e78-f7d5-456a-aa6e-4da4a05ca4ce   1Gi   RWX   ocs-storagecluster-cephfs   20m
cephfs-pvc-restore   Bound   pvc-43d55ea1-95c0-42c8-8616-4ee70b504445   1Gi   RWX   ocs-storagecluster-cephfs   21s
```

Verify the metadata in the Red Hat Ceph Storage command-line interface (CLI).

For information about how to access the Red Hat Ceph Storage CLI, see How to access Red Hat Ceph Storage CLI in Red Hat OpenShift Data Foundation environment.

```
$ ceph fs subvolume ls ocs-storagecluster-cephfilesystem --group_name csi
[
    {
        "name": "csi-vol-3536db13-2850-11ed-95e0-0a580a810215"
    },
    {
        "name": "csi-vol-5ea23eb0-284d-11ed-95e0-0a580a810215"
    },
    {
        "name": "csi-vol-25266061-284c-11ed-95e0-0a580a810215"
    }
]

$ ceph fs subvolume metadata ls ocs-storagecluster-cephfilesystem csi-vol-3536db13-2850-11ed-95e0-0a580a810215 --group_name=csi --format=json
{
    "csi.ceph.com/cluster/name": "6cd7a18d-7363-4830-ad5c-f7b96927f026",
    "csi.storage.k8s.io/pv/name": "pvc-43d55ea1-95c0-42c8-8616-4ee70b504445",
    "csi.storage.k8s.io/pvc/name": "cephfs-pvc-restore",
    "csi.storage.k8s.io/pvc/namespace": "openshift-storage"
}
```