Chapter 2. Red Hat Gluster Storage Pod Operations in an OpenShift Environment
This chapter lists the various operations that can be performed on a Red Hat Gluster Storage pod (gluster pod):
To list the pods, run the following command:
# oc get pods -n <storage_project_name>

For example:
# oc get pods -n storage-project
NAME                              READY   STATUS    RESTARTS   AGE
storage-project-router-1-v89qc    1/1     Running   0          1d
glusterfs-dc-node1.example.com    1/1     Running   0          1d
glusterfs-dc-node2.example.com    1/1     Running   1          1d
glusterfs-dc-node3.example.com    1/1     Running   0          1d
heketi-1-k1u14                    1/1     Running   0          23m

Following are the gluster pods from the above example:
glusterfs-dc-node1.example.com
glusterfs-dc-node2.example.com
glusterfs-dc-node3.example.com

Note: The topology.json file provides the details of the nodes in a given trusted storage pool (TSP). In the above example, all the three Red Hat Gluster Storage nodes are from the same TSP.
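The gluster pods can also be picked out of a pod listing mechanically, which is handy in scripts. A minimal sketch, assuming the gluster pod names start with "glusterfs" as in the example above; the listing is hard-coded here for illustration, in practice substitute the real oc get pods output:

```shell
#!/bin/sh
# Sample output as produced by: oc get pods -n storage-project
# (hard-coded for illustration; in practice pipe the real command in)
pods='NAME                              READY   STATUS    RESTARTS   AGE
storage-project-router-1-v89qc    1/1     Running   0          1d
glusterfs-dc-node1.example.com    1/1     Running   0          1d
glusterfs-dc-node2.example.com    1/1     Running   1          1d
glusterfs-dc-node3.example.com    1/1     Running   0          1d
heketi-1-k1u14                    1/1     Running   0          23m'

# Keep only the first column of the rows whose name starts with "glusterfs"
printf '%s\n' "$pods" | awk '$1 ~ /^glusterfs/ { print $1 }'
```

The same filter works with a different naming convention by adjusting the awk pattern to whatever prefix or label your deployment uses.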
To enter the gluster pod shell, run the following command:
# oc rsh <gluster_pod_name> -n <storage_project_name>

For example:
# oc rsh glusterfs-dc-node1.example.com -n storage-project
sh-4.2#

To get the peer status, run the following command:
# gluster peer status

For example:
# gluster peer status
Number of Peers: 2

Hostname: node2.example.com
Uuid: 9f3f84d2-ef8e-4d6e-aa2c-5e0370a99620
State: Peer in Cluster (Connected)
Other names:
node1.example.com

Hostname: node3.example.com
Uuid: 38621acd-eb76-4bd8-8162-9c2374affbbd
State: Peer in Cluster (Connected)

To list the gluster volumes on the trusted storage pool, run the following command:
# gluster volume info

For example:
Volume Name: heketidbstorage
Type: Distributed-Replicate
Volume ID: 2fa53b28-121d-4842-9d2f-dce1b0458fda
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.121.172:/var/lib/heketi/mounts/vg_1be433737b71419dc9b395e221255fb3/brick_c67fb97f74649d990c5743090e0c9176/brick
Brick2: 192.168.121.233:/var/lib/heketi/mounts/vg_0013ee200cdefaeb6dfedd28e50fd261/brick_6ebf1ee62a8e9e7a0f88e4551d4b2386/brick
Brick3: 192.168.121.168:/var/lib/heketi/mounts/vg_e4b32535c55c88f9190da7b7efd1fcab/brick_df5db97aa002d572a0fec6bcf2101aad/brick
Brick4: 192.168.121.233:/var/lib/heketi/mounts/vg_0013ee200cdefaeb6dfedd28e50fd261/brick_acc82e56236df912e9a1948f594415a7/brick
Brick5: 192.168.121.168:/var/lib/heketi/mounts/vg_e4b32535c55c88f9190da7b7efd1fcab/brick_65dceb1f749ec417533ddeae9535e8be/brick
Brick6: 192.168.121.172:/var/lib/heketi/mounts/vg_7ad961dbd24e16d62cabe10fd8bf8909/brick_f258450fc6f025f99952a6edea203859/brick
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: vol_9e86c0493f6b1be648c9deee1dc226a6
Type: Distributed-Replicate
Volume ID: 940177c3-d866-4e5e-9aa0-fc9be94fc0f4
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.121.168:/var/lib/heketi/mounts/vg_3fa141bf2d09d30b899f2f260c494376/brick_9fb4a5206bdd8ac70170d00f304f99a5/brick
Brick2: 192.168.121.172:/var/lib/heketi/mounts/vg_7ad961dbd24e16d62cabe10fd8bf8909/brick_dae2422d518915241f74fd90b426a379/brick
Brick3: 192.168.121.233:/var/lib/heketi/mounts/vg_5c6428c439eb6686c5e4cee56532bacf/brick_b3768ba8e80863724c9ec42446ea4812/brick
Brick4: 192.168.121.172:/var/lib/heketi/mounts/vg_7ad961dbd24e16d62cabe10fd8bf8909/brick_0a13958525c6343c4a7951acec199da0/brick
Brick5: 192.168.121.168:/var/lib/heketi/mounts/vg_17fbc98d84df86756e7826326fb33aa4/brick_af42af87ad87ab4f01e8ca153abbbee9/brick
Brick6: 192.168.121.233:/var/lib/heketi/mounts/vg_5c6428c439eb6686c5e4cee56532bacf/brick_ef41e04ca648efaf04178e64d25dbdcb/brick
Options Reconfigured:
performance.readdir-ahead: on

To get the volume status, run the following command:
# gluster volume status <volname>

For example:
# gluster volume status vol_9e86c0493f6b1be648c9deee1dc226a6
Status of volume: vol_9e86c0493f6b1be648c9deee1dc226a6
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.121.168:/var/lib/heketi/mounts/v
g_3fa141bf2d09d30b899f2f260c494376/brick_9f
b4a5206bdd8ac70170d00f304f99a5/brick        49154     0          Y       3462
Brick 192.168.121.172:/var/lib/heketi/mounts/v
g_7ad961dbd24e16d62cabe10fd8bf8909/brick_da
e2422d518915241f74fd90b426a379/brick        49154     0          Y       115939
Brick 192.168.121.233:/var/lib/heketi/mounts/v
g_5c6428c439eb6686c5e4cee56532bacf/brick_b3
768ba8e80863724c9ec42446ea4812/brick        49154     0          Y       116134
Brick 192.168.121.172:/var/lib/heketi/mounts/v
g_7ad961dbd24e16d62cabe10fd8bf8909/brick_0a
13958525c6343c4a7951acec199da0/brick        49155     0          Y       115958
Brick 192.168.121.168:/var/lib/heketi/mounts/v
g_17fbc98d84df86756e7826326fb33aa4/brick_af
42af87ad87ab4f01e8ca153abbbee9/brick        49155     0          Y       3481
Brick 192.168.121.233:/var/lib/heketi/mounts/v
g_5c6428c439eb6686c5e4cee56532bacf/brick_ef
41e04ca648efaf04178e64d25dbdcb/brick        49155     0          Y       116153
NFS Server on localhost                     2049      0          Y       116173
Self-heal Daemon on localhost               N/A       N/A        Y       116181
NFS Server on node1.example.com             2049      0          Y       3501
Self-heal Daemon on node1.example.com       N/A       N/A        Y       3509
NFS Server on 192.168.121.172               2049      0          Y       115978
Self-heal Daemon on 192.168.121.172         N/A       N/A        Y       115986

Task Status of Volume vol_9e86c0493f6b1be648c9deee1dc226a6
------------------------------------------------------------------------------
There are no active volume tasks

To take advantage of the snapshot feature, load the snapshot module using the following command on one of the nodes:
# modprobe dm_snapshot

Important: Restrictions for using snapshots
- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy files into the required location.
- Restoring a volume to a snapshot state is not supported and should never be performed, as it may damage the consistency of the data.
- On a volume with snapshots, volume changing operations, such as volume expansion, must not be performed.
- Consistent snapshots of gluster-block based PVs are not possible.
To take a snapshot of a gluster volume, run the following command:
# gluster snapshot create <snapname> <volname>

For example:
# gluster snapshot create snap1 vol_9e86c0493f6b1be648c9deee1dc226a6
snapshot create: success: Snap snap1_GMT-2016.07.29-13.05.46 created successfully

To list snapshots, run the following command:
# gluster snapshot list

For example:
# gluster snapshot list
snap1_GMT-2016.07.29-13.05.46
snap2_GMT-2016.07.29-13.06.13
snap3_GMT-2016.07.29-13.06.18
snap4_GMT-2016.07.29-13.06.22
snap5_GMT-2016.07.29-13.06.26

To delete a snapshot, run the following command:
# gluster snap delete <snapname>

For example:
# gluster snap delete snap1_GMT-2016.07.29-13.05.46
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1_GMT-2016.07.29-13.05.46: snap removed successfully

For more information about managing snapshots, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#chap-Managing_Snapshots.
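Because snapshot names carry the creation time as a GMT-YYYY.MM.DD-HH.MM.SS suffix, rotation scripts can sort snapshots by age without extra gluster calls, since the zero-padded timestamp sorts lexicographically. A minimal sketch against the sample listing above; the list is hard-coded for illustration, in practice substitute the output of gluster snapshot list:

```shell
#!/bin/sh
# Sample output of: gluster snapshot list (hard-coded for illustration)
snaps='snap1_GMT-2016.07.29-13.05.46
snap2_GMT-2016.07.29-13.06.13
snap3_GMT-2016.07.29-13.06.18
snap4_GMT-2016.07.29-13.06.22
snap5_GMT-2016.07.29-13.06.26'

# Sort on the timestamp suffix (field 2 after "_") and report the
# oldest snapshot, the usual deletion candidate in a rotation scheme.
oldest=$(printf '%s\n' "$snaps" | sort -t_ -k2 | head -n1)
echo "oldest snapshot: $oldest"
```

A rotation script would then pass this name to gluster snap delete, subject to the snapshot restrictions listed above.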
You can geo-replicate a Red Hat Openshift Container Storage volume to a non-Red Hat Openshift Container Storage remote site. Geo-replication uses a master-slave model. Here, the Red Hat Openshift Container Storage volume acts as the master volume. To set up geo-replication, you must run the geo-replication commands on gluster pods. To enter the gluster pod shell, run the following command:
# oc rsh <gluster_pod_name> -n <storage_project_name>

For more information about setting up geo-replication, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_geo-replication.
Brick multiplexing is a feature that allows including multiple bricks into one process. This reduces resource consumption, allowing you to run more bricks than before with the same memory consumption.
Brick multiplexing is enabled by default from Container-Native Storage 3.6. If you want to turn it off, run the following command:
# gluster volume set all cluster.brick-multiplex off

The auto_unmount option in glusterfs libfuse, when enabled, ensures that the file system is unmounted at FUSE server termination by running a separate monitor process that performs the unmount. The GlusterFS plugin in Openshift enables the auto_unmount option for gluster mounts.
2.1. Maintenance on nodes
2.1.1. Necessary steps to be followed before maintenance
Remove the label glusterfs or equivalent label, which is the selector for the glusterfs daemonset. Wait for the pods to terminate.

Get the node selector by running the following command:

# oc get ds

For example:
# oc get ds
NAME                DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
glusterfs-storage   3         3         3         3            3           glusterfs=storage-host   12d

Remove the glusterfs label using the following command.
# oc label node <storage_node1> glusterfs-

For example:
# oc label node <storage_node1> glusterfs-
node/<storage_node1> labeled

Wait for the glusterfs pods to terminate. Verify using the following command.
# oc get pods -l glusterfs

For example:
# oc get pods -l glusterfs
NAME                               READY   STATUS        RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running       0          7m
glusterfs-storage-4tc9c            1/1     Terminating   0          5m
glusterfs-storage-htrfg            1/1     Running       0          1d
glusterfs-storage-z75bc            1/1     Running       0          1d
heketi-storage-1-shgrr             1/1     Running       0          1d
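The wait for termination can be scripted by counting pods that are still in Terminating state in such a listing. A minimal sketch, parsing the captured sample above; the listing is hard-coded for illustration, a real wait loop would re-run oc get pods -l glusterfs until the count reaches zero:

```shell
#!/bin/sh
# Sample output of: oc get pods -l glusterfs (hard-coded for illustration)
listing='NAME                               READY   STATUS        RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running       0          7m
glusterfs-storage-4tc9c            1/1     Terminating   0          5m
glusterfs-storage-htrfg            1/1     Running       0          1d
glusterfs-storage-z75bc            1/1     Running       0          1d
heketi-storage-1-shgrr             1/1     Running       0          1d'

# Count glusterfs-storage pods whose STATUS column is still Terminating;
# it is safe to proceed with maintenance once this reaches zero.
terminating=$(printf '%s\n' "$listing" \
    | awk '$1 ~ /^glusterfs-storage/ && $3 == "Terminating"' | wc -l)
echo "pods still terminating: $terminating"
```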
Make the node unschedulable using the following command.
# oc adm manage-node --schedulable=false <storage_node1>

For example:
# oc adm manage-node --schedulable=false <storage_node1>
NAME            STATUS                     ROLES     AGE   VERSION
storage_node1   Ready,SchedulingDisabled   compute   12d   v1.11.0+d4cacc0

Drain the node using the following command.
# oc adm drain --ignore-daemonsets <storage_node1>

Note: Perform the maintenance and reboot, if required.
2.1.2. Necessary steps to be followed after maintenance
Make the node schedulable using the following command.
# oc adm manage-node --schedulable=true <storage_node1>

For example:
# oc adm manage-node --schedulable=true <storage_node1>
NAME    STATUS   ROLES     AGE   VERSION
node1   Ready    compute   12d   v1.11.0+d4cacc0

Add the label glusterfs or equivalent label, which is the selector for the glusterfs daemonset. Wait for the pods to be ready.

Get the node selector by running the following command:

# oc get ds

For example:
# oc get ds
NAME                DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
glusterfs-storage   3         3         3         3            3           glusterfs=storage-host   12d

Label the glusterfs node using the above node selector and the following command.
# oc label node <storage_node1> glusterfs=storage-host

For example:
# oc label node <storage_node1> glusterfs=storage-host
node/<storage_node1> labeled

Wait for the pods to be in Ready state.
# oc get pods

For example:
# oc get pods
NAME                               READY   STATUS    RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running   0          3m
glusterfs-storage-4tc9c            0/1     Running   0          50s
glusterfs-storage-htrfg            1/1     Running   0          1d
glusterfs-storage-z75bc            1/1     Running   0          1d
heketi-storage-1-shgrr             1/1     Running   0          1d

Wait for the pods to be in 1/1 Ready state.
For example:
# oc get pods
NAME                               READY   STATUS    RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running   0          3m
glusterfs-storage-4tc9c            1/1     Running   0          58s
glusterfs-storage-htrfg            1/1     Running   0          1d
glusterfs-storage-z75bc            1/1     Running   0          1d
heketi-storage-1-shgrr             1/1     Running   0          1d
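The 1/1 Ready check can likewise be automated by comparing both halves of the READY column and the STATUS column. A minimal sketch against the captured sample above; the listing is hard-coded for illustration, a real wait loop would re-run oc get pods until no pod fails the check:

```shell
#!/bin/sh
# Sample output of: oc get pods (hard-coded for illustration)
listing='NAME                               READY   STATUS    RESTARTS   AGE
glusterblock-storage-provisioner   1/1     Running   0          3m
glusterfs-storage-4tc9c            1/1     Running   0          58s
glusterfs-storage-htrfg            1/1     Running   0          1d
glusterfs-storage-z75bc            1/1     Running   0          1d
heketi-storage-1-shgrr             1/1     Running   0          1d'

# A pod is ready when the READY column reads n/n and STATUS is Running;
# print the names of pods that fail the check and count them.
not_ready=$(printf '%s\n' "$listing" | awk 'NR > 1 {
    split($2, r, "/")
    if (r[1] != r[2] || $3 != "Running") print $1
}' | wc -l)
echo "pods not ready: $not_ready"
```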
Wait for heal to complete. Use oc rsh to obtain the shell of the glusterfs pod, monitor heal using the following command, and wait until Number of entries is zero (0).
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done

For example:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done
Brick 10.70.46.210:/var/lib/heketi/mounts/vg_64e90b4b94174f19802a8026f652f6d7/brick_564f7725cef192f0fd2ba1422ecbf590/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.243:/var/lib/heketi/mounts/vg_4fadbf84bbc67873543472655e9660ec/brick_9c9c8c64c48d24c91948bc810219c945/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.224:/var/lib/heketi/mounts/vg_9fbaf0c06495e66f5087a51ad64e54c3/brick_75e40df81383a03b1778399dc342e794/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.224:/var/lib/heketi/mounts/vg_9fbaf0c06495e66f5087a51ad64e54c3/brick_e0058f65155769142cec81798962b9a7/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.210:/var/lib/heketi/mounts/vg_64e90b4b94174f19802a8026f652f6d7/brick_3cf035275dc93e0437fdfaea509a3a44/brick
Status: Connected
Number of entries: 0

Brick 10.70.46.243:/var/lib/heketi/mounts/vg_4fadbf84bbc67873543472655e9660ec/brick_2cfd11ce587e622fe800dfaec101e463/brick
Status: Connected
Number of entries: 0
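Rather than eyeballing every brick, the heal output can be reduced to a single number by summing the "Number of entries" lines; heal is complete when the total is zero. A minimal sketch, parsing a shortened hard-coded excerpt of the output above (the truncated paths are placeholders for illustration only); in practice pipe the heal loop's real output into the awk filter:

```shell
#!/bin/sh
# Shortened excerpt of: gluster volume heal <volname> info
# (hard-coded for illustration; ".../brick" stands in for full brick paths)
heal_info='Brick 10.70.46.210:.../brick
Status: Connected
Number of entries: 0

Brick 10.70.46.243:.../brick
Status: Connected
Number of entries: 0'

# Sum the per-brick pending-heal counts across all bricks and volumes.
total=$(printf '%s\n' "$heal_info" \
    | awk '/^Number of entries:/ { sum += $NF } END { print sum + 0 }')
echo "pending heal entries: $total"
```

A wait loop would re-run the heal command and repeat this sum until it reports zero, at which point the node maintenance procedure is finished.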