OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
5.3. Deploying Red Hat Openshift Container Storage in independent mode
In the inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:

```
[OSEv3:children]
masters
etcd
nodes
glusterfs
```
In the [OSEv3:vars] section, include the following variables, adjusting them as required for your configuration.

Note: openshift_storage_glusterfs_block_host_vol_size takes an integer, which is the size of the volume in Gi.
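The exact variable set depends on your deployment. As an illustrative sketch only (the variable names below come from the openshift-ansible GlusterFS role; verify names and values against the documentation for your release before use), an independent-mode configuration typically includes entries such as:

```
[OSEv3:vars]
# Namespace in which heketi and related resources are deployed (illustrative value)
openshift_storage_glusterfs_namespace=app-storage
# Create a StorageClass for file volumes
openshift_storage_glusterfs_storageclass=true
# Deploy gluster-block support and create block-hosting volumes
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_create=true
# Size of block-hosting volumes in Gi (integer, as noted above)
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
# Independent mode: GlusterFS runs outside the cluster, heketi runs inside it
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=true
# heketi manages the external nodes over SSH
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=root
openshift_storage_glusterfs_heketi_ssh_sudo=false
openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
```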
Add a [glusterfs] section with entries for each storage node that will host GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. At least one device must be listed, and each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP address of the node. Specify each entry in the following form:

```
<hostname_or_ip> glusterfs_zone=<zone_number> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
```
For example:
```
[glusterfs]
gluster1.example.com glusterfs_zone=1 glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster2.example.com glusterfs_zone=2 glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster3.example.com glusterfs_zone=3 glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
```
The preceding steps detail options to be added to a larger, complete inventory file. To deploy GlusterFS using the complete inventory file, provide the file path as an option to the appropriate playbooks:
For an initial OpenShift Container Platform installation:
```
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
```

For a standalone installation onto an existing OpenShift Container Platform cluster:
```
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
```
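After the playbook completes, one quick sanity check (not part of the original procedure) is to list the StorageClasses it created. The class names depend on your inventory variables; glusterfs-storage and glusterfs-storage-block are the openshift-ansible defaults:

```
oc get storageclass
```

If file and block StorageClasses were both requested, both should appear in the output.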
Brick multiplexing is a feature that allows multiple bricks to be added into one process. This reduces resource consumption and makes it possible to run more bricks than before with the same memory consumption. On one of the Red Hat Gluster Storage nodes in each cluster, execute the following command to enable brick multiplexing:
```
# gluster vol set all cluster.brick-multiplex on
```

For example:

```
# gluster vol set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent or Converged mode).
Also it is advised to make sure that either all volumes are in stopped state or no bricks
are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
```
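To confirm the option took effect, you can query the global option value (assuming a Gluster release that supports reading cluster-wide options with "volume get all"; this verification step is not part of the original procedure):

```
# gluster volume get all cluster.brick-multiplex
```

The output should report cluster.brick-multiplex as "on".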
Restart the heketidbstorage volume:
```
# gluster vol stop heketidbstorage
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: heketidbstorage: success
```
```
# gluster vol start heketidbstorage
volume start: heketidbstorage: success
```
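As a final sanity check (an optional step beyond the original procedure), you can verify that the volume's bricks came back online after the restart:

```
# gluster vol status heketidbstorage
```

Each brick in the status output should show "Y" in the Online column.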