1.10. Converting the storage cluster to run with cephadm
After upgrading the storage cluster to Red Hat Ceph Storage 5, run the cephadm-adopt playbook to convert the storage cluster daemons to run with cephadm.
The cephadm-adopt playbook adopts the Ceph services, installs all cephadm dependencies, enables the cephadm orchestrator backend, generates and configures SSH keys on all hosts, and adds the hosts to the orchestrator configuration.
After running the cephadm-adopt playbook, remove the ceph-ansible package. The cluster daemons no longer work with ceph-ansible. You must use cephadm to manage the cluster daemons.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all nodes in the storage cluster.
Procedure
- Log in to the ceph-ansible node and change directory to /usr/share/ceph-ansible.
- Edit the all.yml file.

Syntax
ceph_origin: custom/rhcs
ceph_custom_repositories:
  - name: NAME
    state: present
    description: DESCRIPTION
    gpgcheck: 'no'
    baseurl: BASE_URL
    file: FILE_NAME
    priority: '2'
    enabled: 1
Example
ceph_origin: custom
ceph_custom_repositories:
  - name: ceph_custom
    state: present
    description: Ceph custom repo
    gpgcheck: 'no'
    baseurl: https://example.ceph.redhat.com
    file: cephbuild
    priority: '2'
    enabled: 1
  - name: ceph_custom_1
    state: present
    description: Ceph custom repo 1
    gpgcheck: 'no'
    baseurl: https://example.ceph.redhat.com
    file: cephbuild_1
    priority: '2'
    enabled: 1
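Before running the playbook, it can be worth confirming that the custom repositories defined above actually resolve on a target host. A minimal sketch, assuming the repository name ceph_custom from the example and a dnf-based host; the exact output format varies:

```shell
# List enabled repositories and filter for the custom Ceph entries
# defined in ceph_custom_repositories above.
dnf repolist enabled | grep -i ceph_custom

# Optionally verify that Ceph packages are visible from that repository alone.
dnf --disablerepo='*' --enablerepo=ceph_custom list available 'ceph*' | head
```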
- Run the cephadm-adopt playbook:

Syntax
ansible-playbook infrastructure-playbooks/cephadm-adopt.yml -i INVENTORY_FILE
Example
[ceph-admin@admin ceph-ansible]$ ansible-playbook infrastructure-playbooks/cephadm-adopt.yml -i hosts
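Once the playbook finishes, you can check that the orchestrator has taken over the daemons. A quick verification sketch, run from any node with cephadm installed:

```shell
# Confirm the orchestrator backend is cephadm and available.
cephadm shell -- ceph orch status

# List the adopted daemons; they should now be reported by the orchestrator.
cephadm shell -- ceph orch ps
```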
- Set the minimum compat client parameter to luminous:

Example
[ceph: root@node0 /]# ceph osd set-require-min-compat-client luminous
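The setting can be read back with the get counterpart of the command above:

```shell
# Should report luminous if the set command above succeeded.
ceph osd get-require-min-compat-client
```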
- Run the following command to enable applications to run on the NFS-Ganesha pool, where POOL_NAME is nfs-ganesha, and APPLICATION_NAME is the name of the application you want to enable, such as cephfs, rbd, or rgw:

Syntax
ceph osd pool application enable POOL_NAME APPLICATION_NAME
Example
[ceph: root@node0 /]# ceph osd pool application enable nfs-ganesha rgw
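You can verify that the application tag was applied by reading it back from the pool (pool name nfs-ganesha as in the example above):

```shell
# Show the applications enabled on the pool; rgw should appear in the output.
ceph osd pool application get nfs-ganesha
```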
Important: After migrating the storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the cephadm-adopt playbook does not bring up rbd-mirror. To work around this issue, add the peers manually:

Syntax
rbd mirror pool peer add POOL_NAME CLIENT_NAME@CLUSTER_NAME
Example
[ceph: root@node0 /]# rbd --cluster site-a mirror pool peer add image-pool client.rbd-mirror-peer@site-b
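After adding the peer, the mirroring configuration can be checked from the same node; pool and cluster names here follow the example above:

```shell
# Show the mirroring mode and configured peers for the pool on site-a.
rbd --cluster site-a mirror pool info image-pool

# Check mirroring health for the pool.
rbd --cluster site-a mirror pool status image-pool
```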
- Remove Grafana after the upgrade:
Log in to the Cephadm shell:

Example
[root@host01 ~]# cephadm shell
Fetch the name of Grafana in the storage cluster:

Example
[ceph: root@host01 /]# ceph orch ps --daemon_type grafana
Remove Grafana:

Syntax
ceph orch daemon rm GRAFANA_DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch daemon rm grafana.host01
Removed grafana.host01 from host 'host01'
Wait a few minutes and check the latest log:

Example
[ceph: root@host01 /]# ceph log last cephadm
cephadm redeploys the Grafana service and daemon.
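Once the redeployment finishes, the Grafana daemon should be listed again; this reuses the same query as earlier in the procedure:

```shell
# The grafana daemon should reappear with a running status after redeployment.
ceph orch ps --daemon_type grafana
```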