4.9. Adopting the Block Storage service
To adopt the Block Storage service (cinder) that was deployed with director, create a manifest based on the existing cinder.conf file, deploy the Block Storage services, and validate the new deployment.
Prerequisites
- You have reviewed the Block Storage service limitations. For more information, see Limitations for adopting the Block Storage service.
- You have planned the placement of the Block Storage services.
- You have prepared the Red Hat OpenShift Container Platform (RHOCP) nodes where the volume and backup services run. For more information, see RHOCP preparation for Block Storage service adoption.
- The Block Storage service (cinder) is stopped.
- The service databases are imported into the control plane MariaDB.
- The Identity service (keystone) and the Key Manager service (barbican) are adopted.
- The storage network is correctly configured on the RHOCP cluster.
- You have the contents of the cinder.conf file. Download the file so that you can access it locally:

  $CONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf > cinder.conf
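The backend configuration group names in the downloaded cinder.conf are needed again later in this procedure. As a quick local check, a sketch like the following lists the enabled backends and the INI section names. The sample file written by the heredoc is a hypothetical stand-in; point the grep commands at your real downloaded cinder.conf instead:

```shell
#!/bin/sh
# Sketch: inspect which backend configuration groups the source
# cinder.conf defines. The sample content below is hypothetical;
# replace it with the cinder.conf you downloaded from the controller.
cat > cinder.conf <<'EOF'
[DEFAULT]
enabled_backends = tripleo_ceph
[tripleo_ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
EOF

# Which backends are enabled?
grep -E '^enabled_backends' cinder.conf

# Which configuration groups (INI sections) exist?
grep -E '^\[' cinder.conf
```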
Procedure
Create a new file, such as cinder_api.patch, and apply the configuration:

$ oc patch openstackcontrolplane openstack --type=merge --patch-file=<patch_name>

Replace <patch_name> with the name of your patch file. The following example shows a cinder_api.patch file:

spec:
  extraMounts:
    - extraVol:
        - extraVolType: Ceph
          mounts:
            - mountPath: /etc/ceph
              name: ceph
              readOnly: true
          propagation:
            - CinderVolume
            - CinderBackup
            - Glance
          volumes:
            - name: ceph
              projected:
                sources:
                  - secret:
                      name: ceph-conf-files
  cinder:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: cinder
      secret: osp-secret
      cinderAPI:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 1
        customServiceConfig: |
          [DEFAULT]
          default_volume_type=tripleo
      cinderScheduler:
        replicas: 0
      cinderBackup:
        networkAttachments:
          - storage
        replicas: 0
      cinderVolumes:
        ceph:
          networkAttachments:
            - storage
          replicas: 0

If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example, metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80.
Retrieve the list of the previous scheduler and backup services:

$ openstack volume service list
+------------------+------------------------+------+---------+-------+----------------------------+
| Binary           | Host                   | Zone | Status  | State | Updated At                 |
+------------------+------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | standalone.localdomain | nova | enabled | down  | 2024-11-04T17:47:14.000000 |
| cinder-backup    | standalone.localdomain | nova | enabled | down  | 2024-11-04T17:47:14.000000 |
| cinder-volume    | hostgroup@tripleo_ceph | nova | enabled | down  | 2024-11-04T17:47:14.000000 |
+------------------+------------------------+------+---------+-------+----------------------------+

Remove the services for the hosts that are in the down state:

$ oc exec -t cinder-api-0 -c cinder-api -- cinder-manage service remove <service_binary> <service_host>

- Replace <service_binary> with the name of the binary, for example, cinder-backup.
- Replace <service_host> with the host name, for example, cinder-backup-0.
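If several services are down, generating the removal commands can be scripted. The following sketch reads "<binary> <host> <state>" lines and prints one cinder-manage removal command per down service. The heredoc sample is hypothetical; on a live cluster you could feed it from openstack volume service list -f value -c Binary -c Host -c State instead:

```shell
#!/bin/sh
# Sketch: turn service-list rows into "cinder-manage service remove"
# commands for every service whose state is "down".
gen_remove_cmds() {
  # stdin: one "<binary> <host> <state>" row per line
  while read -r binary host state; do
    if [ "$state" = "down" ]; then
      echo "oc exec -t cinder-api-0 -c cinder-api -- cinder-manage service remove $binary $host"
    fi
  done
}

# Hypothetical sample input; in practice pipe in:
#   openstack volume service list -f value -c Binary -c Host -c State | gen_remove_cmds
gen_remove_cmds <<'EOF'
cinder-scheduler standalone.localdomain down
cinder-backup standalone.localdomain down
cinder-volume hostgroup@tripleo_ceph down
EOF
```

Review the printed commands before running them; the sketch only prints, it does not execute anything against the cluster.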
Deploy the scheduler, backup, and volume services. Create another file, such as cinder_services.patch, and apply the configuration:

$ oc patch openstackcontrolplane openstack --type=merge --patch-file=<patch_name>

Replace <patch_name> with the name of your patch file. The following example shows a cinder_services.patch file for a Ceph RBD deployment:

spec:
  cinder:
    enabled: true
    template:
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
          - storage
        replicas: 1
        customServiceConfig: |
          [DEFAULT]
          backup_driver=cinder.backup.drivers.ceph.CephBackupDriver
          backup_ceph_conf=/etc/ceph/ceph.conf
          backup_ceph_user=openstack
          backup_ceph_pool=backups
      cinderVolumes:
        ceph:
          networkAttachments:
            - storage
          replicas: 1
          customServiceConfig: |
            [tripleo_ceph]
            backend_host=hostgroup
            volume_backend_name=tripleo_ceph
            volume_driver=cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf=/etc/ceph/ceph.conf
            rbd_user=openstack
            rbd_pool=volumes
            rbd_flatten_volume_from_snapshot=False
            report_discard_supported=True

Note: Ensure that you use the same configuration group name as the drivers in the source cluster. In this example, the driver configuration group in customServiceConfig is named tripleo_ceph, because it reflects the configuration group name from the cinder.conf file of the source OpenStack cluster.
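The group-name requirement above can be checked mechanically before applying the patch. A minimal sketch, assuming the source cinder.conf was downloaded earlier and the patch is saved as cinder_services.patch (both files below are hypothetical minimal stand-ins for the real ones):

```shell
#!/bin/sh
# Sketch: confirm that the driver configuration group name used in
# the patch also exists as a section in the source cinder.conf.
cat > cinder.conf <<'EOF'
[DEFAULT]
enabled_backends = tripleo_ceph
[tripleo_ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
EOF
cat > cinder_services.patch <<'EOF'
          customServiceConfig: |
            [tripleo_ceph]
            volume_backend_name=tripleo_ceph
EOF

group=tripleo_ceph
if grep -q "^\[$group\]" cinder.conf && grep -q "\[$group\]" cinder_services.patch; then
  echo "group name $group matches"
else
  echo "group name $group mismatch" >&2
  exit 1
fi
```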
Configure the NetApp NFS Block Storage volume service:

Create a secret that contains the sensitive information for access to the third-party NetApp NFS storage, such as the host name, the password, and the user name. You can find the credentials in the cinder.conf file that was generated from the director deployment:

$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  labels:
    service: cinder
    component: cinder-volume
  name: cinder-volume-ontap-secrets
type: Opaque
stringData:
  ontap-cinder-secrets: |
    [tripleo_netapp]
    netapp_login=<netapp_username>
    netapp_password=<netapp_password>
    netapp_vserver=<netapp_vserver>
    nas_host=<netapp_nfsip>
    nas_share_path=/<netapp_nfspath>
    netapp_pool_name_search_pattern=(<netapp_poolpattern>)
EOF

Patch the OpenStackControlPlane CR to deploy the NetApp NFS Block Storage volume back end:

$ oc patch openstackcontrolplane openstack --type=merge --patch-file=<cinder_netappNFS.patch>

The following example shows a cinder_netappNFS.patch file that configures the NetApp NFS Block Storage volume service:

spec:
  cinder:
    enabled: true
    template:
      cinderVolumes:
        ontap-nfs:
          networkAttachments:
            - storage
          customServiceConfig: |
            [tripleo_netapp]
            volume_backend_name=ontap-nfs
            volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
            nfs_snapshot_support=true
            nas_secure_file_operations=false
            nas_secure_file_permissions=false
            netapp_server_hostname=<netapp_backendip>
            netapp_server_port=80
            netapp_storage_protocol=nfs
            netapp_storage_family=ontap_cluster
          customServiceConfigSecrets:
            - cinder-volume-ontap-secrets
Check that all the services are up and running:

$ openstack volume service list
+------------------+--------------------------+------+---------+-------+----------------------------+
| Binary           | Host                     | Zone | Status  | State | Updated At                 |
+------------------+--------------------------+------+---------+-------+----------------------------+
| cinder-volume    | hostgroup@tripleo_netapp | nova | enabled | up    | 2023-06-28T17:00:03.000000 |
| cinder-scheduler | cinder-scheduler-0       | nova | enabled | up    | 2023-06-28T17:00:02.000000 |
| cinder-backup    | cinder-backup-0          | nova | enabled | up    | 2023-06-28T17:00:01.000000 |
+------------------+--------------------------+------+---------+-------+----------------------------+

Apply the DB data migrations:

Note: You do not need to run the data migrations at this step, but you must run them before the next upgrade. However, for adoption, you can run the migrations now to ensure that there are no issues before you run production workloads on the deployment.

$ oc exec -it cinder-scheduler-0 -- cinder-manage db online_data_migrations
Verification

Ensure that the openstack alias is defined:

$ alias openstack="oc exec -t openstackclient -- openstack"

Confirm that the Block Storage service endpoints are defined and point to the control plane FQDNs:

$ openstack endpoint list --service <endpoint>

Replace <endpoint> with the name of the endpoint that you want to confirm.

Confirm that the Block Storage services are running:

$ openstack volume service list

Note: Cinder API services do not appear in this list. However, if you get a response from the openstack volume service list command, at least one of the cinder API services is running.

Confirm that you have your previous volume types, volumes, snapshots, and backups:

$ openstack volume type list
$ openstack volume list
$ openstack volume snapshot list
$ openstack volume backup list

To confirm that the configuration is working, perform the following steps:
Create a volume from an image to check that the connection to the Image service (glance) is working:

$ openstack volume create --image cirros --bootable --size 1 disk_new

Back up the previous attached volume:

$ openstack --os-volume-api-version 3.47 volume create --backup <backup_name>

Replace <backup_name> with the name of the new backup location.

Note: You cannot boot a Compute service (nova) instance from the new volume from the image, or try to detach the previous volume, because the Compute service and the Block Storage service are still not connected.