7.4. Migrating the Red Hat Ceph Storage RGW to external RHEL nodes


For Hyperconverged Infrastructure (HCI) or dedicated Storage nodes, you must migrate the Ceph Object Gateway (RGW) daemons that are included on the Red Hat OpenStack Platform Controller nodes to the existing external Red Hat Enterprise Linux (RHEL) nodes. The external RHEL nodes typically include the Compute nodes of an HCI environment or Red Hat Ceph Storage nodes. Your environment must have Red Hat Ceph Storage 7 or later and be managed by cephadm or the Ceph Orchestrator.
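Before you start, you can confirm that the cluster is managed by the orchestrator and check the running Ceph release. This is only a minimal sketch, assuming the cephadm shell is available on an admin node as it is in the rest of this procedure:

    $ sudo cephadm shell -- ceph orch status
    $ sudo cephadm shell -- ceph versions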

Prerequisites

7.4.1. Migrating the Red Hat Ceph Storage RGW backends

You must migrate the Ceph Object Gateway (RGW) backends from the Controller nodes to the Red Hat Ceph Storage nodes. To ensure that the correct number of services is distributed to the available nodes, you use cephadm labels to refer to a group of nodes where a given daemon type is deployed. For more information about cardinality, see Red Hat Ceph Storage daemon cardinality. The following procedure assumes that you have three target nodes, cephstorage-0, cephstorage-1, and cephstorage-2.
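Because cephadm resolves a label to whichever hosts currently carry it, you can list the hosts that a label selects at any time. A minimal sketch, assuming your cephadm version supports the --label filter on ceph orch host ls and using the rgw label from this procedure:

    $ sudo cephadm shell -- ceph orch host ls --label rgw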

Procedure

  1. Add the RGW label to the Red Hat Ceph Storage nodes that you want to migrate the RGW backends to:

    $ sudo cephadm shell -- ceph orch host label add cephstorage-0 rgw;
    $ sudo cephadm shell -- ceph orch host label add cephstorage-1 rgw;
    $ sudo cephadm shell -- ceph orch host label add cephstorage-2 rgw;
    
    Added label rgw to host cephstorage-0
    Added label rgw to host cephstorage-1
    Added label rgw to host cephstorage-2
    
    $ sudo cephadm shell -- ceph orch host ls
    
    HOST       	ADDR       	LABELS      	STATUS
    cephstorage-0  192.168.24.54  osd rgw
    cephstorage-1  192.168.24.44  osd rgw
    cephstorage-2  192.168.24.30  osd rgw
    controller-0   192.168.24.45  _admin mon mgr
    controller-1   192.168.24.11  _admin mon mgr
    controller-2   192.168.24.38  _admin mon mgr
    
    6 hosts in cluster
  2. Locate the RGW spec and dump it in the spec directory:

    $ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
    $ mkdir -p ${SPEC_DIR}
    $ sudo cephadm shell -- ceph orch ls --export rgw > ${SPEC_DIR}/rgw
    $ cat ${SPEC_DIR}/rgw
    
    networks:
    - 172.17.3.0/24
    placement:
      hosts:
      - controller-0
      - controller-1
      - controller-2
    service_id: rgw
    service_name: rgw.rgw
    service_type: rgw
    spec:
      rgw_frontend_port: 8080
      rgw_realm: default
      rgw_zone: default

    This example assumes that 172.17.3.0/24 is the storage network.

  3. In the placement section, ensure that the label and rgw_frontend_port values are set:

    ---
    networks:
    - 172.17.3.0/24 1
    placement:
      label: rgw 2
    service_id: rgw
    service_name: rgw.rgw
    service_type: rgw
    spec:
      rgw_frontend_port: 8090 3
      rgw_realm: default
      rgw_zone: default
      rgw_frontend_ssl_certificate: ... 4
      ssl: true
    1
    Add the storage network where the RGW backends are deployed.
    2
    Replace the Controller nodes with the label: rgw label.
    3
    Change the rgw_frontend_port value to 8090 to avoid conflicts with the Ceph ingress daemon.
    4
    Optional: If TLS is enabled, add the SSL certificate and key concatenation as described in Configuring RGW with an external Red Hat Ceph Storage cluster in Configuring persistent storage.
  4. Apply the new RGW spec by using the orchestrator CLI:

    $ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
    $ sudo cephadm shell -m ${SPEC_DIR}/rgw -- ceph orch apply -i /mnt/rgw

    This command triggers the redeployment. For example:

    ...
    osd.9                     	cephstorage-2
    rgw.rgw.cephstorage-0.wsjlgx  cephstorage-0  172.17.3.23:8090   starting
    rgw.rgw.cephstorage-1.qynkan  cephstorage-1  172.17.3.26:8090   starting
    rgw.rgw.cephstorage-2.krycit  cephstorage-2  172.17.3.81:8090   starting
    rgw.rgw.controller-1.eyvrzw   controller-1   172.17.3.146:8080  running (5h)
    rgw.rgw.controller-2.navbxa   controller-2   172.17.3.66:8080   running (5h)
    
    ...
    osd.9                     	cephstorage-2
    rgw.rgw.cephstorage-0.wsjlgx  cephstorage-0  172.17.3.23:8090  running (19s)
    rgw.rgw.cephstorage-1.qynkan  cephstorage-1  172.17.3.26:8090  running (16s)
    rgw.rgw.cephstorage-2.krycit  cephstorage-2  172.17.3.81:8090  running (13s)
  5. Ensure that the new RGW backends are reachable on the new ports, so that you can enable an ingress daemon on port 8080 later. Log in to each Red Hat Ceph Storage node that includes RGW and add iptables rules that allow connections to ports 8080 and 8090 on the Red Hat Ceph Storage nodes:

    $ iptables -I INPUT -p tcp -m tcp --dport 8080 -m conntrack --ctstate NEW -m comment --comment "ceph rgw ingress" -j ACCEPT
    $ iptables -I INPUT -p tcp -m tcp --dport 8090 -m conntrack --ctstate NEW -m comment --comment "ceph rgw backends" -j ACCEPT
    $ sudo iptables-save
    $ sudo systemctl restart iptables
  6. If nftables is used in the existing deployment, edit /etc/nftables/tripleo-rules.nft and add the following content:

    # 100 ceph_rgw {'dport': ['8080','8090']}
    add rule inet filter TRIPLEO_INPUT tcp dport { 8080,8090 } ct state new counter accept comment "100 ceph_rgw"
  7. Save the file.
  8. Restart the nftables service:

    $ sudo systemctl restart nftables
  9. Verify that the rules are applied:

    $ sudo nft list ruleset | grep ceph_rgw
  10. From a Controller node, such as controller-0, try to reach the RGW backends:

    $ curl http://cephstorage-0.storage:8090;

    You should observe the following output:

    <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

    Repeat the verification for each node where an RGW daemon is deployed.

  11. If you migrated the RGW backends to the Red Hat Ceph Storage nodes, there is no internalAPI network, except in the case of HCI nodes. You must reconfigure the RGW keystone endpoint to point to the external network that you propagated (a verification sketch follows this procedure):

    [ceph: root@controller-0 /]# ceph config dump | grep keystone
    global   basic rgw_keystone_url  http://172.16.1.111:5000
    
    [ceph: root@controller-0 /]# ceph config set global rgw_keystone_url http://<keystone_endpoint>:5000
    • Replace <keystone_endpoint> with the internal endpoint of the Identity service (keystone) that is deployed in the OpenStackControlPlane CR when you adopt the Identity service. For more information, see Adopting the Identity service.
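After you reconfigure the endpoint, you can confirm that the new value is stored and that the RGW daemons run from the Red Hat Ceph Storage nodes on port 8090. A minimal sketch, assuming the config key and daemon naming shown in this procedure:

    [ceph: root@controller-0 /]# ceph config get global rgw_keystone_url
    [ceph: root@controller-0 /]# ceph orch ps | grep rgw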

7.4.2. Deploying a Red Hat Ceph Storage ingress daemon

To deploy the Ceph ingress daemon, you complete the following actions:

  1. Remove the existing ceph_rgw configuration.
  2. Clean up the configuration created by director.
  3. Redeploy the Object Storage service (swift).

When you deploy the ingress daemon, two new containers are created (you can verify both after deployment, as sketched at the end of this procedure):

  • HAProxy, which you use to reach the backends.
  • keepalived, which you use to own the virtual IP address.

You use the rgw label to distribute the ingress daemons across the nodes that host the Ceph Object Gateway (RGW) daemons. For more information about distributing daemons among your nodes, see Red Hat Ceph Storage daemon cardinality.

After you complete this procedure, you can reach the RGW backends from the ingress daemon and use RGW through the Object Storage service (swift) CLI.

Procedure

  1. Log in to each Controller node and remove the following configuration from the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file:

    listen ceph_rgw
      bind 10.0.0.103:8080 transparent
      mode http
      balance leastconn
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
      http-request set-header X-Forwarded-Port %[dst_port]
      option httpchk GET /swift/healthcheck
      option httplog
      option forwardfor
      server controller-0.storage.redhat.local 172.17.3.73:8080 check fall 5 inter 2000 rise 2
      server controller-1.storage.redhat.local 172.17.3.146:8080 check fall 5 inter 2000 rise 2
      server controller-2.storage.redhat.local 172.17.3.156:8080 check fall 5 inter 2000 rise 2
  2. Restart haproxy-bundle and confirm that it is started:

    [root@controller-0 ~]# sudo pcs resource restart haproxy-bundle
    haproxy-bundle successfully restarted
    
    
    [root@controller-0 ~]# sudo pcs status | grep haproxy
    
      * Container bundle set: haproxy-bundle [undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-haproxy:pcmklatest]:
        * haproxy-bundle-podman-0   (ocf:heartbeat:podman):  Started controller-0
        * haproxy-bundle-podman-1   (ocf:heartbeat:podman):  Started controller-1
        * haproxy-bundle-podman-2   (ocf:heartbeat:podman):  Started controller-2
  3. Confirm that no process is connected to port 8080:

    [root@controller-0 ~]# ss -antop | grep 8080
    [root@controller-0 ~]#

    You can expect the Object Storage service (swift) CLI to fail to establish a connection:

    (overcloud) [root@cephstorage-0 ~]# swift list
    
    HTTPConnectionPool(host='10.0.0.103', port=8080): Max retries exceeded with url: /swift/v1/AUTH_852f24425bb54fa896476af48cbe35d3?format=json (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc41beb0430>: Failed to establish a new connection: [Errno 111] Connection refused'))
  4. Set the required images for both HAProxy and Keepalived:

    [ceph: root@controller-0 /]# ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest
    [ceph: root@controller-0 /]# ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest
  5. Create a file called rgw_ingress in controller-0:

    $ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
    $ vim ${SPEC_DIR}/rgw_ingress
  6. Paste the following content into the rgw_ingress file:

    ---
    service_type: ingress
    service_id: rgw.rgw
    placement:
      label: rgw
    spec:
      backend_service: rgw.rgw
      virtual_ip: 10.0.0.89/24
      frontend_port: 8080
      monitor_port: 8898
      virtual_interface_networks:
        - <external_network>
      ssl_cert: ...
  7. Apply the rgw_ingress spec by using the Ceph orchestrator CLI:

    $ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
    $ cephadm shell -m ${SPEC_DIR}/rgw_ingress -- ceph orch apply -i /mnt/rgw_ingress
  8. Wait for the ingress to be deployed and query the resulting endpoint:

    $ sudo cephadm shell -- ceph orch ls
    
    NAME                 	PORTS            	RUNNING  REFRESHED  AGE  PLACEMENT
    crash                                         	6/6  6m ago 	3d   *
    ingress.rgw.rgw      	10.0.0.89:8080,8898  	6/6  37s ago	60s  label:rgw
    mds.mds                   3/3  6m ago 	3d   controller-0;controller-1;controller-2
    mgr                       3/3  6m ago 	3d   controller-0;controller-1;controller-2
    mon                       3/3  6m ago 	3d   controller-0;controller-1;controller-2
    osd.default_drive_group   15  37s ago	3d   cephstorage-0;cephstorage-1;cephstorage-2
    rgw.rgw   ?:8090          3/3  37s ago	4m   label:rgw
    $ curl 10.0.0.89:8080
    
    ---
    <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[ceph: root@controller-0 /]#
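Because the ingress service runs as the HAProxy and Keepalived containers noted at the start of this section, you can also confirm that both daemon types are running on the labelled nodes. A minimal sketch, assuming the default daemon naming that cephadm uses for ingress services:

    $ sudo cephadm shell -- ceph orch ps | grep -E 'haproxy|keepalived'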

7.4.3. Creating or updating the Object Storage service endpoints

You must create or update the Object Storage service (swift) endpoints to configure the new virtual IP address (VIP) that you reserved on the same network that you used to deploy the RGW ingress.
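If you need to recall the VIP and frontend port that the ingress daemon owns, you can export the ingress spec again. A minimal sketch, assuming the ingress service that you created in the previous section and that your cephadm version supports filtering ceph orch ls by service type:

    $ sudo cephadm shell -- ceph orch ls ingress --export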

Procedure

  1. List the current swift endpoints and services:

     $ oc rsh openstackclient openstack endpoint list | grep 'swift.*object'
     $ oc rsh openstackclient openstack service list | grep 'swift.*object'
  2. If the service and the endpoints do not exist, create the missing swift resources:

    $ oc rsh openstackclient openstack service create --name swift --description 'OpenStack Object Storage' object-store
    $ oc rsh openstackclient openstack role add --user swift --project service member
    $ oc rsh openstackclient openstack role add --user swift --project service admin
    $ for i in public internal; do
          oc rsh openstackclient openstack endpoint create --region regionOne object-store $i http://<RGW_VIP>:8080/swift/v1/AUTH_%\(tenant_id\)s
      done
    $ oc rsh openstackclient openstack role add --project admin --user admin swiftoperator
    • Replace <RGW_VIP> with the Ceph RGW ingress VIP.
  3. If the endpoints exist, update them to point to the correct RGW ingress VIP:

    $ oc rsh openstackclient openstack endpoint set --url http://<RGW_VIP>:8080/swift/v1/AUTH_%\(tenant_id\)s <swift_public_endpoint_uuid>
    $ oc rsh openstackclient openstack endpoint set --url http://<RGW_VIP>:8080/swift/v1/AUTH_%\(tenant_id\)s <swift_internal_endpoint_uuid>
    $ oc rsh openstackclient openstack endpoint list | grep object
    | 0d682ad71b564cf386f974f90f80de0d | regionOne | swift        | object-store | True    | public    | http://172.18.0.100:8080/swift/v1/AUTH_%(tenant_id)s    |
    | b311349c305346f39d005feefe464fb1 | regionOne | swift        | object-store | True    | internal  | http://172.18.0.100:8080/swift/v1/AUTH_%(tenant_id)s    |
    • Replace <swift_public_endpoint_uuid> with the UUID of the swift public endpoint.
    • Replace <swift_internal_endpoint_uuid> with the UUID of the swift internal endpoint.
  4. Test the migrated service. An additional write test is sketched after this procedure:

    $ oc rsh openstackclient openstack container list --debug
    
    ...
    ...
    ...
    REQ: curl -g -i -X GET http://keystone-public-openstack.apps.ocp.openstack.lab -H "Accept: application/json" -H "User-Agent: openstacksdk/1.0.2 keystoneauth1/5.1.3 python-requests/2.25.1 CPython/3.9.23"
    Starting new HTTP connection (1): keystone-public-openstack.apps.ocp.openstack.lab:80
    http://keystone-public-openstack.apps.ocp.openstack.lab:80 "GET / HTTP/1.1" 300 298
    RESP: [300] content-length: 298 content-type: application/json date: Mon, 14 Jul 2025 17:41:29 GMT location: http://keystone-public-openstack.apps.ocp.openstack.lab/v3/ server: Apache set-cookie: b5697f82cf3c19ece8be533395142512=d5c6a9ee2
    267c4b63e9f656ad7565270; path=/; HttpOnly vary: X-Auth-Token x-openstack-request-id: req-452e42c5-e60f-440f-a6e8-fe1b9ea89055
    RESP BODY: {"versions": {"values": [{"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "http://keystone-public-openstack.apps.ocp.openstack.lab/v3/"}], "media-types": [{"base": "applic
    ation/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}
    GET call to http://keystone-public-openstack.apps.ocp.openstack.lab/ used request id req-452e42c5-e60f-440f-a6e8-fe1b9ea89055
    
    ...
    
    REQ: curl -g -i -X GET "http://172.18.0.100:8080/swift/v1/AUTH_44477474b0dc4b5b8911ceec23a22246?format=json" -H "User-Agent: openstacksdk/1.0.2 keystoneauth1/5.1.3 python-requests/2.25.1 CPython/3.9.23" -H "X-Auth-Token: {SHA256}ec5deca0be37bd8bfe659f132b9cdf396b8f409db5dc16972d50cbf3f28474d4"
    Starting new HTTP connection (1): 172.18.0.100:8080
    http://172.18.0.100:8080 "GET /swift/v1/AUTH_44477474b0dc4b5b8911ceec23a22246?format=json HTTP/1.1" 200 2
    RESP: [200] accept-ranges: bytes content-length: 2 content-type: application/json; charset=utf-8 date: Mon, 14 Jul 2025 17:41:31 GMT x-account-bytes-used: 0 x-account-bytes-used-actual: 0 x-account-container-count: 0 x-account-object-count: 0 x-account-storage-policy-default-placement-bytes-used: 0 x-account-storage-policy-default-placement-bytes-used-actual: 0 x-account-storage-policy-default-placement-container-count: 0 x-account-storage-policy-default-placement-object-count: 0 x-openstack-request-id: tx000001e95361131ccf694-006875414a-7753-default x-timestamp: 1752514891.25991 x-trans-id: tx000001e95361131ccf694-006875414a-7753-default
    RESP BODY: []
    GET call to http://172.18.0.100:8080/swift/v1/AUTH_44477474b0dc4b5b8911ceec23a22246?format=json used request id tx000001e95361131ccf694-006875414a-7753-default
    
    clean_up ListContainer:
    END return value: 0
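As an additional end-to-end check, you can create, list, and delete a test container through the new endpoints. A minimal sketch, where test-rgw is an arbitrary container name used only for this check:

    $ oc rsh openstackclient openstack container create test-rgw
    $ oc rsh openstackclient openstack container list
    $ oc rsh openstackclient openstack container delete test-rgw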