2.2. Migrating Ceph RGW

In this scenario, assuming Ceph is already >= 5, for either HCI or dedicated Storage nodes, the RGW daemons running on the OpenStack Controller nodes are migrated to the existing external RHEL nodes (typically the Compute nodes for an HCI environment, or the CephStorage nodes in the remaining use cases).

2.2.1. Requirements

  • Ceph is >= 5 and managed by cephadm/orchestrator
  • An undercloud is still available: nodes and networks are managed by TripleO (a quick check of both requirements is sketched below)
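
A quick way to verify both assumptions, for example from a cephadm shell on one of the Controller nodes, is to check the reported daemon versions and the orchestrator backend; with Ceph 5 all daemons should report a Pacific (16.2) or later release, and the backend should be cephadm and available (a sketch using standard Ceph CLI commands):

[ceph: root@controller-0 /]# ceph versions
[ceph: root@controller-0 /]# ceph orch status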

2.2.2. Ceph daemon cardinality

Ceph 5+ applies strict constraints in the way daemons can be colocated on the same node. The resulting topology depends on the available hardware, as well as on the number of Ceph services present on the Controller nodes that are going to be retired. The following sections describe the steps required to migrate the RGW component (and keep an HA model) in a common TripleO scenario, where the Controller nodes represent the spec placement where the service is deployed; for the HA model, see https://docs.ceph.com/en/latest/cephadm/services/rgw/#high-availability-service-for-rgw. As a general rule, the number of services that can be migrated depends on the number of available nodes in the cluster. The following diagrams cover the distribution of the Ceph daemons on the CephStorage nodes; a scenario with only RGW and RBD (no dashboard) requires at least three nodes:

|    |                     |             |
|----|---------------------|-------------|
| osd | mon/mgr/crash      | rgw/ingress |
| osd | mon/mgr/crash      | rgw/ingress |
| osd | mon/mgr/crash      | rgw/ingress |

With the dashboard and without Manila, at least four nodes are required (the dashboard has no failover):

|     |                     |             |
|-----|---------------------|-------------|
| osd | mon/mgr/crash | rgw/ingress       |
| osd | mon/mgr/crash | rgw/ingress       |
| osd | mon/mgr/crash | dashboard/grafana |
| osd | rgw/ingress   | (free)            |

With both the dashboard and Manila, at least five nodes are required (and the dashboard has no failover):

|     |                     |                         |
|-----|---------------------|-------------------------|
| osd | mon/mgr/crash       | rgw/ingress             |
| osd | mon/mgr/crash       | rgw/ingress             |
| osd | mon/mgr/crash       | mds/ganesha/ingress     |
| osd | rgw/ingress         | mds/ganesha/ingress     |
| osd | mds/ganesha/ingress | dashboard/grafana       |

2.2.3. Current status

(undercloud) [stack@undercloud-0 ~]$ metalsmith list


    +------------------------+    +----------------+
    | IP Addresses           |    |  Hostname      |
    +------------------------+    +----------------+
    | ctlplane=192.168.24.25 |    | cephstorage-0  |
    | ctlplane=192.168.24.10 |    | cephstorage-1  |
    | ctlplane=192.168.24.32 |    | cephstorage-2  |
    | ctlplane=192.168.24.28 |    | compute-0      |
    | ctlplane=192.168.24.26 |    | compute-1      |
    | ctlplane=192.168.24.43 |    | controller-0   |
    | ctlplane=192.168.24.7  |    | controller-1   |
    | ctlplane=192.168.24.41 |    | controller-2   |
    +------------------------+    +----------------+

SSH into controller-0 and check the pacemaker status: this helps you identify the information that you need before you start the RGW migration.
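
For example, assuming the heat-admin user configured by TripleO, a sketch of this check (whose trimmed output follows) is:

ssh heat-admin@controller-0
[heat-admin@controller-0 ~]$ sudo pcs status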

Full List of Resources:
  * ip-192.168.24.46	(ocf:heartbeat:IPaddr2):     	Started controller-0
  * ip-10.0.0.103   	(ocf:heartbeat:IPaddr2):     	Started controller-1
  * ip-172.17.1.129 	(ocf:heartbeat:IPaddr2):     	Started controller-2
  * ip-172.17.3.68  	(ocf:heartbeat:IPaddr2):     	Started controller-0
  * ip-172.17.4.37  	(ocf:heartbeat:IPaddr2):     	Started controller-1
  * Container bundle set: haproxy-bundle [undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-haproxy:pcmklatest]:
    * haproxy-bundle-podman-0   (ocf:heartbeat:podman):  Started controller-2
    * haproxy-bundle-podman-1   (ocf:heartbeat:podman):  Started controller-0
    * haproxy-bundle-podman-2   (ocf:heartbeat:podman):  Started controller-1

Use the ip command to identify the ranges of the storage networks:

[heat-admin@controller-0 ~]$ ip -o -4 a

1: lo	inet 127.0.0.1/8 scope host lo\   	valid_lft forever preferred_lft forever
2: enp1s0	inet 192.168.24.45/24 brd 192.168.24.255 scope global enp1s0\   	valid_lft forever preferred_lft forever
2: enp1s0	inet 192.168.24.46/32 brd 192.168.24.255 scope global enp1s0\   	valid_lft forever preferred_lft forever
7: br-ex	inet 10.0.0.122/24 brd 10.0.0.255 scope global br-ex\   	valid_lft forever preferred_lft forever
8: vlan70	inet 172.17.5.22/24 brd 172.17.5.255 scope global vlan70\   	valid_lft forever preferred_lft forever
8: vlan70	inet 172.17.5.94/32 brd 172.17.5.255 scope global vlan70\   	valid_lft forever preferred_lft forever
9: vlan50	inet 172.17.2.140/24 brd 172.17.2.255 scope global vlan50\   	valid_lft forever preferred_lft forever
10: vlan30	inet 172.17.3.73/24 brd 172.17.3.255 scope global vlan30\   	valid_lft forever preferred_lft forever
10: vlan30	inet 172.17.3.68/32 brd 172.17.3.255 scope global vlan30\   	valid_lft forever preferred_lft forever
11: vlan20	inet 172.17.1.88/24 brd 172.17.1.255 scope global vlan20\   	valid_lft forever preferred_lft forever
12: vlan40	inet 172.17.4.24/24 brd 172.17.4.255 scope global vlan40\   	valid_lft forever preferred_lft forever

In this example:

  • vlan30 represents the Storage network, where the new RGW instances should be started on the CephStorage nodes
  • br-ex represents the External network, which is where, in the current environment, the frontend VIP is assigned

Identify the network that you previously had in haproxy and propagate it (via TripleO) to the CephStorage nodes. This network is used to reserve a new VIP that is owned by Ceph and used as the entry point for the RGW service.

SSH into controller-0 and check the current HaProxy configuration until you find the ceph_rgw section:

$ less /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

...
...
listen ceph_rgw
  bind 10.0.0.103:8080 transparent
  bind 172.17.3.68:8080 transparent
  mode http
  balance leastconn
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  option httpchk GET /swift/healthcheck
  option httplog
  option forwardfor
  server controller-0.storage.redhat.local 172.17.3.73:8080 check fall 5 inter 2000 rise 2
  server controller-1.storage.redhat.local 172.17.3.146:8080 check fall 5 inter 2000 rise 2
  server controller-2.storage.redhat.local 172.17.3.156:8080 check fall 5 inter 2000 rise 2

Double check the network used as the HaProxy frontend:

[controller-0]$ ip -o -4 a

...
7: br-ex	inet 10.0.0.106/24 brd 10.0.0.255 scope global br-ex\   	valid_lft forever preferred_lft forever
...

As described in the previous section, the check on controller-0 shows that you are exposing the service using the external network, which is not present in the CephStorage nodes, and that you need to propagate it via TripleO.

Change the NIC template used to define the ceph-storage network interfaces and add the new config section:

---
network_config:
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
  routes: {{ ctlplane_host_routes }}
- type: vlan
  vlan_id: {{ storage_mgmt_vlan_id }}
  device: nic1
  addresses:
  - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }}
  routes: {{ storage_mgmt_host_routes }}
- type: interface
  name: nic2
  use_dhcp: false
  defroute: false
- type: vlan
  vlan_id: {{ storage_vlan_id }}
  device: nic2
  addresses:
  - ip_netmask: {{ storage_ip }}/{{ storage_cidr }}
  routes: {{ storage_host_routes }}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  use_dhcp: false
  addresses:
  - ip_netmask: {{ external_ip }}/{{ external_cidr }}
  routes: {{ external_host_routes }}
  members:
  - type: interface
    name: nic3
    primary: true

In addition, add the External Network to the baremetal.yaml file used by metalsmith, and run the overcloud node provision command passing the --network-config option:

- name: CephStorage
  count: 3
  hostname_format: cephstorage-%index%
  instances:
  - hostname: cephstorage-0
    name: ceph-0
  - hostname: cephstorage-1
    name: ceph-1
  - hostname: cephstorage-2
    name: ceph-2
  defaults:
    profile: ceph-storage
    network_config:
      template: /home/stack/composable_roles/network/nic-configs/ceph-storage.j2
    networks:
    - network: ctlplane
      vif: true
    - network: storage
    - network: storage_mgmt
    - network: external

(undercloud) [stack@undercloud-0]$ openstack overcloud node provision \
   -o overcloud-baremetal-deployed-0.yaml \
   --stack overcloud \
   --network-config -y \
   $PWD/network/baremetal_deployment.yaml

Check the new network on the CephStorage nodes:

[root@cephstorage-0 ~]# ip -o -4 a

1: lo	inet 127.0.0.1/8 scope host lo\   	valid_lft forever preferred_lft forever
2: enp1s0	inet 192.168.24.54/24 brd 192.168.24.255 scope global enp1s0\   	valid_lft forever preferred_lft forever
11: vlan40	inet 172.17.4.43/24 brd 172.17.4.255 scope global vlan40\   	valid_lft forever preferred_lft forever
12: vlan30	inet 172.17.3.23/24 brd 172.17.3.255 scope global vlan30\   	valid_lft forever preferred_lft forever
14: br-ex	inet 10.0.0.133/24 brd 10.0.0.255 scope global br-ex\   	valid_lft forever preferred_lft forever

It is now time to start migrating the RGW backends and build the ingress on top of them.

2.2.6. Migrate the RGW backends

To match the cardinality diagram, you use cephadm labels to refer to a group of nodes where a given daemon type should be deployed.

Add the RGW label to the cephstorage nodes:

for i in 0 1 2; {
    ceph orch host label add cephstorage-$i rgw;
}
[ceph: root@controller-0 /]#

for i in 0 1 2; {
    ceph orch host label add cephstorage-$i rgw;
}

Added label rgw to host cephstorage-0
Added label rgw to host cephstorage-1
Added label rgw to host cephstorage-2

[ceph: root@controller-0 /]# ceph orch host ls

HOST       	ADDR       	LABELS      	STATUS
cephstorage-0  192.168.24.54  osd rgw
cephstorage-1  192.168.24.44  osd rgw
cephstorage-2  192.168.24.30  osd rgw
controller-0   192.168.24.45  _admin mon mgr
controller-1   192.168.24.11  _admin mon mgr
controller-2   192.168.24.38  _admin mon mgr

6 hosts in cluster

During the overcloud deployment, RGW is applied at step 2 (external_deployment_steps), and a cephadm-compatible spec is generated in /home/ceph-admin/specs/rgw by the ceph_mkspec ansible module. Find and patch the RGW spec, specifying the right placement using the label approach, and change the rgw backend port to 8090 to avoid conflicts with the Ceph Ingress Daemon (*):

[root@controller-0 heat-admin]# cat rgw

networks:
- 172.17.3.0/24
placement:
  hosts:
  - controller-0
  - controller-1
  - controller-2
service_id: rgw
service_name: rgw.rgw
service_type: rgw
spec:
  rgw_frontend_port: 8080
  rgw_realm: default
  rgw_zone: default

Patch the spec, replacing the Controller nodes with the label: rgw key/value pair:

---
networks:
- 172.17.3.0/24
placement:
  label: rgw
service_id: rgw
service_name: rgw.rgw
service_type: rgw
spec:
  rgw_frontend_port: 8090
  rgw_realm: default
  rgw_zone: default

(*) cephadm_check_port

Apply the new RGW spec using the orchestrator CLI:

$ cephadm shell -m /home/ceph-admin/specs/rgw
$ cephadm shell -- ceph orch apply -i /mnt/rgw

This triggers the redeploy.
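
You can follow the rollout from the cephadm shell, for example by querying the orchestrator for the running daemons; the truncated output below comes from such a query:

[ceph: root@controller-0 /]# ceph orch ps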

...
osd.9                     	cephstorage-2
rgw.rgw.cephstorage-0.wsjlgx  cephstorage-0  172.17.3.23:8090   starting
rgw.rgw.cephstorage-1.qynkan  cephstorage-1  172.17.3.26:8090   starting
rgw.rgw.cephstorage-2.krycit  cephstorage-2  172.17.3.81:8090   starting
rgw.rgw.controller-1.eyvrzw   controller-1   172.17.3.146:8080  running (5h)
rgw.rgw.controller-2.navbxa   controller-2   172.17.3.66:8080   running (5h)

...
osd.9                     	cephstorage-2
rgw.rgw.cephstorage-0.wsjlgx  cephstorage-0  172.17.3.23:8090  running (19s)
rgw.rgw.cephstorage-1.qynkan  cephstorage-1  172.17.3.26:8090  running (16s)
rgw.rgw.cephstorage-2.krycit  cephstorage-2  172.17.3.81:8090  running (13s)

At this point, you need to make sure that the new RGW backends are reachable on the new port, but you are going to enable an IngressDaemon on port 8080 later in the process. For this reason, SSH into each RGW node (the CephStorage nodes) and add iptables rules to allow connections to both ports 8080 and 8090 on the CephStorage nodes:

iptables -I INPUT -p tcp -m tcp --dport 8080 -m conntrack --ctstate NEW -m comment --comment "ceph rgw ingress" -j ACCEPT

iptables -I INPUT -p tcp -m tcp --dport 8090 -m conntrack --ctstate NEW -m comment --comment "ceph rgw backends" -j ACCEPT

for port in 8080 8090; {
    for i in 25 10 32; {
       ssh heat-admin@192.168.24.$i sudo iptables -I INPUT \
       -p tcp -m tcp --dport $port -m conntrack --ctstate NEW \
       -j ACCEPT;
   }
}

From a Controller node (for example controller-0), try to reach the rgw backends:

for i in 26 23 81; do
    echo "---"
    echo "Query 172.17.3.$i"
    curl 172.17.3.$i:8090
    echo "---"
    echo
done

You should observe the following:

---
Query 172.17.3.23
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
---

---
Query 172.17.3.26
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
---

---
Query 172.17.3.81
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
---

2.2.6.1. Note

When the RGW backends are migrated to the CephStorage nodes, there is no "internalAPI" network (this is not true in the case of HCI). Reconfigure the RGW keystone endpoint to point to the external network that has been propagated (see the previous section):

[ceph: root@controller-0 /]# ceph config dump | grep keystone
global   basic rgw_keystone_url  http://172.16.1.111:5000

[ceph: root@controller-0 /]# ceph config set global rgw_keystone_url http://10.0.0.103:5000
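
To confirm the change took effect, dump the configuration again; rgw_keystone_url should now point at the external VIP:

[ceph: root@controller-0 /]# ceph config dump | grep rgw_keystone_url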

2.2.7. Deploy a Ceph IngressDaemon

HAProxy is managed by TripleO via Pacemaker: the three instances running at this point still point to the old RGW backends, resulting in a configuration that is wrong and no longer works. Because you are going to deploy the Ceph Ingress Daemon, you need to remove the existing ceph_rgw configuration, clean up the configuration created by TripleO, and restart the service to make sure other services are not affected by this change.

SSH into each Controller node and remove the following from /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg:

listen ceph_rgw
  bind 10.0.0.103:8080 transparent
  mode http
  balance leastconn
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  option httpchk GET /swift/healthcheck
  option httplog
  option forwardfor
  server controller-0.storage.redhat.local 172.17.3.73:8080 check fall 5 inter 2000 rise 2
  server controller-1.storage.redhat.local 172.17.3.146:8080 check fall 5 inter 2000 rise 2
  server controller-2.storage.redhat.local 172.17.3.156:8080 check fall 5 inter 2000 rise 2

Restart haproxy-bundle and make sure it is started:

[root@controller-0 ~]# sudo pcs resource restart haproxy-bundle
haproxy-bundle successfully restarted


[root@controller-0 ~]# sudo pcs status | grep haproxy

  * Container bundle set: haproxy-bundle [undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-haproxy:pcmklatest]:
    * haproxy-bundle-podman-0   (ocf:heartbeat:podman):  Started controller-0
    * haproxy-bundle-podman-1   (ocf:heartbeat:podman):  Started controller-1
    * haproxy-bundle-podman-2   (ocf:heartbeat:podman):  Started controller-2

Double check that nothing is bound to port 8080 anymore:

[root@controller-0 ~]# ss -antop | grep 8080
[root@controller-0 ~]#

At this point, the swift CLI should fail:

(overcloud) [root@cephstorage-0 ~]# swift list

HTTPConnectionPool(host='10.0.0.103', port=8080): Max retries exceeded with url: /swift/v1/AUTH_852f24425bb54fa896476af48cbe35d3?format=json (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc41beb0430>: Failed to establish a new connection: [Errno 111] Connection refused'))

You can now start deploying the Ceph IngressDaemon on the CephStorage nodes.

Set the required images for both HaProxy and Keepalived:

[ceph: root@controller-0 /]# ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest
[ceph: root@controller-0 /]# ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest

Prepare the ingress spec and mount it to cephadm:

$ sudo vim /home/ceph-admin/specs/rgw_ingress

Paste the following content:

---
service_type: ingress
service_id: rgw.rgw
placement:
  label: rgw
spec:
  backend_service: rgw.rgw
  virtual_ip: 10.0.0.89/24
  frontend_port: 8080
  monitor_port: 8898
  virtual_interface_networks:
    - 10.0.0.0/24

Mount the generated spec and apply it using the orchestrator CLI:

$ cephadm shell -m /home/ceph-admin/specs/rgw_ingress
$ cephadm shell -- ceph orch apply -i /mnt/rgw_ingress

Wait until the ingress is deployed and query the resulting endpoint:

[ceph: root@controller-0 /]# ceph orch ls

NAME                 	PORTS            	RUNNING  REFRESHED  AGE  PLACEMENT
crash                                         	6/6  6m ago 	3d   *
ingress.rgw.rgw      	10.0.0.89:8080,8898  	6/6  37s ago	60s  label:rgw
mds.mds                   3/3  6m ago 	3d   controller-0;controller-1;controller-2
mgr                       3/3  6m ago 	3d   controller-0;controller-1;controller-2
mon                       3/3  6m ago 	3d   controller-0;controller-1;controller-2
osd.default_drive_group   15  37s ago	3d   cephstorage-0;cephstorage-1;cephstorage-2
rgw.rgw   ?:8090          3/3  37s ago	4m   label:rgw
[ceph: root@controller-0 /]# curl  10.0.0.89:8080

---
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[ceph: root@controller-0 /]#
---

The result above shows that you can reach the backends through the IngressDaemon, which means that you are almost ready to interact with it using the swift CLI.
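
As an optional sanity check before you update the OpenStack endpoints, you can also query the Swift healthcheck path through the ingress VIP, the same path that the old HAProxy configuration used in its option httpchk directive (a sketch, reusing the VIP from the ingress spec above):

[ceph: root@controller-0 /]# curl -i http://10.0.0.89:8080/swift/healthcheck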

2.2.8. Update the object-store endpoints

The endpoints still point to the old VIP owned by pacemaker. Because that VIP is still used by other services and you reserved a new VIP on the same network, update the object-store endpoints before taking any other action.

List the current endpoints:

(overcloud) [stack@undercloud-0 ~]$ openstack endpoint list | grep object

| 1326241fb6b6494282a86768311f48d1 | regionOne | swift    	| object-store   | True	| internal  | http://172.17.3.68:8080/swift/v1/AUTH_%(project_id)s |
| 8a34817a9d3443e2af55e108d63bb02b | regionOne | swift    	| object-store   | True	| public	| http://10.0.0.103:8080/swift/v1/AUTH_%(project_id)s  |
| fa72f8b8b24e448a8d4d1caaeaa7ac58 | regionOne | swift    	| object-store   | True	| admin 	| http://172.17.3.68:8080/swift/v1/AUTH_%(project_id)s |

Update the endpoints to point to the Ingress VIP:

(overcloud) [stack@undercloud-0 ~]$ openstack endpoint set --url "http://10.0.0.89:8080/swift/v1/AUTH_%(project_id)s" 95596a2d92c74c15b83325a11a4f07a3

(overcloud) [stack@undercloud-0 ~]$ openstack endpoint list | grep object-store
| 6c7244cc8928448d88ebfad864fdd5ca | regionOne | swift    	| object-store   | True	| internal  | http://172.17.3.79:8080/swift/v1/AUTH_%(project_id)s |
| 95596a2d92c74c15b83325a11a4f07a3 | regionOne | swift    	| object-store   | True	| public	| http://10.0.0.89:8080/swift/v1/AUTH_%(project_id)s   |
| e6d0599c5bf24a0fb1ddf6ecac00de2d | regionOne | swift    	| object-store   | True	| admin 	| http://172.17.3.79:8080/swift/v1/AUTH_%(project_id)s |

Repeat the same operation for the internal and admin endpoints, as sketched below, and then test the migrated service:
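
The remaining updates could look like the following sketch, where the endpoint IDs are hypothetical placeholders (take the real internal and admin IDs from openstack endpoint list) and the target is the Storage-network RGW address shown in the listing above:

(overcloud) [stack@undercloud-0 ~]$ openstack endpoint set --url "http://172.17.3.79:8080/swift/v1/AUTH_%(project_id)s" <internal-endpoint-id>
(overcloud) [stack@undercloud-0 ~]$ openstack endpoint set --url "http://172.17.3.79:8080/swift/v1/AUTH_%(project_id)s" <admin-endpoint-id>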

(overcloud) [stack@undercloud-0 ~]$ swift list --debug

DEBUG:swiftclient:Versionless auth_url - using http://10.0.0.115:5000/v3 as endpoint
DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to http://10.0.0.115:5000/v3/auth/tokens
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 10.0.0.115:5000
DEBUG:urllib3.connectionpool:http://10.0.0.115:5000 "POST /v3/auth/tokens HTTP/1.1" 201 7795
DEBUG:keystoneclient.auth.identity.v3.base:{"token": {"methods": ["password"], "user": {"domain": {"id": "default", "name": "Default"}, "id": "6f87c7ffdddf463bbc633980cfd02bb3", "name": "admin", "password_expires_at": null},


...
...
...

DEBUG:swiftclient:REQ: curl -i http://10.0.0.89:8080/swift/v1/AUTH_852f24425bb54fa896476af48cbe35d3?format=json -X GET -H "X-Auth-Token: gAAAAABj7KHdjZ95syP4c8v5a2zfXckPwxFQZYg0pgWR42JnUs83CcKhYGY6PFNF5Cg5g2WuiYwMIXHm8xftyWf08zwTycJLLMeEwoxLkcByXPZr7kT92ApT-36wTfpi-zbYXd1tI5R00xtAzDjO3RH1kmeLXDgIQEVp0jMRAxoVH4zb-DVHUos" -H "Accept-Encoding: gzip"
DEBUG:swiftclient:RESP STATUS: 200 OK
DEBUG:swiftclient:RESP HEADERS: {'content-length': '2', 'x-timestamp': '1676452317.72866', 'x-account-container-count': '0', 'x-account-object-count': '0', 'x-account-bytes-used': '0', 'x-account-bytes-used-actual': '0', 'x-account-storage-policy-default-placement-container-count': '0', 'x-account-storage-policy-default-placement-object-count': '0', 'x-account-storage-policy-default-placement-bytes-used': '0', 'x-account-storage-policy-default-placement-bytes-used-actual': '0', 'x-trans-id': 'tx00000765c4b04f1130018-0063eca1dd-1dcba-default', 'x-openstack-request-id': 'tx00000765c4b04f1130018-0063eca1dd-1dcba-default', 'accept-ranges': 'bytes', 'content-type': 'application/json; charset=utf-8', 'date': 'Wed, 15 Feb 2023 09:11:57 GMT'}
DEBUG:swiftclient:RESP BODY: b'[]'

Run the tempest tests against object-storage:

(overcloud) [stack@undercloud-0 tempest-dir]$  tempest run --regex tempest.api.object_storage
...
...
...
======
Totals
======
Ran: 141 tests in 606.5579 sec.
 - Passed: 128
 - Skipped: 13
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 657.5183 sec.

==============
Worker Balance
==============
 - Worker 0 (1 tests) => 0:10:03.400561
 - Worker 1 (2 tests) => 0:00:24.531916
 - Worker 2 (4 tests) => 0:00:10.249889
 - Worker 3 (30 tests) => 0:00:32.730095
 - Worker 4 (51 tests) => 0:00:26.246044
 - Worker 5 (6 tests) => 0:00:20.114803
 - Worker 6 (20 tests) => 0:00:16.290323
 - Worker 7 (27 tests) => 0:00:17.103827

2.2.9. Additional resources

A screen recording is available.
