32.2. Displaying the status of the recovery cluster

To configure a primary cluster and a disaster recovery cluster so that you can display the status of both clusters, perform the following procedure. (RHEL 8.2 and later)

Note

Setting up a disaster recovery cluster does not automatically configure resources or replicate data. Those items must be configured manually by the user.
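
For example, a resource configured on the primary cluster is not created on the recovery cluster for you. As a minimal, illustrative sketch only (the resource name and the use of the ocf:heartbeat:Dummy agent are assumptions for demonstration, not part of this example's configuration), a resource would be added manually on whichever site needs it:

    [root@z1 ~]# pcs resource create example_resource ocf:heartbeat:Dummy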

In this example:

  • The primary cluster will be named PrimarySite and will consist of the nodes z1.example.com and z2.example.com.
  • The disaster recovery site cluster will be named DRSite and will consist of the nodes z3.example.com and z4.example.com.

This example sets up a basic cluster with no resources or fencing configured.

Procedure

  1. Authenticate all of the nodes that will be used for both clusters.

    [root@z1 ~]# pcs host auth z1.example.com z2.example.com z3.example.com z4.example.com -u hacluster -p password
    z1.example.com: Authorized
    z2.example.com: Authorized
    z3.example.com: Authorized
    z4.example.com: Authorized
  2. Create the cluster that will be used as the primary cluster and start cluster services for the cluster.

    [root@z1 ~]# pcs cluster setup PrimarySite z1.example.com z2.example.com --start
    {...}
    Cluster has been successfully set up.
    Starting cluster on hosts: 'z1.example.com', 'z2.example.com'...
  3. Create the cluster that will be used as the disaster recovery cluster and start cluster services for the cluster.

    [root@z1 ~]# pcs cluster setup DRSite z3.example.com z4.example.com --start
    {...}
    Cluster has been successfully set up.
    Starting cluster on hosts: 'z3.example.com', 'z4.example.com'...
  4. From a node in the primary cluster, set up the second cluster as the recovery site. The recovery site is defined by the name of one of its nodes.

    [root@z1 ~]# pcs dr set-recovery-site z3.example.com
    Sending 'disaster-recovery config' to 'z3.example.com', 'z4.example.com'
    z3.example.com: successful distribution of the file 'disaster-recovery config'
    z4.example.com: successful distribution of the file 'disaster-recovery config'
    Sending 'disaster-recovery config' to 'z1.example.com', 'z2.example.com'
    z1.example.com: successful distribution of the file 'disaster-recovery config'
    z2.example.com: successful distribution of the file 'disaster-recovery config'
  5. Check the disaster recovery configuration.

    [root@z1 ~]# pcs dr config
    Local site:
      Role: Primary
    Remote site:
      Role: Recovery
      Nodes:
        z3.example.com
        z4.example.com
  6. From a node in the primary cluster, check the status of the primary cluster and of the disaster recovery cluster. (The fencing warning that appears in this output is discussed in the note after this procedure.)

    [root@z1 ~]# pcs dr status
    --- Local cluster - Primary site ---
    Cluster name: PrimarySite
    
    WARNINGS:
    No stonith devices and stonith-enabled is not false
    
    Cluster Summary:
      * Stack: corosync
      * Current DC: z2.example.com (version 2.0.3-2.el8-2c9cea563e) - partition with quorum
      * Last updated: Mon Dec  9 04:10:31 2019
      * Last change:  Mon Dec  9 04:06:10 2019 by hacluster via crmd on z2.example.com
      * 2 nodes configured
      * 0 resource instances configured
    
    Node List:
      * Online: [ z1.example.com z2.example.com ]
    
    Full List of Resources:
      * No resources
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled
    
    
    --- Remote cluster - Recovery site ---
    Cluster name: DRSite
    
    WARNINGS:
    No stonith devices and stonith-enabled is not false
    
    Cluster Summary:
      * Stack: corosync
      * Current DC: z4.example.com (version 2.0.3-2.el8-2c9cea563e) - partition with quorum
      * Last updated: Mon Dec  9 04:10:34 2019
      * Last change:  Mon Dec  9 04:09:55 2019 by hacluster via crmd on z4.example.com
      * 2 nodes configured
      * 0 resource instances configured
    
    Node List:
      * Online: [ z3.example.com z4.example.com ]
    
    Full List of Resources:
      * No resources
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled
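
The WARNINGS line in both status outputs appears because this basic example configures no fencing devices. In a production cluster you would configure fencing rather than suppress the warning; purely as a sketch for a disposable test setup (disabling fencing is not supported for production use), the warning could be silenced on each cluster with:

    [root@z1 ~]# pcs property set stonith-enabled=false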

For additional display options for a disaster recovery configuration, see the help screen for the pcs dr command.
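
For example, the available dr subcommands can be listed from any cluster node (this assumes pcs 0.10.4 or later, where the dr commands were introduced; the exact help invocation and text may differ slightly between pcs versions):

    [root@z1 ~]# pcs dr --help

In addition to the config, status, and set-recovery-site subcommands used above, pcs versions that include the dr commands also provide a destroy subcommand for removing the disaster recovery configuration from all sites.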
