4.3. Configuring OpenStack Services in Pacemaker
Most services are set up as clone set resources (or clones), where they are configured to start the same way on each controller and to always run on each controller. Services are cloned if they need to be active on multiple nodes. As such, you can only clone services that can be active on multiple nodes at the same time (that is, cluster-aware services).
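As an illustrative sketch only (these commands are not part of the overcloud deployment, and my-service is a hypothetical placeholder name), a clone can be created with pcs either when the resource is defined or from an existing resource:

# Hypothetical example: define a systemd-backed resource and clone it
# so that one copy runs on every node ("my-service" is a placeholder).
$ sudo pcs resource create my-service systemd:my-service --clone

# Or clone a resource that has already been defined:
$ sudo pcs resource clone my-service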
Other services are set up as multi-state resources. Multi-state resources are a specialized type of clone: unlike ordinary clones, a multi-state resource can be in either a master or a slave state. When an instance starts, it must come up in the slave state. Beyond that, the names of the two states carry no special meaning; however, these states allow clones of the same service to run under different rules or constraints.
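As a sketch of the pcs 0.9 syntax used on these systems (the my-db names are hypothetical placeholders, not overcloud resources), a multi-state resource is created by wrapping an existing resource definition:

# Hypothetical example: wrap an existing resource ("my-db") in a
# master/slave (multi-state) resource with one master and three copies.
$ sudo pcs resource master my-db-master my-db master-max=1 clone-max=3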
Keep in mind that, even though a service may run on multiple controllers at the same time, the controller itself may not be listening on the IP address needed to actually reach those services.
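One way to check this on a given controller, as a minimal sketch (port 80 is only an illustrative value; substitute the port of the service you care about):

# Sketch: list listening TCP sockets and filter for a service port.
$ sudo ss -tlnp | grep ':80 '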
Clone Set resources (clones)
Here is how clone sets appear in pcs status:
Clone Set: haproxy-clone [haproxy]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: mongod-clone [mongod]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: rabbitmq-clone [rabbitmq]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: memcached-clone [memcached]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-alarm-notifier-clone [openstack-ceilometer-alarm-notifier]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-clone [openstack-heat-api]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-api-clone [openstack-nova-api]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-keystone-clone [openstack-keystone]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-glance-api-clone [openstack-glance-api]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: delay-clone [delay]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-server-clone [neutron-server]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: httpd-clone [httpd]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-alarm-evaluator-clone [openstack-ceilometer-alarm-evaluator]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
For each Clone Set resource, you can see the following:
- The name that Pacemaker assigns to the service
- The actual service name
- The controllers on which the service is started or stopped
With a Clone Set, the service is intended to start the same way on all controllers. To see details about a particular cloned service (such as the haproxy service), use the pcs resource show command. For example:
$ sudo pcs resource show haproxy-clone
 Clone: haproxy-clone
  Resource: haproxy (class=systemd type=haproxy)
   Operations: start interval=0s timeout=60s (haproxy-start-timeout-60s)
               monitor interval=60s (haproxy-monitor-interval-60s)
$ sudo systemctl status haproxy
haproxy.service - Cluster Controlled haproxy
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled)
  Drop-In: /run/systemd/system/haproxy.service.d
           └─50-pacemaker.conf
   Active: active (running) since Tue 2015-10-06 08:58:49 EDT; 1h 52min ago
 Main PID: 4215 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─4215 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─4216 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─4217 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
The haproxy-clone example shows the resource settings for HAProxy. Although HAProxy provides high availability by load-balancing traffic to selected services, HAProxy itself is kept highly available here by configuring it as a Pacemaker clone service.
From the output, notice that the resource is a systemd service named haproxy. It also has a start interval and timeout value, as well as a monitor interval. The systemctl status command shows that haproxy is currently active. The actual running processes of the haproxy service are listed at the end of the output. Because the whole command line is shown, you can see the configuration file (haproxy.cfg) and PID file (haproxy.pid) associated with the command.
Run those same commands against any Clone Set resource to see its current level of activity and details about the commands the service runs. Note that systemd services controlled by Pacemaker are set to disabled by systemd, because you want Pacemaker, not the system boot process, to control when the service starts or stops.
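A quick way to confirm both points, reusing the haproxy example above:

# The unit is disabled at boot because Pacemaker, not systemd, starts it:
$ sudo systemctl is-enabled haproxy
disabled

# Pacemaker nevertheless reports the cloned resource as started:
$ sudo pcs status | grep -A 1 'haproxy-clone'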
For more information about Clone Set resources, see Resource Clones in the High Availability Add-On Reference: https://access.redhat.com/documentation/zh-CN/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/ch-advancedresource-HAAR.html#s1-resourceclones-HAAR
Multi-state resources (Master/Slave)
The Galera and Redis services run as multi-state resources. Here is what the pcs status output looks like for those two types of services:
[...]
Master/Slave Set: galera-master [galera]
    Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
    Masters: [ overcloud-controller-2 ]
    Slaves: [ overcloud-controller-0 overcloud-controller-1 ]
[...]
For the galera-master resource, all three controllers run as Galera masters. For the redis-master resource, overcloud-controller-2 runs as the master, while the other two controllers run as slaves. This means that the galera service runs under one set of constraints on all three controllers, whereas redis may be subject to different constraints on the master and slave controllers.
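To inspect the rules a resource actually runs under, you can list the cluster's constraints and the resource definition; a minimal sketch:

# List all location, ordering, and colocation constraints:
$ sudo pcs constraint show

# Show the full definition of the redis multi-state resource:
$ sudo pcs resource show redis-master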
For more information about multi-state resources, see Multi-State Resources: Resources That Have Multiple Modes in the High Availability Add-On Reference.
For more information on troubleshooting the Galera resource, see Chapter 6, Using Galera.