Appendix A. Building the Red Hat OpenStack Platform 8 HA Environment


The Red Hat Ceph Storage for the Overcloud guide provides instructions for deploying the type of highly available OpenStack environment described in this document. The Director Installation and Usage guide was also used for reference throughout the process.

A.1. Hardware Specification

The following tables show the specifications used by the deployment tested for this document. For better results, increase the CPU count, memory, storage, or number of NICs on your own test deployment.

Table A.1. Physical Computers

Number of Computers | Assigned as                       | CPUs | Memory  | Disk space | Power mgmt. | NICs
1                   | Director node                     | 4    | 6144 MB | 40 GB      | IPMI        | 2 (1 external; 1 on Provisioning) + 1 IPMI
3                   | Controller nodes                  | 4    | 6144 MB | 40 GB      | IPMI        | 3 (2 bonded on Overcloud; 1 on Provisioning) + 1 IPMI
3                   | Ceph Storage nodes                | 4    | 6144 MB | 40 GB      | IPMI        | 3 (2 bonded on Overcloud; 1 on Provisioning) + 1 IPMI
2                   | Compute node (add more as needed) | 4    | 6144 MB | 40 GB      | IPMI        | 3 (2 bonded on Overcloud; 1 on Provisioning) + 1 IPMI

The following list describes the general functions and connections associated with each non-director assignment:

Controller nodes
Most OpenStack services, other than storage, run on these controller nodes. All services are replicated across the three nodes (some active-active; some active-passive). Three nodes are required for reliable HA.
Ceph storage nodes
Storage services run on these nodes, providing Ceph storage pools to the compute nodes. Again, three nodes are needed for HA.
Compute nodes
Virtual machines run on these compute nodes. You can add as many compute nodes as you need to meet your capacity requirements, and you can shut down compute nodes and migrate virtual machines between them (see the sketch below). Compute nodes must be connected to the Storage network (so the VMs can access storage) and to the Tenant network (so VMs can reach VMs on other compute nodes and also access public networks to make their services available).
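
The following is a minimal sketch of that kind of migration, run against the overcloud from the director node. The instance name myinstance and the target host overcloud-compute-1.localdomain are placeholders for your own values, and shared Ceph storage (as in this deployment) is assumed so that instance disks do not need to be copied:

$ source ~/overcloudrc
# Move a running instance to another compute node
$ nova live-migration myinstance overcloud-compute-1.localdomain
# Confirm which compute node now hosts the instance
$ nova show myinstance | grep OS-EXT-SRV-ATTR:host
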
Table A.2. Physical and Virtual Networks

Physical NICs | Reason for Network                | VLANs    | Used to
eth0          | Provisioning network (undercloud) | N/A      | Manage all nodes from director (undercloud)
eth1 and eth2 | Controller/External (overcloud)   | N/A      | Bonded NICs with VLANs
              | External Network                  | VLAN 100 | Allow access from outside world to Tenant networks, Internal API, and OpenStack Horizon Dashboard
              | Internal API                      | VLAN 201 | Provide access to the internal API between compute and controller nodes
              | Storage access                    | VLAN 202 | Connect compute nodes to underlying Storage media
              | Storage management                | VLAN 203 | Manage storage media
              | Tenant network                    | VLAN 204 | Provide tenant network services to OpenStack

The following are also required:

Provisioning network switch
This switch must be able to connect the director system (undercloud) to all computers in the Red Hat OpenStack Platform 8 environment (overcloud). The NIC on each overcloud node that is connected to this switch must be able to PXE boot from the director. Also check that portfast is enabled on the switch.
Controller/External network switch
This switch must be configured to do VLAN tagging for the VLANs shown in Figure 1. Only VLAN 100 traffic should be allowed to external networks.
Fencing Hardware
Hardware defined for use with Pacemaker is supported in this configuration. Supported fencing devices can be determined using the Pacemaker stonith tooling, as in the example below. See Fencing the Controller Nodes in the Director Installation and Usage guide for details.
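
For example, the fence agents available to Pacemaker can be listed from any controller node once the cluster software is installed. The pcs commands below are standard Pacemaker tooling rather than anything specific to this deployment:

# List the available fencing (stonith) agents
$ sudo pcs stonith list
# Show the parameters accepted by a specific agent, such as fence_ipmilan
$ sudo pcs stonith describe fence_ipmilan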

A.2. Undercloud Configuration Files

This section shows relevant configuration files from the test configuration used for this document. If you change IP address ranges, consider making a diagram similar to Figure 1.1, “OpenStack HA environment deployed through director” to track your resulting address settings.

instackenv.json

{
      "nodes": [
        {
          "pm_password": "testpass",
          "memory": "6144",
          "pm_addr": "10.100.0.11",
          "mac": [
            "2c:c2:60:3b:b3:94"
          ],
          "pm_type": "pxe_ipmitool",
          "disk": "40",
          "arch": "x86_64",
          "cpu": "1",
          "pm_user": "admin"
        },
        {
          "pm_password": "testpass",
          "memory": "6144",
          "pm_addr": "10.100.0.12",
          "mac": [
            "2c:c2:60:51:b7:fb"
          ],
          "pm_type": "pxe_ipmitool",
          "disk": "40",
          "arch": "x86_64",
          "cpu": "1",
          "pm_user": "admin"
        },
        {
          "pm_password": "testpass",
          "memory": "6144",
          "pm_addr": "10.100.0.13",
          "mac": [
            "2c:c2:60:76:ce:a5"
          ],
          "pm_type": "pxe_ipmitool",
          "disk": "40",
          "arch": "x86_64",
          "cpu": "1",
          "pm_user": "admin"
        },
        {
          "pm_password": "testpass",
          "memory": "6144",
          "pm_addr": "10.100.0.51",
          "mac": [
            "2c:c2:60:08:b1:e2"
          ],
          "pm_type": "pxe_ipmitool",
          "disk": "40",
          "arch": "x86_64",
          "cpu": "1",
          "pm_user": "admin"
        },
        {
          "pm_password": "testpass",
          "memory": "6144",
          "pm_addr": "10.100.0.52",
          "mac": [
            "2c:c2:60:20:a1:9e"
          ],
          "pm_type": "pxe_ipmitool",
          "disk": "40",
          "arch": "x86_64",
          "cpu": "1",
          "pm_user": "admin"
        },
        {
          "pm_password": "testpass",
          "memory": "6144",
          "pm_addr": "10.100.0.53",
          "mac": [
            "2c:c2:60:58:10:33"
          ],
          "pm_type": "pxe_ipmitool",
          "disk": "40",
          "arch": "x86_64",
          "cpu": "1",
          "pm_user": "admin"
        },
        {
          "pm_password": "testpass",
          "memory": "6144",
          "pm_addr": "10.100.0.101",
          "mac": [
            "2c:c2:60:31:a9:55"
          ],
          "pm_type": "pxe_ipmitool",
          "disk": "40",
          "arch": "x86_64",
          "cpu": "2",
          "pm_user": "admin"
        },
        {
          "pm_password": "testpass",
          "memory": "6144",
          "pm_addr": "10.100.0.102",
          "mac": [
            "2c:c2:60:0d:e7:d1"
          ],
          "pm_type": "pxe_ipmitool",
          "disk": "40",
          "arch": "x86_64",
          "cpu": "2",
          "pm_user": "admin"
         }
      ],
      "overcloud": {"password": "7adbbbeedc5b7a07ba1917e1b3b228334f9a2d4e",
      "endpoint": "http://192.168.1.150:5000/v2.0/"
                   }
}
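
As a usage note, a node definition file like the one above is registered with the director as the stack user. In Red Hat OpenStack Platform 8 the registration and introspection steps look roughly like the following sketch; the file path is an assumption based on this example:

$ source ~/stackrc
# Import the node definitions into the director
$ openstack baremetal import --json ~/instackenv.json
# Assign the kernel and ramdisk used to PXE boot the nodes
$ openstack baremetal configure boot
# Inspect the hardware of all registered nodes
$ openstack baremetal introspection bulk start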

undercloud.conf

[DEFAULT]
image_path = /home/stack/images
local_ip = 10.200.0.1/24
undercloud_public_vip = 10.200.0.2
undercloud_admin_vip = 10.200.0.3
undercloud_service_certificate = /etc/pki/instack-certs/undercloud.pem
local_interface = eth0
masquerade_network = 10.200.0.0/24
dhcp_start = 10.200.0.5
dhcp_end = 10.200.0.24
network_cidr = 10.200.0.0/24
network_gateway = 10.200.0.1
#discovery_interface = br-ctlplane
discovery_iprange = 10.200.0.150,10.200.0.200
discovery_runbench = 1
undercloud_admin_password = testpass
...
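
This file is read when the undercloud is installed (or re-run after a configuration change); a minimal sketch of that step, run as the stack user on the director node:

# Apply undercloud.conf and install or update the undercloud services
$ openstack undercloud install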

network-environment.yaml

resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml

parameter_defaults:
  InternalApiNetCidr: 172.16.0.0/24
  TenantNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  ExternalNetCidr: 192.168.1.0/24
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '192.168.1.150', 'end': '192.168.1.199'}]
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 204
  ExternalNetworkVlanID: 100
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 192.168.1.1
  # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # Customize bonding options if required
  BondInterfaceOvsOptions:
    "bond_mode=active-backup lacp=off other_config:bond-miimon-interval=100"

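This environment file is passed to the overcloud deployment command together with the network isolation templates. The exact command depends on your flavors, node counts, and any additional environment files (for example, Ceph storage settings); a hedged sketch matching the node counts in Table A.1 might look like the following, with the NTP server as a placeholder:

$ source ~/stackrc
$ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/templates/network-environment.yaml \
    --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 \
    --ntp-server 0.rhel.pool.ntp.org
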
A.3. Overcloud Configuration Files

The following configuration files reflect the actual overcloud settings from the deployment used for this document.

/etc/haproxy/haproxy.cfg (Controller Nodes)

This file defines the services that HAProxy manages and the settings used to monitor them. It is the same on all Controller nodes.

# This file managed by Puppet
global
  daemon
  group  haproxy
  log  /dev/log local0
  maxconn  10000
  pidfile  /var/run/haproxy.pid
  user  haproxy

defaults
  log  global
  mode  tcp
  option  tcpka
  option  tcplog
  retries  3
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

listen ceilometer
  bind 172.16.0.10:8777
  bind 192.168.1.150:8777
  server overcloud-controller-0 172.16.0.13:8777 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:8777 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:8777 check fall 5 inter 2000 rise 2

listen cinder
  bind 172.16.0.10:8776
  bind 192.168.1.150:8776
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:8776 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:8776 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:8776 check fall 5 inter 2000 rise 2

listen glance_api
  bind 172.18.0.10:9292
  bind 192.168.1.150:9292
  option httpchk GET /
  server overcloud-controller-0 172.18.0.17:9292 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.18.0.15:9292 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.18.0.16:9292 check fall 5 inter 2000 rise 2

listen glance_registry
  bind 172.16.0.10:9191
  server overcloud-controller-0 172.16.0.13:9191 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:9191 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:9191 check fall 5 inter 2000 rise 2

listen haproxy.stats
  bind 10.200.0.6:1993
  mode http
  stats enable
  stats uri /

listen heat_api
  bind 172.16.0.10:8004
  bind 192.168.1.150:8004
  mode http
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:8004 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:8004 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:8004 check fall 5 inter 2000 rise 2

listen heat_cfn
  bind 172.16.0.10:8000
  bind 192.168.1.150:8000
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:8000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:8000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:8000 check fall 5 inter 2000 rise 2

listen heat_cloudwatch
  bind 172.16.0.10:8003
  bind 192.168.1.150:8003
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:8003 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:8003 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:8003 check fall 5 inter 2000 rise 2

listen horizon
  bind 172.16.0.10:80
  bind 192.168.1.150:80
  cookie SERVERID insert indirect nocache
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:80 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:80 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:80 check fall 5 inter 2000 rise 2

listen keystone_admin
  bind 172.16.0.10:35357
  bind 192.168.1.150:35357
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:35357 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:35357 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:35357 check fall 5 inter 2000 rise 2

listen keystone_public
  bind 172.16.0.10:5000
  bind 192.168.1.150:5000
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:5000 check fall 5 inter 2000 rise 2

listen mysql
  bind 172.16.0.10:3306
  option httpchk
  stick on dst
  stick-table type ip size 1000
  timeout client 0
  timeout server 0
  server overcloud-controller-0 172.16.0.13:3306 backup check fall 5 inter 2000 on-marked-down shutdown-sessions port 9200 rise 2
  server overcloud-controller-1 172.16.0.14:3306 backup check fall 5 inter 2000 on-marked-down shutdown-sessions port 9200 rise 2
  server overcloud-controller-2 172.16.0.15:3306 backup check fall 5 inter 2000 on-marked-down shutdown-sessions port 9200 rise 2

listen neutron
  bind 172.16.0.10:9696
  bind 192.168.1.150:9696
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:9696 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:9696 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:9696 check fall 5 inter 2000 rise 2

listen nova_ec2
  bind 172.16.0.10:8773
  bind 192.168.1.150:8773
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:8773 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:8773 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:8773 check fall 5 inter 2000 rise 2

listen nova_metadata
  bind 172.16.0.10:8775
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:8775 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:8775 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:8775 check fall 5 inter 2000 rise 2

listen nova_novncproxy
  bind 172.16.0.10:6080
  bind 192.168.1.150:6080
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:6080 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:6080 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:6080 check fall 5 inter 2000 rise 2

listen nova_osapi
  bind 172.16.0.10:8774
  bind 192.168.1.150:8774
  option httpchk GET /
  server overcloud-controller-0 172.16.0.13:8774 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:8774 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:8774 check fall 5 inter 2000 rise 2

listen redis
  bind 172.16.0.11:6379
  balance first
  option tcp-check
  tcp-check send info\ replication\r\n
  tcp-check expect string role:master
  timeout client 0
  timeout server 0
  server overcloud-controller-0 172.16.0.13:6379 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:6379 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:6379 check fall 5 inter 2000 rise 2

listen swift_proxy_server
  bind 172.18.0.10:8080
  bind 192.168.1.150:8080
  option httpchk GET /info
  server overcloud-controller-0 172.18.0.17:8080 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.18.0.15:8080 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.18.0.16:8080 check fall 5 inter 2000 rise 2
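
The haproxy.stats listener defined above exposes a status page on the Provisioning network address of the controller (10.200.0.6 in this capture; the address differs per node). Backend health can be checked with a plain HTTP request, for example:

# Fetch the HAProxy statistics page from a controller node
$ curl http://10.200.0.6:1993/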

/etc/corosync/corosync.conf (Controller Nodes)

This file defines the cluster infrastructure, and is available on all Controller nodes.

totem {
version: 2
secauth: off
cluster_name: tripleo_cluster
transport: udpu
}

nodelist {
  node {
        ring0_addr: overcloud-controller-0
        nodeid: 1
       }
  node {
        ring0_addr: overcloud-controller-1
        nodeid: 2
       }
  node {
        ring0_addr: overcloud-controller-2
        nodeid: 3
       }
}

quorum {
provider: corosync_votequorum

}

logging {
to_syslog: yes
}
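
Once the cluster is formed, you can confirm from any controller node that all three members listed above have joined and that quorum is held. The pcs and corosync-quorumtool commands are standard cluster tooling on these nodes:

# Show cluster membership and the state of all Pacemaker-managed resources
$ sudo pcs status
# Show quorum details (expected votes, total votes, quorate yes/no)
$ sudo corosync-quorumtool -s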

/etc/ceph/ceph.conf (Ceph Nodes)

This file contains Ceph high availability settings, including the hostnames and IP addresses of the Ceph monitor hosts.

[global]
osd_pool_default_pgp_num = 128
osd_pool_default_min_size = 1
auth_service_required = cephx
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
fsid = 8c835acc-6838-11e5-bb96-2cc260178a92
cluster_network = 172.19.0.11/24
auth_supported = cephx
auth_cluster_required = cephx
mon_host = 172.18.0.17,172.18.0.15,172.18.0.16
auth_client_required = cephx
osd_pool_default_size = 3
osd_pool_default_pg_num = 128
public_network = 172.18.0.17/24
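
With the monitor hosts listed above running, overall cluster health can be checked from any node that has this ceph.conf and an admin keyring. These are standard Ceph client commands rather than anything specific to this deployment:

# Summarize cluster health, monitor quorum, and OSD/placement group status
$ sudo ceph status
# Show how OSDs map onto the Ceph storage nodes
$ sudo ceph osd tree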