Appendix A. Building the Red Hat OpenStack Platform 8 HA Environment
The Red Hat Ceph Storage for the Overcloud guide provides instructions for deploying the type of highly available OpenStack environment described in this document. The Director Installation and Usage guide was also used for reference throughout the process.
A.1. Hardware Specification
The following tables show the specifications used in the deployment tested for this document. For better results, increase the CPUs, memory, disk space, or NICs in your own deployment.
Number of Computers | Assigned as… | CPUs | Memory | Disk space | Power mgmt. | NICs |
---|---|---|---|---|---|---|
1 | Director node | 4 | 6144 MB | 40 GB | IPMI | 2 (1 external; 1 on Provisioning) + 1 IPMI |
3 | Controller nodes | 4 | 6144 MB | 40 GB | IPMI | 3 (2 bonded on Overcloud; 1 on Provisioning) + 1 IPMI |
3 | Ceph Storage nodes | 4 | 6144 MB | 40 GB | IPMI | 3 (2 bonded on Overcloud; 1 on Provisioning) + 1 IPMI |
2 | Compute nodes (add more as needed) | 4 | 6144 MB | 40 GB | IPMI | 3 (2 bonded on Overcloud; 1 on Provisioning) + 1 IPMI |
The following list describes the general functions and connections associated with each non-director assignment:
- Controller nodes
- Most OpenStack services, other than storage, run on these controller nodes. All services are replicated across the three nodes (some active-active; some active-passive). Three nodes are required for reliable HA.
- Ceph storage nodes
- Storage services run on these nodes, providing pools of Ceph storage areas to the compute nodes. Again, three nodes are needed for HA.
- Compute nodes
- Virtual machines actually run on these compute nodes. You can have as many compute nodes as you need to meet your capacity requirements, including the ability to shut down compute nodes and migrate virtual machines between those nodes. Compute nodes must be connected to the storage network (so the VMs can access storage) and Tenant network (so VMs can access VMs on other compute nodes and also access public networks, to make their services available).
Physical NICs | Reason for Network | VLANs | Used to… |
---|---|---|---|
eth0 | Provisioning network (undercloud) | N/A | Manage all nodes from director (undercloud) |
eth1 and eth2 | Controller/External (overcloud) | N/A | Bonded NICs with VLANs |
 | External Network | VLAN 100 | Allow access from the outside world to Tenant networks, Internal API, and OpenStack Horizon Dashboard |
 | Internal API | VLAN 201 | Provide access to the internal API between compute and controller nodes |
 | Storage access | VLAN 202 | Connect compute nodes to underlying Storage media |
 | Storage management | VLAN 203 | Manage storage media |
 | Tenant network | VLAN 204 | Provide tenant network services to OpenStack |
The following are also required:
- Provisioning network switch
- This switch must be able to connect the director system (undercloud) to all computers in the Red Hat OpenStack Platform 8 environment (overcloud). The NIC on each overcloud node that is connected to this switch must be able to PXE boot from the director. Also verify that portfast is enabled on the switch ports.
- Controller/External network switch
- This switch must be configured to do VLAN tagging for the VLANs shown in Figure 1. Only VLAN 100 traffic should be allowed to external networks.
- Fencing Hardware
- Hardware defined for use with Pacemaker is supported in this configuration. Supported fencing devices can be determined using the Pacemaker tool stonith. See Fencing the Controller Nodes in the Director Installation and Usage guide for details.
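For example, the fence agents installed on a Controller node, and the parameters each accepts, can be inspected with the standard pcs tooling (the exact output depends on which fence-agents packages are installed in your environment):

```shell
# List the fence agents available to Pacemaker on this node
sudo pcs stonith list

# Show the parameters accepted by the IPMI fence agent
sudo pcs stonith describe fence_ipmilan
```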
A.2. Undercloud Configuration Files
This section shows relevant configuration files from the test configuration used for this document. If you change IP address ranges, consider making a diagram similar to Figure 1.1, “OpenStack HA environment deployed through director” to track your resulting address settings.
instackenv.json
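This file registers the power-management details of each overcloud node with the director. Its general shape is sketched below with placeholder credentials, addresses, and MAC values, not the values from the tested deployment:

```json
{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "pm_addr": "192.0.2.205",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "mac": ["52:54:00:aa:bb:cc"],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}
```

One entry per node; the cpu, memory, and disk values here match the hardware specification table in Section A.1.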
undercloud.conf
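This file ties the director to the Provisioning network on eth0. A minimal sketch with placeholder addresses (your CIDRs and ranges will differ) might look like:

```ini
[DEFAULT]
local_ip = 192.0.2.1/24
network_gateway = 192.0.2.1
undercloud_public_vip = 192.0.2.2
undercloud_admin_vip = 192.0.2.3
local_interface = eth0
network_cidr = 192.0.2.0/24
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
inspection_iprange = 192.0.2.100,192.0.2.120
```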
network-environment.yaml
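This file maps the isolated overcloud networks to the VLAN IDs listed in the table above. A hedged sketch, with placeholder CIDRs and NIC template paths (the template locations are assumptions for illustration):

```yaml
resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml

parameter_defaults:
  ExternalNetCidr: 10.1.1.0/24
  InternalApiNetCidr: 172.16.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  TenantNetCidr: 172.17.0.0/24
  ExternalNetworkVlanID: 100
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 204
```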
A.3. Overcloud Configuration Files
The following configuration files reflect the actual overcloud settings from the deployment used for this document.
/etc/haproxy/haproxy.cfg (Controller Nodes)
This file defines the services that HAProxy manages and how each one is monitored. It is present, with identical contents, on all Controller nodes.
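A single service entry in this file typically binds the service's virtual IP and lists each Controller node as a backend with health checks. A hedged sketch for the Horizon dashboard, using placeholder VIP and node addresses:

```
listen horizon
  bind 172.16.0.10:80
  mode http
  server overcloud-controller-0 172.16.0.13:80 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.0.14:80 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.0.15:80 check fall 5 inter 2000 rise 2
```

The check, fall, inter, and rise options make HAProxy probe each backend every 2000 ms, marking a Controller down after 5 failed checks and up again after 2 successful ones.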
/etc/corosync/corosync.conf file (Controller Nodes)
This file defines the cluster infrastructure, and is available on all Controller nodes.
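Its general shape is a totem section naming the cluster, a nodelist enumerating the three Controllers, and a quorum section. A hedged sketch with placeholder node names:

```
totem {
  version: 2
  secauth: off
  cluster_name: tripleo_cluster
  transport: udpu
}

nodelist {
  node {
    ring0_addr: overcloud-controller-0
    nodeid: 1
  }
  node {
    ring0_addr: overcloud-controller-1
    nodeid: 2
  }
  node {
    ring0_addr: overcloud-controller-2
    nodeid: 3
  }
}

quorum {
  provider: corosync_votequorum
}
```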
/etc/ceph/ceph.conf (Ceph Nodes)
This file contains Ceph high availability settings, including the hostnames and IP addresses of the Ceph monitor (MON) hosts.
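The monitor-related portion of this file typically looks like the following (a hedged sketch; the fsid, hostnames, and Storage-network addresses are placeholders, assuming the monitors run on the Controller nodes):

```ini
[global]
fsid = 8c835acc-6838-11e5-bb96-2cc260178a92
mon_initial_members = overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
mon_host = 172.18.0.15,172.18.0.16,172.18.0.17
osd_pool_default_size = 3
```

An osd_pool_default_size of 3 keeps three replicas of each object, one per Ceph Storage node, matching the three-node HA layout described in Section A.1.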