
Chapter 1. Introduction


Red Hat OpenStack Platform director creates a cloud environment called the Overcloud. The Overcloud contains a set of different node types that perform certain roles. One of these node types is the Controller node. The Controller is responsible for Overcloud administration and uses specific OpenStack components. An Overcloud uses multiple Controllers together as a high availability cluster, which ensures maximum operational performance for your OpenStack services. In addition, the cluster provides load balancing for access to the OpenStack services, which evenly distributes traffic to the Controller nodes and reduces server overload for each node.

It is also possible to use an external load balancer to perform this distribution. For example, an organization might use its own hardware-based load balancer to handle traffic distribution to the Controller nodes. This guide provides the details you need to configure both an external load balancer and the Overcloud that uses it. This involves the following process:

  1. Installing and Configuring the Load Balancer - This guide includes some HAProxy options for load balancing and services. Translate these settings to the equivalent settings on your own external load balancer.
  2. Configuring and Deploying the Overcloud - This guide includes some Heat template parameters that help the Overcloud integrate with the external load balancer. This mainly involves the IP addresses of the load balancer and potential nodes. This guide also includes the command to start the Overcloud deployment and its configuration to use the external load balancer, as shown in the sketch after this list.
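
As an illustration of the second step, the deployment command typically passes one or more extra environment files that carry the external load balancer settings. The following is a minimal sketch: the file name external-lb.yaml is a placeholder for your own environment file, and the exact options vary between Red Hat OpenStack Platform versions, so check them against your installed templates.

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e /home/stack/templates/external-lb.yaml \
      --control-scale 3 --compute-scale 1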

1.1. Using Load Balancing in the Overcloud

The Overcloud uses an open source tool called HAProxy. HAProxy load-balances traffic to Controller nodes running OpenStack services. The haproxy package contains the haproxy daemon, which is started from the haproxy systemd service, along with logging features and sample configurations. However, the Overcloud also uses a high availability resource manager (Pacemaker) to control HAProxy itself as a highly available service (haproxy-clone). This means HAProxy runs on each Controller node and distributes traffic according to a set of rules defined in each configuration.
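
To illustrate the kind of rules involved, the following is a minimal sketch of one HAProxy listen block that balances a single service across three Controller nodes. The service name, port, virtual IP, and Controller addresses are example values, not the exact configuration that director generates.

    # Minimal example of one service definition (all values are illustrative only)
    listen keystone_public
      # Virtual IP address and port that clients connect to
      bind 172.16.23.250:5000
      # Backend Controller nodes with health checks
      server controller-0 172.16.20.10:5000 check fall 5 inter 2000 rise 2
      server controller-1 172.16.20.11:5000 check fall 5 inter 2000 rise 2
      server controller-2 172.16.20.12:5000 check fall 5 inter 2000 rise 2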

1.2. Defining an Example Scenario

This article uses the following scenario as an example:

  • An external load balancing server using HAProxy. This demonstrates how to use a federated HAProxy server. You can substitute this for another supported external load balancer.
  • One OpenStack Platform director node
  • An Overcloud that consists of:
      ◦ 3 Controller nodes in a highly available cluster
      ◦ 1 Compute node
  • Network isolation with VLANs

The scenario uses the following IP address assignments for each network:

  • Internal API: 172.16.20.0/24
  • Project: 172.16.22.0/24
  • Storage: 172.16.21.0/24
  • Storage Management: 172.16.19.0/24
  • External: 172.16.23.0/24

These IP ranges include the IP assignments for the Controller nodes and the virtual IPs that the load balancer binds to the OpenStack services.
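
For example, the virtual IPs could be drawn from these ranges and passed to the Overcloud through Heat parameters similar to the following sketch. The parameter names shown here are assumptions based on the tripleo-heat-templates external load balancer parameters, and the addresses are example values; verify both against the templates in your environment before deploying.

    parameter_defaults:
      # Assumed parameter that disables the Overcloud's built-in HAProxy when an
      # external load balancer handles traffic distribution instead.
      EnableLoadBalancer: false
      # Example virtual IPs, one per network, for the external load balancer to
      # bind OpenStack services to (drawn from the subnets listed above).
      PublicVirtualFixedIPs: [{'ip_address': '172.16.23.250'}]
      InternalApiVirtualFixedIPs: [{'ip_address': '172.16.20.250'}]
      StorageVirtualFixedIPs: [{'ip_address': '172.16.21.250'}]
      StorageMgmtVirtualFixedIPs: [{'ip_address': '172.16.19.250'}]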
