Chapter 1. Introduction


Red Hat OpenStack Platform director creates a cloud environment called the Overcloud. The Overcloud contains a set of different node types that perform certain roles. One of these node types is the Controller node. The Controller is responsible for Overcloud administration and uses specific OpenStack components. An Overcloud uses multiple Controllers together as a high availability cluster, which ensures maximum operational performance for your OpenStack services. In addition, the cluster provides load balancing for access to the OpenStack services, which evenly distributes traffic to the Controller nodes and reduces server overload for each node.

It is also possible to use an external load balancer to perform this distribution. For example, an organization might use its own hardware-based load balancer to handle traffic distribution to the Controller nodes. This guide provides the details necessary to configure both an external load balancer and the Overcloud creation. This involves the following process:

  1. Installing and Configuring the Load Balancer - This guide includes some HAProxy options for load balancing and services. Translate these settings to the equivalent options for your own external load balancer.
  2. Configuring and Deploying the Overcloud - This guide includes some Heat template parameters that help the Overcloud integrate with the external load balancer. These mainly involve the IP addresses of the load balancer and the potential nodes. This guide also includes the command to start the Overcloud deployment and its configuration to use the external load balancer. An example of both follows this list.
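
The integration is typically expressed as a Heat environment file that is passed to the deployment command. The following is a minimal sketch, not a complete configuration: the parameter names (such as PublicVirtualFixedIPs) and the external-loadbalancer-vip.yaml environment file path are assumptions based on common director releases, and the IP addresses are placeholders, so confirm both against your own director version and network plan.

    An example environment file (~/external-lb.yaml):

    parameter_defaults:
      # Virtual IPs that the external load balancer presents on each network
      PublicVirtualFixedIPs: [{'ip_address':'172.16.23.250'}]
      InternalApiVirtualFixedIPs: [{'ip_address':'172.16.20.250'}]
      StorageVirtualFixedIPs: [{'ip_address':'172.16.21.250'}]
      StorageMgmtVirtualFixedIPs: [{'ip_address':'172.16.19.250'}]

    An example deployment command that includes the environment file:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/external-loadbalancer-vip.yaml \
      -e ~/external-lb.yaml \
      [ADDITIONAL OPTIONS]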

1.1. Using Load Balancing in the Overcloud

The Overcloud uses an open source tool called HAProxy. HAProxy load-balances traffic to Controller nodes running OpenStack services. The haproxy package contains the haproxy daemon, which is started from the haproxy systemd service, along with logging features and sample configurations. However, the Overcloud also uses a high availability resource manager (Pacemaker) to control HAProxy itself as a highly available service (haproxy-clone). This means that HAProxy runs on each Controller node and distributes traffic according to a set of rules defined in each node's configuration.
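
For illustration, the HAProxy configuration on each Controller node defines a listen section for each OpenStack service, binds to the service's virtual IPs, and spreads requests across the Controller nodes with health checks. The following is a minimal sketch for a single service; the section name, ports, and Controller IP addresses are placeholder values, not settings taken from a real deployment.

    listen keystone_public
      # Virtual IPs on the External and Internal API networks
      bind 172.16.23.250:5000
      bind 172.16.20.250:5000
      # One backend entry per Controller node, with health checking
      server overcloud-controller-0 172.16.20.11:5000 check fall 5 inter 2000 rise 2
      server overcloud-controller-1 172.16.20.12:5000 check fall 5 inter 2000 rise 2
      server overcloud-controller-2 172.16.20.13:5000 check fall 5 inter 2000 rise 2

An external load balancer performs the same role using its own syntax for virtual servers, member pools, and health checks.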

1.2. Defining an Example Scenario

This guide uses the following scenario as an example:

  • An external load balancing server using HAProxy. This demonstrates how to use a federated HAProxy server. You can substitute this for another supported external load balancer.
  • One OpenStack Platform director node
  • An Overcloud that consists of:
      ◦ 3 Controller nodes in a highly available cluster
      ◦ 1 Compute node
      ◦ Network isolation with VLANs

The scenario uses the following IP address assignments for each network:

  • Internal API: 172.16.20.0/24
  • Tenant: 172.16.22.0/24
  • Storage: 172.16.21.0/24
  • Storage Management: 172.16.19.0/24
  • External: 172.16.23.0/24

These IP ranges include the IP assignments for the Controller nodes and the virtual IPs that the load balancer binds to the OpenStack services.
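
As an illustration, the assignments on the Internal API network might look like the following. The node addresses and the virtual IP are hypothetical values within the 172.16.20.0/24 range, not prescribed addresses.

    Internal API network (172.16.20.0/24)
      overcloud-controller-0        172.16.20.11
      overcloud-controller-1        172.16.20.12
      overcloud-controller-2        172.16.20.13
      Virtual IP (load balancer)    172.16.20.250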
