Chapter 2. Recommended specifications for your large Red Hat OpenStack deployment
You can use the provided recommendations to scale your large cluster deployment.
The values in the following procedures are based on testing that the Red Hat OpenStack Platform Performance & Scale Team performed and can vary according to individual environments.
2.1. Undercloud system requirements
For best performance, install the undercloud node on a physical server. However, if you use a virtualized undercloud node, ensure that the virtual machine has resources similar to those of the physical machine described in the following table.
| System requirement | Description |
| --- | --- |
| Counts | 1 |
| CPUs | 32 cores, 64 threads |
| Disk | 500 GB root disk (1x SSD, or 2x 7200 RPM hard drives in RAID 1) |
| Memory | 256 GB |
| Network | 25 Gbps network interfaces or 10 Gbps network interfaces |
2.2. Overcloud Controller nodes system requirements
All control plane services must run on exactly 3 nodes. Typically, all control plane services are deployed across 3 Controller nodes.
Scaling controller services
To increase the resources available for controller services, you can scale these services to additional nodes. For example, you can deploy the `db` or `messaging` controller services on dedicated nodes to reduce the load on the Controller nodes.
To scale controller services, use composable roles to define the set of services that you want to scale. When you use composable roles, each service must run on exactly 3 additional dedicated nodes and the total number of nodes in the control plane must be odd to maintain Pacemaker quorum.
The control plane in this example consists of the following 9 nodes:
- 3 Controller nodes
- 3 Database nodes
- 3 Messaging nodes
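As a sketch of this layout, if your version of RHOSP ships the predefined Database and Messaging roles, you can generate a combined roles file with a command such as `openstack overcloud roles generate -o /home/stack/roles_data.yaml Controller Database Messaging`. The following excerpt is a minimal, illustrative sketch of a dedicated Database role; the exact service list depends on your RHOSP version:

```yaml
# Illustrative excerpt of a roles_data.yaml with a dedicated Database role.
# This is a sketch, not a complete role definition; the full service list
# depends on your RHOSP version.
- name: Database
  description: Dedicated role for the MariaDB/Galera database services.
  ServicesDefault:
    - OS::TripleO::Services::MySQL
    - OS::TripleO::Services::MySQLClient
    # ... plus the common services, such as time synchronization and
    # TripleO package management, that every role requires
```

To deploy 3 nodes for each role, set the corresponding role counts, for example `DatabaseCount: 3` and `MessagingCount: 3`, under `parameter_defaults` in an environment file.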
For more information, see Composable services and custom roles in Customizing your Red Hat OpenStack Platform deployment.
For questions about scaling controller services with composable roles, contact Red Hat Global Consulting.
Storage considerations
Include sufficient storage when you plan Controller nodes in your overcloud deployment.
If your deployment does not include Ceph storage, use a dedicated disk or node for the Object Storage service (swift) that overcloud workloads or the Image service (glance) can use. If you use Object Storage on Controller nodes, use an NVMe device separate from the root disk to reduce disk I/O during object data storage.
Uploading volumes from the Block Storage service (cinder) to the Image service (glance) requires extensive concurrent operations and puts a considerable I/O load on the Controller disk. This workflow is not recommended for bulk operations, but if it is necessary, use SSD disks on the Controller node to provide higher IOPS for these operations.
- Older Telemetry services based on Ceilometer, gnocchi, and the Alarming service (aodh) are disabled by default and are not recommended because of their negative effect on performance. If you enable these Telemetry services, gnocchi is I/O intensive and sends metrics to Object Storage nodes when Ceph is not enabled.
- All large-scale testing is performed on environments with a director-deployed Ceph cluster.
CPU considerations
The number of API calls, AMQP messages, and database queries that the Controller nodes receive influences the CPU and memory consumption on the Controller nodes. The ability of each Red Hat OpenStack Platform (RHOSP) component to process and perform tasks concurrently is also limited by the number of worker threads configured for that component. To avoid a degradation of performance, the maximum number of worker threads that RHOSP director configures for components on a Controller node is limited by the CPU count.
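For example, the following environment file sketch caps the worker counts of a few API services at the CPU count of the Controller node. Treat the choice of parameters (`KeystoneWorkers`, `NeutronWorkers`, `NovaWorkers`) and the value of 32 as illustrative assumptions; confirm the worker parameters that your RHOSP version supports:

```yaml
# workers.yaml - illustrative sketch; confirm parameter names and choose
# values appropriate to your Controller CPU count and RHOSP version.
parameter_defaults:
  # Cap API worker processes so that they do not exceed the Controller CPU count.
  KeystoneWorkers: 32
  NeutronWorkers: 32
  NovaWorkers: 32
```

Include the file when you deploy, for example with `openstack overcloud deploy ... -e workers.yaml`.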
The following specifications are recommended for large scale environments with more than 700 nodes when you use Ceph Storage nodes in your deployment:
| System requirement | Description |
| --- | --- |
| Counts | 3 Controller nodes with controller services contained within the Controller role. Optionally, to scale controller services on dedicated nodes, use composable services. For more information, see Composable services and custom roles in Customizing your Red Hat OpenStack Platform deployment. |
| CPUs | 2 sockets, each with 32 cores, 64 threads |
| Disk | 500 GB root disk (1x SSD, or 2x 7200 RPM hard drives in RAID 1); 500 GB dedicated disk for the Object Storage service (swift) (1x SSD or 1x NVMe); optional: 500 GB disk for image caching (1x SSD, or 2x 7200 RPM hard drives in RAID 1) |
| Memory | 384 GB |
| Network | 25 Gbps network interfaces or 10 Gbps network interfaces. If you use 10 Gbps network interfaces, use network bonding to create two bonds, as shown in the bonding sketch after these tables. |
The following specifications are recommended for large scale environments with more than 700 nodes when you do not use Ceph Storage nodes in your deployment:
| System requirement | Description |
| --- | --- |
| Counts | 3 Controller nodes with controller services contained within the Controller role. Optionally, to scale controller services on dedicated nodes, use composable services. For more information, see Composable services and custom roles in Customizing your Red Hat OpenStack Platform deployment. |
| CPUs | 2 sockets, each with 32 cores, 64 threads |
| Disk | 500 GB root disk (1x SSD); 500 GB dedicated disk for the Object Storage service (swift) (1x SSD or 1x NVMe); optional: 500 GB disk for image caching (1x SSD, or 2x 7200 RPM hard drives in RAID 1) |
| Memory | 384 GB |
| Network | 25 Gbps network interfaces or 10 Gbps network interfaces. If you use 10 Gbps network interfaces, use network bonding to create two bonds, as shown in the bonding sketch after these tables. |
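The two preceding tables recommend creating two bonds when you use 10 Gbps interfaces. The following os-net-config excerpt shows one possible shape for such a configuration; the NIC names (nic2 to nic5), bond names, bonding options, and the mapping of traffic types to bonds are illustrative assumptions that you must adapt to your hardware and network layout:

```yaml
# Illustrative os-net-config excerpt; NIC names, bond names, and bonding
# options are assumptions, not tested values.
network_config:
  - type: linux_bond
    name: bond0                      # for example, control plane traffic
    bonding_options: "mode=802.3ad lacp_rate=fast"
    members:
      - type: interface
        name: nic2
      - type: interface
        name: nic3
  - type: linux_bond
    name: bond1                      # for example, tenant and storage traffic
    bonding_options: "mode=802.3ad lacp_rate=fast"
    members:
      - type: interface
        name: nic4
      - type: interface
        name: nic5
```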
2.3. Overcloud Compute nodes system requirements
When you plan your overcloud deployment, review the recommended system requirements for Compute nodes.
| System requirement | Description |
| --- | --- |
| Counts | Red Hat has tested a scale of 750 nodes with various composable compute roles. |
| CPUs | 2 sockets, each with 12 cores, 24 threads |
| Disk | 500 GB root disk |
| Memory | 128 GB (64 GB per NUMA node). 2 GB of RAM is reserved for the host by default; with Distributed Virtual Routing, increase the reserved RAM to 5 GB, as shown in the sketch after this table. |
| Network | 25 Gbps network interfaces or 10 Gbps network interfaces. If you use 10 Gbps network interfaces, use network bonding to create two bonds, as shown in the bonding sketch in Section 2.2. |
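The following environment file is a minimal sketch for raising the reserved host memory on deployments that use Distributed Virtual Routing. The `NovaReservedHostMemory` parameter takes a value in MB; the file name is an assumption:

```yaml
# reserved-memory.yaml - illustrative sketch; verify the parameter against
# your RHOSP version before use.
parameter_defaults:
  # Reserve 5 GB (5120 MB) of host RAM on each Compute node, as recommended
  # above for Distributed Virtual Routing deployments.
  NovaReservedHostMemory: 5120
```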
2.4. Red Hat Ceph Storage nodes system requirements
For Ceph Storage node system requirements, see the following resources:
- For more information about hardware prerequisites for Ceph nodes, see General principles for selecting hardware in the Red Hat Ceph Storage 4 Hardware Guide.
- For more information about deployment configuration for Ceph nodes, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director.
- For more information about changing the storage replication number, see Pools, placement groups, and CRUSH Configuration reference in the Red Hat Ceph Storage Configuration Guide.
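As a minimal sketch of changing the replication number for a director-deployed Ceph cluster, you can set the default pool size in an environment file. The `CephPoolDefaultSize` parameter and the value shown are assumptions to verify against your RHOSP and Red Hat Ceph Storage versions:

```yaml
# ceph-replication.yaml - illustrative sketch; confirm parameter availability
# for your RHOSP and Red Hat Ceph Storage versions.
parameter_defaults:
  # Number of data replicas for the Ceph pools that director creates.
  CephPoolDefaultSize: 3
```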