Chapter 9. Planning your environment according to object maximums

Consider the following tested object maximums when you plan your OpenShift Container Platform cluster.

These guidelines are based on the largest possible cluster. For smaller clusters, the maximums are lower. There are many factors that influence the stated thresholds, including the etcd version or storage data format.

In most cases, exceeding these numbers results in lower overall performance. It does not necessarily mean that the cluster will fail.

9.1. OpenShift Container Platform tested cluster maximums for major releases

Tested cloud platforms for OpenShift Container Platform 3.x: Red Hat OpenStack Platform (RHOSP), Amazon Web Services, and Microsoft Azure. Tested cloud platforms for OpenShift Container Platform 4.x: Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Maximum type | 3.x tested maximum | 4.x tested maximum
Number of Nodes | 2,000 | 2,000
Number of Pods [a] | 150,000 | 150,000
Number of Pods per node | 250 | 500 [b]
Number of Pods per core | There is no default value. | There is no default value.
Number of Namespaces [c] | 10,000 | 10,000
Number of Builds | 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy
Number of Pods per namespace [d] | 25,000 | 25,000
Number of Services [e] | 10,000 | 10,000
Number of Services per Namespace | 5,000 | 5,000
Number of Back-ends per Service | 5,000 | 5,000
Number of Deployments per Namespace [d] | 2,000 | 2,000

[a] The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.
[b] This was tested on a cluster with 100 worker nodes with 500 Pods per worker node. The default maxPods is still 250. To get to 500 maxPods, the cluster must be created with a hostPrefix of 22 in the install-config.yaml file and maxPods set to 500 using a custom KubeletConfig (see the sketch after these notes). The maximum number of Pods with attached persistent volume claims (PVCs) depends on the storage backend from which the PVCs are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of Pods per node discussed in this document.
[c] When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.
[d] There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing of those state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
[e] Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given Service impacts the size of the Endpoints objects, which impacts the size of the data that is sent across the system.
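Note [b] combines an install-time network setting with a post-installation kubelet tuning to reach 500 Pods per node. The following is a minimal sketch of both pieces, not the exact test configuration: the cluster network CIDR and the machine config pool label are assumed example values.

# Excerpt from install-config.yaml. A hostPrefix of 22 assigns each node a /22
# from the cluster network, which allows more pod IP addresses per node than
# the default /23. The CIDR shown is an assumed example value.
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 22

# Custom KubeletConfig applied after installation to raise maxPods to 500.
# The pool label (custom-kubelet: large-pods) is an assumed example; use a
# label that matches your worker machine config pool.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods
  kubeletConfig:
    maxPods: 500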

9.2. OpenShift Container Platform tested cluster maximums

Maximum type | 3.11 tested maximum | 4.1 tested maximum | 4.2 tested maximum | 4.3 tested maximum
Number of Nodes | 2,000 | 2,000 | 2,000 | 2,000
Number of Pods [a] | 150,000 | 150,000 | 150,000 | 150,000
Number of Pods per node | 250 | 250 | 250 | 500
Number of Pods per core | There is no default value. | There is no default value. | There is no default value. | There is no default value.
Number of Namespaces [b] | 10,000 | 10,000 | 10,000 | 10,000
Number of Builds | 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy | 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy | 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy
Number of Pods per Namespace [c] | 25,000 | 25,000 | 25,000 | 25,000
Number of Services [d] | 10,000 | 10,000 | 10,000 | 10,000
Number of Services per Namespace | 5,000 | 5,000 | 5,000 | 5,000
Number of Back-ends per Service | 5,000 | 5,000 | 5,000 | 5,000
Number of Deployments per Namespace [c] | 2,000 | 2,000 | 2,000 | 2,000

[a] The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.
[b] When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.
[c] There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing of those state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
[d] Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impacts the size of the Endpoints objects, which impacts the size of the data that is sent across the system.

In OpenShift Container Platform 4.3, half of a CPU core (500 millicores) is reserved by the system, compared to OpenShift Container Platform 3.11 and previous versions.

9.3. OpenShift Container Platform environment and configuration on which the cluster maximums are tested

AWS cloud platform:

Node | Flavor | vCPU | RAM (GiB) | Disk type | Disk size (GiB)/IOPS | Count | Region
Master/etcd [a] | r5.4xlarge | 16 | 128 | io1 | 220 / 3000 | 3 | us-west-2
Infra [b] | m5.12xlarge | 48 | 192 | gp2 | 100 | 3 | us-west-2
Workload [c] | m5.4xlarge | 16 | 64 | gp2 | 500 [d] | 1 | us-west-2
Worker | m5.2xlarge | 8 | 32 | gp2 | 100 | 3/25/250/2000 [e] | us-west-2

[a] io1 disks with 3000 IOPS are used for master/etcd nodes because etcd is I/O intensive and latency sensitive.
[b] Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
[c] The workload node is dedicated to running performance and scalability workload generators.
[d] A larger disk size is used so that there is enough space to store the large amount of data that is collected during the performance and scalability test run.
[e] The cluster is scaled in iterations, and performance and scalability tests are executed at the specified node counts.

Azure cloud platform:

Node | Flavor | vCPU | RAM (GiB) | Disk type | Disk size (GiB)/IOPS | Count | Region
Master/etcd [a] | Standard_D8s_v3 | 8 | 32 | Premium SSD | 1024 (P30) | 3 | centralus
Infra [b] | Standard_D16s_v3 | 16 | 64 | Premium SSD | 1024 (P30) | 3 | centralus
Worker | Standard_D4s_v3 | 4 | 16 | Premium SSD | 1024 (P30) | 3/25/100/110 [c] | centralus

[a] For a higher IOPS and throughput cap, 1024 GiB disks are used for master/etcd nodes because etcd is I/O intensive and latency sensitive.
[b] Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
[c] The cluster is scaled in iterations, and performance and scalability tests are executed at the specified node counts.

9.4. How to plan your environment according to tested cluster maximums

Important

Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.

Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster.

The numbers noted in this documentation are based on Red Hat’s test methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments.

While planning your environment, determine how many pods are expected to fit per node:

Required Pods per Cluster / Pods per Node = Total Number of Nodes Needed

The default maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application’s memory, CPU, and storage requirements, as described in How to plan your environment according to application requirements.

Example scenario

If you want to scope your cluster at 2200 pods, assuming the 250 maximum pods per node, you would need at least nine nodes:

2200 / 250 = 8.8

If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:

2200 / 20 = 110

Where:

Required Pods per Cluster / Total Number of Nodes = Expected Pods per Node

9.5. How to plan your environment according to application requirements

Consider an example application environment:

Pod type | Pod quantity | Max memory | CPU cores | Persistent storage
apache | 100 | 500 MB | 0.5 | 1 GB
node.js | 200 | 1 GB | 1 | 1 GB
postgresql | 100 | 1 GB | 2 | 10 GB
JBoss EAP | 100 | 1 GB | 1 | 1 GB

Extrapolated requirements: 550 CPU cores, 450 GB of RAM, and 1.4 TB of storage. These totals come from multiplying each pod type's quantity by its per-pod CPU, memory, and storage figures and summing across all pod types.
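Per-pod figures like these usually enter the cluster as resource requests and limits plus a persistent volume claim per pod. The following is a minimal sketch for the postgresql row in the table above, assuming a StatefulSet with per-pod storage; the object names, image, mount path, and labels are hypothetical.

# Hypothetical StatefulSet for the postgresql pod type: 100 pods, each
# requesting 2 CPU cores, 1 GiB of memory, and a 10 GiB volume claim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
spec:
  serviceName: postgresql
  replicas: 100
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - name: postgresql
        image: registry.example.com/postgresql:latest   # placeholder image
        resources:
          requests:
            cpu: "2"
            memory: 1Gi
          limits:
            cpu: "2"
            memory: 1Gi
        volumeMounts:
        - name: data
          mountPath: /var/lib/pgsql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi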

Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered.

Node type | Quantity | CPUs | RAM (GB)
Nodes (option 1) | 100 | 4 | 16
Nodes (option 2) | 50 | 8 | 32
Nodes (option 3) | 25 | 16 | 64

Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that do not allow for overcommitment; that memory cannot be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio.
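Overcommitment arises from pods whose limits are higher than their requests: the scheduler places pods based on requests, while limits cap what a pod can actually consume. The following is a minimal sketch with hypothetical names and values.

# Hypothetical burstable pod: scheduled against 500m CPU and 512Mi memory
# requests but allowed to consume up to 1 CPU and 1Gi. The gap between
# requests and limits is what gets overcommitted on the node.
apiVersion: v1
kind: Pod
metadata:
  name: burstable-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: "1"
        memory: 1Gi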
