Chapter 3. Recommended resource requirements for RHACS
To ensure optimal performance for self-managed Red Hat Advanced Cluster Security (RHACS) components, allocate CPU and memory resources based on the specific scale of your environment. Calculate your infrastructure requirements by analyzing the number of monitored deployments, concurrent users, and unique images across your clusters. Accurate sizing prevents performance bottlenecks and ensures your deployment handles growth effectively.
3.1. Resource requirements for scaling based on deployment
The recommended resource guidelines were developed by performing a focused test that created the following objects across a given number of namespaces:
- 10 deployments, each with 3 pod replicas in a sleep state, mounting 4 secrets and 4 config maps
- 10 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the previous deployments
- 1 route pointing to the first of the previous services
- 10 secrets containing 2048 random string characters
- 10 config maps containing 2048 random string characters
Analysis of the results identified the number of deployments as the primary factor driving resource usage. Therefore, the number of deployments is used to estimate the required resources.
3.2. Central services (self-managed)
If you are using Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), you do not need to review the requirements for Central services, because they are managed by Red Hat. You only need to look at the requirements for secured cluster services.
Central services contain the following components:
- Central
- Central DB
- StackRox Scanner
- Scanner V4
3.2.1. Central
3.2.1.1. Memory and CPU requirements
The following table lists the minimum memory and CPU values required to run Central. To determine sizing, consider the following data:
- The total number of monitored deployments across all secured clusters that are connected to a single Central deployment
- The number of concurrent web portal users
| Deployments | Concurrent web portal users | CPU | Memory |
|---|---|---|---|
| < 25,000 | 1 user | 2 cores | 8 GiB |
| < 25,000 | < 5 users | 2 cores | 8 GiB |
| < 50,000 | 1 user | 2 cores | 12 GiB |
| < 50,000 | < 5 users | 6 cores | 16 GiB |
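If you installed RHACS by using the RHACS Operator, you can apply these values through the Central custom resource. The following is a minimal sketch for the largest tier in the preceding table; the resource name `stackrox-central-services` is the assumed default, and you should verify the field paths against the CRD version installed in your cluster:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services  # assumed default name; confirm with your installation
  namespace: stackrox
spec:
  central:
    resources:
      requests:
        cpu: "6"        # tier: < 50,000 deployments, < 5 concurrent users
        memory: 16Gi
      limits:
        cpu: "6"
        memory: 16Gi
```

Helm-based installations expose equivalent settings in the central-services chart values.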
3.2.2. Central DB
3.2.2.1. Memory and CPU requirements
The following table lists the minimum memory and CPU values required to run Central DB. To determine sizing, consider the following data:
- The total number of monitored deployments across all secured clusters that are connected to a single Central deployment
- The number of concurrent web portal users
| Deployments | Concurrent web portal users | CPU | Memory |
|---|---|---|---|
| < 25,000 | 1 user | 12 cores | 32 GiB |
| < 25,000 | < 5 users | 24 cores | 32 GiB |
| < 50,000 | 1 user | 16 cores | 32 GiB |
| < 50,000 | < 5 users | 32 cores | 32 GiB |
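For Operator-based installations, the Central DB values can be set in the same Central custom resource. A sketch for the largest tier; verify the field paths against your installed CRD:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services  # assumed default name
  namespace: stackrox
spec:
  central:
    db:
      resources:
        requests:
          cpu: "32"     # tier: < 50,000 deployments, < 5 concurrent users
          memory: 32Gi
        limits:
          cpu: "32"
          memory: 32Gi
```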
3.2.3. StackRox Scanner
The following table lists the minimum memory and CPU values required for the StackRox Scanner deployment in the Central cluster. The table includes the number of unique images deployed in all secured clusters.
| Number of unique images | Replicas | CPU | Memory |
|---|---|---|---|
| < 100 | 1 replica | 1 core | 1.5 GiB |
| < 500 | 1 replica | 2 cores | 2.5 GiB |
| < 2000 | 2 replicas | 2 cores | 2.5 GiB |
| < 5000 | 3 replicas | 2 cores | 2.5 GiB |
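For Operator-based installations, the StackRox Scanner replica count and resources map to the `scanner.analyzer` section of the Central custom resource. The following sketch targets the < 5000 images tier; the `scaling` field names are assumptions to verify against your installed CRD:

```yaml
spec:
  scanner:
    analyzer:
      scaling:
        autoScaling: Disabled  # assumption: pin the replica count instead of autoscaling
        replicas: 3
      resources:
        requests:
          cpu: "2"
          memory: 2560Mi       # 2.5 GiB
        limits:
          cpu: "2"
          memory: 2560Mi
```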
3.2.4. Scanner V4
The following table lists the minimum memory and CPU values required for the Scanner V4 deployment in the Central cluster. The table includes the number of unique images deployed in all secured clusters.
3.2.4.1. Scanner V4 Indexer
| Number of unique images | Replicas | CPU | Memory |
|---|---|---|---|
| < 100 | 1 | 2 cores | 0.5 GiB |
| < 500 | 1 | 2 cores | 0.5 GiB |
| < 2000 | 2 | 3 cores | 1 GiB |
| < 5000 | 2 | 5 cores | 1 GiB |
| < 10000 | 3 | 6 cores | 1.5 GiB |
3.2.4.2. Scanner V4 Matcher
| Number of unique images | Replicas | CPU | Memory |
|---|---|---|---|
| < 100 | 1 | 1 core | 1.3 GiB |
| < 500 | 1 | 1 core | 1.4 GiB |
| < 2000 | 2 | 3 cores | 1.5 GiB |
| < 5000 | 2 | 3 cores | 1.6 GiB |
| < 10000 | 3 | 3 cores | 1.7 GiB |
3.2.4.3. Scanner V4 DB
| Number of unique images | Replicas | CPU | Memory |
|---|---|---|---|
| < 100 | 1 | 1 core | 4.5 GiB |
| < 500 | 1 | 3 cores | 5 GiB |
| < 2000 | 1 | 6 cores | 6 GiB |
| < 5000 | 1 | 6 cores | 6 GiB |
| < 10000 | 1 | 8 cores | 6 GiB |
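For Operator-based installations, the three Scanner V4 components are configured under the `scannerV4` section of the Central custom resource. A sketch for the < 2000 images tier; the field paths are assumptions to verify against your installed CRD:

```yaml
spec:
  scannerV4:
    indexer:
      resources:
        requests: {cpu: "3", memory: 1Gi}
        limits:   {cpu: "3", memory: 1Gi}
    matcher:
      resources:
        requests: {cpu: "3", memory: 1536Mi}  # 1.5 GiB
        limits:   {cpu: "3", memory: 1536Mi}
    db:
      resources:
        requests: {cpu: "6", memory: 6Gi}
        limits:   {cpu: "6", memory: 6Gi}
```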
3.3. Secured cluster services
Secured cluster services contain the following components:
- Sensor
- Admission controller
- Collector
Note: The Collector component is not covered on this page. Its resource requirements are listed on the default resource requirements page.
3.3.1. Sensor
Sensor monitors your Kubernetes and OpenShift Container Platform clusters. Sensor currently runs as a single deployment, which handles interactions with the Kubernetes API and coordinates with Collector.
3.3.2. Memory and CPU requirements
The following table lists the minimum memory and CPU values required to run Sensor on a secured cluster.
| Deployments | CPU | Memory |
|---|---|---|
| < 25,000 | 2 cores | 10 GiB |
| < 50,000 | 2 cores | 20 GiB |
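For Operator-based installations, Sensor resources are set on the SecuredCluster custom resource in each secured cluster. A sketch for the < 50,000 deployments tier; the resource name is the assumed default, and the field path should be verified against your installed CRD:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services  # assumed default name
  namespace: stackrox
spec:
  sensor:
    resources:
      requests:
        cpu: "2"
        memory: 20Gi
      limits:
        cpu: "2"
        memory: 20Gi
```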
3.3.3. Admission controller
The admission controller prevents users from creating workloads that violate policies that you configure.
3.3.4. Memory and CPU requirements
The following table lists the minimum memory and CPU values required to run the admission controller on a secured cluster.
| Deployments | CPU | Memory |
|---|---|---|
| < 25,000 | 0.5 cores | 300 MiB |
| < 50,000 | 0.5 cores | 600 MiB |
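For Operator-based installations, the admission controller values map to the `admissionControl` section of the SecuredCluster custom resource. A sketch for the < 50,000 deployments tier; the field path is an assumption to verify against your installed CRD:

```yaml
spec:
  admissionControl:
    resources:
      requests:
        cpu: 500m       # 0.5 cores
        memory: 600Mi
      limits:
        cpu: 500m
        memory: 600Mi
```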
3.4. Resource requirements for virtual machine scanning
When using Red Hat Advanced Cluster Security for Kubernetes (RHACS) to scan virtual machines (VMs) for vulnerabilities, you might need to adjust CPU and memory resource allocations for certain components to match the scale of your environment. Scanner V4 might require modified resources depending on the number of VMs and frequency of scans.
For more information about configuring VM scanning, see "Scanning virtual machines".
The following guidelines were developed by running tests that deployed large numbers of VMs and identifying the resources that determine the overall system throughput. The tests identified the number of VMs and the frequency of their scans as the primary driver of resource usage. These tests showed that you can achieve higher throughput by modifying Scanner V4 resources.
The following settings impact throughput when allocating resources at the OpenShift Container Platform level:
- Scanner V4 Matcher replicas: These are controlled by the Scanner V4 Matcher Horizontal Pod Autoscaler (HPA).
- Scanner V4 Matcher CPU limit
- Scanner V4 DB CPU limit
When calculating infrastructure requirements, use the standard guidance for the rest of the system, and adjust Scanner V4 resources according to the recommendations in the following table:
| Number of VMs | Scanner V4 Matcher HPA maximum replicas | Scanner V4 Matcher CPU limit | Scanner V4 DB CPU limit |
|---|---|---|---|
| < 4500 | 3 | 1 core | 4 cores |
| < 14000 | 3 | 1 core | 8 cores |
| < 25000 | 3 | 1 core | 16 cores |
| < 40000 | 3 | 2 cores | 32 cores |
| < 50000 | 6 | 2 cores | 32 cores |
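As a sketch, the values for the < 40,000 VMs tier might translate to the following `scannerV4` settings in the Central custom resource for an Operator-based installation. The `scaling.maxReplicas` path for the Matcher HPA maximum is an assumption; if your installed CRD does not expose such a field, adjust the HPA object directly:

```yaml
spec:
  scannerV4:
    matcher:
      scaling:
        maxReplicas: 3   # assumption: Scanner V4 Matcher HPA maximum replicas
      resources:
        limits:
          cpu: "2"       # Scanner V4 Matcher CPU limit
    db:
      resources:
        limits:
          cpu: "32"      # Scanner V4 DB CPU limit
```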
3.4.1. Factors affecting throughput
While Scanner V4 resources constrain throughput, other factors are relevant when determining the number of VMs that a specific RHACS deployment supports. Consider the following factors:
- VM scan interval: System throughput is measured in VM scans per unit of time, so the scan interval, a configuration parameter of `roxagent`, directly impacts the number of VMs that RHACS can handle. The figures in the previous table were calculated for the default scan interval of 4 hours. A scan interval of 2 hours halves the supported number of VMs, and an interval of 8 hours doubles it.
- VM index report rate limiter: RHACS is protected against excessive load by a rate limiter, which rejects index reports sent by VMs when their rate exceeds the maximum value. If you see administration events indicating that VM index reports are being rate limited and you have increased system resources, adjust the rate limit by using the `ROX_VM_INDEX_REPORT_RATE_LIMIT` environment variable. The rate limiter allows bursts according to the `ROX_VM_INDEX_REPORT_BUCKET_CAPACITY` environment variable, which can be increased to allow larger bursts if the Central pod has enough available memory. For more information, see "Advanced virtual machine scanning configuration" in "Scanning virtual machines".
- Number of packages installed in each VM: More installed packages result in longer scan times, and therefore reduce throughput.
- Vulnerabilities in installed packages: Greater numbers of overall vulnerabilities found in VMs result in longer scan times, reducing throughput.
- Other workloads: While RHACS processes VMs, its components continue to process the usual workloads. Because these components include Scanner V4, such workloads reduce the VM scanning throughput.
To account for these factors and their variability, the suggested capacity for each deployment type includes a headroom of at least 100%. Therefore, even if the processing time doubles due to an increased number of packages or vulnerabilities, RHACS can handle the specified number of VMs.
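One way to set the rate limiter variables in an Operator-based installation is through the `customize` section of the Central custom resource, which injects environment variables into the containers that the Operator manages. This is a sketch only: the field path and the values shown are assumptions to verify against your installed CRD and environment, and `customize.envVars` may apply to all managed containers rather than Central alone:

```yaml
spec:
  customize:
    envVars:
      - name: ROX_VM_INDEX_REPORT_RATE_LIMIT
        value: "100"     # example value only; tune for your environment
      - name: ROX_VM_INDEX_REPORT_BUCKET_CAPACITY
        value: "200"     # example value only
```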