Chapter 4. Determining Hardware and OS Configuration

The more physical cores available to Satellite, the higher the task throughput it can achieve. Satellite components such as Puppet and PostgreSQL are CPU-intensive applications and benefit from a higher number of available CPU cores.
The more memory available on the system running Satellite, the better the response times for Satellite operations. Because Satellite uses PostgreSQL as its database solution, additional memory combined with appropriate tuning improves application response times, since more data can be retained in memory.
Because Satellite performs heavy IOPS during repository synchronization, package data retrieval, and high-frequency database updates for content host subscription records, install Satellite on high-speed SSDs to avoid performance bottlenecks caused by increased disk reads and writes. Satellite requires an average disk read throughput of at least 60 – 80 megabytes per second; anything below this value can have severe implications for the operation of Satellite. Satellite components such as PostgreSQL benefit from SSDs because of their lower latency compared to HDDs.
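As a rough illustration of the kind of throughput check involved, the following sketch writes a test file with dd and reads it back to report throughput. The directory, file name, and 256 MiB size are assumptions for illustration only; a freshly written file is served largely from the page cache, which is why real benchmarks use files much larger than RAM.

```shell
# Rough read-throughput spot check (illustrative only; a just-written file
# is mostly read back from the page cache, so treat the result as an
# optimistic upper bound, not a real disk benchmark).
TESTDIR=${TESTDIR:-/tmp}
TESTFILE="$TESTDIR/iocheck.img"

# Write the test file and flush it to disk, then read it back;
# dd reports throughput on its final status line.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>/dev/null
READ_REPORT=$(dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1)
echo "read: $READ_REPORT"
rm -f "$TESTFILE"
```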
Network performance affects communication between the Satellite Server and Capsules. A reliable network with minimal jitter and low latency is required for hassle-free operations such as synchronization between the Satellite Server and Capsules; at a minimum, ensure that the network is not causing connection resets or similar errors.
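A short ping run is one way to spot-check latency and packet loss toward a Capsule. The hostname capsule.example.com is a placeholder for your Capsule's FQDN, and the sketch is guarded so it degrades gracefully where ping is unavailable:

```shell
# Quick latency/packet-loss spot check toward a Capsule.
# capsule.example.com is a placeholder; substitute your Capsule's FQDN.
TARGET=${TARGET:-capsule.example.com}
if command -v ping >/dev/null 2>&1; then
    # -q gives only the summary (loss percentage and min/avg/max RTT).
    PING_OUT=$(ping -c 3 -q "$TARGET" 2>&1 || true)
else
    PING_OUT="ping not available on this host"
fi
echo "$PING_OUT"
```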
Server Power Management
By default, your server is likely configured to conserve power. While this is a good approach to keep maximum power consumption in check, it has the side effect of lowering the performance that Satellite can achieve. For a server running Satellite, configure the BIOS to run the system in performance mode to boost the maximum performance levels that Satellite can achieve.
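One way to spot-check whether the OS side reflects a performance-oriented setting is the CPU frequency governor exposed in sysfs. This is a sketch: the path is standard on Linux, but it is often absent on virtual machines and containers, so the check is guarded:

```shell
# Check the current CPU frequency governor; "performance" avoids
# power-saving throttling. The sysfs path may not be exposed on VMs.
GOV_FILE=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$GOV_FILE" ]; then
    GOVERNOR=$(cat "$GOV_FILE")
else
    GOVERNOR="cpufreq scaling not exposed (common on virtual machines)"
fi
echo "Current governor: $GOVERNOR"
```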

4.1. Benchmarking Disk Performance

We are working to update satellite-maintain so that it only warns users when its internal quick storage benchmark returns numbers below our recommended throughput.

We are also working on an updated benchmark script, which will likely be integrated into satellite-maintain in the future, to provide more accurate real-world storage information.

  • The benchmark creates a file twice the size of the RAM in the specified directory and executes a series of storage I/O tests against it. The size of the file ensures that the test is not just testing the filesystem caching. Because of this, you may have to temporarily reduce the RAM in order to run the I/O benchmark. For example, if your Satellite Server has 256 GiB RAM, the test would require 512 GiB of storage to run. As a workaround, you can add the mem=20G kernel option in grub during system boot to temporarily reduce the size of the RAM. If you benchmark other filesystems, for example smaller volumes such as PostgreSQL storage, you might have to reduce the RAM size as described above.
  • If you are using a different storage solution such as SAN or iSCSI, you can expect different performance.
  • Red Hat recommends that you stop all services before executing this script; you will be prompted to do so.
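The mem=20G workaround mentioned above can be applied with grubby, which ships with Red Hat Enterprise Linux. This is a sketch and not part of the official procedure; note that the change persists across reboots, so remove the option again once benchmarking is done:

```shell
# Temporarily cap usable RAM for the benchmark (a sketch; grubby edits
# the kernel command line and the change persists across reboots).
grubby --update-kernel=ALL --args="mem=20G"
reboot
# ...run the benchmark, then restore the full RAM:
grubby --update-kernel=ALL --remove-args="mem=20G"
reboot
```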

This test does not use direct I/O and will utilize file caching as normal operations would.

You can find the first version of the script, storage-benchmark. To execute it, download the script to your Satellite Server, make it executable, and run:

# ./storage-benchmark /var/lib/pulp

As noted in the README block in the script, you generally want to see an average of 100 MB/sec or higher in these tests:

  • Local SSD-based storage should give values of 600 MB/sec or higher.
  • Spinning disks should give values in the range of 100 – 200 MB/sec or higher.

If you see values below these, open a support ticket for assistance.

For more information, see Impact of Disk Speed on Satellite Operations.

4.2. Enabling Tuned Profiles

Red Hat Enterprise Linux 8 enables the tuned daemon by default during installation. On bare-metal, Red Hat recommends running the throughput-performance tuned profile on Satellite Server and Capsules. On virtual machines, Red Hat recommends running the virtual-guest profile.


  1. Check if tuned is running:

    # systemctl status tuned
  2. If tuned is not running, enable it:

    # systemctl enable --now tuned
  3. Optional: View a list of available tuned profiles:

    # tuned-adm list
  4. Enable a tuned profile depending on your scenario:

    # tuned-adm profile "My_Tuned_Profile"
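To confirm which profile ended up active after these steps, you can query tuned-adm. This sketch is guarded only so the check degrades gracefully on hosts where tuned is not installed:

```shell
# Report the active tuned profile, or a fallback message if the
# tuned-adm command is not installed on this host.
if command -v tuned-adm >/dev/null 2>&1; then
    ACTIVE=$(tuned-adm active 2>&1)
else
    ACTIVE="tuned-adm not installed"
fi
echo "$ACTIVE"
```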

4.3. Disabling Transparent Hugepage

Transparent Hugepage is a memory management technique used by the Linux kernel to reduce the overhead of the Translation Lookaside Buffer (TLB) by using larger memory pages. Because databases typically have sparse rather than contiguous memory access patterns, database workloads often perform poorly when Transparent Hugepage is enabled. To improve PostgreSQL and Redis performance, disable Transparent Hugepage. In deployments where the databases are running on separate servers, there may be a small benefit to using Transparent Hugepage on the Satellite Server only.

For more information on how to disable Transparent Hugepage, see How to disable transparent hugepages (THP) on Red Hat Enterprise Linux.
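Before and after making the change, you can confirm the current Transparent Hugepage state through sysfs; the active value is the bracketed one (for example, always madvise [never]). The check is guarded because the interface may be absent in containers:

```shell
# Inspect the current Transparent Hugepage setting; the active mode is
# shown in brackets, e.g. "always madvise [never]".
THP_FILE=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$THP_FILE" ]; then
    THP_STATE=$(cat "$THP_FILE")
else
    THP_STATE="THP interface not exposed on this host"
fi
echo "$THP_STATE"
```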
