Chapter 1. Hardware platforms for RHEL for Real Time
Configuring the hardware correctly plays a critical role in setting up a real-time environment, because hardware characteristics determine how your system operates. Not all hardware platforms are real-time capable or support fine tuning. Before fine tuning, verify that the candidate hardware platform is real-time capable.
Hardware platforms vary by vendor. You can test and verify a platform's suitability for real-time use with the hardware latency detector (hwlatdetect) program. The program controls the latency detector kernel module and helps detect latencies caused by underlying hardware or firmware behavior.
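As a sketch of such a check: hwlatdetect is provided by the rt-tests package and normally needs root privileges to use the kernel's hwlat tracer. The duration and threshold values below are illustrative, and the guard only keeps the sketch safe on machines where the tool is not installed.

```shell
# Probe for hardware/firmware (SMI) latency spikes before fine tuning.
# hwlatdetect ships in the rt-tests package and needs root privileges;
# the duration (seconds) and threshold (microseconds) are illustrative.
if command -v hwlatdetect >/dev/null 2>&1; then
    hwlatdetect --duration=120 --threshold=10
else
    echo "hwlatdetect not found: install the rt-tests package"
fi
```

If hwlatdetect reports samples above the threshold, consult the vendor documentation on reducing SMIs before continuing with software tuning.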
Before configuring hardware platforms for RHEL for Real Time, ensure that the RHEL-RT packages are installed and that any tuning steps required for low-latency operation are complete. Refer to the vendor documentation for instructions to reduce or remove any System Management Interrupts (SMIs) that cause the system to enter System Management Mode (SMM).
Do not disable System Management Interrupts (SMIs) completely, because doing so can result in catastrophic hardware failure.
1.1. Processor cores
A real-time processor core is a physical Central Processing Unit (CPU) that executes machine code. A socket is the connection between the processor and the system board. Processors can be single-core (one socket with one core) or multi-core, such as quad-core (one socket with four cores).
When designing a real-time environment, be aware of the number of available cores, the cache layout among cores, and how the cores are physically connected.
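As a quick way to inspect the core count and topology on a given machine, a sketch using standard util-linux utilities (the column selection is illustrative; cache sizes also appear in the default lscpu output):

```shell
# Count the online processing units, then show which logical CPU sits
# on which physical core and socket.
nproc
lscpu --extended=CPU,CORE,SOCKET
```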
When multiple cores are available, use threads or processes to take advantage of them. A program written without these constructs runs on only one core at a time. A multi-core platform provides advantages by letting different cores handle different types of operations.
1.1.1. Caches
Caches have a noticeable impact on overall processing time and determinism. Often, the threads of an application need to synchronize access to a shared resource, such as a data structure.
With the tuna command-line tool, you can determine the cache layout and bind interacting threads to cores that share a cache. Cache sharing reduces memory faults by ensuring that the mutual exclusion primitive (mutex, condition variable, or similar) and the data structure it protects use the same cache.
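A sketch of that workflow with tuna, using a hypothetical application name (rt_app) and illustrative core numbers; the guard only keeps the example safe on systems where tuna is not installed:

```shell
# Show the current thread-to-CPU layout, then bind the threads of a
# hypothetical application "rt_app" to cores 2-3 so they share cache.
if command -v tuna >/dev/null 2>&1; then
    tuna --show_threads
    tuna --threads=rt_app --cpus=2,3 --move
else
    echo "tuna not found: install the tuna package"
fi
```

Which cores share a cache level is machine specific; check the cache layout first (for example, with lscpu) before choosing the core list.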
1.1.2. Interconnects
Increasing the number of cores in a system can create conflicting demands on the interconnects. Determine the interconnect topology to help detect the conflicts that occur between cores on real-time systems.
Many hardware vendors now provide a transparent network of interconnects between cores and memory, known as the Non-Uniform Memory Access (NUMA) architecture.
NUMA is a system memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. When you use NUMA, a processor can access its own local memory faster than non-local memory, such as memory on another processor or memory shared between processors. On NUMA systems, understanding the interconnect topology helps to place threads that communicate frequently on adjacent cores.
The taskset and numactl utilities manage placement on the CPU topology. taskset defines the CPU affinity of a process but does not manage NUMA resources such as memory nodes; numactl controls the NUMA policy for processes and shared memory.
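A sketch contrasting the two utilities, using a placeholder command in place of a real workload and assuming NUMA node 0 exists; numactl comes from the numactl package, so that part is guarded:

```shell
# taskset sets CPU affinity only: pin a placeholder command to CPU 0
# and show the resulting affinity from /proc.
taskset -c 0 sh -c 'grep Cpus_allowed_list /proc/self/status'

# numactl adds memory policy: show the NUMA topology, then confine a
# placeholder command's CPUs and memory to node 0.
if command -v numactl >/dev/null 2>&1; then
    numactl --hardware
    numactl --cpunodebind=0 --membind=0 sh -c 'echo "running on node 0"'
else
    echo "numactl not found: install the numactl package"
fi
```

For a real-time application, the same pattern applies: start the process under numactl so that both its threads and its memory allocations stay on one node, avoiding cross-interconnect traffic.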