Chapter 2. Important Changes to External Kernel Parameters


This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 6.8. These changes include added or updated proc entries, sysctl, and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes.
force_hrtimer_reprogram [KNL]
Force the reprogramming of expired timers in the hrtimer_reprogram() function.
softirq_2ms_loop [KNL]
Set softirq handling to a 2 ms maximum per loop. The default is the existing Red Hat Enterprise Linux 6 behavior.
tpm_suspend_pcr=[HW,TPM]
Specify that, at suspend time, the tpm driver should extend the specified Platform Configuration Register (PCR) with zeros as a workaround for some chips which fail to flush the last written PCR on a TPM_SaveState operation. This guarantees that all the other PCRs are saved.
Format: integer pcr id
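The boot parameters above take effect only when passed on the kernel command line. The following is a minimal sketch, assuming only the parameter names documented in this chapter, that checks /proc/cmdline for them:

    #!/usr/bin/env python
    # Minimal sketch: report whether the boot parameters described above
    # were passed on the kernel command line. The parameter names come
    # from this chapter; everything else is illustrative.

    PARAMS = ("force_hrtimer_reprogram", "softirq_2ms_loop", "tpm_suspend_pcr")

    with open("/proc/cmdline") as f:
        cmdline = f.read().split()

    for param in PARAMS:
        # A parameter appears either bare ("softirq_2ms_loop") or with a
        # value ("tpm_suspend_pcr=11"), so match on the name prefix.
        hits = [tok for tok in cmdline if tok == param or tok.startswith(param + "=")]
        print("%-24s %s" % (param, hits[0] if hits else "not set"))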
/proc/fs/fscache/stats
Table 2.1. Changes to the Ops class

Change     Entry    Description
new        ini=N    Number of async ops initialised
changed    rel=N    Will be equal to ini=N when idle
Table 2.2. New class CacheEv

Entry    Description
nsp=N    Number of object lookups or creations rejected due to a lack of space
stl=N    Number of stale objects deleted
rtr=N    Number of objects retired when relinquished
cul=N    Number of objects culled
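The counters above can be scraped directly from /proc/fs/fscache/stats. The sketch below assumes the usual "class: key=value ..." line layout implied by the tables in this chapter; it is illustrative, not a definitive parser:

    #!/usr/bin/env python
    # Minimal sketch: collect the Ops and CacheEv counters described above
    # from /proc/fs/fscache/stats, assuming "class: key=value ..." lines.

    stats = {}
    with open("/proc/fs/fscache/stats") as f:
        for line in f:
            if ":" not in line:
                continue  # skip the header line
            cls, _, rest = line.partition(":")
            for field in rest.split():
                key, _, value = field.partition("=")
                if value.isdigit():
                    stats.setdefault(cls.strip(), {})[key] = int(value)

    ops = stats.get("Ops", {})
    cache_ev = stats.get("CacheEv", {})
    print("async ops initialised: %s" % ops.get("ini"))
    print("objects culled:        %s" % cache_ev.get("cul"))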
/proc/sys/net/core/default_qdisc
The default queuing discipline to use for network devices. This allows overriding the default queue discipline of pfifo_fast with an alternative. Since the default queuing discipline is created with no additional parameters, it is best suited to queuing disciplines that work well without configuration, for example, a stochastic fair queue (sfq). Do not use queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin, which require setting up classes and bandwidths.
Default: pfifo_fast
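Because default_qdisc is an ordinary sysctl file, it can be changed by writing to it (or with sysctl -w net.core.default_qdisc=sfq). A minimal sketch, using the sfq example suggested above, run as root:

    #!/usr/bin/env python
    # Minimal sketch: switch the default queuing discipline to sfq by
    # writing the sysctl file directly. Run as root; the change affects
    # only qdiscs created afterwards, not devices already configured.

    PATH = "/proc/sys/net/core/default_qdisc"

    with open(PATH) as f:
        print("current default qdisc: %s" % f.read().strip())

    with open(PATH, "w") as f:
        f.write("sfq\n")  # parameterless qdisc, suitable as a default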
/sys/kernel/mm/ksm/max_page_sharing
Maximum sharing allowed for each KSM page. This enforces a deduplication limit to prevent the virtual memory rmap lists from growing too large. The minimum value is 2, because a newly created KSM page has at least two sharers.

The rmap walk has O(N) complexity, where N is the number of rmap_items, that is, the virtual mappings sharing the page, which is in turn capped by max_page_sharing. This setting therefore spreads the linear O(N) computational cost of the rmap walk across different KSM pages. The ksmd walk over the stable_node chains is also O(N), but there N is the number of stable_node dups rather than the number of rmap_items, so it does not have a significant impact on ksmd performance. In practice, the best stable_node dup candidate is kept and found at the head of the dups list.

The higher this value, the faster KSM merges memory, because fewer stable_node dups are queued into the stable_node chain->hlist to be checked for pruning, and the higher the deduplication factor is; however, the worst-case rmap walk for any given KSM page becomes slower. A slower rmap walk means higher latency for certain virtual memory operations during swapping, compaction, NUMA balancing, and page migration, which in turn decreases responsiveness for the callers of those operations. The scheduler latency of other tasks not involved in the VM operations performing the rmap walk is unaffected by this parameter, because the rmap walks themselves are always scheduler friendly.
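The trade-off above can be tuned through the sysfs file. A minimal sketch, run as root; the value 512 is purely illustrative:

    #!/usr/bin/env python
    # Minimal sketch: read and raise the KSM per-page sharing cap
    # discussed above. Run as root. The value 512 is illustrative only:
    # it trades longer worst-case rmap walks for more deduplication.

    PATH = "/sys/kernel/mm/ksm/max_page_sharing"

    with open(PATH) as f:
        print("max_page_sharing is %s" % f.read().strip())

    with open(PATH, "w") as f:
        f.write("512\n")  # minimum allowed is 2; the kernel may reject
                          # the write while KSM pages are already shared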
/sys/kernel/mm/ksm/stable_node_chains_prune_millisecs
How frequently to walk the whole list of stable_node "dups" linked in the stable_node chains in order to prune stale stable_nodes. Smaller millisecond values free the KSM metadata with lower latency, but make ksmd use more CPU during the scan. This applies only to the stable_node chains, so it is a no-op unless at least one KSM page has hit the max_page_sharing limit; until then, no stable_node chains exist.
/sys/kernel/mm/ksm/stable_node_chains
Number of stable node chains allocated. This is effectively the number of KSM pages that hit the max_page_sharing limit.
/sys/kernel/mm/ksm/stable_node_dups
Number of stable node dups queued into the stable_node chains.
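The three KSM knobs above sit together under /sys/kernel/mm/ksm and can be snapshotted to judge how often pages are hitting the max_page_sharing limit. A minimal sketch, assuming only the file names documented here:

    #!/usr/bin/env python
    # Minimal sketch: print the KSM stable_node counters described above.
    # All file names come from this chapter; no values are modified.

    import os

    KSM = "/sys/kernel/mm/ksm"

    def read_knob(name):
        with open(os.path.join(KSM, name)) as f:
            return f.read().strip()

    for knob in ("max_page_sharing",
                 "stable_node_chains",
                 "stable_node_dups",
                 "stable_node_chains_prune_millisecs"):
        print("%-36s %s" % (knob, read_knob(knob)))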