3.2. Telco RAN DU reference design specification


3.2.1. Telco RAN DU 4.17 reference design overview

The Telco RAN distributed unit (DU) 4.17 reference design configures an OpenShift Container Platform 4.17 cluster running on commodity hardware to host telco RAN DU workloads. It captures the recommended, tested, and supported configuration to get reliable and repeatable performance for clusters running the telco RAN DU profile.

3.2.1.1. Deployment architecture overview

You deploy the telco RAN DU 4.17 reference configuration to managed clusters from a centrally managed RHACM hub cluster. The reference design specification (RDS) includes the configuration of the managed clusters and of the hub cluster components.

Figure 3.1. Telco RAN DU deployment architecture overview

Diagram showing two distinct network far edge deployment processes

3.2.2. Telco RAN DU use model overview

Use the following information to plan telco RAN DU workloads, cluster resources, and hardware specifications for the hub cluster and managed single-node OpenShift clusters.

3.2.2.1. Telco RAN DU application workloads

DU worker nodes must have 3rd Generation Xeon (Ice Lake) 2.20 GHz or better CPUs with firmware tuned for maximum performance.

5G RAN DU user applications and workloads should conform to the following best practices and application limits:

  • Develop cloud-native network functions (CNFs) that conform to the latest version of Red Hat best practices for Kubernetes.
  • Use SR-IOV for high performance networking.
  • Use exec probes sparingly and only when no other suitable option is available

    • Do not use exec probes if a CNF uses CPU pinning. Use other probe implementations, for example httpGet or tcpSocket (see the example after the note below).
    • When you need to use exec probes, limit the exec probe frequency and quantity. The maximum number of exec probes must be kept below 10, and their frequency must not be set to less than 10 seconds.
  • Avoid using exec probes unless there is absolutely no viable alternative.

    Note

    Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes.
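
For illustration, the following is a minimal sketch of a CPU-pinned CNF container that uses an httpGet readiness probe and a tcpSocket liveness probe instead of exec probes. The pod name, image, port, and endpoint path are assumptions for this example only and are not part of the reference configuration:

apiVersion: v1
kind: Pod
metadata:
  name: example-cnf                            # illustrative name
  namespace: example-cnf-ns                    # illustrative namespace
spec:
  containers:
    - name: cnf-container
      image: registry.example.com/cnf:latest   # placeholder image
      readinessProbe:
        httpGet:                               # preferred over exec probes for CPU-pinned CNFs
          path: /healthz                       # assumed health endpoint
          port: 8080
        periodSeconds: 10                      # keep probe frequency at 10 seconds or more
      livenessProbe:
        tcpSocket:
          port: 8080
        periodSeconds: 10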

A dimensioning test workload that conforms to the reference DU application workload is available in openshift-kni/du-test-workloads.

3.2.2.2. Telco RAN DU representative reference application workload characteristics

The representative reference application workload has the following characteristics:

  • Has a maximum of 15 pods and 30 containers for the vRAN application, including its management and control functions
  • Uses a maximum of 2 ConfigMap and 4 Secret CRs per pod
  • Uses a maximum of 10 exec probes with a frequency of not less than 10 seconds
  • Incremental application load on the kube-apiserver is less than 10% of the cluster platform usage

    Note

    You can extract the CPU load from the platform metrics. For example:

    query=avg_over_time(pod:container_cpu_usage:sum{namespace="openshift-kube-apiserver"}[30m])
  • Application logs are not collected by the platform log collector
  • Aggregate traffic on the primary CNI is less than 1 MBps

3.2.2.3. Telco RAN DU worker node cluster resource utilization

The maximum number of running pods on the system, inclusive of application workloads and OpenShift Container Platform pods, is 120.

Resource utilization

OpenShift Container Platform resource utilization varies depending on many factors, including application workload characteristics such as:

  • Pod count
  • Type and frequency of probes
  • Messaging rates on the primary CNI or secondary CNIs with kernel networking
  • API access rate
  • Logging rate
  • Storage IOPS

Cluster resource requirements are applicable under the following conditions:

  • The cluster is running the described representative application workload.
  • The cluster is managed with the constraints described in "Telco RAN DU worker node cluster resource utilization".
  • Components noted as optional in the RAN DU use model configuration are not applied.
Important

You need to do additional analysis to determine the impact on resource utilization, and the ability to meet KPI targets, for configurations and features outside the scope of the telco RAN DU reference design. You might have to allocate additional resources in the cluster depending on your requirements.

3.2.2.4. Hub cluster management characteristics

Red Hat Advanced Cluster Management (RHACM) is the recommended cluster management solution. Configure it on the hub cluster with the following limits:

  • Configure a maximum of 5 RHACM policies with a compliance evaluation interval of at least 10 minutes (see the sketch after the note below).
  • Use a maximum of 10 managed cluster templates in policies. Where possible, use hub-side templating.
  • Disable all RHACM add-ons except for the policy-controller and observability-controller add-ons. Set Observability to the default configuration.

    Important

    Configuring optional components or enabling additional features results in additional resource usage and can reduce overall system performance.

    For more information, see Reference design deployment components.
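
The following is a minimal sketch of the compliance evaluation interval on a ConfigurationPolicy CR. The policy name and the wrapped object are illustrative assumptions; only the evaluationInterval stanza is the point of the example:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: example-du-config-policy      # illustrative name
spec:
  remediationAction: inform
  severity: low
  evaluationInterval:
    compliant: 10m                    # re-evaluate compliant policies at most every 10 minutes
    noncompliant: 10s
  object-templates:
    - complianceType: musthave
      objectDefinition:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: example-namespace     # placeholder managed object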

Table 3.1. OpenShift platform resource utilization under the reference application load
Metric / Limit

CPU usage

Less than 4000 mc - 2 cores (4 hyperthreads)

The platform CPU is pinned to reserved cores, including both hyperthreads of each reserved core. The system is engineered to use 3 CPUs (3000 mc) at steady state to allow for periodic system tasks and spikes.

Memory used

Less than 16G

3.2.2.5. Telco RAN DU RDS components

The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run telco RAN DU workloads.

Figure 3.2. Telco RAN DU reference design components

Diagram describing the telco RAN DU component stack
Note

Ensure that components that are not included in the telco RAN DU profile do not affect the CPU resources allocated to workload applications.

Important

Out of tree drivers are not supported.

Additional resources

3.2.3. Telco RAN DU 4.17 reference design components

The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run RAN DU workloads.

3.2.3.1. Host firmware tuning

New in this release
  • You can now configure host firmware settings for managed clusters that you deploy with GitOps ZTP.
Description
Tune host firmware settings for optimal performance during initial cluster deployment. The managed cluster host firmware settings are available on the hub cluster as BareMetalHost custom resources (CRs) that are created when you deploy the managed cluster with the SiteConfig CR and GitOps ZTP.
Limits and requirements
  • Hyperthreading must be enabled
Engineering considerations
  • Tune all settings for maximum performance.
  • All settings are expected to be tuned for maximum performance unless you are tuning for power savings.
  • You can tune host firmware for power savings at the expense of performance, as required.
  • Enable secure boot. With secure boot enabled, only signed kernel modules are loaded by the kernel. Out of tree drivers are not supported. An illustrative firmware settings sketch follows this list.
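
The following sketch shows firmware settings expressed as a HostFirmwareSettings CR that accompanies a BareMetalHost on the hub cluster. The host name, namespace, and setting keys are vendor-specific assumptions for illustration only; consult your hardware vendor documentation for the supported setting names and values:

apiVersion: metal3.io/v1alpha1
kind: HostFirmwareSettings
metadata:
  name: example-du-host              # must match the BareMetalHost name (illustrative)
  namespace: example-du-cluster      # illustrative namespace
spec:
  settings:
    # Vendor-specific key names; the values below are illustrative assumptions
    LogicalProc: Enabled             # hyperthreading must be enabled
    ProcTurboMode: Enabled
    WorkloadProfile: MaxPerformance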

3.2.3.2. Node Tuning Operator

New in this release
  • No reference design updates in this release
Description

You tune the cluster performance by creating a performance profile.

Important

The RAN DU use case requires the cluster to be tuned for low-latency performance.

Limits and requirements

The Node Tuning Operator uses the PerformanceProfile CR to configure the cluster. You need to configure the following settings in the RAN DU profile PerformanceProfile CR:

  • Select reserved and isolated cores and ensure that you allocate at least 4 hyperthreads (equivalent to 2 cores) on Intel 3rd Generation Xeon (Ice Lake) 2.20 GHz CPUs.
  • Set the reserved cpuset to include both hyperthread siblings of each included core. Unreserved cores are available as allocatable CPU for scheduling workloads. Ensure that hyperthread siblings are not split across the reserved and isolated core sets.
  • Configure the reserved and isolated CPU sets so that, together, they include all threads of all cores, based on which CPUs you set as reserved and isolated.
  • Include core 0 of each NUMA node in the reserved CPU set.
  • Set the huge page size to 1G. A sketch of a conforming CPU layout follows the note below.
Note

You should not add additional workloads to the management partition. Only those pods that are part of the OpenShift management platform should be annotated into the management partition.
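
As an illustration, the cpu and hugepages stanzas below assume a hypothetical single-NUMA node with 16 cores (32 hyperthreads) where thread siblings are numbered N and N+16. The values are assumptions for this layout only; derive your own sets from the actual topology, for example from lscpu:

spec:
  cpu:
    # Reserved: cores 0-3 plus their hyperthread siblings 16-19 (includes core 0 of the NUMA node)
    reserved: "0-3,16-19"
    # Isolated: all remaining cores and their siblings; no sibling pair is split between the sets
    isolated: "4-15,20-31"
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - size: 1G
        count: 32                    # workload-dependent; adjust to application requirements
        node: 0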

Engineering considerations
  • You should use the RT kernel to meet performance requirements. However, you can use the non-RT kernel, if required, with a corresponding impact on cluster performance.
  • The number of huge pages that you configure depends on the application workload requirements. Variation in this parameter is expected and allowed.
  • Variation is expected in the configuration of the reserved and isolated CPU sets depending on the selected hardware and additional components in use on the system. The variation must still meet the specified limits.
  • Hardware without IRQ affinity support affects isolated CPUs. To ensure that pods with guaranteed whole-CPU QoS have full use of the allocated CPUs, all hardware in the server must support IRQ affinity. For more information, see "Finding the effective IRQ affinity setting for a node".

When you enable workload partitioning at cluster deployment time with the cpuPartitioningMode: AllNodes setting, the reserved CPUs that you set in the PerformanceProfile CR must include enough CPUs for the operating system, interrupts, and OpenShift platform pods.

Important

cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported. However, it will be removed in a future release and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the deprecated and removed features section of the OpenShift Container Platform release notes.

3.2.3.3. PTP Operator

New in this release
  • A new version of the Precision Time Protocol (PTP) fast events REST API is available. Consumer applications can now subscribe directly to the events REST API in the PTP events producer sidecar. The PTP fast events REST API v2 is compliant with the O-RAN O-Cloud Notification API Specification for Event Consumers 3.0. You can change the API version by setting the ptpEventConfig.apiVersion field in the PtpOperatorConfig resource.
Description

For details of support and configuration of PTP on cluster nodes, see "Configuring single-node OpenShift clusters for vDU application workloads". The DU node can run in the following modes:

  • As an ordinary clock (OC) synced to a grandmaster clock or boundary clock (T-BC)
  • As a grandmaster clock (T-GM) synced from GPS, with support for single or dual card E810 NICs
  • As dual boundary clocks (one per NIC), with support for E810 NICs
  • As a T-BC with a highly available (HA) system clock when there are multiple time sources on different NICs
  • Optional: as a boundary clock for radio units (RUs)
Limits and requirements
  • Limited to two boundary clocks for dual NIC and HA
  • Limited to two card E810 configurations for T-GM
Engineering considerations
  • Configurations are provided for ordinary clock, boundary clock, boundary clock with a highly available system clock, and grandmaster clock
  • PTP fast event notifications use ConfigMap CRs to store PTP event subscriptions
  • The PTP events REST API v2 does not have a global subscription for all lower hierarchy resources contained in the resource path. You subscribe consumer applications to the various available event types separately. A sketch of selecting the v2 API follows this list.
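
The following sketch shows how the events API version might be selected in the PtpOperatorConfig CR. The complete reference CR, PtpOperatorConfigForEvent.yaml, appears in the YAML reference later in this section; the "2.0" value and node selector are assumptions used for illustration:

apiVersion: ptp.openshift.io/v1
kind: PtpOperatorConfig
metadata:
  name: default
  namespace: openshift-ptp
spec:
  daemonNodeSelector:
    node-role.kubernetes.io/worker: ""   # illustrative node selector
  ptpEventConfig:
    apiVersion: "2.0"                    # select the PTP fast events REST API v2
    enableEventPublisher: true
    transportHost: "http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043"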

3.2.3.4. SR-IOV Operator

New in this release
  • No reference design updates in this release
Description
The SR-IOV Operator provisions and configures the SR-IOV CNI and device plugins. Both netdevice (kernel VFs) and vfio (DPDK) devices are supported and applicable to the RAN use models.
Limits and requirements
  • Use devices that are supported by OpenShift Container Platform
  • SR-IOV and IOMMU enablement in the BIOS: the SR-IOV Network Operator automatically enables IOMMU on the kernel command line.
  • SR-IOV VFs do not receive link state updates from the PF. If link-down detection is needed, it must be configured at the protocol level.
  • NICs that do not support firmware updates with secure boot or kernel lockdown must be pre-configured with enough virtual functions (VFs) to support the number of VFs required by the application workload.

    Note

    You might need to disable the SR-IOV Operator plugin for unsupported NICs by using the undocumented disablePlugins option.

Engineering considerations
  • SR-IOV interfaces with the vfio driver type are typically used to enable additional secondary networks for applications that require high throughput or low latency. A sketch of a vfio-pci node policy follows this list.
  • Customer variation in the configuration and number of SriovNetwork and SriovNetworkNodePolicy custom resources (CRs) is expected.
  • IOMMU kernel command line settings are applied with a MachineConfig CR at install time. This ensures that the SriovOperator CR does not cause a reboot of the node when adding it.
  • SR-IOV support for parallel node drain does not apply to single-node OpenShift clusters.
  • If you exclude the SriovOperatorConfig CR from your deployment, the CR is not created automatically.
  • In scenarios where you pin workloads to specific nodes, the SR-IOV parallel node drain feature does not reschedule the pods. In these scenarios, the SR-IOV Operator disables the parallel node drain feature.
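
A minimal sketch of an SriovNetworkNodePolicy that exposes a vfio-pci (DPDK) resource follows. The policy name, resource name, interface name, and VF count are illustrative assumptions; the reference SriovNetworkNodePolicy.yaml provided in the ztp-site-generate container is the authoritative template:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: example-du-fh-policy           # illustrative name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: du_fh                  # exposed to pods as openshift.io/du_fh
  deviceType: vfio-pci                 # userspace (DPDK) driver
  numVfs: 8                            # pre-provision enough VFs for the workload
  nicSelector:
    pfNames: ["ens1f0"]                # hardware-specific interface name (assumption)
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  priority: 10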

3.2.3.5. Logging

New in this release
  • Cluster Logging Operator 6.0 is new in this release. Update your existing implementation to adapt to the new version of the API.
Description
Use logging to collect logs from the far edge node for remote analysis. The recommended log collector is Vector.
Engineering considerations
  • Handling logs beyond the infrastructure and audit logs, for example, from the application workload, requires additional CPU and network bandwidth based on the additional logging rate.
  • As of OpenShift Container Platform 4.14, Vector is the reference log collector.

    Note

    Use of fluentd in the RAN use models is deprecated.

Additional resources

3.2.3.6. SRIOV-FEC Operator

New in this release
  • No reference design updates in this release
Description
The SRIOV-FEC Operator is an optional 3rd-party certified Operator that supports FEC accelerator hardware.
Limits and requirements
  • Starting with FEC Operator v2.7.0:

    • SecureBoot is supported
    • The vfio driver for the PF requires a vfio-token to be injected into pods. Applications in the pod can pass the VF token to DPDK by using the EAL parameter --vfio-vf-token.
Engineering considerations
  • The SRIOV-FEC Operator uses CPU cores from the isolated CPU set.
  • You can validate FEC readiness as part of the pre-checks for application deployment, for example, by extending the validation policies.

3.2.3.7. Lifecycle Agent

New in this release
  • No reference design updates in this release
Description
The Lifecycle Agent provides local lifecycle management services for single-node OpenShift clusters.
Limits and requirements
  • The Lifecycle Agent is not applicable in multi-node clusters or single-node OpenShift clusters with an additional worker.
  • Requires a persistent volume that you create when you install the cluster. For partition requirements, see "Configuring a shared container directory between ostree stateroots when using GitOps ZTP".

3.2.3.8. Local Storage Operator

New in this release
  • No reference design updates in this release
Description
You can use the Local Storage Operator to create persistent volumes that can be used as PVC resources by applications. The number and type of PV resources that you create depends on your requirements.
Engineering considerations
  • Create backing storage for PV CRs before creating the PVs. This can be a partition, a local volume, an LVM volume, or a full disk.
  • Refer to the device listing in LocalVolume CRs by the hardware path used to access each device to ensure correct allocation of disks and partitions. Logical names (for example, /dev/sda) are not guaranteed to be consistent across node reboots.

    For more information, see the RHEL 9 documentation on device identifiers.

3.2.3.9. LVM Storage

New in this release
  • No reference design updates in this release
Note

Logical Volume Manager (LVM) Storage is an optional component.

When you use LVM Storage as the storage solution, it replaces the Local Storage Operator, and the CPU required is assigned to the management partition as platform overhead. The reference configuration must include one of these storage solutions, but not both.

Description
LVM Storage provides dynamic provisioning of block and file storage. LVM Storage creates logical volumes from local devices that can be used as PVC resources by applications. Volume expansion and snapshots are also possible.
Limits and requirements
  • In single-node OpenShift clusters, persistent storage must be provided by either LVM Storage or local storage, not both.
  • Volume snapshots are excluded from the reference configuration.
Engineering considerations
  • LVM Storage can be used as the local storage implementation for the RAN DU use case. When LVM Storage is used as the storage solution, it replaces the Local Storage Operator, and the CPU required is assigned to the management partition as platform overhead. The reference configuration must include one of these storage solutions, but not both.
  • Ensure that sufficient disks or partitions are available for storage requirements.

3.2.3.10. Workload partitioning

New in this release
  • No reference design updates in this release
Description
Workload partitioning pins OpenShift platform and Day 2 Operator pods that are part of the DU profile to the reserved CPU set and removes the reserved CPUs from node accounting. This leaves all non-reserved CPU cores available for user workloads.
Limits and requirements
  • Namespace and Pod CRs must be annotated to allow the pod to be applied to the management partition (see the sketch after this section).
  • Pods with CPU limits cannot be allocated to the partition. This is because mutation can change the pod QoS.
  • For more information about the minimum number of CPUs that can be allocated to the management partition, see Node Tuning Operator.
Engineering considerations
  • Workload partitioning pins all management pods to reserved cores. A sufficient number of cores must be allocated to the reserved set to account for the operating system, management pods, and expected spikes in CPU use that occur when the workload starts, the node reboots, or other system events happen.
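
A minimal sketch of the annotations that admit pods to the management partition follows, assuming a hypothetical management namespace and pod. The reference namespace CRs later in this section, for example ClusterLogNS.yaml, carry the same namespace annotation:

apiVersion: v1
kind: Namespace
metadata:
  name: example-management-ns                  # illustrative namespace
  annotations:
    workload.openshift.io/allowed: management
---
apiVersion: v1
kind: Pod
metadata:
  name: example-management-pod                 # illustrative pod
  namespace: example-management-ns
  annotations:
    # Requests that the pod CPU accounting is moved to the management (reserved) partition
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  containers:
    - name: example
      image: registry.example.com/management-agent:latest   # placeholder image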

Additional resources

3.2.3.11. Cluster tuning

New in this release
  • No reference design updates in this release
Description
For a full list of optional components that you can enable or disable before installation, see "Cluster capabilities".
Limits and requirements
  • Cluster capabilities are not available for installer-provisioned installation methods.
  • You must apply all platform tuning configurations. The following table lists the required platform tuning configurations:

    Table 3.2. Cluster capabilities configurations
    Feature / Description

    Remove optional cluster capabilities

    Reduce the OpenShift Container Platform footprint by disabling optional cluster Operators on single-node OpenShift clusters.

    • Remove all optional Operators except the Marketplace and Node Tuning Operator.

    Configure cluster monitoring

    Configure the monitoring stack for a reduced footprint by doing the following:

    • Disable the local alertmanager and telemeter components.
    • If you use RHACM observability, the CR must be augmented with appropriate additionalAlertManagerConfigs CRs to forward alerts to the hub cluster.
    • Reduce the Prometheus retention period to 24h.

      Note

      The RHACM hub cluster aggregates managed cluster metrics.

    Disable networking diagnostics

    Disable networking diagnostics for single-node OpenShift because they are not required.

    Configure a single OperatorHub catalog source

    Configure the cluster to use a single catalog source that contains only the Operators required for a RAN DU deployment. Each catalog source increases the CPU usage on the cluster. Using a single CatalogSource fits within the platform CPU budget.

    Disable the Console Operator

    If the cluster was deployed with the console disabled, the Console CR (ConsoleOperatorDisable.yaml) is not needed. If the cluster was deployed with the console enabled, you must apply the Console CR.

Engineering considerations
  • In OpenShift Container Platform 4.16 and later, clusters do not automatically revert to cgroup v1 when a PerformanceProfile CR is applied. If workloads running on the cluster require cgroup v1, the cluster needs to be configured to use cgroup v1 (a sketch follows the note below).

    Note

    If you need to configure cgroup v1, make the configuration part of the initial cluster deployment.
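
The following sketch selects cgroup v1 through the cluster-scoped Node configuration resource, on the assumption that it is applied as part of the initial cluster deployment. Verify the supported procedure for your release in the OpenShift Container Platform documentation before using it:

apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v1"    # revert from the default cgroup v2 to cgroup v1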

Additional resources

3.2.3.12. Machine configuration

New in this release
  • No reference design updates in this release
Limits and requirements
  • The CRI-O wipe disable MachineConfig assumes that images on disk are static other than during scheduled maintenance in defined maintenance windows. To ensure that the images are static, do not set the pod imagePullPolicy field to Always.

    Table 3.3. Machine configuration options
    Feature / Description

    Container runtime

    Sets the container runtime to crun for all node roles.

    kubelet config and container mount hiding

    Reduces the frequency of kubelet housekeeping and eviction monitoring to reduce CPU usage. Creates a container mount namespace, visible to kubelet and CRI-O, to reduce system mount scanning resource usage.

    SCTP

    Optional configuration (enabled by default). Enables SCTP. SCTP is required by RAN applications but is disabled by default in RHCOS.

    kdump

    Optional configuration (enabled by default). Enables kdump to capture debug information when a kernel panic occurs.

    Note

    The reference CRs that enable kdump have an increased memory reservation based on the set of drivers and kernel modules included in the reference configuration.

    CRI-O wipe disable

    Disables automatic wiping of the CRI-O image cache after unclean shutdown.

    SR-IOV related kernel arguments

    Includes additional SR-IOV related arguments on the kernel command line.

    RCU Normal systemd service

    Sets rcu_normal after the system has fully started.

    One-shot time sync

    Runs a one-time NTP system time synchronization job for control plane or worker nodes.

3.2.3.13. Telco RAN DU deployment components

The following sections describe the various OpenShift Container Platform components and configurations that you use to configure the hub cluster with Red Hat Advanced Cluster Management (RHACM).

3.2.3.13.1. Red Hat Advanced Cluster Management
New in this release
  • No reference design updates in this release
Description

Red Hat Advanced Cluster Management (RHACM) provides Multi Cluster Engine (MCE) installation and ongoing lifecycle management functionality for deployed clusters. You declaratively manage cluster configuration and upgrades by applying Policy custom resources (CRs) to clusters during maintenance windows.

You apply policies with the RHACM policy controller, as managed by Topology Aware Lifecycle Manager (TALM). The policy controller handles configuration, upgrades, and cluster status.

When installing managed clusters, RHACM applies labels and initial ignition configuration to individual nodes in support of custom disk partitioning, allocation of roles, and allocation to machine config pools. You define these configurations with SiteConfig or ClusterInstance CRs.

Limits and requirements
  • 300 SiteConfig CRs per ArgoCD application. You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster.
  • A single hub cluster supports up to 3500 deployed single-node OpenShift clusters with 5 Policy CRs bound to each cluster.
Engineering considerations
  • Use RHACM policy hub-side templating to better scale cluster configuration. You can significantly reduce the number of policies by using a single group policy, or a small number of general group policies, where the group and per-cluster values are substituted in templates.
  • Cluster-specific configuration: managed clusters typically have some number of configuration values that are specific to the individual cluster. These configurations should be managed using RHACM policy hub-side templating, with values pulled from ConfigMap CRs based on the cluster name (a sketch of this pattern follows this list).
  • To save CPU resources on managed clusters, policies that apply static configuration should be unbound from managed clusters after GitOps ZTP installation of the cluster.
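
The following is a minimal sketch of the hub-side templating pattern, pulling a per-cluster value from a ConfigMap keyed by cluster name. The policy, namespace, ConfigMap coordinates, and wrapped object are illustrative assumptions, not part of the reference configuration:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: example-group-du-sno-config           # illustrative name
  namespace: ztp-group                        # illustrative namespace
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: example-du-sriov-numvfs
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: sriovnetwork.openshift.io/v1
                kind: SriovNetworkNodePolicy
                metadata:
                  name: sriov-nnp-du-fh
                  namespace: openshift-sriov-network-operator
                spec:
                  # Per-cluster value resolved on the hub from a ConfigMap keyed by cluster name
                  numVfs: '{{hub fromConfigMap "ztp-site-data" "site-values" (printf "%s-numvfs" .ManagedClusterName) | toInt hub}}'
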
3.2.3.13.2. Topology Aware Lifecycle Manager
New in this release
  • No reference design updates in this release
Description
Topology Aware Lifecycle Manager (TALM) is an Operator that runs only on the hub cluster and manages how changes, including cluster and Operator upgrades and configuration, are rolled out to the network.
Limits and requirements
  • TALM supports concurrent cluster deployment in batches of 400.
  • Precaching and backup features are for single-node OpenShift clusters only.
Engineering considerations
  • Only policies that have the ran.openshift.io/ztp-deploy-wave annotation are automatically applied by TALM during initial cluster installation.
  • You can create further ClusterGroupUpgrade CRs to control the policies that TALM remediates (a sketch follows this list).
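
A minimal sketch of a ClusterGroupUpgrade CR that remediates selected policies on a group of clusters follows. The cluster names, policy names, and batch settings are illustrative assumptions:

apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: example-du-upgrade            # illustrative name
  namespace: default
spec:
  clusters:
    - sno-du-1                        # illustrative managed cluster names
    - sno-du-2
  managedPolicies:
    - du-upgrade-platform-upgrade     # illustrative policy names
    - du-upgrade-operator-catsrc-policy
  enable: true
  remediationStrategy:
    maxConcurrency: 1                 # remediate one cluster at a time
    timeout: 240                      # minutes
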
3.2.3.13.3. GitOps and GitOps ZTP plugins
New in this release
  • No reference design updates in this release
Description

GitOps and the GitOps ZTP plugins provide a GitOps-based infrastructure for managing cluster deployment and configuration. Cluster definitions and configurations are maintained as a declarative state in Git. You can apply ClusterInstance CRs to the hub cluster, where the SiteConfig Operator renders them as installation CRs. Alternatively, you can use the GitOps ZTP plugin to generate installation CRs directly from SiteConfig CRs. The GitOps ZTP plugin supports automatic wrapping of configuration CRs in policies based on PolicyGenTemplate CRs.

Note

You can deploy and manage multiple versions of OpenShift Container Platform on managed clusters by using the baseline reference configuration CRs. You can use custom CRs alongside the baseline CRs.

To maintain multiple per-version policies simultaneously, use Git to manage the versions of the source CRs and the policy CRs (PolicyGenTemplate or PolicyGenerator).

Keep reference CRs and custom CRs under separate directories. Doing this lets you patch and update the reference CRs by simply replacing all of the directory contents, without touching the custom CRs.

Limits
  • 300 SiteConfig CRs per ArgoCD application. You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster.
  • Content in the /source-crs folder in Git overrides content provided in the GitOps ZTP plugin container. Git takes precedence in the search path.
  • Add the /source-crs folder in the same directory as the kustomization.yaml file, which includes the PolicyGenTemplate as a generator.

    Note

    Alternative locations for the /source-crs directory are not supported in this context.

  • The extraManifestPath field of the SiteConfig CR is deprecated from OpenShift Container Platform 4.15 and later. Use the new extraManifests.searchPaths field instead.
Engineering considerations
  • For multi-node cluster upgrades, you can pause MachineConfigPool (MCP) CRs during maintenance windows by setting the paused field to true. You can increase the number of nodes that are updated simultaneously per MCP by configuring the maxUnavailable setting in the MCP CR. The maxUnavailable field defines the percentage of nodes in the pool that can be simultaneously unavailable during a MachineConfig update. Set maxUnavailable to the maximum tolerable value. This reduces the number of reboots during an upgrade, which results in shorter upgrade times. When you finally unpause the MCP, all of the changed configurations are applied with a single reboot. See the sketch of these MCP fields after this list.
  • During cluster installation, you can pause custom MCP CRs by setting the paused field to true and setting maxUnavailable to 100% to improve installation times.
  • To avoid confusion or unintentional overwriting of files when updating content, use unique and distinguishable names for files in the /source-crs folder and for extra manifests in Git.
  • The SiteConfig CR allows multiple extra-manifest paths. When files with the same name are found in multiple directory paths, the last file found takes precedence. This allows you to put the full set of version-specific Day 0 manifests (extra-manifests) in Git and reference them from the SiteConfig CR. With this feature, you can deploy multiple OpenShift Container Platform versions to managed clusters simultaneously.
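
The following is a minimal sketch of the MachineConfigPool fields discussed above, assuming a hypothetical worker pool; adjust the pool name and the maxUnavailable value to your environment:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker                        # illustrative pool name
spec:
  paused: true                        # hold MachineConfig rollouts during the maintenance window
  maxUnavailable: "50%"               # percentage of nodes that can update simultaneously when unpaused
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
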
3.2.3.13.4. Agent-based installer
New in this release
  • No reference design updates in this release
Description

The Agent-based installer (ABI) provides installation capabilities without centralized infrastructure. The installation program creates an ISO image that you mount to the server. When the server boots, it installs OpenShift Container Platform and the supplied extra manifests.

Note

You can also use ABI to install OpenShift Container Platform clusters without a hub cluster. An image registry is still required when you use ABI in this manner.

The Agent-based installer (ABI) is an optional component.

Limits and requirements
  • You can supply a limited set of additional manifests at installation time.
  • You must include MachineConfiguration CRs that are required for the RAN DU use case.
Engineering considerations
  • ABI provides a baseline OpenShift Container Platform installation.
  • You install Day 2 Operators and the remainder of the RAN DU use case configurations after installation.

3.2.4. Telco RAN distributed unit (DU) reference configuration CRs

Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco RAN DU profile. Some of the CRs are optional depending on your requirements. CR fields that you can change are annotated in the CR with YAML comments.

Note

You can extract the complete set of RAN DU CRs from the ztp-site-generate container image. See Preparing the GitOps ZTP site configuration repository for more information.

3.2.4.1. Day 2 Operators reference CRs

Table 3.4. Day 2 Operators CRs
Component / Reference CR / Optional / New in this release

Cluster logging

ClusterLogForwarder.yaml

Cluster logging

ClusterLogNS.yaml

Cluster logging

ClusterLogOperGroup.yaml

Cluster logging

ClusterLogServiceAccount.yaml

Cluster logging

ClusterLogServiceAccountAuditBinding.yaml

Cluster logging

ClusterLogServiceAccountInfrastructureBinding.yaml

Cluster logging

ClusterLogSubscription.yaml

Lifecycle Agent Operator

ImageBasedUpgrade.yaml

Lifecycle Agent Operator

LcaSubscription.yaml

Lifecycle Agent Operator

LcaSubscriptionNS.yaml

Lifecycle Agent Operator

LcaSubscriptionOperGroup.yaml

Local Storage Operator

StorageClass.yaml

Local Storage Operator

StorageLV.yaml

Local Storage Operator

StorageNS.yaml

Local Storage Operator

StorageOperGroup.yaml

Local Storage Operator

StorageSubscription.yaml

LVM Operator

LVMOperatorStatus.yaml

LVM Operator

StorageLVMCluster.yaml

LVM Operator

StorageLVMSubscription.yaml

LVM Operator

StorageLVMSubscriptionNS.yaml

LVM Operator

StorageLVMSubscriptionOperGroup.yaml

Node Tuning Operator

PerformanceProfile.yaml

Node Tuning Operator

TunedPerformancePatch.yaml

PTP fast event notifications

PtpConfigBoundaryForEvent.yaml

PTP fast event notifications

PtpConfigForHAForEvent.yaml

PTP fast event notifications

PtpConfigMasterForEvent.yaml

PTP fast event notifications

PtpConfigSlaveForEvent.yaml

PTP Operator - high availability

PtpConfigBoundary.yaml

PTP Operator - high availability

PtpConfigForHA.yaml

PTP Operator

PtpConfigDualCardGmWpc.yaml

PTP Operator

PtpConfigGmWpc.yaml

PTP Operator

PtpConfigSlave.yaml

PTP Operator

PtpOperatorConfig.yaml

PTP Operator

PtpOperatorConfigForEvent.yaml

PTP Operator

PtpSubscription.yaml

PTP Operator

PtpSubscriptionNS.yaml

PTP Operator

PtpSubscriptionOperGroup.yaml

SR-IOV FEC Operator

AcceleratorsNS.yaml

SR-IOV FEC Operator

AcceleratorsOperGroup.yaml

SR-IOV FEC Operator

AcceleratorsSubscription.yaml

SR-IOV FEC Operator

SriovFecClusterConfig.yaml

SR-IOV Operator

SriovNetwork.yaml

SR-IOV Operator

SriovNetworkNodePolicy.yaml

SR-IOV Operator

SriovOperatorConfig.yaml

SR-IOV Operator

SriovOperatorConfigForSNO.yaml

SR-IOV Operator

SriovSubscription.yaml

SR-IOV Operator

SriovSubscriptionNS.yaml

SR-IOV Operator

SriovSubscriptionOperGroup.yaml

3.2.4.2. Cluster tuning reference CRs

Table 3.5. Cluster tuning CRs
Component / Reference CR / Optional / New in this release

Composable OpenShift

example-sno.yaml

Console disable

ConsoleOperatorDisable.yaml

Disconnected registry

09-openshift-marketplace-ns.yaml

Disconnected registry

DefaultCatsrc.yaml

Disconnected registry

DisableOLMPprof.yaml

Disconnected registry

DisconnectedICSP.yaml

Disconnected registry

OperatorHub.yaml

OperatorHub is required for single-node OpenShift and optional for multi-node clusters

Monitoring configuration

ReduceMonitoringFootprint.yaml

Network diagnostics disable

DisableSnoNetworkDiag.yaml

3.2.4.3. Machine configuration reference CRs

Table 3.6. Machine configuration CRs
Component / Reference CR / Optional / New in this release

Container runtime (crun)

enable-crun-master.yaml

Container runtime (crun)

enable-crun-worker.yaml

CRI-O wipe disable

99-crio-disable-wipe-master.yaml

CRI-O wipe disable

99-crio-disable-wipe-worker.yaml

kdump enable

06-kdump-master.yaml

kdump enable

06-kdump-worker.yaml

kubelet configuration / container mount hiding

01-container-mount-ns-and-kubelet-conf-master.yaml

kubelet configuration / container mount hiding

01-container-mount-ns-and-kubelet-conf-worker.yaml

One-shot time sync

99-sync-time-once-master.yaml

One-shot time sync

99-sync-time-once-worker.yaml

SCTP

03-sctp-machine-config-master.yaml

SCTP

03-sctp-machine-config-worker.yaml

Set RCU normal

08-set-rcu-normal-master.yaml

Set RCU normal

08-set-rcu-normal-worker.yaml

SR-IOV related kernel arguments

07-sriov-related-kernel-args-master.yaml

SR-IOV related kernel arguments

07-sriov-related-kernel-args-worker.yaml

3.2.4.4. YAML reference

The following is a complete reference for all of the custom resources (CRs) that make up the telco RAN DU 4.17 reference configuration.

3.2.4.4.1. Day 2 Operators reference YAML

ClusterLogForwarder.yaml

apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
  annotations: {}
spec:
  # outputs: $outputs
  # pipelines: $pipelines
  serviceAccount:
    name: logcollector
#apiVersion: "observability.openshift.io/v1"
#kind: ClusterLogForwarder
#metadata:
#  name: instance
#  namespace: openshift-logging
# spec:
#   outputs:
#   - type: "kafka"
#     name: kafka-open
#     # below url is an example
#     kafka:
#       url: tcp://10.46.55.190:9092/test
#   filters:
#   - name: test-labels
#     type: openshiftLabels
#     openshiftLabels:
#       label1: test1
#       label2: test2
#       label3: test3
#       label4: test4
#   pipelines:
#   - name: all-to-default
#     inputRefs:
#     - audit
#     - infrastructure
#     filterRefs:
#     - test-labels
#     outputRefs:
#     - kafka-open
#   serviceAccount:
#     name: logcollector

ClusterLogNS.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    workload.openshift.io/allowed: management

ClusterLogOperGroup.yaml

---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
  annotations: {}
spec:
  targetNamespaces:
    - openshift-logging

ClusterLogServiceAccount.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logcollector
  namespace: openshift-logging
  annotations: {}

ClusterLogServiceAccountAuditBinding.yaml

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logcollector-audit-logs-binding
  annotations: {}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-audit-logs
subjects:
  - kind: ServiceAccount
    name: logcollector
    namespace: openshift-logging

ClusterLogServiceAccountInfrastructureBinding.yaml

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logcollector-infrastructure-logs-binding
  annotations: {}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-infrastructure-logs
subjects:
  - kind: ServiceAccount
    name: logcollector
    namespace: openshift-logging

ClusterLogSubscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
  annotations: {}
spec:
  channel: "stable-6.0"
  name: cluster-logging
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown

ImageBasedUpgrade.yaml

apiVersion: lca.openshift.io/v1
kind: ImageBasedUpgrade
metadata:
  name: upgrade
spec:
  stage: Idle
  # When setting `stage: Prep`, remember to add the seed image reference object below.
  # seedImageRef:
  #   image: $image
  #   version: $version

LcaSubscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lifecycle-agent
  namespace: openshift-lifecycle-agent
  annotations: {}
spec:
  channel: "stable"
  name: lifecycle-agent
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown

LcaSubscriptionNS.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-lifecycle-agent
  annotations:
    workload.openshift.io/allowed: management
  labels:
    kubernetes.io/metadata.name: openshift-lifecycle-agent

LcaSubscriptionOperGroup.yaml

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: lifecycle-agent
  namespace: openshift-lifecycle-agent
  annotations: {}
spec:
  targetNamespaces:
    - openshift-lifecycle-agent

StorageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations: {}
  name: example-storage-class
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete

StorageLV.yaml

apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "openshift-local-storage"
  annotations: {}
spec:
  logLevel: Normal
  managementState: Managed
  storageClassDevices:
    # The list of storage classes and associated devicePaths need to be specified like this example:
    - storageClassName: "example-storage-class"
      volumeMode: Filesystem
      fsType: xfs
      # The below must be adjusted to the hardware.
      # For stability and reliability, it's recommended to use persistent
      # naming conventions for devicePaths, such as /dev/disk/by-path.
      devicePaths:
        - /dev/disk/by-path/pci-0000:05:00.0-nvme-1
#---
## How to verify
## 1. Create a PVC
# apiVersion: v1
# kind: PersistentVolumeClaim
# metadata:
#   name: local-pvc-name
# spec:
#   accessModes:
#   - ReadWriteOnce
#   volumeMode: Filesystem
#   resources:
#     requests:
#       storage: 100Gi
#   storageClassName: example-storage-class
#---
## 2. Create a pod that mounts it
# apiVersion: v1
# kind: Pod
# metadata:
#   labels:
#     run: busybox
#   name: busybox
# spec:
#   containers:
#   - image: quay.io/quay/busybox:latest
#     name: busybox
#     resources: {}
#     command: ["/bin/sh", "-c", "sleep infinity"]
#     volumeMounts:
#     - name: local-pvc
#       mountPath: /data
#   volumes:
#   - name: local-pvc
#     persistentVolumeClaim:
#       claimName: local-pvc-name
#   dnsPolicy: ClusterFirst
#   restartPolicy: Always
## 3. Run the pod on the cluster and verify the size and access of the `/data` mount

StorageNS.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-local-storage
  annotations:
    workload.openshift.io/allowed: management

StorageOperGroup.yaml

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-local-storage
  namespace: openshift-local-storage
  annotations: {}
spec:
  targetNamespaces:
    - openshift-local-storage

StorageSubscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
  annotations: {}
spec:
  channel: "stable"
  name: local-storage-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown

LVMOperatorStatus.yaml

# This CR verifies the installation/upgrade of the LVMS Operator
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
  name: lvms-operator.openshift-storage
  annotations: {}
status:
  components:
    refs:
      - kind: Subscription
        namespace: openshift-storage
        conditions:
          - type: CatalogSourcesUnhealthy
            status: "False"
      - kind: InstallPlan
        namespace: openshift-storage
        conditions:
          - type: Installed
            status: "True"
      - kind: ClusterServiceVersion
        namespace: openshift-storage
        conditions:
          - type: Succeeded
            status: "True"
            reason: InstallSucceeded

StorageLVMCluster.yaml

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
  annotations: {}
spec: {}
#example: creating a vg1 volume group leveraging all available disks on the node
#         except the installation disk.
#  storage:
#    deviceClasses:
#    - name: vg1
#      thinPoolConfig:
#        name: thin-pool-1
#        sizePercent: 90
#        overprovisionRatio: 10

StorageLVMSubscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lvms-operator
  namespace: openshift-storage
  annotations: {}
spec:
  channel: "stable"
  name: lvms-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown

StorageLVMSubscriptionNS.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    workload.openshift.io/allowed: "management"
    openshift.io/cluster-monitoring: "true"
  annotations: {}

StorageLVMSubscriptionOperGroup.yaml

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: lvms-operator-operatorgroup
  namespace: openshift-storage
  annotations: {}
spec:
  targetNamespaces:
    - openshift-storage

PerformanceProfile.yaml

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml
  # matches this name: include=openshift-node-performance-${PerformanceProfile.metadata.name}
  # Also in file 'validatorCRs/informDuValidator.yaml':
  # name: 50-performance-${PerformanceProfile.metadata.name}
  name: openshift-node-performance-profile
  annotations:
    ran.openshift.io/reference-configuration: "ran-du.redhat.com"
spec:
  additionalKernelArgs:
    - "rcupdate.rcu_normal_after_boot=0"
    - "efi=runtime"
    - "vfio_pci.enable_sriov=1"
    - "vfio_pci.disable_idle_d3=1"
    - "module_blacklist=irdma"
  cpu:
    isolated: $isolated
    reserved: $reserved
  hugepages:
    defaultHugepagesSize: $defaultHugepagesSize
    pages:
      - size: $size
        count: $count
        node: $node
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/$mcp: ""
  nodeSelector:
    node-role.kubernetes.io/$mcp: ''
  numa:
    topologyPolicy: "restricted"
  # To use the standard (non-realtime) kernel, set enabled to false
  realTimeKernel:
    enabled: true
  workloadHints:
    # WorkloadHints defines the set of upper level flags for different type of workloads.
    # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints
    # for detailed descriptions of each item.
    # The configuration below is set for a low latency, performance mode.
    realTime: true
    highPowerConsumption: false
    perPodPowerManagement: false

TunedPerformancePatch.yaml

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: performance-patch
  namespace: openshift-cluster-node-tuning-operator
  annotations: {}
spec:
  profile:
    - name: performance-patch
      # Please note:
      # - The 'include' line must match the associated PerformanceProfile name, following below pattern
      #   include=openshift-node-performance-${PerformanceProfile.metadata.name}
      # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from
      #   the [sysctl] section and remove the entire section if it is empty.
      data: |
        [main]
        summary=Configuration changes profile inherited from performance created tuned
        include=openshift-node-performance-openshift-node-performance-profile
        [scheduler]
        group.ice-ptp=0:f:10:*:ice-ptp.*
        group.ice-gnss=0:f:10:*:ice-gnss.*
        group.ice-dplls=0:f:10:*:ice-dplls.*
        [service]
        service.stalld=start,enable
        service.chronyd=stop,disable
  recommend:
    - machineConfigLabels:
        machineconfiguration.openshift.io/role: "$mcp"
      priority: 19
      profile: performance-patch

PtpConfigBoundaryForEvent.yaml

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "boundary"
      ptp4lOpts: "-2 --summary_interval -4"
      phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
      ptp4lConf: |
        # The interface name is hardware-specific
        [$iface_slave]
        masterOnly 0
        [$iface_master_1]
        masterOnly 1
        [$iface_master_2]
        masterOnly 1
        [$iface_master_3]
        masterOnly 1
        [global]
        #
        # Default Data Set
        #
        twoStepFlag 1
        slaveOnly 0
        priority1 128
        priority2 128
        domainNumber 24
        #utc_offset 37
        clockClass 248
        clockAccuracy 0xFE
        offsetScaledLogVariance 0xFFFF
        free_running 0
        freq_est_interval 1
        dscp_event 0
        dscp_general 0
        dataset_comparison G.8275.x
        G.8275.defaultDS.localPriority 128
        #
        # Port Data Set
        #
        logAnnounceInterval -3
        logSyncInterval -4
        logMinDelayReqInterval -4
        logMinPdelayReqInterval -4
        announceReceiptTimeout 3
        syncReceiptTimeout 0
        delayAsymmetry 0
        fault_reset_interval -4
        neighborPropDelayThresh 20000000
        masterOnly 0
        G.8275.portDS.localPriority 128
        #
        # Run time options
        #
        assume_two_step 0
        logging_level 6
        path_trace_enabled 0
        follow_up_info 0
        hybrid_e2e 0
        inhibit_multicast_service 0
        net_sync_monitor 0
        tc_spanning_tree 0
        tx_timestamp_timeout 50
        unicast_listen 0
        unicast_master_table 0
        unicast_req_duration 3600
        use_syslog 1
        verbose 0
        summary_interval 0
        kernel_leap 1
        check_fup_sync 0
        clock_class_threshold 135
        #
        # Servo Options
        #
        pi_proportional_const 0.0
        pi_integral_const 0.0
        pi_proportional_scale 0.0
        pi_proportional_exponent -0.3
        pi_proportional_norm_max 0.7
        pi_integral_scale 0.0
        pi_integral_exponent 0.4
        pi_integral_norm_max 0.3
        step_threshold 2.0
        first_step_threshold 0.00002
        max_frequency 900000000
        clock_servo pi
        sanity_freq_limit 200000000
        ntpshm_segment 0
        #
        # Transport options
        #
        transportSpecific 0x0
        ptp_dst_mac 01:1B:19:00:00:00
        p2p_dst_mac 01:80:C2:00:00:0E
        udp_ttl 1
        udp6_scope 0x0E
        uds_address /var/run/ptp4l
        #
        # Default interface options
        #
        clock_type BC
        network_transport L2
        delay_mechanism E2E
        time_stamping hardware
        tsproc_mode filter
        delay_filter moving_median
        delay_filter_length 10
        egressLatency 0
        ingressLatency 0
        boundary_clock_jbod 0
        #
        # Clock description
        #
        productDescription ;;
        revisionData ;;
        manufacturerIdentity 00:00:00
        userDescription ;
        timeSource 0xA0
  recommend:
    - profile: "boundary"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpConfigForHAForEvent.yaml

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary-ha
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "boundary-ha"
      ptp4lOpts: ""
      phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
        haProfiles: "$profile1,$profile2"
  recommend:
    - profile: "boundary-ha"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpConfigMasterForEvent.yaml

# The grandmaster profile is provided for testing only
# It is not installed on production clusters
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: grandmaster
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "grandmaster"
      # The interface name is hardware-specific
      interface: $interface
      ptp4lOpts: "-2 --summary_interval -4"
      phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
      ptp4lConf: |
        [global]
        #
        # Default Data Set
        #
        twoStepFlag 1
        slaveOnly 0
        priority1 128
        priority2 128
        domainNumber 24
        #utc_offset 37
        clockClass 255
        clockAccuracy 0xFE
        offsetScaledLogVariance 0xFFFF
        free_running 0
        freq_est_interval 1
        dscp_event 0
        dscp_general 0
        dataset_comparison G.8275.x
        G.8275.defaultDS.localPriority 128
        #
        # Port Data Set
        #
        logAnnounceInterval -3
        logSyncInterval -4
        logMinDelayReqInterval -4
        logMinPdelayReqInterval -4
        announceReceiptTimeout 3
        syncReceiptTimeout 0
        delayAsymmetry 0
        fault_reset_interval -4
        neighborPropDelayThresh 20000000
        masterOnly 0
        G.8275.portDS.localPriority 128
        #
        # Run time options
        #
        assume_two_step 0
        logging_level 6
        path_trace_enabled 0
        follow_up_info 0
        hybrid_e2e 0
        inhibit_multicast_service 0
        net_sync_monitor 0
        tc_spanning_tree 0
        tx_timestamp_timeout 50
        unicast_listen 0
        unicast_master_table 0
        unicast_req_duration 3600
        use_syslog 1
        verbose 0
        summary_interval 0
        kernel_leap 1
        check_fup_sync 0
        clock_class_threshold 7
        #
        # Servo Options
        #
        pi_proportional_const 0.0
        pi_integral_const 0.0
        pi_proportional_scale 0.0
        pi_proportional_exponent -0.3
        pi_proportional_norm_max 0.7
        pi_integral_scale 0.0
        pi_integral_exponent 0.4
        pi_integral_norm_max 0.3
        step_threshold 2.0
        first_step_threshold 0.00002
        max_frequency 900000000
        clock_servo pi
        sanity_freq_limit 200000000
        ntpshm_segment 0
        #
        # Transport options
        #
        transportSpecific 0x0
        ptp_dst_mac 01:1B:19:00:00:00
        p2p_dst_mac 01:80:C2:00:00:0E
        udp_ttl 1
        udp6_scope 0x0E
        uds_address /var/run/ptp4l
        #
        # Default interface options
        #
        clock_type OC
        network_transport L2
        delay_mechanism E2E
        time_stamping hardware
        tsproc_mode filter
        delay_filter moving_median
        delay_filter_length 10
        egressLatency 0
        ingressLatency 0
        boundary_clock_jbod 0
        #
        # Clock description
        #
        productDescription ;;
        revisionData ;;
        manufacturerIdentity 00:00:00
        userDescription ;
        timeSource 0xA0
  recommend:
    - profile: "grandmaster"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpConfigSlaveForEvent.yaml

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: du-ptp-slave
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "slave"
      # The interface name is hardware-specific
      interface: $interface
      ptp4lOpts: "-2 -s --summary_interval -4"
      phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
      ptp4lConf: |
        [global]
        #
        # Default Data Set
        #
        twoStepFlag 1
        slaveOnly 1
        priority1 128
        priority2 128
        domainNumber 24
        #utc_offset 37
        clockClass 255
        clockAccuracy 0xFE
        offsetScaledLogVariance 0xFFFF
        free_running 0
        freq_est_interval 1
        dscp_event 0
        dscp_general 0
        dataset_comparison G.8275.x
        G.8275.defaultDS.localPriority 128
        #
        # Port Data Set
        #
        logAnnounceInterval -3
        logSyncInterval -4
        logMinDelayReqInterval -4
        logMinPdelayReqInterval -4
        announceReceiptTimeout 3
        syncReceiptTimeout 0
        delayAsymmetry 0
        fault_reset_interval -4
        neighborPropDelayThresh 20000000
        masterOnly 0
        G.8275.portDS.localPriority 128
        #
        # Run time options
        #
        assume_two_step 0
        logging_level 6
        path_trace_enabled 0
        follow_up_info 0
        hybrid_e2e 0
        inhibit_multicast_service 0
        net_sync_monitor 0
        tc_spanning_tree 0
        tx_timestamp_timeout 50
        unicast_listen 0
        unicast_master_table 0
        unicast_req_duration 3600
        use_syslog 1
        verbose 0
        summary_interval 0
        kernel_leap 1
        check_fup_sync 0
        clock_class_threshold 7
        #
        # Servo Options
        #
        pi_proportional_const 0.0
        pi_integral_const 0.0
        pi_proportional_scale 0.0
        pi_proportional_exponent -0.3
        pi_proportional_norm_max 0.7
        pi_integral_scale 0.0
        pi_integral_exponent 0.4
        pi_integral_norm_max 0.3
        step_threshold 2.0
        first_step_threshold 0.00002
        max_frequency 900000000
        clock_servo pi
        sanity_freq_limit 200000000
        ntpshm_segment 0
        #
        # Transport options
        #
        transportSpecific 0x0
        ptp_dst_mac 01:1B:19:00:00:00
        p2p_dst_mac 01:80:C2:00:00:0E
        udp_ttl 1
        udp6_scope 0x0E
        uds_address /var/run/ptp4l
        #
        # Default interface options
        #
        clock_type OC
        network_transport L2
        delay_mechanism E2E
        time_stamping hardware
        tsproc_mode filter
        delay_filter moving_median
        delay_filter_length 10
        egressLatency 0
        ingressLatency 0
        boundary_clock_jbod 0
        #
        # Clock description
        #
        productDescription ;;
        revisionData ;;
        manufacturerIdentity 00:00:00
        userDescription ;
        timeSource 0xA0
  recommend:
    - profile: "slave"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpConfigBoundary.yaml

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "boundary"
      ptp4lOpts: "-2"
      phc2sysOpts: "-a -r -n 24"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
      ptp4lConf: |
        # The interface name is hardware-specific
        [$iface_slave]
        masterOnly 0
        [$iface_master_1]
        masterOnly 1
        [$iface_master_2]
        masterOnly 1
        [$iface_master_3]
        masterOnly 1
        [global]
        #
        # Default Data Set
        #
        twoStepFlag 1
        slaveOnly 0
        priority1 128
        priority2 128
        domainNumber 24
        #utc_offset 37
        clockClass 248
        clockAccuracy 0xFE
        offsetScaledLogVariance 0xFFFF
        free_running 0
        freq_est_interval 1
        dscp_event 0
        dscp_general 0
        dataset_comparison G.8275.x
        G.8275.defaultDS.localPriority 128
        #
        # Port Data Set
        #
        logAnnounceInterval -3
        logSyncInterval -4
        logMinDelayReqInterval -4
        logMinPdelayReqInterval -4
        announceReceiptTimeout 3
        syncReceiptTimeout 0
        delayAsymmetry 0
        fault_reset_interval -4
        neighborPropDelayThresh 20000000
        masterOnly 0
        G.8275.portDS.localPriority 128
        #
        # Run time options
        #
        assume_two_step 0
        logging_level 6
        path_trace_enabled 0
        follow_up_info 0
        hybrid_e2e 0
        inhibit_multicast_service 0
        net_sync_monitor 0
        tc_spanning_tree 0
        tx_timestamp_timeout 50
        unicast_listen 0
        unicast_master_table 0
        unicast_req_duration 3600
        use_syslog 1
        verbose 0
        summary_interval 0
        kernel_leap 1
        check_fup_sync 0
        clock_class_threshold 135
        #
        # Servo Options
        #
        pi_proportional_const 0.0
        pi_integral_const 0.0
        pi_proportional_scale 0.0
        pi_proportional_exponent -0.3
        pi_proportional_norm_max 0.7
        pi_integral_scale 0.0
        pi_integral_exponent 0.4
        pi_integral_norm_max 0.3
        step_threshold 2.0
        first_step_threshold 0.00002
        max_frequency 900000000
        clock_servo pi
        sanity_freq_limit 200000000
        ntpshm_segment 0
        #
        # Transport options
        #
        transportSpecific 0x0
        ptp_dst_mac 01:1B:19:00:00:00
        p2p_dst_mac 01:80:C2:00:00:0E
        udp_ttl 1
        udp6_scope 0x0E
        uds_address /var/run/ptp4l
        #
        # Default interface options
        #
        clock_type BC
        network_transport L2
        delay_mechanism E2E
        time_stamping hardware
        tsproc_mode filter
        delay_filter moving_median
        delay_filter_length 10
        egressLatency 0
        ingressLatency 0
        boundary_clock_jbod 0
        #
        # Clock description
        #
        productDescription ;;
        revisionData ;;
        manufacturerIdentity 00:00:00
        userDescription ;
        timeSource 0xA0
  recommend:
    - profile: "boundary"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpConfigForHA.yaml

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary-ha
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "boundary-ha"
      ptp4lOpts: ""
      phc2sysOpts: "-a -r -n 24"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
        haProfiles: "$profile1,$profile2"
  recommend:
    - profile: "boundary-ha"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpConfigDualCardGmWpc.yaml

# The grandmaster profile is provided for testing only
# It is not installed on production clusters
# In this example two cards $iface_nic1 and $iface_nic2 are connected via
# SMA1 ports by a cable and $iface_nic2 receives 1PPS signals from $iface_nic1
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: grandmaster
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "grandmaster"
      ptp4lOpts: "-2 --summary_interval -4"
      phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s $iface_nic1 -n 24
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
      plugins:
        e810:
          enableDefaultConfig: false
          settings:
            LocalMaxHoldoverOffSet: 1500
            LocalHoldoverTimeout: 14400
            MaxInSpecOffset: 100
          pins: $e810_pins
          #  "$iface_nic1":
          #    "U.FL2": "0 2"
          #    "U.FL1": "0 1"
          #    "SMA2": "0 2"
          #    "SMA1": "2 1"
          #  "$iface_nic2":
          #    "U.FL2": "0 2"
          #    "U.FL1": "0 1"
          #    "SMA2": "0 2"
          #    "SMA1": "1 1"
          ublxCmds:
            - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1
                - "-P"
                - "29.20"
                - "-z"
                - "CFG-HW-ANT_CFG_VOLTCTRL,1"
              reportOutput: false
            - args: #ubxtool -P 29.20 -e GPS
                - "-P"
                - "29.20"
                - "-e"
                - "GPS"
              reportOutput: false
            - args: #ubxtool -P 29.20 -d Galileo
                - "-P"
                - "29.20"
                - "-d"
                - "Galileo"
              reportOutput: false
            - args: #ubxtool -P 29.20 -d GLONASS
                - "-P"
                - "29.20"
                - "-d"
                - "GLONASS"
              reportOutput: false
            - args: #ubxtool -P 29.20 -d BeiDou
                - "-P"
                - "29.20"
                - "-d"
                - "BeiDou"
              reportOutput: false
            - args: #ubxtool -P 29.20 -d SBAS
                - "-P"
                - "29.20"
                - "-d"
                - "SBAS"
              reportOutput: false
            - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000
                - "-P"
                - "29.20"
                - "-t"
                - "-w"
                - "5"
                - "-v"
                - "1"
                - "-e"
                - "SURVEYIN,600,50000"
              reportOutput: true
            - args: #ubxtool -P 29.20 -p MON-HW
                - "-P"
                - "29.20"
                - "-p"
                - "MON-HW"
              reportOutput: true
            - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248
                - "-P"
                - "29.20"
                - "-p"
                - "CFG-MSG,1,38,248"
              reportOutput: true
      ts2phcOpts: " "
      ts2phcConf: |
        [nmea]
        ts2phc.master 1
        [global]
        use_syslog  0
        verbose 1
        logging_level 7
        ts2phc.pulsewidth 100000000
        #cat /dev/GNSS to find available serial port
        #example value of gnss_serialport is /dev/ttyGNSS_1700_0
        ts2phc.nmea_serialport $gnss_serialport
        leapfile  /usr/share/zoneinfo/leap-seconds.list
        [$iface_nic1]
        ts2phc.extts_polarity rising
        ts2phc.extts_correction 0
        [$iface_nic2]
        ts2phc.master 0
        ts2phc.extts_polarity rising
        #this is a measured value in nanoseconds to compensate for SMA cable delay
        ts2phc.extts_correction -10
      ptp4lConf: |
        [$iface_nic1]
        masterOnly 1
        [$iface_nic1_1]
        masterOnly 1
        [$iface_nic1_2]
        masterOnly 1
        [$iface_nic1_3]
        masterOnly 1
        [$iface_nic2]
        masterOnly 1
        [$iface_nic2_1]
        masterOnly 1
        [$iface_nic2_2]
        masterOnly 1
        [$iface_nic2_3]
        masterOnly 1
        [global]
        #
        # Default Data Set
        #
        twoStepFlag 1
        priority1 128
        priority2 128
        domainNumber 24
        #utc_offset 37
        clockClass 6
        clockAccuracy 0x27
        offsetScaledLogVariance 0xFFFF
        free_running 0
        freq_est_interval 1
        dscp_event 0
        dscp_general 0
        dataset_comparison G.8275.x
        G.8275.defaultDS.localPriority 128
        #
        # Port Data Set
        #
        logAnnounceInterval -3
        logSyncInterval -4
        logMinDelayReqInterval -4
        logMinPdelayReqInterval 0
        announceReceiptTimeout 3
        syncReceiptTimeout 0
        delayAsymmetry 0
        fault_reset_interval -4
        neighborPropDelayThresh 20000000
        masterOnly 0
        G.8275.portDS.localPriority 128
        #
        # Run time options
        #
        assume_two_step 0
        logging_level 6
        path_trace_enabled 0
        follow_up_info 0
        hybrid_e2e 0
        inhibit_multicast_service 0
        net_sync_monitor 0
        tc_spanning_tree 0
        tx_timestamp_timeout 50
        unicast_listen 0
        unicast_master_table 0
        unicast_req_duration 3600
        use_syslog 1
        verbose 0
        summary_interval -4
        kernel_leap 1
        check_fup_sync 0
        clock_class_threshold 7
        #
        # Servo Options
        #
        pi_proportional_const 0.0
        pi_integral_const 0.0
        pi_proportional_scale 0.0
        pi_proportional_exponent -0.3
        pi_proportional_norm_max 0.7
        pi_integral_scale 0.0
        pi_integral_exponent 0.4
        pi_integral_norm_max 0.3
        step_threshold 2.0
        first_step_threshold 0.00002
        clock_servo pi
        sanity_freq_limit  200000000
        ntpshm_segment 0
        #
        # Transport options
        #
        transportSpecific 0x0
        ptp_dst_mac 01:1B:19:00:00:00
        p2p_dst_mac 01:80:C2:00:00:0E
        udp_ttl 1
        udp6_scope 0x0E
        uds_address /var/run/ptp4l
        #
        # Default interface options
        #
        clock_type BC
        network_transport L2
        delay_mechanism E2E
        time_stamping hardware
        tsproc_mode filter
        delay_filter moving_median
        delay_filter_length 10
        egressLatency 0
        ingressLatency 0
        boundary_clock_jbod 1
        #
        # Clock description
        #
        productDescription ;;
        revisionData ;;
        manufacturerIdentity 00:00:00
        userDescription ;
        timeSource 0x20
  recommend:
    - profile: "grandmaster"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpConfigGmWpc.yaml

# The grandmaster profile is provided for testing only
# It is not installed on production clusters
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: grandmaster
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "grandmaster"
      ptp4lOpts: "-2 --summary_interval -4"
      phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s $iface_master -n 24
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
      plugins:
        e810:
          enableDefaultConfig: false
          settings:
            LocalMaxHoldoverOffSet: 1500
            LocalHoldoverTimeout: 14400
            MaxInSpecOffset: 100
          pins: $e810_pins
          #  "$iface_master":
          #    "U.FL2": "0 2"
          #    "U.FL1": "0 1"
          #    "SMA2": "0 2"
          #    "SMA1": "0 1"
          ublxCmds:
            - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1
                - "-P"
                - "29.20"
                - "-z"
                - "CFG-HW-ANT_CFG_VOLTCTRL,1"
              reportOutput: false
            - args: #ubxtool -P 29.20 -e GPS
                - "-P"
                - "29.20"
                - "-e"
                - "GPS"
              reportOutput: false
            - args: #ubxtool -P 29.20 -d Galileo
                - "-P"
                - "29.20"
                - "-d"
                - "Galileo"
              reportOutput: false
            - args: #ubxtool -P 29.20 -d GLONASS
                - "-P"
                - "29.20"
                - "-d"
                - "GLONASS"
              reportOutput: false
            - args: #ubxtool -P 29.20 -d BeiDou
                - "-P"
                - "29.20"
                - "-d"
                - "BeiDou"
              reportOutput: false
            - args: #ubxtool -P 29.20 -d SBAS
                - "-P"
                - "29.20"
                - "-d"
                - "SBAS"
              reportOutput: false
            - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000
                - "-P"
                - "29.20"
                - "-t"
                - "-w"
                - "5"
                - "-v"
                - "1"
                - "-e"
                - "SURVEYIN,600,50000"
              reportOutput: true
            - args: #ubxtool -P 29.20 -p MON-HW
                - "-P"
                - "29.20"
                - "-p"
                - "MON-HW"
              reportOutput: true
            - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248
                - "-P"
                - "29.20"
                - "-p"
                - "CFG-MSG,1,38,248"
              reportOutput: true
      ts2phcOpts: " "
      ts2phcConf: |
        [nmea]
        ts2phc.master 1
        [global]
        use_syslog  0
        verbose 1
        logging_level 7
        ts2phc.pulsewidth 100000000
        #cat /dev/GNSS to find available serial port
        #example value of gnss_serialport is /dev/ttyGNSS_1700_0
        ts2phc.nmea_serialport $gnss_serialport
        leapfile  /usr/share/zoneinfo/leap-seconds.list
        [$iface_master]
        ts2phc.extts_polarity rising
        ts2phc.extts_correction 0
      ptp4lConf: |
        [$iface_master]
        masterOnly 1
        [$iface_master_1]
        masterOnly 1
        [$iface_master_2]
        masterOnly 1
        [$iface_master_3]
        masterOnly 1
        [global]
        #
        # Default Data Set
        #
        twoStepFlag 1
        priority1 128
        priority2 128
        domainNumber 24
        #utc_offset 37
        clockClass 6
        clockAccuracy 0x27
        offsetScaledLogVariance 0xFFFF
        free_running 0
        freq_est_interval 1
        dscp_event 0
        dscp_general 0
        dataset_comparison G.8275.x
        G.8275.defaultDS.localPriority 128
        #
        # Port Data Set
        #
        logAnnounceInterval -3
        logSyncInterval -4
        logMinDelayReqInterval -4
        logMinPdelayReqInterval 0
        announceReceiptTimeout 3
        syncReceiptTimeout 0
        delayAsymmetry 0
        fault_reset_interval -4
        neighborPropDelayThresh 20000000
        masterOnly 0
        G.8275.portDS.localPriority 128
        #
        # Run time options
        #
        assume_two_step 0
        logging_level 6
        path_trace_enabled 0
        follow_up_info 0
        hybrid_e2e 0
        inhibit_multicast_service 0
        net_sync_monitor 0
        tc_spanning_tree 0
        tx_timestamp_timeout 50
        unicast_listen 0
        unicast_master_table 0
        unicast_req_duration 3600
        use_syslog 1
        verbose 0
        summary_interval -4
        kernel_leap 1
        check_fup_sync 0
        clock_class_threshold 7
        #
        # Servo Options
        #
        pi_proportional_const 0.0
        pi_integral_const 0.0
        pi_proportional_scale 0.0
        pi_proportional_exponent -0.3
        pi_proportional_norm_max 0.7
        pi_integral_scale 0.0
        pi_integral_exponent 0.4
        pi_integral_norm_max 0.3
        step_threshold 2.0
        first_step_threshold 0.00002
        clock_servo pi
        sanity_freq_limit  200000000
        ntpshm_segment 0
        #
        # Transport options
        #
        transportSpecific 0x0
        ptp_dst_mac 01:1B:19:00:00:00
        p2p_dst_mac 01:80:C2:00:00:0E
        udp_ttl 1
        udp6_scope 0x0E
        uds_address /var/run/ptp4l
        #
        # Default interface options
        #
        clock_type BC
        network_transport L2
        delay_mechanism E2E
        time_stamping hardware
        tsproc_mode filter
        delay_filter moving_median
        delay_filter_length 10
        egressLatency 0
        ingressLatency 0
        boundary_clock_jbod 0
        #
        # Clock description
        #
        productDescription ;;
        revisionData ;;
        manufacturerIdentity 00:00:00
        userDescription ;
        timeSource 0x20
  recommend:
    - profile: "grandmaster"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpConfigSlave.yaml

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: ordinary
  namespace: openshift-ptp
  annotations: {}
spec:
  profile:
    - name: "ordinary"
      # The interface name is hardware-specific
      interface: $interface
      ptp4lOpts: "-2 -s"
      phc2sysOpts: "-a -r -n 24"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        logReduce: "true"
      ptp4lConf: |
        [global]
        #
        # Default Data Set
        #
        twoStepFlag 1
        slaveOnly 1
        priority1 128
        priority2 128
        domainNumber 24
        #utc_offset 37
        clockClass 255
        clockAccuracy 0xFE
        offsetScaledLogVariance 0xFFFF
        free_running 0
        freq_est_interval 1
        dscp_event 0
        dscp_general 0
        dataset_comparison G.8275.x
        G.8275.defaultDS.localPriority 128
        #
        # Port Data Set
        #
        logAnnounceInterval -3
        logSyncInterval -4
        logMinDelayReqInterval -4
        logMinPdelayReqInterval -4
        announceReceiptTimeout 3
        syncReceiptTimeout 0
        delayAsymmetry 0
        fault_reset_interval -4
        neighborPropDelayThresh 20000000
        masterOnly 0
        G.8275.portDS.localPriority 128
        #
        # Run time options
        #
        assume_two_step 0
        logging_level 6
        path_trace_enabled 0
        follow_up_info 0
        hybrid_e2e 0
        inhibit_multicast_service 0
        net_sync_monitor 0
        tc_spanning_tree 0
        tx_timestamp_timeout 50
        unicast_listen 0
        unicast_master_table 0
        unicast_req_duration 3600
        use_syslog 1
        verbose 0
        summary_interval 0
        kernel_leap 1
        check_fup_sync 0
        clock_class_threshold 7
        #
        # Servo Options
        #
        pi_proportional_const 0.0
        pi_integral_const 0.0
        pi_proportional_scale 0.0
        pi_proportional_exponent -0.3
        pi_proportional_norm_max 0.7
        pi_integral_scale 0.0
        pi_integral_exponent 0.4
        pi_integral_norm_max 0.3
        step_threshold 2.0
        first_step_threshold 0.00002
        max_frequency 900000000
        clock_servo pi
        sanity_freq_limit 200000000
        ntpshm_segment 0
        #
        # Transport options
        #
        transportSpecific 0x0
        ptp_dst_mac 01:1B:19:00:00:00
        p2p_dst_mac 01:80:C2:00:00:0E
        udp_ttl 1
        udp6_scope 0x0E
        uds_address /var/run/ptp4l
        #
        # Default interface options
        #
        clock_type OC
        network_transport L2
        delay_mechanism E2E
        time_stamping hardware
        tsproc_mode filter
        delay_filter moving_median
        delay_filter_length 10
        egressLatency 0
        ingressLatency 0
        boundary_clock_jbod 0
        #
        # Clock description
        #
        productDescription ;;
        revisionData ;;
        manufacturerIdentity 00:00:00
        userDescription ;
        timeSource 0xA0
  recommend:
    - profile: "ordinary"
      priority: 4
      match:
        - nodeLabel: "node-role.kubernetes.io/$mcp"

PtpOperatorConfig.yaml

apiVersion: ptp.openshift.io/v1
kind: PtpOperatorConfig
metadata:
  name: default
  namespace: openshift-ptp
  annotations: {}
spec:
  daemonNodeSelector:
    node-role.kubernetes.io/$mcp: ""

PtpOperatorConfigForEvent.yaml

apiVersion: ptp.openshift.io/v1
kind: PtpOperatorConfig
metadata:
  name: default
  namespace: openshift-ptp
  annotations: {}
spec:
  daemonNodeSelector:
    node-role.kubernetes.io/$mcp: ""
  ptpEventConfig:
    apiVersion: $event_api_version
    enableEventPublisher: true
    transportHost: "http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043"

PtpSubscription.yaml

---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ptp-operator-subscription
  namespace: openshift-ptp
  annotations: {}
spec:
  channel: "stable"
  name: ptp-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown

PtpSubscriptionNS.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-ptp
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"

PtpSubscriptionOperGroup.yaml

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ptp-operators
  namespace: openshift-ptp
  annotations: {}
spec:
  targetNamespaces:
    - openshift-ptp

AcceleratorsNS.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: vran-acceleration-operators
  annotations: {}

AcceleratorsOperGroup.yaml

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: vran-operators
  namespace: vran-acceleration-operators
  annotations: {}
spec:
  targetNamespaces:
    - vran-acceleration-operators

AcceleratorsSubscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-fec-subscription
  namespace: vran-acceleration-operators
  annotations: {}
spec:
  channel: stable
  name: sriov-fec
  source: certified-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown

SriovFecClusterConfig.yaml

apiVersion: sriovfec.intel.com/v2
kind: SriovFecClusterConfig
metadata:
  name: config
  namespace: vran-acceleration-operators
  annotations: {}
spec:
  drainSkip: $drainSkip # true if SNO, false by default
  priority: 1
  nodeSelector:
    node-role.kubernetes.io/master: ""
  acceleratorSelector:
    pciAddress: $pciAddress
  physicalFunction:
    pfDriver: "vfio-pci"
    vfDriver: "vfio-pci"
    vfAmount: 16
    bbDevConfig: $bbDevConfig
#Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100
#Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000
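
For illustration only, the $bbDevConfig placeholder for an Intel ACC100 card might be expanded as in the following sketch. The queue-group values are assumptions based on the upstream sample CR linked above; verify them against the recommended configuration for your accelerator and workload split before use.

    bbDevConfig:
      acc100:
        pfMode: false
        numVfBundles: 16
        maxQueueSize: 1024
        # Assumed 5G-only split: 4G queue groups disabled, 5G queue groups enabled
        uplink4G:
          numQueueGroups: 0
          numAqsPerGroups: 16
          aqDepthLog2: 4
        downlink4G:
          numQueueGroups: 0
          numAqsPerGroups: 16
          aqDepthLog2: 4
        uplink5G:
          numQueueGroups: 4
          numAqsPerGroups: 16
          aqDepthLog2: 4
        downlink5G:
          numQueueGroups: 4
          numAqsPerGroups: 16
          aqDepthLog2: 4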

SriovNetwork.yaml

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: ""
  namespace: openshift-sriov-network-operator
  annotations: {}
spec:
  #  resourceName: ""
  networkNamespace: openshift-sriov-network-operator
#  vlan: ""
#  spoofChk: ""
#  ipam: ""
#  linkState: ""
#  maxTxRate: ""
#  minTxRate: ""
#  vlanQoS: ""
#  trust: ""
#  capabilities: ""
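
As a hedged illustration, the commented-out fields above might be populated as follows for a DU fronthaul network. The name, resourceName, VLAN ID, and target namespace are hypothetical and must match your SriovNetworkNodePolicy and site network plan.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-nw-du-fh                    # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: du_fh                     # must match a SriovNetworkNodePolicy resourceName
  networkNamespace: example-du-namespace  # hypothetical namespace that receives the net-attach-def
  vlan: 140                               # hypothetical VLAN ID
  spoofChk: "off"
  trust: "on"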

SriovNetworkNodePolicy.yaml

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: $name
  namespace: openshift-sriov-network-operator
  annotations: {}
spec:
  # Set the attributes for Mellanox or Intel based NICs as shown below.
  #     deviceType: netdevice/vfio-pci
  #     isRdma: true/false
  deviceType: $deviceType
  isRdma: $isRdma
  nicSelector:
    # The exact physical function name must match the hardware used
    pfNames: [$pfNames]
  nodeSelector:
    node-role.kubernetes.io/$mcp: ""
  numVfs: $numVfs
  priority: $priority
  resourceName: $resourceName
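
A hedged example of the placeholders filled in for an assumed Intel-based fronthaul NIC follows; the interface name, VF count, and resource name are illustrative only.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-nnp-du-fh        # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  deviceType: vfio-pci         # user-space (DPDK) driver; use netdevice for kernel networking
  isRdma: false
  nicSelector:
    pfNames: ["ens1f0"]        # hypothetical physical function name
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numVfs: 8
  priority: 10
  resourceName: du_fh          # referenced by the SriovNetwork and by pod resource requests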

SriovOperatorConfig.yaml

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
  annotations: {}
spec:
  configDaemonNodeSelector:
    "node-role.kubernetes.io/$mcp": ""
  # Injector and OperatorWebhook pods can be disabled (set to "false") below
  # to reduce the number of management pods. It is recommended to start with the
  # webhook and injector pods enabled, and only disable them after verifying the
  # correctness of user manifests.
  #   If the injector is disabled, containers using sr-iov resources must explicitly assign
  #   them in the  "requests"/"limits" section of the container spec, for example:
  #    containers:
  #    - name: my-sriov-workload-container
  #      resources:
  #        limits:
  #          openshift.io/<resource_name>:  "1"
  #        requests:
  #          openshift.io/<resource_name>:  "1"
  enableInjector: false
  enableOperatorWebhook: false
  logLevel: 0

SriovOperatorConfigForSNO.yaml

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
  annotations: {}
spec:
  configDaemonNodeSelector:
    "node-role.kubernetes.io/$mcp": ""
  # Injector and OperatorWebhook pods can be disabled (set to "false") below
  # to reduce the number of management pods. It is recommended to start with the
  # webhook and injector pods enabled, and only disable them after verifying the
  # correctness of user manifests.
  #   If the injector is disabled, containers using sr-iov resources must explicitly assign
  #   them in the  "requests"/"limits" section of the container spec, for example:
  #    containers:
  #    - name: my-sriov-workload-container
  #      resources:
  #        limits:
  #          openshift.io/<resource_name>:  "1"
  #        requests:
  #          openshift.io/<resource_name>:  "1"
  enableInjector: false
  enableOperatorWebhook: false
  # Disabling drain is required for single-node OpenShift
  disableDrain: true
  logLevel: 0

SriovSubscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
  annotations: {}
spec:
  channel: "stable"
  name: sriov-network-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown

SriovSubscriptionNS.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
  annotations:
    workload.openshift.io/allowed: management

SriovSubscriptionOperGroup.yaml

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
  annotations: {}
spec:
  targetNamespaces:
    - openshift-sriov-network-operator

3.2.4.4.2. Cluster tuning reference YAML

example-sno.yaml

# The example-node1-bmh-secret and assisted-deployment-pull-secret secrets must be created in the same namespace, example-sno; a sketch of both follows this file.
---
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.17"
  sshPublicKey: "ssh-rsa AAAA..."
  clusters:
    - clusterName: "example-sno"
      networkType: "OVNKubernetes"
      # installConfigOverrides is a generic way of passing install-config
      # parameters through the SiteConfig. The 'capabilities' field configures
      # the composable OpenShift feature. This 'capabilities' setting removes
      # all optional components and re-enables only the ones listed below.
      # Notes:
      # - OperatorLifecycleManager is needed for 4.15 and later
      # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier
      # - Ingress is needed for 4.16 and later
      installConfigOverrides: |
        {
          "capabilities": {
            "baselineCapabilitySet": "None",
            "additionalEnabledCapabilities": [
              "NodeTuning",
              "OperatorLifecycleManager",
              "Ingress"
            ]
          }
        }
      # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+.
      # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the Git repository, for example sno-extra-manifest.
      # extraManifestPath: sno-extra-manifest
      clusterLabels:
        # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples
        du-profile: "latest"
        # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates:
        # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true'
        common: true
        # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""'
        group-du-sno: ""
        # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"'
        # Normally this should match or contain the cluster name so it only applies to a single cluster
        sites: "example-sno"
      clusterNetwork:
        - cidr: 1001:1::/48
          hostPrefix: 64
      machineNetwork:
        - cidr: 1111:2222:3333:4444::/64
      serviceNetwork:
        - 1001:2::/112
      additionalNTPSources:
        - 1111:2222:3333:4444::2
      # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate
      # please see Workload Partitioning Feature for a complete guide.
      cpuPartitioningMode: AllNodes
      # Optional: this can be used to override the KlusterletAddonConfig that is created for this cluster:
      #crTemplates:
      #  KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml"
      nodes:
        - hostName: "example-node1.example.com"
          role: "master"
          # Optional: this can be used to configure desired BIOS settings on the host:
          #biosConfigRef:
          #  filePath: "example-hw.profile"
          bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
          bmcCredentialsName:
            name: "example-node1-bmh-secret"
          bootMACAddress: "AA:BB:CC:DD:EE:11"
          # Use UEFISecureBoot to enable secure boot.
          bootMode: "UEFISecureBoot"
          rootDeviceHints:
            deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
          # Disk partition at `/var/lib/containers` is configured with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details
          ignitionConfigOverride: |
            {
              "ignition": {
                "version": "3.2.0"
              },
              "storage": {
                "disks": [
                  {
                    "device": "/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62",
                    "partitions": [
                      {
                        "label": "var-lib-containers",
                        "sizeMiB": 0,
                        "startMiB": 250000
                      }
                    ],
                    "wipeTable": false
                  }
                ],
                "filesystems": [
                  {
                    "device": "/dev/disk/by-partlabel/var-lib-containers",
                    "format": "xfs",
                    "mountOptions": [
                      "defaults",
                      "prjquota"
                    ],
                    "path": "/var/lib/containers",
                    "wipeFilesystem": true
                  }
                ]
              },
              "systemd": {
                "units": [
                  {
                    "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target",
                    "enabled": true,
                    "name": "var-lib-containers.mount"
                  }
                ]
              }
            }
          nodeNetwork:
            interfaces:
              - name: eno1
                macAddress: "AA:BB:CC:DD:EE:11"
            config:
              interfaces:
                - name: eno1
                  type: ethernet
                  state: up
                  ipv4:
                    enabled: false
                  ipv6:
                    enabled: true
                    address:
                      # For SNO sites with static IP addresses, the node-specific,
                      # API and Ingress IPs should all be the same and configured on
                      # the interface
                      - ip: 1111:2222:3333:4444::aaaa:1
                        prefix-length: 64
              dns-resolver:
                config:
                  search:
                    - example.com
                  server:
                    - 1111:2222:3333:4444::2
              routes:
                config:
                  - destination: ::/0
                    next-hop-interface: eno1
                    next-hop-address: 1111:2222:3333:4444::1
                    table-id: 254
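
The SiteConfig above references two secrets that must exist in the example-sno namespace before deployment. A minimal sketch of both is shown below; the credential values are placeholders and must be replaced with your own base64-encoded data.

---
apiVersion: v1
kind: Secret
metadata:
  name: example-node1-bmh-secret
  namespace: example-sno
type: Opaque
data:
  username: <base64-encoded BMC user name>
  password: <base64-encoded BMC password>
---
apiVersion: v1
kind: Secret
metadata:
  name: assisted-deployment-pull-secret
  namespace: example-sno
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded pull secret>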

ConsoleOperatorDisable.yaml

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "false"
    include.release.openshift.io/self-managed-high-availability: "false"
    include.release.openshift.io/single-node-developer: "false"
    release.openshift.io/create-only: "true"
  name: cluster
spec:
  logLevel: Normal
  managementState: Removed
  operatorLogLevel: Normal

09-openshift-marketplace-ns.yaml

# Taken from https://github.com/operator-framework/operator-marketplace/blob/53c124a3f0edfd151652e1f23c87dd39ed7646bb/manifests/01_namespace.yaml
# Update it as the source evolves.
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/node-selector: ""
    workload.openshift.io/allowed: "management"
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.25
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/audit-version: v1.25
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/warn-version: v1.25
  name: "openshift-marketplace"

DefaultCatsrc.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: default-cat-source
  namespace: openshift-marketplace
  annotations:
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  displayName: default-cat-source
  image: $imageUrl
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 1h
status:
  connectionState:
    lastObservedState: READY

DisableOLMPprof.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: collect-profiles-config
  namespace: openshift-operator-lifecycle-manager
  annotations: {}
data:
  pprof-config.yaml: |
    disabled: True

DisconnectedICSP.yaml

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: disconnected-internal-icsp
  annotations: {}
spec:
#    repositoryDigestMirrors:
#    - $mirrors
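
When this file is populated, the $mirrors placeholder is replaced with the digest mirror mappings for your disconnected registry. A hedged sketch with a hypothetical mirror host:

spec:
  repositoryDigestMirrors:
    - mirrors:
        - mirror.registry.example.com:5000/openshift-release-dev/ocp-release
      source: quay.io/openshift-release-dev/ocp-release
    - mirrors:
        - mirror.registry.example.com:5000/openshift-release-dev/ocp-v4.0-art-dev
      source: quay.io/openshift-release-dev/ocp-v4.0-art-dev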

OperatorHub.yaml

apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
  annotations: {}
spec:
  disableAllDefaultSources: true

ReduceMonitoringFootprint.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  annotations: {}
data:
  config.yaml: |
    alertmanagerMain:
      enabled: false
    telemeterClient:
      enabled: false
    prometheusK8s:
      retention: 24h

DisableSnoNetworkDiag.yaml

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
  annotations: {}
spec:
  disableNetworkDiagnostics: true

3.2.4.4.3. Machine configuration reference YAML

enable-crun-master.yaml

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-master
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  containerRuntimeConfig:
    defaultRuntime: crun

enable-crun-worker.yaml

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-worker
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    defaultRuntime: crun

99-crio-disable-wipe-master.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-crio-disable-wipe-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo=
          mode: 420
          path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml

99-crio-disable-wipe-worker.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-crio-disable-wipe-worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo=
          mode: 420
          path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml
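
For reference, the identical base64 payload in both 99-crio-disable-wipe MachineConfigs decodes to the following CRI-O drop-in, which clears clean_shutdown_file so that CRI-O does not wipe container storage after an unclean node shutdown:

[crio]
clean_shutdown_file = ""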

06-kdump-master.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 06-kdump-enable-master
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - enabled: true
          name: kdump.service
  kernelArguments:
    - crashkernel=512M

06-kdump-worker.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 06-kdump-enable-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - enabled: true
          name: kdump.service
  kernelArguments:
    - crashkernel=512M

01-container-mount-ns-and-kubelet-conf-master.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: container-mount-namespace-and-kubelet-conf-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo=
          mode: 493
          path: /usr/local/bin/extractExecStart
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo=
          mode: 493
          path: /usr/local/bin/nsenterCmns
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts

            [Service]
            Type=oneshot
            RemainAfterExit=yes
            RuntimeDirectory=container-mount-namespace
            Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace
            Environment=BIND_POINT=%t/container-mount-namespace/mnt
            ExecStartPre=bash -c "findmnt ${RUNTIME_DIRECTORY} || mount --make-unbindable --bind ${RUNTIME_DIRECTORY} ${RUNTIME_DIRECTORY}"
            ExecStartPre=touch ${BIND_POINT}
            ExecStart=unshare --mount=${BIND_POINT} --propagation slave mount --make-rshared /
            ExecStop=umount -R ${RUNTIME_DIRECTORY}
          name: container-mount-namespace.service
        - dropins:
            - contents: |
                [Unit]
                Wants=container-mount-namespace.service
                After=container-mount-namespace.service

                [Service]
                ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
                EnvironmentFile=-/%t/%N-execstart.env
                ExecStart=
                ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
                    ${ORIG_EXECSTART}"
              name: 90-container-mount-namespace.conf
          name: crio.service
        - dropins:
            - contents: |
                [Unit]
                Wants=container-mount-namespace.service
                After=container-mount-namespace.service

                [Service]
                ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
                EnvironmentFile=-/%t/%N-execstart.env
                ExecStart=
                ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
                    ${ORIG_EXECSTART} --housekeeping-interval=30s"
              name: 90-container-mount-namespace.conf
            - contents: |
                [Service]
                Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s"
                Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s"
              name: 30-kubelet-interval-tuning.conf
          name: kubelet.service

01-container-mount-ns-and-kubelet-conf-worker.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: container-mount-namespace-and-kubelet-conf-worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo=
          mode: 493
          path: /usr/local/bin/extractExecStart
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo=
          mode: 493
          path: /usr/local/bin/nsenterCmns
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts

            [Service]
            Type=oneshot
            RemainAfterExit=yes
            RuntimeDirectory=container-mount-namespace
            Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace
            Environment=BIND_POINT=%t/container-mount-namespace/mnt
            ExecStartPre=bash -c "findmnt ${RUNTIME_DIRECTORY} || mount --make-unbindable --bind ${RUNTIME_DIRECTORY} ${RUNTIME_DIRECTORY}"
            ExecStartPre=touch ${BIND_POINT}
            ExecStart=unshare --mount=${BIND_POINT} --propagation slave mount --make-rshared /
            ExecStop=umount -R ${RUNTIME_DIRECTORY}
          name: container-mount-namespace.service
        - dropins:
            - contents: |
                [Unit]
                Wants=container-mount-namespace.service
                After=container-mount-namespace.service

                [Service]
                ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
                EnvironmentFile=-/%t/%N-execstart.env
                ExecStart=
                ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
                    ${ORIG_EXECSTART}"
              name: 90-container-mount-namespace.conf
          name: crio.service
        - dropins:
            - contents: |
                [Unit]
                Wants=container-mount-namespace.service
                After=container-mount-namespace.service

                [Service]
                ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
                EnvironmentFile=-/%t/%N-execstart.env
                ExecStart=
                ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
                    ${ORIG_EXECSTART} --housekeeping-interval=30s"
              name: 90-container-mount-namespace.conf
            - contents: |
                [Service]
                Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s"
                Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s"
              name: 30-kubelet-interval-tuning.conf
          name: kubelet.service

99-sync-time-once-master.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-sync-time-once-master
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Sync time once
            After=network-online.target
            Wants=network-online.target
            [Service]
            Type=oneshot
            TimeoutStartSec=300
            ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0'
            ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: sync-time-once.service

99-sync-time-once-worker.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-sync-time-once-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Sync time once
            After=network-online.target
            Wants=network-online.target
            [Service]
            Type=oneshot
            TimeoutStartSec=300
            ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0'
            ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: sync-time-once.service

03-sctp-machine-config-master.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: load-sctp-module-master
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - contents:
            source: data:,
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/modprobe.d/sctp-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8,sctp
          filesystem: root
          mode: 420
          path: /etc/modules-load.d/sctp-load.conf

03-sctp-machine-config-worker.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: load-sctp-module-worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - contents:
            source: data:,
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/modprobe.d/sctp-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8,sctp
          filesystem: root
          mode: 420
          path: /etc/modules-load.d/sctp-load.conf

08-set-rcu-normal-master.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 08-set-rcu-normal-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgIC
AgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK
          mode: 493
          path: /usr/local/bin/set-rcu-normal.sh
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1

            [Service]
            Type=simple
            ExecStart=/usr/local/bin/set-rcu-normal.sh

            # Maximum wait time is 600s = 10m:
            Environment=MAXIMUM_WAIT_TIME=600

            # Steady-state threshold = 2%
            # Allowed values:
            #  4  - absolute pod count (+/-)
            #  4% - percent change (+/-)
            #  -1 - disable the steady-state check
            # Note: '%' must be escaped as '%%' in systemd unit files
            Environment=STEADY_STATE_THRESHOLD=2%%

            # Steady-state window = 120s
            # If the running pod count stays within the given threshold for this time
            # period, return CPU utilization to normal before the maximum wait time has
            # expires
            Environment=STEADY_STATE_WINDOW=120

            # Steady-state minimum = 40
            # Increasing this will skip any steady-state checks until the count rises above
            # this number to avoid false positives if there are some periods where the
            # count doesn't increase but we know we can't be at steady-state yet.
            Environment=STEADY_STATE_MINIMUM=40

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: set-rcu-normal.service

08-set-rcu-normal-worker.yaml

# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 08-set-rcu-normal-worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgIC
AgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK
          mode: 493
          path: /usr/local/bin/set-rcu-normal.sh
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1

            [Service]
            Type=simple
            ExecStart=/usr/local/bin/set-rcu-normal.sh

            # Maximum wait time is 600s = 10m:
            Environment=MAXIMUM_WAIT_TIME=600

            # Steady-state threshold = 2%
            # Allowed values:
            #  4  - absolute pod count (+/-)
            #  4% - percent change (+/-)
            #  -1 - disable the steady-state check
            # Note: '%' must be escaped as '%%' in systemd unit files
            Environment=STEADY_STATE_THRESHOLD=2%%

            # Steady-state window = 120s
            # If the running pod count stays within the given threshold for this time
            # period, return CPU utilization to normal before the maximum wait time has
            # expires
            Environment=STEADY_STATE_WINDOW=120

            # Steady-state minimum = 40
            # Increasing this will skip any steady-state checks until the count rises above
            # this number to avoid false positives if there are some periods where the
            # count doesn't increase but we know we can't be at steady-state yet.
            Environment=STEADY_STATE_MINIMUM=40

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: set-rcu-normal.service

3.2.5. Telco RAN DU reference configuration software specifications

The following information describes the software versions validated for the telco RAN DU reference design specification (RDS).

3.2.5.1. Telco RAN DU 4.17 validated software components

The Red Hat telco RAN DU 4.17 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters and hub clusters.

Table 3.7. Telco RAN DU managed cluster validated software components

Component | Software version
Managed cluster version | 4.17
Cluster Logging Operator | 6.0
Local Storage Operator | 4.17
OpenShift API for Data Protection (OADP) | 1.4.1
PTP Operator | 4.17
SRIOV Operator | 4.17
SRIOV-FEC Operator | 2.9
Lifecycle Agent | 4.17

Table 3.8. Hub cluster validated software components

Component | Software version
Hub cluster version | 4.17
Red Hat Advanced Cluster Management (RHACM) | 2.11
GitOps ZTP plugin | 4.17
Red Hat OpenShift GitOps | 1.13
Topology Aware Lifecycle Manager (TALM) | 4.17
