Governance


Red Hat Advanced Cluster Management for Kubernetes 2.11

Governance

Abstract

Read more about the governance policy framework, which helps you harden cluster security by using policies.

Chapter 1. Security overview

Manage the security of your Red Hat Advanced Cluster Management for Kubernetes components. Govern your clusters with defined policies and processes to identify and minimize risks. Use policies to define rules and set controls.

Prerequisite: You must configure the authentication service requirements for Red Hat Advanced Cluster Management for Kubernetes. See Access control for more information.

Read the following topics to learn more about securing your clusters:

1.1. Certificates introduction

You can use various certificates to verify the authenticity of your Red Hat Advanced Cluster Management for Kubernetes clusters. Continue reading to learn about certificate management.

1.2. Certificates

All certificates required by services that run in Red Hat Advanced Cluster Management are created during its installation. View the following list of certificates, which are created and managed by the following components of Red Hat OpenShift Container Platform:

  • OpenShift Service Serving certificates
  • Red Hat Advanced Cluster Management webhook controllers
  • Kubernetes certificates API
  • OpenShift default ingress

Required access: Cluster administrator

Continue reading to learn more about certificate management:

Note: Users are responsible for certificate rotations and updates.

1.2.1. Red Hat Advanced Cluster Management hub cluster certificates

The OpenShift default ingress certificate is technically a hub cluster certificate. After you install Red Hat Advanced Cluster Management, observability certificates are created and used by the observability components to provide mutual TLS on the traffic between the hub cluster and managed clusters.

  • The open-cluster-management-observability namespace contains the following certificates:

    • observability-server-ca-certs: CA certificate used to sign server-side certificates
    • observability-client-ca-certs: CA certificate used to sign client-side certificates
    • observability-server-certs: server certificate used by the observability-observatorium-api deployment
    • observability-grafana-certs: client certificate used by the observability-rbac-query-proxy deployment
  • The open-cluster-management-addon-observability namespace contains the following certificates on managed clusters:

    • observability-managed-cluster-certs: the same server CA certificate as observability-server-ca-certs on the hub server
    • observability-controller-open-cluster-management.io-observability-signer-client-cert: client certificate used by the metrics-collector-deployment

The CA certificates are valid for five years and the other certificates are valid for one year. All observability certificates are automatically refreshed upon expiration. View the following list to understand the effects when certificates are automatically renewed:

  • Non-CA certificates are renewed automatically when the remaining valid time is no more than 73 days. After a certificate is renewed, the pods in the related deployments restart automatically to use the renewed certificate.
  • CA certificates are renewed automatically when the remaining valid time is no more than one year. After a certificate is renewed, the old CA is not deleted, but coexists with the renewed one. The related deployments use both the old and the renewed certificates, and continue to work. The old CA certificates are deleted when they expire.
  • When a certificate is renewed, the traffic between the hub cluster and managed clusters is not interrupted.
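The renewal thresholds described above can be sketched as follows. This is a minimal illustration of the decision, not the controller's actual implementation; the 73-day and one-year thresholds come from the list above.

```python
from datetime import datetime, timedelta

def needs_renewal(not_after: datetime, now: datetime, is_ca: bool = False) -> bool:
    """Sketch of the renewal decision: CA certificates renew when no more
    than one year of validity remains, other certificates at 73 days."""
    threshold = timedelta(days=365) if is_ca else timedelta(days=73)
    return not_after - now <= threshold

# A non-CA certificate with 60 days of validity left is due for renewal:
print(needs_renewal(datetime(2024, 3, 1), datetime(2024, 1, 1)))  # True
```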

View the following Red Hat Advanced Cluster Management hub cluster certificates table:

Table 1.1. Red Hat Advanced Cluster Management hub cluster certificates

Namespace | Secret name | Pod label
open-cluster-management | channels-apps-open-cluster-management-webhook-svc-ca | app=multicluster-operators-channel
open-cluster-management | channels-apps-open-cluster-management-webhook-svc-signed-ca | app=multicluster-operators-channel
open-cluster-management | multicluster-operators-application-svc-ca | app=multicluster-operators-application
open-cluster-management | multicluster-operators-application-svc-signed-ca | app=multicluster-operators-application
open-cluster-management-hub | registration-webhook-serving-cert, signer-secret | Not required
open-cluster-management-hub | |

1.2.2. Red Hat Advanced Cluster Management managed certificates

View the following table for a summarized list of the component pods that contain Red Hat Advanced Cluster Management managed certificates and the related secrets:

Table 1.2. Pods that contain Red Hat Advanced Cluster Management managed certificates

Namespace | Secret name (if applicable)
open-cluster-management-agent-addon | cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert
open-cluster-management-agent-addon | cluster-proxy-service-proxy-server-certificates

1.2.2.1. Managed cluster certificates

You can use certificates to authenticate managed clusters with the hub cluster. Therefore, it is important to be aware of troubleshooting scenarios associated with these certificates.

Managed cluster certificates are refreshed automatically.

1.2.3. Additional resources

1.2.4. Managing certificates

Continue reading for information about how to refresh, replace, rotate, and list certificates.

1.2.4.1. Refreshing Red Hat Advanced Cluster Management managed certificates

You can refresh Red Hat Advanced Cluster Management managed certificates, which are certificates that are created and managed by Red Hat Advanced Cluster Management services.

Complete the following steps to refresh certificates managed by Red Hat Advanced Cluster Management:

  1. Delete the secret that is associated with the Red Hat Advanced Cluster Management managed certificate by running the following command:

    oc delete secret -n <namespace> <secret>

    Replace <namespace> and <secret> with the values that you want to use.
  2. Restart the services that are associated with the Red Hat Advanced Cluster Management managed certificate by running the following command:

    oc delete pod -n <namespace> -l <pod-label>

    Replace <namespace> and <pod-label> with the values from the Red Hat Advanced Cluster Management managed certificates table.

    Note: If a pod-label is not specified, there is no service that must be restarted. The secret is re-created and used automatically.

1.2.4.2. Replacing certificates for the alertmanager route

If you do not want to use the OpenShift default ingress certificate, replace the observability alertmanager certificates by updating the alertmanager route. Complete the following steps:

  1. Examine the observability certificate with the following command:

    openssl x509 -noout -text -in ./observability.crt
  2. Change the common name (CN) on the certificate to alertmanager.
  3. Change the SAN in the csr.cnf configuration file with the hostname for your alertmanager route.
  4. Create the following two secrets in the open-cluster-management-observability namespace. Run the following commands:

    oc -n open-cluster-management-observability create secret tls alertmanager-byo-ca --cert ./ca.crt --key ./ca.key

    oc -n open-cluster-management-observability create secret tls alertmanager-byo-cert --cert ./ingress.crt --key ./ingress.key
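A csr.cnf file that sets the common name and SAN as described in steps 2 and 3 might resemble the following sketch. The section layout is standard OpenSSL request configuration; the hostname is a placeholder that you must replace with your alertmanager route hostname:

```
[ req ]
distinguished_name = dn
req_extensions = v3_req
prompt = no

[ dn ]
CN = alertmanager

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = <your-alertmanager-route-hostname>
```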
1.2.4.3. Rotating the gatekeeper webhook certificate

Complete the following steps to rotate the gatekeeper webhook certificate:

  1. Edit the secret that contains the certificate with the following command:

    oc edit secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert
  2. Delete the following content in the data section: ca.crt, ca.key, tls.crt, and tls.key.
  3. Restart the gatekeeper webhook service by deleting the gatekeeper-controller-manager pods with the following command:

    oc delete pod -n openshift-gatekeeper-system -l control-plane=controller-manager

The gatekeeper webhook certificate is rotated.

1.2.4.4. Verifying certificate rotation

Verify that your certificates rotated by completing the following steps:

  1. Identify the secret that you want to check.
  2. Check the tls.crt key to verify that a certificate is available.
  3. Display the certificate information with the following command:

    oc get secret <your-secret-name> -n open-cluster-management -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout

    Replace <your-secret-name> with the name of the secret that you are verifying. If necessary, also update the namespace and JSON path.

  4. Check the Validity details in the output. View the following Validity example:

    Validity
                Not Before: Jul 13 15:17:50 2023 GMT
                Not After : Jul 12 15:17:50 2024 GMT

    The Not Before value is the date and time that you rotated the certificate. The Not After value is the date and time that the certificate expires.

1.2.4.5. Listing hub cluster managed certificates

You can view a list of hub cluster managed certificates that use OpenShift Service Serving certificates internally. Run the following command to list the certificates:

for ns in multicluster-engine open-cluster-management ; do echo "$ns:" ; oc get secret -n $ns -o custom-columns=Name:.metadata.name,Expiration:.metadata.annotations.service\\.beta\\.openshift\\.io/expiry | grep -v '<none>' ; echo ""; done

For more information, see OpenShift Service Serving certificates in the Additional resources section.

Note: If observability is enabled, there are additional namespaces where certificates are created.

1.2.4.6. Additional resources

Chapter 2. Governance

Enterprises must meet internal requirements for software engineering, secure engineering, resiliency, security, and regulatory compliance for workloads deployed on private, multi, and hybrid clouds. The Red Hat Advanced Cluster Management for Kubernetes governance functionality provides an extensible policy framework for enterprises to introduce their own security policies. Continue reading related topics of the Red Hat Advanced Cluster Management governance framework:

2.1. Governance architecture

Enhance the security of your clusters with the Red Hat Advanced Cluster Management for Kubernetes governance lifecycle. The product governance lifecycle is based on using supported policies, processes, and procedures to manage security and compliance from a central interface page. View the following governance architecture diagram:

Governance architecture diagram

View the following component descriptions for the governance architecture diagram:

  • Policy propagator controller: Runs on the Red Hat Advanced Cluster Management hub cluster and generates the replicated policies in the managed cluster namespaces on the hub, based on the placements bound to the root policy. It also aggregates the compliance status from the replicated policies into the root policy status, and initiates automations based on policy automations bound to the root policy.
  • Governance policy add-on controller: Runs on the Red Hat Advanced Cluster Management hub cluster and manages the installation of policy controllers on managed clusters.
  • Governance policy framework: The previous image represents the framework that runs as the governance-policy-framework pod on managed clusters and contains the following controllers:

    • Spec sync controller: Synchronizes the replicated policy in the managed cluster namespace on the hub cluster to the managed cluster namespace on the managed cluster.
    • Status sync controller: Records compliance events from the policy controllers in the replicated policies on the hub and managed cluster. The status only contains updates that are relevant to the current policy; past statuses are not considered if the policy is deleted and re-created.
    • Template sync controller: Creates, updates, and deletes objects in the managed cluster namespace based on the definitions in the spec.policy-templates entries of the replicated policy.
    • Gatekeeper sync controller: Records Gatekeeper constraint audit results as compliance events in the corresponding Red Hat Advanced Cluster Management policies.

2.1.1. Governance architecture components

The governance architecture also includes the following components:

  • Governance dashboard: Provides a summarized view of your cloud governance and risk details, which include policy and cluster violations. Refer to the Manage Governance dashboard section to learn about the structure of a Red Hat Advanced Cluster Management for Kubernetes policy, and how to use the Red Hat Advanced Cluster Management for Kubernetes Governance dashboard.

    Notes:

    • When a policy is propagated to a managed cluster, it is first replicated to the cluster namespace on the hub cluster, and is named and labeled using namespaceName.policyName. When you create a policy, make sure that the length of namespaceName.policyName does not exceed 63 characters, due to the Kubernetes limit for label values.
    • When you search for a policy on the hub cluster, you might also receive the name of the replicated policy in the managed cluster namespace. For example, if you search for policy-dhaz-cert in the default namespace, the following policy name from the hub cluster might also appear in the managed cluster namespace: default.policy-dhaz-cert.
  • Policy-based governance framework: Supports policy creation and deployment to various managed clusters based on attributes associated with clusters, such as a geographical region. There are examples of predefined policies and instructions on deploying policies to your clusters in the open source community. Additionally, when policies are violated, automations can be configured to run and take any action that the user chooses.
  • Open source community: Supports community contributions with a foundation of the Red Hat Advanced Cluster Management policy framework. Policy controllers and third-party policies are also a part of the open-cluster-management/policy-collection repository. You can contribute and deploy policies by using GitOps.
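The 63-character limit mentioned in the notes above can be checked before creating a policy. A minimal sketch (the function name is illustrative, not part of the product):

```python
def replicated_policy_label_ok(namespace: str, policy_name: str) -> bool:
    """Kubernetes label values are limited to 63 characters; the replicated
    policy is named and labeled with "<namespace>.<policy-name>"."""
    return len(f"{namespace}.{policy_name}") <= 63

print(replicated_policy_label_ok("default", "policy-dhaz-cert"))  # True: 24 characters
```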

2.1.2. Additional resources

2.2. Policy overview

Use the Red Hat Advanced Cluster Management for Kubernetes security policy framework to create and manage policies, gain visibility, and remediate configurations to meet standards. Kubernetes custom resource definition instances are used to create policies. You can create a policy in any namespace on the hub cluster except the managed cluster namespaces. If you create a policy in a managed cluster namespace, it is deleted by Red Hat Advanced Cluster Management. Each Red Hat Advanced Cluster Management policy can be organized into one or more policy template definitions. For more details about the policy elements, view the Policy YAML table section of this page.

You are responsible for ensuring that your managed cloud environment meets internal enterprise security standards for software engineering, secure engineering, resiliency, security, and regulatory compliance for workloads hosted on Kubernetes clusters.

2.2.1. Prerequisites

  • Each policy requires a Placement resource that defines the clusters that the policy document is applied to, and a PlacementBinding resource that binds the Red Hat Advanced Cluster Management for Kubernetes policy.

    The policy resource is applied to clusters based on the associated Placement definition, from which you can view the list of managed clusters that match certain criteria. A Placement resource example that matches all clusters with the environment=dev label resembles the following YAML:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-policy-role
    spec:
      predicates:
      - requiredClusterSelector:
        labelSelector:
        matchExpressions:
          - {key: environment, operator: In, values: ["dev"]}

    To learn more about placements and the supported criteria, see the Placement overview in the Cluster lifecycle documentation.

  • In addition to the Placement resource, you must create a PlacementBinding that binds the Placement resource to the policy. A PlacementBinding resource example resembles the following YAML:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-role
    placementRef:
      name: placement-policy-role
      kind: Placement
      apiGroup: cluster.open-cluster-management.io
    subjects:
    - name: policy-role
      kind: Policy
      apiGroup: policy.open-cluster-management.io

    If you use the previous example, make sure that you update the name of the Placement resource in the placementRef section to match your placement name, and that you update the name of the policy in the subjects section to match your policy name. Apply the Placement and PlacementBinding resources with the oc apply -f resource.yaml -n namespace command. Make sure that the policy, placement, and placement binding are all created in the same namespace.
  • To use a Placement resource, you must bind a ManagedClusterSet resource to the namespace of the Placement resource with a ManagedClusterSetBinding resource. Refer to Creating a ManagedClusterSetBinding resource for more details.
  • When you create a placement resource for your policy from the console, the status of the placement toleration is automatically added to the placement resource. See Adding a toleration to a placement for more details.

Best practice: Use the command line interface (CLI) to make updates to your policies when you use Placement resources.

Learn more details about the policy components in the following sections:

2.2.2. Policy YAML structure

When you create a policy, you must include required parameter fields and values. Depending on your policy controller, you might need to include other optional fields and values. View the following YAML structure for policies:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  disabled:
  remediationAction:
  dependencies:
  - apiVersion: policy.open-cluster-management.io/v1
    compliance:
    kind: Policy
    name:
    namespace:
  policy-templates:
    - objectDefinition:
        apiVersion:
        kind:
        metadata:
          name:
        spec:
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
bindingOverrides:
  remediationAction:
subFilter:
  name:
placementRef:
  name:
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
- name:
  kind:
  apiGroup:
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name:
spec:

2.2.3. Policy YAML table

View the following table of policy parameter descriptions:

Table 2.1. Parameter table

  • apiVersion (Required): Set the value to policy.open-cluster-management.io/v1.
  • kind (Required): Set the value to Policy to indicate the type of policy.
  • metadata.name (Required): The name for identifying the policy resource.
  • metadata.annotations (Optional): Specifies the set of security details that describes the set of standards the policy is attempting to validate. All annotations documented here are represented as a comma-separated string.

    Note: You can view policy violations based on the standards and categories that you define for your policy, on the Policies page from the console.
  • bindingOverrides.remediationAction (Optional): When this parameter is set to enforce, it provides a way for you to override the remediation action of the related PlacementBinding resources for configuration policies. The default value is null.
  • subFilter (Optional): Set this parameter to restricted to select a subset of the bound policies. The default value is null.
  • annotations.policy.open-cluster-management.io/standards (Optional): The name or names of security standards that the policy is related to. For example, National Institute of Standards and Technology (NIST) and Payment Card Industry (PCI).
  • annotations.policy.open-cluster-management.io/categories (Optional): A security control category represents specific requirements for one or more standards. For example, a System and Information Integrity category might indicate that your policy contains a data transfer protocol to protect personal information, as required by the HIPAA and PCI standards.
  • annotations.policy.open-cluster-management.io/controls (Optional): The name of the security control that is being checked. For example, Access Control or System and Information Integrity.
  • spec.disabled (Required): Set the value to true or false. The disabled parameter provides the ability to enable and disable your policies.
  • spec.remediationAction (Optional): Specifies the remediation of your policy. The parameter values are enforce and inform. If specified, the spec.remediationAction value overrides any remediationAction parameter defined in the child policies in the policy-templates section. For example, if the spec.remediationAction value is set to enforce, then the remediationAction in the policy-templates section is set to enforce during runtime.
  • spec.copyPolicyMetadata (Optional): Specifies whether the labels and annotations of a policy are copied when the policy is replicated to a managed cluster. If set to true, all of the labels and annotations of the policy are copied to the replicated policy. If set to false, only the policy framework-specific policy labels and annotations are copied to the replicated policy.
  • spec.dependencies (Optional): Used to create a list of dependency objects detailed with extra considerations for compliance.
  • spec.policy-templates (Required): Used to create one or more policies to apply to a managed cluster.
  • spec.policy-templates.extraDependencies (Optional): For policy templates, this is used to create a list of dependency objects detailed with extra considerations for compliance.
  • spec.policy-templates.ignorePending (Optional): Used to mark a policy template as compliant until the dependency criteria is verified.

Important: Some policy types might not support the enforce feature.

2.2.4. Policy sample file

View the following YAML file, which is a configuration policy for a role:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-role
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: AC Access Control
    policy.open-cluster-management.io/controls: AC-3 Access Enforcement
    policy.open-cluster-management.io/description:
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-role-example
        spec:
          remediationAction: inform # the policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
          severity: high
          namespaceSelector:
            include: ["default"]
          object-templates:
            - complianceType: mustonlyhave # role definition must match exactly
              objectDefinition:
                apiVersion: rbac.authorization.k8s.io/v1
                kind: Role
                metadata:
                  name: sample-role
                rules:
                  - apiGroups: ["extensions", "apps"]
                    resources: ["deployments"]
                    verbs: ["get", "list", "watch", "delete","patch"]
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-role
placementRef:
  name: placement-policy-role
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
- name: policy-role
  kind: Policy
  apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-role
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - {key: environment, operator: In, values: ["dev"]}

2.2.5. Additional resources

  • See Policy controllers.
  • See Managing security policies to create and update a policy. You can also enable and update Red Hat Advanced Cluster Management policy controllers to validate the compliance of your policies.
  • Return to the Governance documentation.

2.3. Policy controllers introduction

Policy controllers monitor and report whether your cluster is compliant with a policy. Use the Red Hat Advanced Cluster Management for Kubernetes policy framework with the supported policy templates to apply policies managed by these controllers. The policy controllers manage Kubernetes custom resource definition instances.

Policy controllers check for policy violations, and can make the cluster status compliant if the controller supports the enforcement feature. View the following topics to learn more about the following Red Hat Advanced Cluster Management for Kubernetes policy controllers:

Important: Only the configuration policy controller policies support the enforce feature. You must manually remediate policies where the policy controller does not support the enforce feature.

2.3.1. Kubernetes configuration policy controller

You can use the configuration policy controller to configure any Kubernetes resource and apply security policies across your clusters. Configuration policies are provided in the policy-templates field of the policy on the hub cluster, and are propagated to the selected managed clusters by the governance framework.

A Kubernetes object is defined, in whole or in part, in the object-templates array in the configuration policy, indicating to the configuration policy controller the fields to compare with objects on the managed cluster. The configuration policy controller communicates with the local Kubernetes API server to get the list of configurations that are in your cluster.

The configuration policy controller is created on the managed cluster during installation. The configuration policy controller supports the enforce and InformOnly features to remediate when the configuration policy is noncompliant.

When the remediationAction for a configuration policy is set to enforce, the controller applies the specified configuration to the target managed cluster.

When the remediationAction for a configuration policy is set to InformOnly, the parent policy does not enforce the configuration policy, even if the remediationAction in the parent policy is set to enforce.

Note: Configuration policies that specify an object without a name can only be inform.

You can also use templated values within configuration policies. For more information, see Template processing.

If you have existing Kubernetes manifests that you want to put into a policy, the Policy Generator is a useful tool to accomplish this.

2.3.1.1. Configuration policy sample
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: policy-config
spec:
  namespaceSelector:
    include: ["default"]
    exclude: []
    matchExpressions: []
    matchLabels: {}
  remediationAction: inform
  severity: low
  evaluationInterval:
    compliant:
    noncompliant:
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod
      spec:
        containers:
        - image: pod-image
          name: pod-name
          ports:
          - containerPort: 80
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: myconfig
        namespace: default
      data:
        testData: hello
2.3.1.2. Configuration policy YAML table
Table 2.2. Parameter table

  • apiVersion (Required): Set the value to policy.open-cluster-management.io/v1.
  • kind (Required): Set the value to ConfigurationPolicy to indicate the type of policy.
  • metadata.name (Required): The name of the policy.
  • spec.namespaceSelector (Required for namespaced objects that do not have a namespace specified): Determines the namespaces in the managed cluster that the object is applied to. The include and exclude parameters accept file path expressions to include and exclude namespaces by name. The matchExpressions and matchLabels parameters specify namespaces to include by label. See the Kubernetes labels and selectors documentation. The resulting list is compiled by using the intersection of results from all parameters.
  • spec.remediationAction (Required): Specifies the action to take when the policy is noncompliant. Use the following parameter values: inform, InformOnly, and enforce.
  • spec.severity (Required): Specifies the severity when the policy is noncompliant. Use the following parameter values: low, medium, high, or critical.
  • spec.evaluationInterval.compliant (Optional): Used to define how often the policy is evaluated when it is in the compliant state. The value must be in the format of a duration, which is a sequence of numbers with time unit suffixes. For example, 12h30m5s represents 12 hours, 30 minutes, and 5 seconds. It can also be set to never so that the policy is not reevaluated on compliant clusters unless the policy spec is updated.

    By default, when evaluationInterval.compliant is unset or empty, the minimum time between configuration policy evaluations is approximately 10 seconds. This can be longer if the configuration policy controller is saturated on the managed cluster.
  • spec.evaluationInterval.noncompliant (Optional): Used to define how often the policy is evaluated when it is in the noncompliant state. Similar to the evaluationInterval.compliant parameter, the value must be in the format of a duration, which is a sequence of numbers with time unit suffixes. It can also be set to never so that the policy is not reevaluated on noncompliant clusters unless the policy spec is updated.
  • spec.object-templates (Optional): An array of Kubernetes objects, either fully defined or containing a subset of fields, for the controller to compare with objects on the managed cluster. Note: While both spec.object-templates and spec.object-templates-raw are listed as optional, exactly one of the two parameter fields must be set.
  • spec.object-templates-raw (Optional): Used to set object templates with a raw YAML string. Specify conditions for the object templates, where advanced functions like if-else statements and the range function are supported. For example, add the following value to avoid duplication in your object-templates definition:

    {{- if eq .metadata.name "policy-grc-your-meta-data-name" }} replicas: 2 {{- else }} replicas: 1 {{- end }}

    Note: While both spec.object-templates and spec.object-templates-raw are listed as optional, exactly one of the two parameter fields must be set.
  • spec.object-templates[].complianceType (Required): Defines the desired state of the Kubernetes object on your managed clusters. Use one of the following verbs as the parameter value:

    • mustonlyhave: Indicates that an object must exist with the exact fields and values as defined in the objectDefinition.
    • musthave: Indicates that an object must exist with the same fields as specified in the objectDefinition. Any existing fields on the object that are not specified in the object-template are ignored. In general, array values are appended. The exception for patching an array is when the item contains a name value that matches an existing item. If you want to replace the array, use the mustonlyhave compliance type to fully define the objectDefinition.
    • mustnothave: Indicates that an object with the same fields as specified in the objectDefinition cannot exist.
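The musthave array behavior described above, append by default but patch in place when a name value matches an existing item, can be illustrated with a small sketch. This is a simplified model for intuition, not the controller's implementation:

```python
def merge_array_musthave(existing: list, template: list) -> list:
    """Sketch of musthave array merging: a template item whose 'name' matches
    an existing item patches that item in place; other items are appended."""
    merged = [dict(item) for item in existing]
    for item in template:
        match = next(
            (m for m in merged
             if "name" in item and m.get("name") == item.get("name")),
            None,
        )
        if match is not None:
            match.update(item)  # patch the matching item in place
        else:
            merged.append(dict(item))  # otherwise append the new item
    return merged

ports = [{"name": "http", "containerPort": 80}]
desired = [{"name": "http", "containerPort": 8080},
           {"name": "metrics", "containerPort": 9090}]
print(merge_array_musthave(ports, desired))
# [{'name': 'http', 'containerPort': 8080}, {'name': 'metrics', 'containerPort': 9090}]
```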

spec.object-templates[].metadataComplianceType

选填

当将清单的 metadata 部分与集群中的对象比较时覆盖 spec.object-templates[].complianceType ("musthave", "mustonlyhave")。默认没有设置,这不会为原始覆盖 complianceType

spec.object-templates[].recordDiff

选填

使用此参数指定是否和在哪里显示集群中的对象与策略中的 objectDefinition 之间的区别。支持以下选项:

  • 设置为 InStatus 以存储 ConfigurationPolicy 状态中的差别。
  • 设置为 Log,以记录控制器日志的区别。
  • 设置为 None 不会记录区别。

默认情况下,如果控制器没有检测区别中的敏感数据,则此参数被设置为 InStatus。否则,默认值为 None。如果检测到敏感数据,ConfigurationPolicy 状态会显示一条消息,以设置 recordDiff 以查看差异。

spec.object-templates[].recreateOption

选填

描述在需要更新时删除和重新创建对象的时间。当您将对象设置为 IfRequired 时,策略会在更新不可变字段时重新创建对象。当您将参数设置为 Always 时,策略会在任何更新上重新创建对象。当您将 remediationAction 设置为 inform 时,参数值 recreateOption 不会对对象产生影响。IfRequired 值对集群没有空运行更新支持的集群没有影响。默认值为 None

spec.object-templates[].objectDefinition

必填

用于控制器的 Kubernetes 对象数组(完整定义或包含字段子集),以便与受管集群上的对象进行比较。

spec.pruneObjectBehavior

选填

决定在从受管集群中删除策略时是否清理与策略相关的资源。
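The duration format used by the evaluationInterval fields described above, for example 12h30m5s, can be parsed as sketched below. This is an illustrative helper, not part of the product:

```python
import re

def parse_duration_seconds(value: str):
    """Parse a duration such as "12h30m5s" into seconds; "never" returns None."""
    if value == "never":
        return None
    total = 0
    for number, unit in re.findall(r"(\d+)([hms])", value):
        total += int(number) * {"h": 3600, "m": 60, "s": 1}[unit]
    return total

print(parse_duration_seconds("12h30m5s"))  # 45005
```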

2.3.1.3. Additional resources

See the following topics for more information:

2.3.2. Certificate policy controller

You can use the certificate policy controller to detect certificates that are close to expiring, have time durations (hours) that are too long, or contain DNS names that fail to match specified patterns. You can add certificate policies to the policy-templates field of the policy on the hub cluster, which propagates them to the selected managed clusters by using the governance framework. See the Policy overview documentation for more details about the hub cluster policy.

Configure and customize the certificate policy controller by updating the following parameters in your controller policy:

  • minimumDuration
  • minimumCADuration
  • maximumDuration
  • maximumCADuration
  • allowedSANPattern
  • disallowedSANPattern

Your policy might become noncompliant due to either of the following scenarios:

  • When a certificate expires in less than the minimum duration of time, or exceeds the maximum duration.
  • When a DNS name fails to match the specified pattern.

The certificate policy controller is created on your managed cluster. The controller communicates with the local Kubernetes API server to get the list of secrets that contain certificates and determine all noncompliant certificates.

The certificate policy controller does not support the enforce feature.

Note: The certificate policy controller automatically looks for a certificate only in the tls.crt key of a secret. If a secret is stored under a different key, add a label named certificate_key_name with a value set to the key, to let the certificate policy controller know to look in a different key. For example, if a secret contains a certificate stored in the key named sensor-cert.pem, add the following label to the secret: certificate_key_name: sensor-cert.pem.
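For example, a secret that stores its certificate under the key sensor-cert.pem might be labeled as follows. This is an illustrative sketch; the secret name and the data value are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sensor-secret
  labels:
    certificate_key_name: sensor-cert.pem
type: Opaque
data:
  sensor-cert.pem: <base64-encoded certificate>
```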

2.3.2.1. Certificate policy controller YAML structure

View the following example of a certificate policy, and review the elements in the YAML table:

apiVersion: policy.open-cluster-management.io/v1
kind: CertificatePolicy
metadata:
  name: certificate-policy-example
spec:
  namespaceSelector:
    include: ["default"]
    exclude: []
    matchExpressions: []
    matchLabels: {}
  labelSelector:
    myLabelKey: myLabelValue
  remediationAction:
  severity:
  minimumDuration:
  minimumCADuration:
  maximumDuration:
  maximumCADuration:
  allowedSANPattern:
  disallowedSANPattern:
2.3.2.1.1. Certificate policy controller YAML table

Table 2.3. Parameter table

  • apiVersion (Required): Set the value to policy.open-cluster-management.io/v1.

2.3.2.2. 

2.3.2.3. 

2.3.3. Policy set controller

2.3.3.1. 

apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicySet
metadata:
  name: demo-policyset
spec:
  policies:
  - policy-demo

---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: demo-policyset-pb
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: demo-policyset-pr
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: PolicySet
  name: demo-policyset
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: demo-policyset-pr
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
          - key: name
            operator: In
            values:
              - local-cluster
2.3.3.2. 

Table 2.4.

2.3.3.3. 
apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicySet
metadata:
  name: pci
  namespace: default
spec:
  description: Policies for PCI compliance
  policies:
  - policy-pod
  - policy-namespace
status:
  compliant: NonCompliant
  placement:
  - placementBinding: binding1
    placement: placement1
    policySet: policyset-ps
2.3.3.4. 

2.3.4. 

2.3.4.1. 
2.3.4.2. 
   

2.3.4.3. 

2.4. 

2.4.1. 

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: governance-policy-framework
  namespace: cluster1
  annotations:
    policy-evaluation-concurrency: "2"
spec:
  installNamespace: open-cluster-management-agent-addon

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: governance-policy-framework
  namespace: cluster1
  annotations:
    client-qps: "30"
    client-burst: "45"
spec:
  installNamespace: open-cluster-management-agent-addon

2.4.2. 

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: config-policy-controller
  namespace: cluster1
  annotations:
    policy-evaluation-concurrency: "5"
spec:
  installNamespace: open-cluster-management-agent-addon

2.4.3. 

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: config-policy-controller
  namespace: cluster1
  annotations:
    client-qps: "20"
    client-burst: "100"
spec:
  installNamespace: open-cluster-management-agent-addon

2.4.4. 

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: config-policy-controller
  namespace: cluster1
  annotations:
    log-level: "2"
spec:
  installNamespace: open-cluster-management-agent-addon

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: my-config-policy
spec:
  object-templates:
  - complianceType: musthave
    recordDiff: Log
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: my-configmap
      data:
        fieldToUpdate: "2"

Logging the diff:
--- default/my-configmap : existing
+++ default/my-configmap : updated
@@ -2,3 +2,3 @@
 data:
-  fieldToUpdate: "1"
+  fieldToUpdate: "2"
 kind: ConfigMap

2.4.5. 

2.4.5.1. 

2.4.5.2. 

2.4.6. 

oc get -n local-cluster managedclusteraddon cert-policy-controller | grep -B4 'type: ManifestApplied'

 - lastTransitionTime: "2023-01-26T15:42:22Z"
    message: manifests of addon are applied successfully
    reason: AddonManifestApplied
    status: "True"
    type: ManifestApplied

2.4.7. 

2.5. Policy compliance history API

2.5.1. 

2.5.2. 

  1. oc -n <PostgreSQL namespace> port-forward <PostgreSQL pod name> 5432:5432
  2. psql 'postgres://postgres:@127.0.0.1:5432/postgres'
  3. CREATE USER "rhacm-policy-compliance-history" WITH PASSWORD '<replace with password>';
    CREATE DATABASE "rhacm-policy-compliance-history" WITH OWNER="rhacm-policy-compliance-history";
  4. oc -n open-cluster-management create secret generic governance-policy-database \
        --from-literal="user=rhacm-policy-compliance-history" \
        --from-literal="password=rhacm-policy-compliance-history" \
        --from-literal="host=<replace with host name of the Postgres server>" \
        --from-literal="dbname=ocm-compliance-history" \
        --from-literal="sslmode=verify-full" \
        --from-file="ca=<replace>"
  5. oc -n open-cluster-management label secret governance-policy-database cluster.open-cluster-management.io/backup=""
  6. python -c 'import urllib.parse; import sys; print(urllib.parse.quote(sys.argv[1]))' '$uper<Secr&t%>'
  7. curl -H "Authorization: Bearer $(oc whoami --show-token)" \
        "https://$(oc -n open-cluster-management get route governance-history-api -o jsonpath='{.spec.host}')/api/v1/compliance-events"
    • {"data":[],"metadata":{"page":1,"pages":0,"per_page":20,"total":0}}
    • {"message":"The database is unavailable"}
      {"message":"Internal Error"}
      1. oc -n open-cluster-management get events --field-selector reason=OCMComplianceEventsDBError
      2. oc -n open-cluster-management logs -l name=governance-policy-propagator -f
      3. 2024-03-05T12:17:14.500-0500	info	compliance-events-api	complianceeventsapi/complianceeventsapi_controller.go:261	The database connection failed: pq: password authentication failed for user "rhacm-policy-compliance-history"
    oc -n open-cluster-management edit secret governance-policy-database
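Step 6 above percent-encodes a password so that it can be embedded in a PostgreSQL connection URI. The same one-liner, shown as a short script for clarity:

```python
import urllib.parse

# Percent-encode a database password for use in a connection URI, as in step 6 above.
password = "$uper<Secr&t%>"
print(urllib.parse.quote(password))  # %24uper%3CSecr%26t%25%3E
```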

2.5.3. 

  1. echo "https://$(oc -n open-cluster-management get route governance-history-api -o=jsonpath='{.spec.host}')"

    https://governance-history-api-open-cluster-management.apps.openshift.redhat.com
  2. apiVersion: addon.open-cluster-management.io/v1alpha1
    kind: AddOnDeploymentConfig
    metadata:
      name: governance-policy-framework
      namespace: open-cluster-management
    spec:
      customizedVariables:
        - name: complianceHistoryAPIURL
          value: <replace with URL from previous command>
2.5.3.1. 

  1. oc edit ClusterManagementAddOn governance-policy-framework
  2.   - group: addon.open-cluster-management.io
        resource: addondeploymentconfigs
        defaultConfig:
          name: governance-policy-framework
          namespace: open-cluster-management
2.5.3.2. 

  1. oc -n <manage-cluster-namespace> edit ManagedClusterAddOn governance-policy-framework
  2. - group: addon.open-cluster-management.io
      resource: addondeploymentconfigs
      name: governance-policy-framework
      namespace: open-cluster-management
  3. oc -n open-cluster-management-agent-addon get deployment governance-policy-framework -o jsonpath='{.spec.template.spec.containers[1].args}'

    ["--enable-lease=true","--hub-cluster-configfile=/var/run/klusterlet/kubeconfig","--leader-elect=false","--log-encoder=console","--log-level=0","--v=-1","--evaluation-concurrency=2","--client-max-qps=30","--client-burst=45","--disable-spec-sync=true","--cluster-namespace=local-cluster","--compliance-api-url=https://governance-history-api-open-cluster-management.apps.openshift.redhat.com"]

    1. oc -n open-cluster-management-agent-addon logs deployment/governance-policy-framework -f
    2. 2024-03-05T19:28:38.063Z        info    policy-status-sync      statussync/policy_status_sync.go:750    Failed to record the compliance event with the compliance API. Will requeue.       {"statusCode": 503, "message": ""}
    3. oc -n open-cluster-management edit AddOnDeploymentConfig governance-policy-framework

2.5.4. 

2.6. Policy samples

2.6.1. 

Table 2.5.

2.6.2. Namespace policy

2.6.2.1. 
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  namespace:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  remediationAction:
  disabled:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name:
        spec:
          remediationAction:
          severity:
          object-templates:
            - complianceType:
              objectDefinition:
                kind: Namespace
                apiVersion: v1
                metadata:
                  name:
                ...
2.6.2.2. 
   

2.6.2.3. 

2.6.3. Pod policy

2.6.3.1. 
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  namespace:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  remediationAction:
  disabled:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name:
        spec:
          remediationAction:
          severity:
          namespaceSelector:
            exclude:
            include:
            matchLabels:
            matchExpressions:
          object-templates:
            - complianceType:
              objectDefinition:
                apiVersion: v1
                kind: Pod
                metadata:
                  name:
                spec:
                  containers:
                  - image:
                    name:
                ...
2.6.3.2. 
Table 2.6.

2.6.3.3. 

2.6.4. Memory usage policy

2.6.4.1. 

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  namespace:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  remediationAction:
  disabled:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name:
        spec:
          remediationAction:
          severity:
          namespaceSelector:
            exclude:
            include:
            matchLabels:
            matchExpressions:
          object-templates:
            - complianceType: mustonlyhave
              objectDefinition:
                apiVersion: v1
                kind: LimitRange
                metadata:
                  name:
                spec:
                  limits:
                  - default:
                      memory:
                    defaultRequest:
                      memory:
                    type:
        ...
2.6.4.2. 
Table 2.7.

2.6.4.3. 

2.6.5. Pod security policy

2.6.5.1. 
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  namespace:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  remediationAction:
  disabled:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name:
        spec:
          remediationAction:
          severity:
          namespaceSelector:
            exclude:
            include:
            matchLabels:
            matchExpressions:
          object-templates:
            - complianceType:
              objectDefinition:
                apiVersion: policy/v1beta1
                kind: PodSecurityPolicy
                metadata:
                  name:
                  annotations:
                    seccomp.security.alpha.kubernetes.io/allowedProfileNames:
                spec:
                  privileged:
                  allowPrivilegeEscalation:
                  allowedCapabilities:
                  volumes:
                  hostNetwork:
                  hostPorts:
                  hostIPC:
                  hostPID:
                  runAsUser:
                  seLinux:
                  supplementalGroups:
                  fsGroup:
                ...
2.6.5.2. 
Table 2.8.

2.6.5.3. 

violation - couldn't find mapping resource with kind PodSecurityPolicy, please check if you have CRD deployed

2.6.6. 

2.6.6.1. 
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  namespace:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  remediationAction:
  disabled:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name:
        spec:
          remediationAction:
          severity:
          namespaceSelector:
            exclude:
            include:
            matchLabels:
            matchExpressions:
          object-templates:
            - complianceType:
              objectDefinition:
                apiVersion: rbac.authorization.k8s.io/v1
                kind: Role
                metadata:
                  name:
                rules:
                  - apiGroups:
                    resources:
                    verbs:
                ...
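For reference, a completed object-template for the Role skeleton above might look like the following. The role name and the rules shown are illustrative assumptions:

```yaml
# Hypothetical completed object-template for the Role skeleton above.
- complianceType: musthave
  objectDefinition:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: deployments-role              # assumed name
    rules:
      - apiGroups: ["extensions", "apps"] # assumed rule: manage Deployments
        resources: ["deployments"]
        verbs: ["get", "list", "watch", "create", "delete", "patch"]
```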
2.6.6.2. Role policy table

Table 2.9.

2.6.6.3.

2.6.7. Role binding policy

2.6.7.1. RoleBinding policy YAML structure
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  namespace:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  remediationAction:
  disabled:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name:
        spec:
          remediationAction:
          severity:
          namespaceSelector:
            exclude:
            include:
            matchLabels:
            matchExpressions:
          object-templates:
            - complianceType:
              objectDefinition:
                kind: RoleBinding # role binding must exist
                apiVersion: rbac.authorization.k8s.io/v1
                metadata:
                  name:
                subjects:
                - kind:
                  name:
                  apiGroup:
                roleRef:
                  kind:
                  name:
                  apiGroup:
                ...
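A completed object-template for the RoleBinding skeleton above could look like this sketch; the binding name, subject, and referenced role are assumptions for illustration:

```yaml
# Hypothetical completed object-template for the RoleBinding skeleton above.
- complianceType: musthave
  objectDefinition:
    kind: RoleBinding # role binding must exist
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: operate-pods-rolebinding   # assumed name
    subjects:
    - kind: User
      name: admin                      # assumed subject
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: operator                   # assumed role name
      apiGroup: rbac.authorization.k8s.io
```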
2.6.7.2. RoleBinding policy table

2.6.7.3.

2.6.8. SecurityContextConstraints policy

2.6.8.1. SecurityContextConstraints policy YAML structure
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  namespace:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  remediationAction:
  disabled:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name:
        spec:
          remediationAction:
          severity:
          namespaceSelector:
            exclude:
            include:
            matchLabels:
            matchExpressions:
          object-templates:
            - complianceType:
              objectDefinition:
                apiVersion: security.openshift.io/v1
                kind: SecurityContextConstraints
                metadata:
                  name:
                allowHostDirVolumePlugin:
                allowHostIPC:
                allowHostNetwork:
                allowHostPID:
                allowHostPorts:
                allowPrivilegeEscalation:
                allowPrivilegedContainer:
                fsGroup:
                readOnlyRootFilesystem:
                requiredDropCapabilities:
                runAsUser:
                seLinuxContext:
                supplementalGroups:
                users:
                volumes:
                ...
2.6.8.2. SecurityContextConstraints policy table

2.6.8.3.

2.6.9. ETCD encryption policy

2.6.9.1. EtcdEncryption policy YAML structure

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  namespace:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  remediationAction:
  disabled:
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name:
        spec:
          remediationAction:
          severity:
          object-templates:
            - complianceType:
              objectDefinition:
                apiVersion: config.openshift.io/v1
                kind: APIServer
                metadata:
                  name:
                spec:
                  encryption:
                ...
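As a concrete illustration of the skeleton above, enabling etcd encryption could look like the following; `aescbc` is one encryption type that OpenShift Container Platform supports, and the cluster-scoped APIServer resource is named `cluster`:

```yaml
# Example object-template that enables etcd encryption with AES-CBC.
- complianceType: musthave
  objectDefinition:
    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
      name: cluster        # the cluster-scoped APIServer resource is named "cluster"
    spec:
      encryption:
        type: aescbc       # encrypt etcd data with AES-CBC
```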
2.6.9.2. EtcdEncryption policy table

Table 2.10.

2.6.9.3.

2.6.10. Compliance Operator policy

2.6.10.1.

2.6.10.2. Compliance Operator policy YAML structure

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: comp-operator-ns
spec:
  remediationAction: inform # will be overridden by remediationAction in parent policy
  severity: high
  object-templates:
    - complianceType: musthave
      objectDefinition:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: openshift-compliance
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: comp-operator-operator-group
spec:
  remediationAction: inform # will be overridden by remediationAction in parent policy
  severity: high
  object-templates:
    - complianceType: musthave
      objectDefinition:
        apiVersion: operators.coreos.com/v1
        kind: OperatorGroup
        metadata:
          name: compliance-operator
          namespace: openshift-compliance
        spec:
          targetNamespaces:
            - openshift-compliance
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: comp-operator-subscription
spec:
  remediationAction: inform  # will be overridden by remediationAction in parent policy
  severity: high
  object-templates:
    - complianceType: musthave
      objectDefinition:
        apiVersion: operators.coreos.com/v1alpha1
        kind: Subscription
        metadata:
          name: compliance-operator
          namespace: openshift-compliance
        spec:
          channel: "4.x"
          installPlanApproval: Automatic
          name: compliance-operator
          source: redhat-operators
          sourceNamespace: openshift-marketplace
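The three ConfigurationPolicy templates above are typically wrapped in a single parent Policy, whose remediationAction overrides the value in each template. The following sketch assumes a policy name of policy-compliance-operator-install and a policies namespace:

```yaml
# Hypothetical parent Policy wrapping the three ConfigurationPolicies above.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-compliance-operator-install   # assumed name
  namespace: policies                        # assumed namespace
spec:
  remediationAction: inform   # overrides remediationAction in each template
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: comp-operator-ns
        spec:
          remediationAction: inform
          severity: high
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-compliance
    # the comp-operator-operator-group and comp-operator-subscription
    # ConfigurationPolicies from the example above are added as further
    # policy-templates entries in the same way
```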

2.6.10.3.

2.6.11. E8 scan policy

2.6.11.1. E8 scan policy YAML structure

  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: compliance-e8-scan
    spec:
      remediationAction: inform
      severity: high
      object-templates:
        - complianceType: musthave # this template creates ScanSettingBinding: e8
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ScanSettingBinding
            metadata:
              name: e8
              namespace: openshift-compliance
            profiles:
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-e8
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: rhcos4-e8
            settingsRef:
              apiGroup: compliance.openshift.io/v1alpha1
              kind: ScanSetting
              name: default
  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: compliance-suite-e8
    spec:
      remediationAction: inform
      severity: high
      object-templates:
        - complianceType: musthave # this template checks if scan has completed by checking the status field
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ComplianceSuite
            metadata:
              name: e8
              namespace: openshift-compliance
            status:
              phase: DONE
  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: compliance-suite-e8-results
    spec:
      remediationAction: inform
      severity: high
      object-templates:
        - complianceType: mustnothave # this template reports the results for scan suite: e8 by looking at ComplianceCheckResult CRs
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ComplianceCheckResult
            metadata:
              namespace: openshift-compliance
              labels:
                compliance.openshift.io/check-status: FAIL
                compliance.openshift.io/suite: e8

2.6.12. OpenShift CIS scan policy

2.6.12.1. OpenShift CIS scan policy YAML structure

  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: compliance-cis-scan
    spec:
      remediationAction: inform
      severity: high
      object-templates:
        - complianceType: musthave # this template creates ScanSettingBinding:cis
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ScanSettingBinding
            metadata:
              name: cis
              namespace: openshift-compliance
            profiles:
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-cis
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-cis-node
            settingsRef:
              apiGroup: compliance.openshift.io/v1alpha1
              kind: ScanSetting
              name: default
  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: compliance-suite-cis
    spec:
      remediationAction: inform
      severity: high
      object-templates:
        - complianceType: musthave # this template checks if scan has completed by checking the status field
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ComplianceSuite
            metadata:
              name: cis
              namespace: openshift-compliance
            status:
              phase: DONE
  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: compliance-suite-cis-results
    spec:
      remediationAction: inform
      severity: high
      object-templates:
        - complianceType: mustnothave # this template reports the results for scan suite: cis by looking at ComplianceCheckResult CRs
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ComplianceCheckResult
            metadata:
              namespace: openshift-compliance
              labels:
                compliance.openshift.io/check-status: FAIL
                compliance.openshift.io/suite: cis

2.6.13. Image vulnerability policy

2.6.13.1. Image vulnerability policy YAML structure

  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: policy-imagemanifestvuln-example-sub
    spec:
      remediationAction: enforce  # will be overridden by remediationAction in parent policy
      severity: high
      object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: Subscription
            metadata:
              name: container-security-operator
              namespace: openshift-operators
            spec:
              # channel: quay-v3.3 # specify a specific channel if desired
              installPlanApproval: Automatic
              name: container-security-operator
              source: redhat-operators
              sourceNamespace: openshift-marketplace
  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: policy-imagemanifestvuln-status
    spec:
      remediationAction: inform  # will be overridden by remediationAction in parent policy
      severity: high
      object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: ClusterServiceVersion
            metadata:
              namespace: openshift-operators
            spec:
              displayName: Red Hat Quay Container Security Operator
            status:
              phase: Succeeded   # check the CSV status to determine if operator is running or not
  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: policy-imagemanifestvuln-example-imv
    spec:
      remediationAction: inform  # will be overridden by remediationAction in parent policy
      severity: high
      namespaceSelector:
        exclude: ["kube-*"]
        include: ["*"]
      object-templates:
        - complianceType: mustnothave # mustnothave any ImageManifestVuln object
          objectDefinition:
            apiVersion: secscan.quay.redhat.com/v1alpha1
            kind: ImageManifestVuln # checking for a Kind
2.6.13.2. 

2.6.14. 

2.6.14.1. 
2.6.14.2. 

Table 2.11.
   

2.6.14.3. 

2.7. 

2.7.1. 

2.7.2. 

2.7.3. 

2.7.4. 

2.7.4.1. 

2.7.4.2. 

2.7.4.3. 

  1. apiVersion: policy.open-cluster-management.io/v1beta1
    kind: PolicyAutomation
    metadata:
      name: policyname-policy-automation
    spec:
      automationDef:
        extra_vars:
          your_var: your_value
        name: Policy Compliance Template
        secret: ansible-tower
        type: AnsibleJob
      mode: disabled
      policyRef: policyname
    • metadata:
        annotations:
          policy.open-cluster-management.io/rerun: "true"
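In addition to mode: disabled shown above, a PolicyAutomation resource can also run the automation automatically. The following variant, which reuses the same credential secret and job template names from the example, runs the Ansible job once when the policy is noncompliant:

```yaml
# Variant of the PolicyAutomation above using mode: once instead of disabled.
apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicyAutomation
metadata:
  name: policyname-policy-automation
spec:
  automationDef:
    extra_vars:
      your_var: your_value
    name: Policy Compliance Template
    secret: ansible-tower
    type: AnsibleJob
  mode: once        # run the job once on noncompliance, then the mode is set back to disabled
  policyRef: policyname
```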

2.8. 

  • ca.crt: '{{ fromSecret "openshift-config" "ca-config-map-secret" "ca.crt"  | base64dec | toRawJson | toLiteral }}'

2.8.1. 

Table 2.12.
   

2.8.2. Template functions

2.8.2.1. 

2.8.2.1.1. fromSecret

func fromSecret (ns string, secretName string, datakey string) (dataValue string, err error)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
  namespace: test
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      data:
        USER_NAME: YWRtaW4=
        PASSWORD: '{{ fromSecret "default" "localsecret" "PASSWORD" }}'
      kind: Secret
      metadata:
        name: demosecret
        namespace: test
      type: Opaque
  remediationAction: enforce
  severity: low

ca.crt: '{{ fromSecret "openshift-config" "ca-config-map-secret" "ca.crt"  | base64dec | toRawJson | toLiteral }}'
2.8.2.1.2. fromConfigMap

func fromConfigMap (ns string, configmapName string, datakey string) (dataValue string, err error)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromcm-lookup
  namespace: test-templates
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: demo-app-config
        namespace: test
      data:
        app-name: sampleApp
        app-description: "this is a sample app"
        log-file: '{{ fromConfigMap "default" "logs-config" "log-file" }}'
        log-level: '{{ fromConfigMap "default" "logs-config" "log-level" }}'
  remediationAction: enforce
  severity: low
2.8.2.1.3. fromClusterClaim

func fromClusterClaim (clusterclaimName string) (dataValue string, err error)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-clusterclaims
  namespace: default
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: sample-app-config
        namespace: default
      data:
        platform: '{{ fromClusterClaim "platform.open-cluster-management.io" }}'
        product: '{{ fromClusterClaim "product.open-cluster-management.io" }}'
        version: '{{ fromClusterClaim "version.openshift.io" }}'
  remediationAction: enforce
  severity: low
2.8.2.1.4. lookup

func lookup (apiversion string, kind string, namespace string, name string, labelselector ...string) (value string, err error)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-lookup
  namespace: test-templates
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: demo-app-config
        namespace: test
      data:
        app-name: sampleApp
        app-description: "this is a sample app"
        metrics-url: |
          http://{{ (lookup "v1" "Service" "default" "metrics").spec.clusterIP }}:8080
  remediationAction: enforce
  severity: low
2.8.2.1.5. base64enc

func base64enc (data string) (enc-data string)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
  namespace: test
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
    ...
    data:
      USER_NAME: '{{ fromConfigMap "default" "myconfigmap" "admin-user" | base64enc }}'
2.8.2.1.6. base64dec

func base64dec (enc-data string) (data string)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
  namespace: test
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
    ...
    data:
      app-name: |
         "{{ (lookup "v1" "Secret" "testns" "mytestsecret").data.appname | base64dec }}"
2.8.2.1.7. indent

func indent (spaces  int,  data string) (padded-data string)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
  namespace: test
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
    ...
    data:
      Ca-cert:  |
        {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls"  ).data  "ca.pem"  ) |  base64dec | indent 4  }}
2.8.2.1.8. autoindent

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
  namespace: test
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
    ...
    data:
      Ca-cert:  |
        {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls"  ).data  "ca.pem"  ) |  base64dec | autoindent }}
2.8.2.1.9. toInt

func toInt (input interface{}) (output int)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-template-function
  namespace: test
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
    ...
    spec:
      vlanid:  |
        {{ (fromConfigMap "site-config" "site1" "vlan")  | toInt }}
2.8.2.1.10. toBool

func toBool (input string) (output bool)

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-template-function
  namespace: test
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
    ...
    spec:
      enabled:  |
        {{ (fromConfigMap "site-config" "site1" "enabled")  | toBool }}
2.8.2.1.11. protect

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-template-function
  namespace: test
spec:
  namespaceSelector:
    exclude:
    - kube-*
    include:
    - default
  object-templates:
  - complianceType: musthave
    objectDefinition:
    ...
    spec:
      enabled:  |
        {{hub (lookup "v1" "Secret" "default" "my-hub-secret").data.message | protect hub}}

2.8.2.1.12. toLiteral

key: '{{ "[\"10.10.10.10\", \"1.1.1.1\"]" | toLiteral }}'

key: ["10.10.10.10", "1.1.1.1"]
2.8.2.1.13. copySecretData

complianceType: musthave
      objectDefinition:
        apiVersion: v1
        kind: Secret
        metadata:
          name: my-secret-copy
        data: '{{ copySecretData "default" "my-secret" }}'

2.8.2.1.14. copyConfigMapData

complianceType: musthave
      objectDefinition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: my-secret-copy
        data: '{{ copyConfigMapData "default" "my-configmap" }}'
2.8.2.2. 

Table 2.13.
  

2.8.2.3. 

2.8.3. 

2.8.3.1. 

2.8.3.2. 

object-templates-raw: |
  {{- range (lookup "v1" "ConfigMap" "default" "").items }}
  {{- if eq .data.name "Sea Otter" }}
  - complianceType: musthave
    objectDefinition:
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: {{ .metadata.name }}
        namespace: {{ .metadata.namespace }}
        labels:
          species-category: mammal
  {{- end }}
  {{- end }}
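The object-templates-raw snippet above is used in place of object-templates inside a ConfigurationPolicy. A minimal wrapper, with an assumed policy name, looks like the following:

```yaml
# Hypothetical ConfigurationPolicy wrapping the object-templates-raw snippet above.
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: label-sea-otter-configmaps   # assumed name
spec:
  remediationAction: enforce
  severity: low
  object-templates-raw: |
    {{- range (lookup "v1" "ConfigMap" "default" "").items }}
    {{- if eq .data.name "Sea Otter" }}
    - complianceType: musthave
      objectDefinition:
        kind: ConfigMap
        apiVersion: v1
        metadata:
          name: {{ .metadata.name }}
          namespace: {{ .metadata.namespace }}
          labels:
            species-category: mammal
    {{- end }}
    {{- end }}
```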

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: create-infra-machineset
spec:
  remediationAction: enforce
  severity: low
  object-templates-raw: |
    {{- /* Specify the parameters needed to create the MachineSet  */ -}}
    {{- $machineset_role := "infra" }}
    {{- $region := "ap-southeast-1" }}
    {{- $zones := list "ap-southeast-1a" "ap-southeast-1b" "ap-southeast-1c" }}
    {{- $infrastructure_id := (lookup "config.openshift.io/v1" "Infrastructure" "" "cluster").status.infrastructureName }}
    {{- $worker_ms := (index (lookup "machine.openshift.io/v1beta1" "MachineSet" "openshift-machine-api" "").items 0) }}
    {{- /* Generate the MachineSet for each zone as specified  */ -}}
    {{- range $zone := $zones }}
    - complianceType: musthave
      objectDefinition:
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: {{ $infrastructure_id }}
          name: {{ $infrastructure_id }}-{{ $machineset_role }}-{{ $zone }}
          namespace: openshift-machine-api
        spec:
          replicas: 1
          selector:
            matchLabels:
              machine.openshift.io/cluster-api-cluster: {{ $infrastructure_id }}
              machine.openshift.io/cluster-api-machineset: {{ $infrastructure_id }}-{{ $machineset_role }}-{{ $zone }}
          template:
            metadata:
              labels:
                machine.openshift.io/cluster-api-cluster: {{ $infrastructure_id }}
                machine.openshift.io/cluster-api-machine-role: {{ $machineset_role }}
                machine.openshift.io/cluster-api-machine-type: {{ $machineset_role }}
                machine.openshift.io/cluster-api-machineset: {{ $infrastructure_id }}-{{ $machineset_role }}-{{ $zone }}
            spec:
              metadata:
                labels:
                  node-role.kubernetes.io/{{ $machineset_role }}: ""
              taints:
                - key: node-role.kubernetes.io/{{ $machineset_role }}
                  effect: NoSchedule
              providerSpec:
                value:
                  ami:
                    id: {{ $worker_ms.spec.template.spec.providerSpec.value.ami.id }}
                  apiVersion: awsproviderconfig.openshift.io/v1beta1
                  blockDevices:
                    - ebs:
                        encrypted: true
                        iops: 2000
                        kmsKey:
                          arn: ''
                        volumeSize: 500
                        volumeType: io1
                  credentialsSecret:
                    name: aws-cloud-credentials
                  deviceIndex: 0
                  instanceType: {{ $worker_ms.spec.template.spec.providerSpec.value.instanceType }}
                  iamInstanceProfile:
                    id: {{ $infrastructure_id }}-worker-profile
                  kind: AWSMachineProviderConfig
                  placement:
                    availabilityZone: {{ $zone }}
                    region: {{ $region }}
                  securityGroups:
                    - filters:
                        - name: tag:Name
                          values:
                            - {{ $infrastructure_id }}-worker-sg
                  subnet:
                    filters:
                      - name: tag:Name
                        values:
                          - {{ $infrastructure_id }}-private-{{ $zone }}
                  tags:
                    - name: kubernetes.io/cluster/{{ $infrastructure_id }}
                      value: owned
                  userDataSecret:
                    name: worker-user-data
    {{- end }}
2.8.3.3. 

2.8.3.4. 

2.9. 

2.9.1. 

2.9.1.1. 

  1. oc create -f policy.yaml -n <policy-namespace>
  2. apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy1
    spec:
      remediationAction: "enforce" # or inform
      disabled: false # or true
      namespaceSelector:
        include:
        - "default"
        - "my-namespace"
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: operator
              # namespace: # will be supplied by the controller via the namespaceSelector
            spec:
              remediationAction: "inform"
              object-templates:
              - complianceType: "musthave" # at this level, it means the role must exist and must have the following rules
                objectDefinition:
                  apiVersion: rbac.authorization.k8s.io/v1
                  kind: Role
                  metadata:
                    name: example
                  rules:
                    - complianceType: "musthave" # at this level, it means if the role exists the rule is a musthave
                      apiGroups: ["extensions", "apps"]
                      resources: ["deployments"]
                      verbs: ["get", "list", "watch", "create", "delete", "patch"]
  3. apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding1
    placementRef:
      name: placement1
      apiGroup: cluster.open-cluster-management.io
      kind: Placement
    subjects:
    - name: policy1
      apiGroup: policy.open-cluster-management.io
      kind: Policy
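The PlacementBinding in the previous step references a Placement named placement1, which must exist in the same namespace. A minimal sketch, assuming a cloud: IBM label selector:

```yaml
# Hypothetical Placement referenced by the binding1 PlacementBinding above.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement1
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchLabels:
          cloud: "IBM"   # assumed label; select the clusters that fit your environment
```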
2.9.1.1.1. Viewing your policy from the CLI

  1. oc get policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace> -o yaml
  2. oc describe policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace>
2.9.1.2. 

  1. apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-pod
      annotations:
        policy.open-cluster-management.io/categories: 'SystemAndCommunicationsProtections,SystemAndInformationIntegrity'
        policy.open-cluster-management.io/controls: 'control example'
        policy.open-cluster-management.io/standards: 'NIST,HIPAA'
        policy.open-cluster-management.io/description:
    spec:
      complianceType: musthave
      namespaces:
        exclude: ["kube*"]
        include: ["default"]
      pruneObjectBehavior: None
      object-templates:
      - complianceType: musthave
        objectDefinition:
          apiVersion: v1
          kind: Pod
          metadata:
            name: pod1
          spec:
            containers:
            - name: pod-name
              image: 'pod-image'
              ports:
              - containerPort: 80
      remediationAction: enforce
      disabled: false

    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-pod
    placementRef:
      name: placement-pod
      kind: Placement
      apiGroup: cluster.open-cluster-management.io
    subjects:
    - name: policy-pod
      kind: Policy
      apiGroup: policy.open-cluster-management.io

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-pod
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchLabels:
              cloud: "IBM"
2.9.1.2.1. 

2.9.1.3. 

oc apply -f <policyset-filename>
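A policy set groups existing policies under one placement. A minimal payload for the file in the previous command might look like the following sketch, where the set name, namespace, and listed policies are assumptions:

```yaml
# Hypothetical PolicySet; the named policies must already exist in the same namespace.
apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicySet
metadata:
  name: my-policyset          # assumed name
  namespace: my-policies      # assumed namespace
spec:
  description: Example policy set grouping two existing policies
  policies:
    - policy-pod              # assumed policy names
    - policy-limitrange
```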
2.9.1.4. 

2.9.2. 

2.9.2.1. 
  1. oc edit policysets <your-policyset-name>
oc apply -f <your-added-policy.yaml>

2.9.2.2. 
2.9.2.3. 

2.9.3. 

    1. oc delete policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace>

2.9.3.1. 

2.9.4. 

2.9.5. 

2.9.6. 

2.9.6.1. 

2.9.6.1.1. 

  1. oc create -f configpolicy-1.yaml

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-1
      namespace: my-policies
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: mustonlyhave-configuration
          spec:
            namespaceSelector:
              include: ["default"]
              exclude: ["kube-system"]
            remediationAction: inform
            object-templates:
            - complianceType: mustonlyhave
  2. oc apply -f <policy-file-name>  --namespace=<namespace>
  3. oc get policies.policy.open-cluster-management.io --namespace=<namespace>

2.9.6.1.2. 

  1. oc get policies.policy.open-cluster-management.io <policy-name> -n <namespace> -o yaml
  2. oc describe policies.policy.open-cluster-management.io <name> -n <namespace>
2.9.6.1.3. 

2.9.6.1.4. 

2.9.6.2. 

2.9.6.2.1. 

2.9.6.3. 

    1. oc delete policies.policy.open-cluster-management.io <policy-name> -n <namespace>
    oc get policies.policy.open-cluster-management.io <policy-name> -n <namespace>

2.9.6.4. 

2.9.7. 


2.9.8. 

2.9.8.1. 

  1. apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-configure-subscription-admin-hub
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: inform
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-configure-subscription-admin-hub
            spec:
              remediationAction: inform
              severity: low
              object-templates:
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: rbac.authorization.k8s.io/v1
                    kind: ClusterRole
                    metadata:
                      name: open-cluster-management:subscription-admin
                    rules:
                    - apiGroups:
                      - app.k8s.io
                      resources:
                      - applications
                      verbs:
                      - '*'
                    - apiGroups:
                      - apps.open-cluster-management.io
                      resources:
                      - '*'
                      verbs:
                      - '*'
                    - apiGroups:
                      - ""
                      resources:
                      - configmaps
                      - secrets
                      - namespaces
                      verbs:
                      - '*'
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: rbac.authorization.k8s.io/v1
                    kind: ClusterRoleBinding
                    metadata:
                      name: open-cluster-management:subscription-admin
                    roleRef:
                      apiGroup: rbac.authorization.k8s.io
                      kind: ClusterRole
                      name: open-cluster-management:subscription-admin
                    subjects:
                    - apiGroup: rbac.authorization.k8s.io
                      kind: User
                      name: kube:admin
                    - apiGroup: rbac.authorization.k8s.io
                      kind: User
                      name: system:admin
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-configure-subscription-admin-hub
    placementRef:
      name: placement-policy-configure-subscription-admin-hub
      kind: Placement
      apiGroup: cluster.open-cluster-management.io
    subjects:
    - name: policy-configure-subscription-admin-hub
      kind: Policy
      apiGroup: policy.open-cluster-management.io
    ---
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-policy-configure-subscription-admin-hub
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions:
            - {key: name, operator: In, values: ["local-cluster"]}
  2. oc apply -f policy-configure-subscription-admin-hub.yaml
  3. apiVersion: cluster.open-cluster-management.io/v1beta2
    kind: ManagedClusterSetBinding
    metadata:
      name: default
      namespace: policies
    spec:
      clusterSet: default
  4. oc apply -f managed-cluster.yaml
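The ClusterRoleBinding in step 1 grants subscription administrator access to the kube:admin and system:admin users. If other users or groups should also be subscription administrators, the subjects list can be extended; a minimal sketch, where the user and group names are placeholders to replace with your own:

```yaml
# Hypothetical extra subjects for the
# open-cluster-management:subscription-admin ClusterRoleBinding.
# "example-user" and "example-admin-group" are placeholders.
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: example-user
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: example-admin-group
```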

2.9.8.2. 
  1. kustomize build --enable-alpha-plugins | oc apply -f -

2.9.8.3. 

2.9.9. 

2.9.9.1. 

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: install-quay
  namespace: open-cluster-management-global-set
spec:
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1beta1
        kind: OperatorPolicy
        metadata:
          name: install-quay
        spec:
          remediationAction: enforce
          severity: critical
          complianceType: musthave
          upgradeApproval: None
          subscription:
            channel: stable-3.11
            name: quay-operator
            source: redhat-operators
            sourceNamespace: openshift-marketplace

oc -n <managed cluster namespace> get operatorpolicy install-quay
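The OperatorPolicy above declares the Quay operator subscription as musthave and enforces its installation. To have a policy remove an operator instead, the complianceType can be flipped to mustnothave; a hedged sketch, assuming the OperatorPolicy version in use supports mustnothave and reusing the subscription name from the example (verify the removal behavior on a test cluster first):

```yaml
# Sketch only: declares the same subscription mustnothave so that
# enforcement removes it rather than installs it.
apiVersion: policy.open-cluster-management.io/v1beta1
kind: OperatorPolicy
metadata:
  name: uninstall-quay
spec:
  remediationAction: enforce
  severity: critical
  complianceType: mustnothave
  subscription:
    name: quay-operator
```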
2.9.9.2. 

2.10. 

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/description:
  name: moderate-compliance-scan
  namespace: default
spec:
  dependencies:
  - apiVersion: policy.open-cluster-management.io/v1
    compliance: Compliant
    kind: Policy
    name: upstream-compliance-operator
    namespace: default
  disabled: false
  policy-templates:
  - extraDependencies:
    - apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      name: scan-setting-prerequisite
      compliance: Compliant
    ignorePending: false
    objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: moderate-compliance-scan
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ScanSettingBinding
            metadata:
              name: moderate
              namespace: openshift-compliance
            profiles:
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-moderate
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-moderate-node
            settingsRef:
              apiGroup: compliance.openshift.io/v1alpha1
              kind: ScanSetting
              name: default
        remediationAction: enforce
        severity: low
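The dependencies and extraDependencies fields gate evaluation on the compliance of other policies: the scan policy above is held back until upstream-compliance-operator reports Compliant. The gating idea can be sketched in Python as a simplified model (this is not the controller's actual implementation):

```python
def dependencies_satisfied(dependencies, statuses):
    """Simplified model of Policy dependency gating: every listed
    dependency must report the compliance state it asks for; the
    framework defaults to "Compliant" when none is given."""
    return all(
        statuses.get(dep["name"]) == dep.get("compliance", "Compliant")
        for dep in dependencies
    )

deps = [{"name": "upstream-compliance-operator", "compliance": "Compliant"}]

# The scan policy waits until the operator policy reports Compliant.
print(dependencies_satisfied(deps, {"upstream-compliance-operator": "Pending"}))    # False
print(dependencies_satisfied(deps, {"upstream-compliance-operator": "Compliant"}))  # True
```
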

2.11. 

Chapter 3 

3.1. 

3.2. 

3.3. 

3.3.1. 

3.3.2. 

apiVersion: operator.gatekeeper.sh/v1alpha1
kind: Gatekeeper
metadata:
  name: gatekeeper
spec:
  audit:
    replicas: 1
    auditEventsInvolvedNamespace: Enabled
    logLevel: DEBUG
    auditInterval: 10s
    constraintViolationLimit: 55
    auditFromCache: Enabled
    auditChunkSize: 66
    emitAuditEvents: Enabled
    containerArguments:
    - name: ""
      value: ""
    resources:
      limits:
        cpu: 500m
        memory: 150Mi
      requests:
        cpu: 500m
        memory: 130Mi
  validatingWebhook: Enabled
  mutatingWebhook: Enabled
  webhook:
    replicas: 3
    emitAdmissionEvents: Enabled
    admissionEventsInvolvedNamespace: Enabled
    disabledBuiltins:
     - http.send
    operations:
     - "CREATE"
     - "UPDATE"
     - "CONNECT"
    failurePolicy: Fail
    containerArguments:
    - name: ""
      value: ""
    resources:
      limits:
        cpu: 480m
        memory: 140Mi
      requests:
        cpu: 400m
        memory: 120Mi
  nodeSelector:
    region: "EMEA"
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              auditKey: "auditValue"
          topologyKey: topology.kubernetes.io/zone
  tolerations:
    - key: "Example"
      operator: "Exists"
      effect: "NoSchedule"
  podAnnotations:
    some-annotation: "this is a test"
    other-annotation: "another test"
  config:
    matches:
      - excludedNamespaces: ["test-*", "my-namespace"]
        processes: ["*"]
    disableDefaultMatches: false
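In the config section, excludedNamespaces entries such as "test-*" use wildcard matching. A minimal Python sketch of the matching idea using shell-style globs; note that Gatekeeper's own matcher only allows a single leading or trailing "*", so globbing here is an approximation, not the exact implementation:

```python
from fnmatch import fnmatch

def is_excluded(namespace, patterns):
    """Approximate the config.matches excludedNamespaces check with
    shell-style glob patterns (a superset of Gatekeeper's matcher)."""
    return any(fnmatch(namespace, pattern) for pattern in patterns)

excluded = ["test-*", "my-namespace"]

print(is_excluded("test-webhooks", excluded))  # True
print(is_excluded("my-namespace", excluded))   # True
print(is_excluded("production", excluded))     # False
```
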

3.3.3. 

  1. apiVersion: operator.gatekeeper.sh/v1alpha1
    kind: Gatekeeper
    metadata:
      name: gatekeeper
    spec:
      audit:
        replicas: 2
        logLevel: DEBUG
        auditFromCache: Automatic
  2. oc get configs.config.gatekeeper.sh config -n openshift-gatekeeper-system

    apiVersion: config.gatekeeper.sh/v1alpha1
    kind: Config
    metadata:
     name: config
     namespace: "openshift-gatekeeper-system"
    spec:
     sync:
       syncOnly:
       - group: ""
         version: "v1"
         kind: "Namespace"
       - group: ""
         version: "v1"
         kind: "Pod"

oc explain gatekeeper.spec.audit.auditFromCache

3.4. 

3.4.1. 

3.4.2. 

3.4.2.1. 

3.4.3. 

3.4.4. 

3.4.5. 

  1. oc delete policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace>
  2. oc get policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace>

3.4.6. 

3.4.6.1. 

3.4.6.2. 

3.4.6.3. 

3.4.7. 

3.5. 

  • apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: require-gatekeeper-labels-on-ns
    spec:
      remediationAction: inform
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: templates.gatekeeper.sh/v1beta1
            kind: ConstraintTemplate
            metadata:
              name: k8srequiredlabels
              annotations:
                policy.open-cluster-management.io/severity: low
            spec:
              crd:
                spec:
                  names:
                    kind: K8sRequiredLabels
                  validation:
                    openAPIV3Schema:
                      properties:
                        labels:
                          type: array
                          items:
                            type: string
              targets:
                - target: admission.k8s.gatekeeper.sh
                  rego: |
                    package k8srequiredlabels
                    violation[{"msg": msg, "details": {"missing_labels": missing}}] {
                      provided := {label | input.review.object.metadata.labels[label]}
                      required := {label | label := input.parameters.labels[_]}
                      missing := required - provided
                      count(missing) > 0
                      msg := sprintf("you must provide labels: %v", [missing])
                    }
        - objectDefinition:
            apiVersion: constraints.gatekeeper.sh/v1beta1
            kind: K8sRequiredLabels
            metadata:
              name: ns-must-have-gk
              annotations:
                policy.open-cluster-management.io/severity: low
            spec:
              enforcementAction: dryrun
              match:
                kinds:
                  - apiGroups: [""]
                    kinds: ["Namespace"]
              parameters:
                labels: ["gatekeeper"]

  • apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: policy-gatekeeper-admission
    spec:
      remediationAction: inform
      severity: low
      object-templates:
        - complianceType: mustnothave
          objectDefinition:
            apiVersion: v1
            kind: Event
            metadata:
              namespace: openshift-gatekeeper-system
              annotations:
                constraint_action: deny
                constraint_kind: K8sRequiredLabels
                constraint_name: ns-must-have-gk
                event_type: violation
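The Rego in the K8sRequiredLabels template computes the set of required labels that are missing from an object. The same set difference can be sketched in Python to see what the constraint reports:

```python
def missing_labels(object_labels, required):
    """Mirror the Rego violation rule: missing := required - provided."""
    provided = set(object_labels)  # label keys present on the object
    return set(required) - provided

# A namespace without the required "gatekeeper" label violates ns-must-have-gk.
print(missing_labels({"kubernetes.io/metadata.name": "demo"}, ["gatekeeper"]))
# {'gatekeeper'}

# With the label present, the difference is empty and there is no violation.
print(missing_labels({"gatekeeper": "true"}, ["gatekeeper"]))  # set()
```
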

3.5.1. 

Chapter 4 

4.1. 

4.1.1. 

4.1.2. 

  • generators:
      - policy-generator-config.yaml
  • apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: config-data-policies
    policyDefaults:
      namespace: policies
      policySets: []
    policies:
      - name: config-data
        manifests:
          - path: configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-config
      namespace: default
    data:
      key1: value1
      key2: value2
    • object-templates-raw: |
        {{- range (lookup "v1" "ConfigMap" "my-namespace" "").items }}
        - complianceType: musthave
          objectDefinition:
            kind: ConfigMap
            apiVersion: v1
            metadata:
              name: {{ .metadata.name }}
              namespace: {{ .metadata.namespace }}
              labels:
                i-am-from: template
        {{- end }}
    apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: config-data-policies
    policyDefaults:
      namespace: policies
      policySets: []
    policies:
      - name: config-data
        manifests:
          - path: manifest.yaml
  • apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-config-data
      namespace: policies
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions: []
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-config-data
      namespace: policies
    placementRef:
      apiGroup: cluster.open-cluster-management.io
      kind: Placement
      name: placement-config-data
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: config-data
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/description:
      name: config-data
      namespace: policies
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: config-data
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                data:
                  key1: value1
                  key2: value2
                kind: ConfigMap
                metadata:
                  name: my-config
                  namespace: default
            remediationAction: inform
            severity: low
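The object-templates-raw example above ranges over the ConfigMaps in my-namespace and emits one musthave entry per item. The expansion that the template performs can be sketched in Python, with the lookup result modeled as a plain list of dicts (the ConfigMap names below are illustrative only):

```python
def expand_configmap_templates(configmaps):
    """Model the object-templates-raw range loop: one musthave
    objectDefinition per ConfigMap returned by the lookup."""
    return [
        {
            "complianceType": "musthave",
            "objectDefinition": {
                "kind": "ConfigMap",
                "apiVersion": "v1",
                "metadata": {
                    "name": cm["metadata"]["name"],
                    "namespace": cm["metadata"]["namespace"],
                    "labels": {"i-am-from": "template"},
                },
            },
        }
        for cm in configmaps
    ]

# Two ConfigMaps in my-namespace produce two musthave entries.
items = [
    {"metadata": {"name": "app-config", "namespace": "my-namespace"}},
    {"metadata": {"name": "db-config", "namespace": "my-namespace"}},
]
entries = expand_configmap_templates(items)
print(len(entries))                                        # 2
print(entries[0]["objectDefinition"]["metadata"]["name"])  # app-config
```
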

4.1.3. 

Table 4.1. 

4.1.4. 

4.2. 

  1. apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-compliance
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: compliance-operator
      namespace: openshift-compliance
    spec:
      targetNamespaces:
        - openshift-compliance
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: compliance-operator
      namespace: openshift-compliance
    spec:
      channel: release-0.1
      name: compliance-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
  2. apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: install-compliance-operator
    policyDefaults:
      namespace: policies
      placement:
        labelSelector:
          matchExpressions:
            - key: vendor
              operator: In
              values:
                - "OpenShift"
    policies:
      - name: install-compliance-operator
        manifests:
          - path: compliance-operator.yaml
  3. generators:
      - policy-generator-config.yaml

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-install-compliance-operator
      namespace: policies
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions:
            - key: vendor
              operator: In
              values:
              - OpenShift
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-compliance-operator
      namespace: policies
    placementRef:
      apiGroup: cluster.open-cluster-management.io
      kind: Placement
      name: placement-install-compliance-operator
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: install-compliance-operator
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/description:
      name: install-compliance-operator
      namespace: policies
    spec:
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: install-compliance-operator
            spec:
              object-templates:
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: v1
                    kind: Namespace
                    metadata:
                      name: openshift-compliance
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: operators.coreos.com/v1alpha1
                    kind: Subscription
                    metadata:
                      name: compliance-operator
                      namespace: openshift-compliance
                    spec:
                      channel: release-0.1
                      name: compliance-operator
                      source: redhat-operators
                      sourceNamespace: openshift-marketplace
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: operators.coreos.com/v1
                    kind: OperatorGroup
                    metadata:
                      name: compliance-operator
                      namespace: openshift-compliance
                    spec:
                      targetNamespaces:
                        - openshift-compliance
              remediationAction: enforce
              severity: low

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.