3.3. Using the Special Resource Operator


The Special Resource Operator (SRO) is used to manage the build and deployment of a driver container. The objects required to build and deploy the container can be defined in a Helm chart.

The example in this section uses the simple-kmod SpecialResource object to point to a ConfigMap object that is created to store the Helm chart.

3.3.1. Building and running the simple-kmod SpecialResource by using a config map

In this example, the simple-kmod kernel module shows how the Special Resource Operator (SRO) manages a driver container. The container is defined in the Helm chart templates that are stored in a config map.

Prerequisites

  • You have a running OpenShift Container Platform cluster.
  • You set the Image Registry Operator state to Managed in your cluster.
  • You installed the OpenShift CLI (oc).
  • You are logged into the OpenShift CLI as a user with cluster-admin privileges.
  • You installed the Node Feature Discovery (NFD) Operator.
  • You installed the SRO.
  • You installed the Helm CLI (helm).

Procedure

  1. To create a simple-kmod SpecialResource object, define an image stream and build config to build the image, and a service account, role, role binding, and daemon set to run the container. The service account, role, and role binding are required to run the daemon set with the privileged security context so that the kernel module can be loaded.

    1. Create a templates directory, and change into it:

      $ mkdir -p chart/simple-kmod-0.0.1/templates
      $ cd chart/simple-kmod-0.0.1/templates
    2. Save this YAML template for the image stream and build config in the templates directory as 0000-buildconfig.yaml:

      apiVersion: image.openshift.io/v1
      kind: ImageStream
      metadata:
        labels:
          app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      spec: {}
      ---
      apiVersion: build.openshift.io/v1
      kind: BuildConfig
      metadata:
        labels:
          app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}}
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}}
        annotations:
          specialresource.openshift.io/wait: "true"
          specialresource.openshift.io/driver-container-vendor: simple-kmod
          specialresource.openshift.io/kernel-affine: "true"
      spec:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        runPolicy: "Serial"
        triggers:
          - type: "ConfigChange"
          - type: "ImageChange"
        source:
          git:
            ref: {{.Values.specialresource.spec.driverContainer.source.git.ref}}
            uri: {{.Values.specialresource.spec.driverContainer.source.git.uri}}
          type: Git
        strategy:
          dockerStrategy:
            dockerfilePath: Dockerfile.SRO
            buildArgs:
              - name: "IMAGE"
                value: {{ .Values.driverToolkitImage  }}
              {{- range $arg := .Values.buildArgs }}
              - name: {{ $arg.name }}
                value: {{ $arg.value }}
              {{- end }}
              - name: KVER
                value: {{ .Values.kernelFullVersion }}
        output:
          to:
            kind: ImageStreamTag
            name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}}

      Templates such as {{.Values.specialresource.metadata.name}} are filled in by the SRO, based on fields in the SpecialResource CR and on variables known to the Operator, such as {{.Values.kernelFullVersion}}.
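
      For example, based on the pod names shown in the verification steps later in this procedure, a name template such as {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} renders to a value similar to the following (illustrative only; the actual value is produced by the Operator when it reconciles the SpecialResource CR):

      simple-kmod-driver-container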
    3. Save the following YAML template for the RBAC resources and daemon set in the templates directory as 1000-driver-container.yaml:

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      rules:
      - apiGroups:
        - security.openshift.io
        resources:
        - securitycontextconstraints
        verbs:
        - use
        resourceNames:
        - privileged
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      subjects:
      - kind: ServiceAccount
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        namespace: {{.Values.specialresource.spec.namespace}}
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        labels:
          app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        annotations:
          specialresource.openshift.io/wait: "true"
          specialresource.openshift.io/state: "driver-container"
          specialresource.openshift.io/driver-container-vendor: simple-kmod
          specialresource.openshift.io/kernel-affine: "true"
          specialresource.openshift.io/from-configmap: "true"
      spec:
        updateStrategy:
          type: OnDelete
        selector:
          matchLabels:
            app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        template:
          metadata:
            labels:
              app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
          spec:
            priorityClassName: system-node-critical
            serviceAccount: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
            serviceAccountName: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
            containers:
            - image: image-registry.openshift-image-registry.svc:5000/{{.Values.specialresource.spec.namespace}}/{{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}}
              name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
              imagePullPolicy: Always
              command: ["/sbin/init"]
              lifecycle:
                preStop:
                  exec:
                    command: ["/bin/sh", "-c", "systemctl stop kmods-via-containers@{{.Values.specialresource.metadata.name}}"]
              securityContext:
                privileged: true
            nodeSelector:
              node-role.kubernetes.io/worker: ""
              feature.node.kubernetes.io/kernel-version.full: "{{.Values.kernelFullVersion}}"
    4. Change back to the chart/simple-kmod-0.0.1 directory:

      $ cd ..
    5. In the chart/simple-kmod-0.0.1 directory, save the following YAML for the chart as Chart.yaml:

      apiVersion: v2
      name: simple-kmod
      description: Simple kmod will deploy a simple kmod driver-container
      icon: https://avatars.githubusercontent.com/u/55542927
      type: application
      version: 0.0.1
      appVersion: 1.0.0
  2. From the chart directory, create the chart by using the helm package command:

    $ helm package simple-kmod-0.0.1/

    Example output

    Successfully packaged chart and saved it to: /data/<username>/git/<github_username>/special-resource-operator/yaml-for-docs/chart/simple-kmod-0.0.1/simple-kmod-0.0.1.tgz
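
    Optionally, you can list the contents of the packaged chart archive to confirm that the Chart.yaml file and the templates were included (a quick sanity check, assuming you run it from the directory that contains the generated .tgz file):

    $ tar -tzf simple-kmod-0.0.1.tgz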

  3. Create a config map to store the chart files:

    1. Create a directory for the config map files:

      $ mkdir cm
    2. Copy the Helm chart into the cm directory:

      $ cp simple-kmod-0.0.1.tgz cm/simple-kmod-0.0.1.tgz
    3. Create an index file specifying the Helm repository that contains the Helm chart:

      $ helm repo index cm --url=cm://simple-kmod/simple-kmod-chart
    4. Create a namespace for the objects defined in the Helm chart:

      $ oc create namespace simple-kmod
    5. Create the config map object:

      $ oc create cm simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/simple-kmod-0.0.1.tgz -n simple-kmod
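
      Optionally, you can confirm that the config map was created in the simple-kmod namespace (a quick check only; the index.yaml and simple-kmod-0.0.1.tgz files are stored as keys of the object):

      $ oc get cm simple-kmod-chart -n simple-kmod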
  4. Use the following SpecialResource manifest to deploy the simple-kmod object by using the Helm chart that you created in the config map. Save this YAML as simple-kmod-configmap.yaml:

    apiVersion: sro.openshift.io/v1beta1
    kind: SpecialResource
    metadata:
      name: simple-kmod
    spec:
      #debug: true
      namespace: simple-kmod
      chart:
        name: simple-kmod
        version: 0.0.1
        repository:
          name: example
          url: cm://simple-kmod/simple-kmod-chart
      set:
        kind: Values
        apiVersion: sro.openshift.io/v1beta1
        kmodNames: ["simple-kmod", "simple-procfs-kmod"]
        buildArgs:
        - name: "KMODVER"
          value: "SRO"
      driverContainer:
        source:
          git:
            ref: "master"
            uri: "https://github.com/openshift-psap/kvc-simple-kmod.git"

    Optional: Uncomment the #debug: true line to have the YAML files in the chart printed in full in the Operator logs and to verify that the logs are created and templated properly.

    The spec.chart.repository.url field tells the SRO to look for the chart in a config map.
  5. From the command line, create the SpecialResource:

    $ oc create -f simple-kmod-configmap.yaml
Note

If you want to remove the simple-kmod kernel module from the node, you can delete the simple-kmod SpecialResource API object by using the oc delete command. The kernel module is unloaded when the driver container pod is deleted.
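
For example, assuming the manifest file name used earlier in this procedure, you can delete the object by running the following command:

$ oc delete -f simple-kmod-configmap.yaml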

Verification

The simple-kmod resources are deployed in the simple-kmod namespace, as specified in the object manifest. After a short time, the build pod for the simple-kmod driver container starts running. The build completes after a few minutes, and then the driver container pods start running.

  1. Display the status of the build pod by using the oc get pods command:

    $ oc get pods -n simple-kmod

    Example output

    NAME                                                  READY   STATUS      RESTARTS   AGE
    simple-kmod-driver-build-12813789169ac0ee-1-build     0/1     Completed   0          7m12s
    simple-kmod-driver-container-12813789169ac0ee-mjsnh   1/1     Running     0          8m2s
    simple-kmod-driver-container-12813789169ac0ee-qtkff   1/1     Running     0          8m2s

  2. Display the logs of the simple-kmod driver container image build by using the oc logs command with the build pod name obtained from the oc get pods command above:

    $ oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod
  3. To verify that the simple-kmod kernel modules are loaded, run the lsmod command in one of the driver container pods that were returned by the oc get pods command above:

    $ oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple

    Example output

    simple_procfs_kmod     16384  0
    simple_kmod            16384  0

Tip

The sro_kind_completed_info SRO Prometheus metric provides information about the state of the different objects being deployed, which can be useful for troubleshooting SRO CR installations. The SRO also provides other kinds of metrics that you can use to watch the health of your environment.
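
For example, you can run the following query from the Observe → Metrics page of the OpenShift Container Platform web console, or against any Prometheus endpoint that scrapes the Operator metrics (a minimal query; add label filters that match your environment if needed):

sro_kind_completed_info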

3.3.2. Building and running the simple-kmod SpecialResource for a hub-and-spoke topology

You can use the Special Resource Operator (SRO) in a hub-and-spoke deployment with Red Hat Advanced Cluster Management (RHACM) to connect a hub cluster to one or more managed clusters.

This example procedure shows how the SRO builds driver containers in the hub. The SRO watches hub cluster resources to identify the OpenShift Container Platform versions for the Helm charts that it uses to create the resources that it delivers to the spokes.

Prerequisites

  • You have a running OpenShift Container Platform cluster.
  • You installed the OpenShift CLI (oc).
  • You are logged into the OpenShift CLI as a user with cluster-admin privileges.
  • You installed the SRO.
  • You installed the Helm CLI (helm).
  • You installed Red Hat Advanced Cluster Management (RHACM).
  • You configured a container registry.

Procedure

  1. Create a templates directory by running the following command:

    $ mkdir -p charts/acm-simple-kmod-0.0.1/templates
  2. Change into the templates directory by running the following command:

    $ cd charts/acm-simple-kmod-0.0.1/templates
  3. Create template files for the BuildConfig, Policy, and PlacementRule resources.

    1. Save this YAML template for the build config in the templates directory as 0001-buildconfig.yaml:

      apiVersion: build.openshift.io/v1
      kind: BuildConfig
      metadata:
          labels:
              app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
          name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
          annotations:
              specialresource.openshift.io/wait: "true"
      spec:
          nodeSelector:
              node-role.kubernetes.io/worker: ""
          runPolicy: "Serial"
          triggers:
              - type: "ConfigChange"
              - type: "ImageChange"
          source:
              dockerfile: |
                  FROM {{ .Values.driverToolkitImage }} as builder
                  WORKDIR /build/
                  RUN git clone -b {{.Values.specialResourceModule.spec.set.git.ref}} {{.Values.specialResourceModule.spec.set.git.uri}}
                  WORKDIR /build/simple-kmod
                  RUN make all install KVER={{ .Values.kernelFullVersion }}
                  FROM registry.redhat.io/ubi8/ubi-minimal
                  RUN microdnf -y install kmod
                  COPY --from=builder /etc/driver-toolkit-release.json /etc/
                  COPY --from=builder /lib/modules/{{ .Values.kernelFullVersion }}/* /lib/modules/{{ .Values.kernelFullVersion }}/
          strategy:
              dockerStrategy:
                  dockerfilePath: Dockerfile.SRO
                  buildArgs:
                      - name: "IMAGE"
                        value: {{ .Values.driverToolkitImage  }}
                      {{- range $arg := .Values.buildArgs }}
                      - name: {{ $arg.name }}
                        value: {{ $arg.value }}
                      {{- end }}
                      - name: KVER
                        value: {{ .Values.kernelFullVersion }}
          output:
              to:
                  kind: DockerImage
                  name: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}
    2. Save this YAML template for the ACM policy in the templates directory as 0002-policy.yaml:

      apiVersion: policy.open-cluster-management.io/v1
      kind: Policy
      metadata:
          name: policy-{{.Values.specialResourceModule.metadata.name}}-ds
          annotations:
              policy.open-cluster-management.io/categories: CM Configuration Management
              policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
              policy.open-cluster-management.io/standards: NIST-CSF
      spec:
          remediationAction: enforce
          disabled: false
          policy-templates:
              - objectDefinition:
                  apiVersion: policy.open-cluster-management.io/v1
                  kind: ConfigurationPolicy
                  metadata:
                      name: config-{{.Values.specialResourceModule.metadata.name}}-ds
                  spec:
                      remediationAction: enforce
                      severity: low
                      namespaceselector:
                          exclude:
                              - kube-*
                          include:
                              - '*'
                      object-templates:
                          - complianceType: musthave
                            objectDefinition:
                              apiVersion: v1
                              kind: Namespace
                              metadata:
                                  name: {{.Values.specialResourceModule.spec.namespace}}
                          - complianceType: mustonlyhave
                            objectDefinition:
                              apiVersion: v1
                              kind: ServiceAccount
                              metadata:
                                  name: {{.Values.specialResourceModule.metadata.name}}
                                  namespace: {{.Values.specialResourceModule.spec.namespace}}
                          - complianceType: mustonlyhave
                            objectDefinition:
                              apiVersion: rbac.authorization.k8s.io/v1
                              kind: Role
                              metadata:
                                  name: {{.Values.specialResourceModule.metadata.name}}
                                  namespace: {{.Values.specialResourceModule.spec.namespace}}
                              rules:
                              - apiGroups:
                                  - security.openshift.io
                                resources:
                                  - securitycontextconstraints
                                verbs:
                                  - use
                                resourceNames:
                                  - privileged
                          - complianceType: mustonlyhave
                            objectDefinition:
                              apiVersion: rbac.authorization.k8s.io/v1
                              kind: RoleBinding
                              metadata:
                                  name: {{.Values.specialResourceModule.metadata.name}}
                                  namespace: {{.Values.specialResourceModule.spec.namespace}}
                              roleRef:
                                  apiGroup: rbac.authorization.k8s.io
                                  kind: Role
                                  name: {{.Values.specialResourceModule.metadata.name}}
                              subjects:
                              - kind: ServiceAccount
                                name: {{.Values.specialResourceModule.metadata.name}}
                                namespace: {{.Values.specialResourceModule.spec.namespace}}
                          - complianceType: musthave
                            objectDefinition:
                              apiVersion: apps/v1
                              kind: DaemonSet
                              metadata:
                                  labels:
                                      app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                                  name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                                  namespace: {{.Values.specialResourceModule.spec.namespace}}
                              spec:
                                  updateStrategy:
                                      type: OnDelete
                                  selector:
                                      matchLabels:
                                          app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                                  template:
                                      metadata:
                                          labels:
                                              app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                                      spec:
                                          priorityClassName: system-node-critical
                                          serviceAccount: {{.Values.specialResourceModule.metadata.name}}
                                          serviceAccountName: {{.Values.specialResourceModule.metadata.name}}
                                          containers:
                                          - image: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}
                                            name: {{.Values.specialResourceModule.metadata.name}}
                                            imagePullPolicy: Always
                                            command: [sleep, infinity]
                                            lifecycle:
                                              preStop:
                                                  exec:
                                                      command: ["modprobe", "-r", "-a" , "simple-kmod", "simple-procfs-kmod"]
                                            securityContext:
                                              privileged: true
    3. Save this YAML template for the placement of policies in the templates directory as 0003-policy.yaml:

      apiVersion: apps.open-cluster-management.io/v1
      kind: PlacementRule
      metadata:
          name: {{.Values.specialResourceModule.metadata.name}}-placement
      spec:
          clusterConditions:
          - status: "True"
            type: ManagedClusterConditionAvailable
          clusterSelector:
            matchExpressions:
            - key: name
              operator: NotIn
              values:
              - local-cluster
      ---
      apiVersion: policy.open-cluster-management.io/v1
      kind: PlacementBinding
      metadata:
          name: {{.Values.specialResourceModule.metadata.name}}-binding
      placementRef:
          apiGroup: apps.open-cluster-management.io
          kind: PlacementRule
          name: {{.Values.specialResourceModule.metadata.name}}-placement
      subjects:
          - apiGroup: policy.open-cluster-management.io
            kind: Policy
            name: policy-{{.Values.specialResourceModule.metadata.name}}-ds
    4. Change back to the charts/acm-simple-kmod-0.0.1 directory by running the following command:

      $ cd ..
    5. In the charts/acm-simple-kmod-0.0.1 directory, save the following YAML template for the chart as Chart.yaml:

      apiVersion: v2
      name: acm-simple-kmod
      description: Build ACM enabled simple-kmod driver with SpecialResourceOperator
      icon: https://avatars.githubusercontent.com/u/55542927
      type: application
      version: 0.0.1
      appVersion: 1.6.4
  4. From the charts directory, create the chart by running the following command:

    $ helm package acm-simple-kmod-0.0.1/

    Example output

    Successfully packaged chart and saved it to: <directory>/charts/acm-simple-kmod-0.0.1.tgz

  5. Create a config map to store the chart files:

    1. Create a directory for the config map files by running the following command:

      $ mkdir cm
    2. Copy the Helm chart into the cm directory by running the following command:

      $ cp acm-simple-kmod-0.0.1.tgz cm/acm-simple-kmod-0.0.1.tgz
    3. Create an index file specifying the Helm repository that contains the Helm chart by running the following command:

      $ helm repo index cm --url=cm://acm-simple-kmod/acm-simple-kmod-chart
    4. Create a namespace for the objects defined in the Helm chart by running the following command:

      $ oc create namespace acm-simple-kmod
    5. Create the config map object by running the following command:

      $ oc create cm acm-simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/acm-simple-kmod-0.0.1.tgz -n acm-simple-kmod
  6. Use the following SpecialResourceModule manifest to deploy the simple-kmod object by using the Helm chart that you created in the config map. Save this YAML file as acm-simple-kmod.yaml:

    apiVersion: sro.openshift.io/v1beta1
    kind: SpecialResourceModule
    metadata:
        name: acm-simple-kmod
    spec:
        namespace: acm-simple-kmod
        chart:
            name: acm-simple-kmod
            version: 0.0.1
            repository:
                name: acm-simple-kmod
                url: cm://acm-simple-kmod/acm-simple-kmod-chart
        set:
            kind: Values
            apiVersion: sro.openshift.io/v1beta1
            buildArgs:
                - name: "KMODVER"
                  value: "SRO"
            registry: <your_registry>
            git:
                ref: master
                uri: https://github.com/openshift-psap/kvc-simple-kmod.git
        watch:
                - path: "$.metadata.labels.openshiftVersion"
                  apiVersion: cluster.open-cluster-management.io/v1
                  kind: ManagedCluster
                  name: spoke1

    In the registry field, specify the URL of the registry that you have configured.
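
    The watch section tells the SRO to track the openshiftVersion label of the ManagedCluster object named spoke1. You can check the current value of that label from the hub by running the following command (a quick check, assuming your current oc context points at the hub cluster and the managed cluster is registered as spoke1, as in the example manifest):

    $ oc get managedcluster spoke1 -o jsonpath='{.metadata.labels.openshiftVersion}'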
  7. Create the special resource module by running the following command:

    $ oc apply -f charts/examples/acm-simple-kmod.yaml

Verification

  1. Check the status of the build pods by running the following command:

    $ KUBECONFIG=~/hub/auth/kubeconfig oc get pod -n acm-simple-kmod

    Example output

    NAME                                                   READY   STATUS      RESTARTS   AGE
    acm-simple-kmod-4-18-0-305-34-2-el8-4-x86-64-1-build   0/1     Completed   0          42m

  2. Check that the policies have been created by running the following command:

    $ KUBECONFIG=~/hub/auth/kubeconfig oc get placementrules,placementbindings,policies -n acm-simple-kmod

    Example output

    NAME                                                                      AGE   REPLICAS
    placementrule.apps.open-cluster-management.io/acm-simple-kmod-placement   40m
    
    NAME                                                                         AGE
    placementbinding.policy.open-cluster-management.io/acm-simple-kmod-binding   40m
    
    NAME                                                                 REMEDIATION ACTION   COMPLIANCE STATE   AGE
    policy.policy.open-cluster-management.io/policy-acm-simple-kmod-ds   enforce              Compliant          40m

  3. Check that the resources have been reconciled by running the following command:

    $ KUBECONFIG=~/hub/auth/kubeconfig oc get specialresourcemodule acm-simple-kmod -o json | jq -r '.status'

    Example output

    {
      "versions": {
        "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3330ef5a178435721ff4efdde762261a9c55212e9b4534385e04037693fbe4": {
          "complete": true
        }
      }
    }

  4. Check that the resources are running in the spoke by running the following command:

    $ KUBECONFIG=~/spoke1/kubeconfig oc get ds,pod -n acm-simple-kmod

    Example output

    NAME                                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64   3         3         3       3            3           <none>          26m
    
    NAME                                                     READY   STATUS    RESTARTS   AGE
    pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-brw78   1/1     Running   0          26m
    pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-fqh5h   1/1     Running   0          26m
    pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-m9sfd   1/1     Running   0          26m
