3.3. Using the Special Resource Operator
The Special Resource Operator (SRO) is used to manage the build and deployment of a driver container. The objects needed to build and deploy the container can be defined in a Helm chart.

The example in this section uses the simple-kmod SpecialResource object to point to a ConfigMap object that is created to store the Helm chart.
3.3.1. Building and running the simple-kmod SpecialResource using a config map
In this example, the simple-kmod kernel module shows how the Special Resource Operator (SRO) manages a driver container. The container is defined in the Helm chart templates that are stored in a config map.
Prerequisites
- You have a running OpenShift Container Platform cluster.
- You set the Image Registry Operator state to `Managed` on your cluster.
- You installed the OpenShift CLI (`oc`).
- You are logged in to the OpenShift CLI as a user with `cluster-admin` privileges.
- You installed the Node Feature Discovery (NFD) Operator.
- You installed the SRO.
- You installed the Helm CLI (`helm`).
Procedure
To create a simple-kmod `SpecialResource` object, define an image stream and build config to build the image, and a service account, role, role binding, and daemon set to run the container. The service account, role, and role binding are required to run the daemon set with the privileged security context so that the kernel module can be loaded.

Create a `templates` directory, and change into it:

```
$ mkdir -p chart/simple-kmod-0.0.1/templates
$ cd chart/simple-kmod-0.0.1/templates
```

Save this YAML template for the image stream and build config to the `templates` directory as `0000-buildconfig.yaml`:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  labels:
    app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
spec: {}
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  labels:
    app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}}
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}}
  annotations:
    specialresource.openshift.io/wait: "true"
    specialresource.openshift.io/driver-container-vendor: simple-kmod
    specialresource.openshift.io/kernel-affine: "true"
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  runPolicy: "Serial"
  triggers:
    - type: "ConfigChange"
    - type: "ImageChange"
  source:
    git:
      ref: {{.Values.specialresource.spec.driverContainer.source.git.ref}}
      uri: {{.Values.specialresource.spec.driverContainer.source.git.uri}}
    type: Git
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile.SRO
      buildArgs:
        - name: "IMAGE"
          value: {{ .Values.driverToolkitImage }}
        {{- range $arg := .Values.buildArgs }}
        - name: {{ $arg.name }}
          value: {{ $arg.value }}
        {{- end }}
        - name: KVER
          value: {{ .Values.kernelFullVersion }}
  output:
    to:
      kind: ImageStreamTag
      name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}}
```

Save the following YAML template for the RBAC resources and daemon set to the `templates` directory as `1000-driver-container.yaml`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  verbs:
  - use
  resourceNames:
  - privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
subjects:
- kind: ServiceAccount
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  namespace: {{.Values.specialresource.spec.namespace}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  annotations:
    specialresource.openshift.io/wait: "true"
    specialresource.openshift.io/state: "driver-container"
    specialresource.openshift.io/driver-container-vendor: simple-kmod
    specialresource.openshift.io/kernel-affine: "true"
    specialresource.openshift.io/from-configmap: "true"
spec:
  updateStrategy:
    type: OnDelete
  selector:
    matchLabels:
      app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  template:
    metadata:
      labels:
        app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
    spec:
      priorityClassName: system-node-critical
      serviceAccount: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      serviceAccountName: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      containers:
      - image: image-registry.openshift-image-registry.svc:5000/{{.Values.specialresource.spec.namespace}}/{{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}}
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        imagePullPolicy: Always
        command: ["/sbin/init"]
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "systemctl stop kmods-via-containers@{{.Values.specialresource.metadata.name}}"]
        securityContext:
          privileged: true
      nodeSelector:
        node-role.kubernetes.io/worker: ""
        feature.node.kubernetes.io/kernel-version.full: "{{.Values.kernelFullVersion}}"
```

Change into the `chart/simple-kmod-0.0.1` directory:

```
$ cd ..
```

In the `chart/simple-kmod-0.0.1` directory, save the following YAML for the chart as `Chart.yaml`:

```yaml
apiVersion: v2
name: simple-kmod
description: Simple kmod will deploy a simple kmod driver-container
icon: https://avatars.githubusercontent.com/u/55542927
type: application
version: 0.0.1
appVersion: 1.0.0
```
In the `chart` directory, create the chart by using the `helm package` command:

```
$ helm package simple-kmod-0.0.1/
```

Example output

```
Successfully packaged chart and saved it to: /data/<username>/git/<github_username>/special-resource-operator/yaml-for-docs/chart/simple-kmod-0.0.1/simple-kmod-0.0.1.tgz
```

Create a config map to store the chart files:
Create a directory for the config map files:

```
$ mkdir cm
```

Copy the Helm chart into the `cm` directory:

```
$ cp simple-kmod-0.0.1.tgz cm/simple-kmod-0.0.1.tgz
```

Create an index file specifying the Helm repo that contains the Helm chart:

```
$ helm repo index cm --url=cm://simple-kmod/simple-kmod-chart
```

Create a namespace for the objects defined in the Helm chart:

```
$ oc create namespace simple-kmod
```

Create the config map object:

```
$ oc create cm simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/simple-kmod-0.0.1.tgz -n simple-kmod
```
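The `oc create cm ... --from-file` call above stores each file under a key equal to its base name, which is what lets the SRO resolve `index.yaml` and the chart archive behind the `cm://simple-kmod/simple-kmod-chart` repository URL. A minimal sketch of that keying behavior (the `configmap_data` helper is hypothetical, for illustration only):

```python
# Sketch (assumption): mimic how "oc create cm --from-file" keys config map
# data: each file lands under its base name, with the file contents as value.
import os

def configmap_data(paths):
    """Map each file's base name to a placeholder for its contents."""
    return {os.path.basename(p): f"<contents of {p}>" for p in paths}

data = configmap_data(["cm/index.yaml", "cm/simple-kmod-0.0.1.tgz"])
print(sorted(data))  # ['index.yaml', 'simple-kmod-0.0.1.tgz']
```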
Use the following `SpecialResource` manifest to deploy the simple-kmod object by using the Helm chart that you created in the config map. Save this YAML as `simple-kmod-configmap.yaml`:

```yaml
apiVersion: sro.openshift.io/v1beta1
kind: SpecialResource
metadata:
  name: simple-kmod
spec:
  #debug: true
  namespace: simple-kmod
  chart:
    name: simple-kmod
    version: 0.0.1
    repository:
      name: example
      url: cm://simple-kmod/simple-kmod-chart
  set:
    kind: Values
    apiVersion: sro.openshift.io/v1beta1
    kmodNames: ["simple-kmod", "simple-procfs-kmod"]
    buildArgs:
    - name: "KMODVER"
      value: "SRO"
    driverContainer:
      source:
        git:
          ref: "master"
          uri: "https://github.com/openshift-psap/kvc-simple-kmod.git"
```

From the command line, create the `SpecialResource` file:

```
$ oc create -f simple-kmod-configmap.yaml
```
If you want to remove the simple-kmod kernel module from the node, delete the simple-kmod `SpecialResource` API object by using the `oc delete` command. The kernel module is unloaded when the driver container pod is deleted.
Verification
The simple-kmod resources are deployed in the `simple-kmod` namespace, as specified in the object manifest. After a short time, the build pod for the simple-kmod driver container starts running. The build completes after a few minutes, and then the driver container pods start running.
Display the status of the build pod by using the `oc get pods` command:

```
$ oc get pods -n simple-kmod
```

Example output

```
NAME                                                  READY   STATUS      RESTARTS   AGE
simple-kmod-driver-build-12813789169ac0ee-1-build     0/1     Completed   0          7m12s
simple-kmod-driver-container-12813789169ac0ee-mjsnh   1/1     Running     0          8m2s
simple-kmod-driver-container-12813789169ac0ee-qtkff   1/1     Running     0          8m2s
```

Display the logs of the simple-kmod driver container image build by using the `oc logs` command and the build pod name obtained from the `oc get pods` command above:

```
$ oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod
```

To verify that the simple-kmod kernel modules are loaded, execute the `lsmod` command in one of the driver container pods returned by the `oc get pods` command above:

```
$ oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple
```

Example output

```
simple_procfs_kmod    16384  0
simple_kmod           16384  0
```
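If you want to script this check rather than eyeball the `grep` output, the `lsmod` columns (module name, size, use count) are easy to parse. A minimal sketch using the sample output above (the `loaded_modules` helper is illustrative, not part of the procedure):

```python
# Sketch: parse lsmod-style output; each data line is
# "name  size  used_by_count [used_by,...]", and we only need column one.
LSMOD_OUTPUT = """\
simple_procfs_kmod    16384  0
simple_kmod           16384  0
"""

def loaded_modules(lsmod_text):
    """Return the module names from lsmod-style output."""
    return [line.split()[0] for line in lsmod_text.splitlines() if line.strip()]

print(loaded_modules(LSMOD_OUTPUT))  # ['simple_procfs_kmod', 'simple_kmod']
```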
3.3.2. Prometheus Special Resource Operator metrics

The `sro_kind_completed_info` SRO Prometheus metric provides information about the state of the different objects being deployed, which can be useful for troubleshooting SRO custom resource installations. The SRO also provides other types of metrics that you can use to watch the health of your environment.
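Metrics such as `sro_kind_completed_info` are scraped in the Prometheus text exposition format, so a troubleshooting script can read them without extra tooling. The sample line and its label names below are illustrative assumptions, not taken from the SRO source:

```python
# Sketch (assumption): a plausible sro_kind_completed_info sample in Prometheus
# text exposition format; label names here are hypothetical.
SAMPLE = 'sro_kind_completed_info{kind="DaemonSet",name="simple-kmod"} 1'

def parse_metric(line):
    """Split one Prometheus text-format sample into (name, labels, value)."""
    name, rest = line.split("{", 1)
    labels_part, value = rest.rsplit("}", 1)
    labels = dict(kv.split("=", 1) for kv in labels_part.split(","))
    labels = {k: v.strip('"') for k, v in labels.items()}
    return name, labels, float(value)

name, labels, value = parse_metric(SAMPLE)
```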
3.3.3. Building and running the simple-kmod SpecialResourceModule for a hub-and-spoke topology

You can use the Special Resource Operator (SRO) in a hub-and-spoke deployment with Red Hat Advanced Cluster Management (RHACM) to connect a hub cluster to one or more managed (spoke) clusters.

This example procedure shows how the driver container is built in the hub. The SRO watches hub cluster resources to identify the OpenShift Container Platform versions for the helm charts that it uses to create the resources that it delivers to spokes.
Prerequisites
- You have a running OpenShift Container Platform cluster.
- You installed the OpenShift CLI (`oc`).
- You are logged in to the OpenShift CLI as a user with `cluster-admin` privileges.
- You installed the SRO.
- You installed the Helm CLI (`helm`).
- You installed Red Hat Advanced Cluster Management (RHACM).
- You configured a container registry.
Procedure
Create the `templates` directory by running the following command:

```
$ mkdir -p charts/acm-simple-kmod-0.0.1/templates
```

Change into the `templates` directory by running the following command:

```
$ cd charts/acm-simple-kmod-0.0.1/templates
```

Create template files for the `BuildConfig`, `Policy`, and `PlacementRule` resources.

Save this YAML template for the build config to the `templates` directory as `0001-buildconfig.yaml`:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  labels:
    app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
  name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
  annotations:
    specialresource.openshift.io/wait: "true"
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  runPolicy: "Serial"
  triggers:
    - type: "ConfigChange"
    - type: "ImageChange"
  source:
    dockerfile: |
      FROM {{ .Values.driverToolkitImage }} as builder
      WORKDIR /build/
      RUN git clone -b {{.Values.specialResourceModule.spec.set.git.ref}} {{.Values.specialResourceModule.spec.set.git.uri}}
      WORKDIR /build/simple-kmod
      RUN make all install KVER={{ .Values.kernelFullVersion }}
      FROM registry.redhat.io/ubi8/ubi-minimal
      RUN microdnf -y install kmod
      COPY --from=builder /etc/driver-toolkit-release.json /etc/
      COPY --from=builder /lib/modules/{{ .Values.kernelFullVersion }}/* /lib/modules/{{ .Values.kernelFullVersion }}/
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile.SRO
      buildArgs:
        - name: "IMAGE"
          value: {{ .Values.driverToolkitImage }}
        {{- range $arg := .Values.buildArgs }}
        - name: {{ $arg.name }}
          value: {{ $arg.value }}
        {{- end }}
        - name: KVER
          value: {{ .Values.kernelFullVersion }}
  output:
    to:
      kind: DockerImage
      name: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}
```

Save this YAML template for the ACM policy to the `templates` directory as `0002-policy.yaml`:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-{{.Values.specialResourceModule.metadata.name}}-ds
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST-CSF
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: config-{{.Values.specialResourceModule.metadata.name}}-ds
      spec:
        remediationAction: enforce
        severity: low
        namespaceselector:
          exclude:
          - kube-*
          include:
          - '*'
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: {{.Values.specialResourceModule.spec.namespace}}
        - complianceType: mustonlyhave
          objectDefinition:
            apiVersion: v1
            kind: ServiceAccount
            metadata:
              name: {{.Values.specialResourceModule.metadata.name}}
              namespace: {{.Values.specialResourceModule.spec.namespace}}
        - complianceType: mustonlyhave
          objectDefinition:
            apiVersion: rbac.authorization.k8s.io/v1
            kind: Role
            metadata:
              name: {{.Values.specialResourceModule.metadata.name}}
              namespace: {{.Values.specialResourceModule.spec.namespace}}
            rules:
            - apiGroups:
              - security.openshift.io
              resources:
              - securitycontextconstraints
              verbs:
              - use
              resourceNames:
              - privileged
        - complianceType: mustonlyhave
          objectDefinition:
            apiVersion: rbac.authorization.k8s.io/v1
            kind: RoleBinding
            metadata:
              name: {{.Values.specialResourceModule.metadata.name}}
              namespace: {{.Values.specialResourceModule.spec.namespace}}
            roleRef:
              apiGroup: rbac.authorization.k8s.io
              kind: Role
              name: {{.Values.specialResourceModule.metadata.name}}
            subjects:
            - kind: ServiceAccount
              name: {{.Values.specialResourceModule.metadata.name}}
              namespace: {{.Values.specialResourceModule.spec.namespace}}
        - complianceType: musthave
          objectDefinition:
            apiVersion: apps/v1
            kind: DaemonSet
            metadata:
              labels:
                app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
              name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
              namespace: {{.Values.specialResourceModule.spec.namespace}}
            spec:
              updateStrategy:
                type: OnDelete
              selector:
                matchLabels:
                  app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
              template:
                metadata:
                  labels:
                    app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                spec:
                  priorityClassName: system-node-critical
                  serviceAccount: {{.Values.specialResourceModule.metadata.name}}
                  serviceAccountName: {{.Values.specialResourceModule.metadata.name}}
                  containers:
                  - image: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}
                    name: {{.Values.specialResourceModule.metadata.name}}
                    imagePullPolicy: Always
                    command: [sleep, infinity]
                    lifecycle:
                      preStop:
                        exec:
                          command: ["modprobe", "-r", "-a", "simple-kmod", "simple-procfs-kmod"]
                    securityContext:
                      privileged: true
```

Save this YAML template for the placement rule to the `templates` directory as `0003-policy.yaml`:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: {{.Values.specialResourceModule.metadata.name}}-placement
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: name
      operator: NotIn
      values:
      - local-cluster
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: {{.Values.specialResourceModule.metadata.name}}-binding
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: {{.Values.specialResourceModule.metadata.name}}-placement
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: policy-{{.Values.specialResourceModule.metadata.name}}-ds
```

Change into the `charts/acm-simple-kmod-0.0.1` directory by running the following command:

```
$ cd ..
```

In the `charts/acm-simple-kmod-0.0.1` directory, save the following YAML for the chart as `Chart.yaml`:

```yaml
apiVersion: v2
name: acm-simple-kmod
description: Build ACM enabled simple-kmod driver with SpecialResourceOperator
icon: https://avatars.githubusercontent.com/u/55542927
type: application
version: 0.0.1
appVersion: 1.6.4
```
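The templates above derive object names with Helm's `printf "%s-%s" ... | replace "." "-" | replace "_" "-" | trunc 63` pipeline, so a kernel version such as `4.18.0-305.45.1.el8_4.x86_64` becomes a valid label value and resource name (dots and underscores replaced, capped at the 63-character label limit). A minimal Python sketch of the same mangling (the `object_name` helper is illustrative, not part of the chart):

```python
# Sketch: replicate the Helm name mangling used in the templates.
# replace "." "-" | replace "_" "-" | trunc 63
def object_name(module_name, kernel_full_version):
    """Build the DaemonSet/BuildConfig name the templates would render."""
    raw = f"{module_name}-{kernel_full_version}"
    return raw.replace(".", "-").replace("_", "-")[:63]

print(object_name("acm-simple-kmod", "4.18.0-305.45.1.el8_4.x86_64"))
# acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64
```

The result matches the daemon set and pod names that appear in the verification output later in this procedure.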
In the `charts` directory, create the chart by using the following command:

```
$ helm package acm-simple-kmod-0.0.1/
```

Example output

```
Successfully packaged chart and saved it to: <directory>/charts/acm-simple-kmod-0.0.1.tgz
```

Create a config map to store the chart files.
Create a directory for the config map files by running the following command:

```
$ mkdir cm
```

Copy the Helm chart into the `cm` directory by running the following command:

```
$ cp acm-simple-kmod-0.0.1.tgz cm/acm-simple-kmod-0.0.1.tgz
```

Create an index file specifying the Helm repo that contains the Helm chart by running the following command:

```
$ helm repo index cm --url=cm://acm-simple-kmod/acm-simple-kmod-chart
```

Create a namespace for the objects defined in the Helm chart by running the following command:

```
$ oc create namespace acm-simple-kmod
```

Create the config map object by running the following command:

```
$ oc create cm acm-simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/acm-simple-kmod-0.0.1.tgz -n acm-simple-kmod
```
Use the following `SpecialResourceModule` manifest to deploy the `simple-kmod` object by using the Helm chart that you created in the config map. Save this YAML file as `acm-simple-kmod.yaml`:

```yaml
apiVersion: sro.openshift.io/v1beta1
kind: SpecialResourceModule
metadata:
  name: acm-simple-kmod
spec:
  namespace: acm-simple-kmod
  chart:
    name: acm-simple-kmod
    version: 0.0.1
    repository:
      name: acm-simple-kmod
      url: cm://acm-simple-kmod/acm-simple-kmod-chart
  set:
    kind: Values
    apiVersion: sro.openshift.io/v1beta1
    buildArgs:
    - name: "KMODVER"
      value: "SRO"
    registry: <your_registry> 1
    git:
      ref: master
      uri: https://github.com/openshift-psap/kvc-simple-kmod.git
  watch:
  - path: "$.metadata.labels.openshiftVersion"
    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    name: spoke1
```

1 Specify the URL for the registry that you configured.
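The `watch` entry in the manifest above tells the SRO to track the `$.metadata.labels.openshiftVersion` path on the `spoke1` `ManagedCluster` object. A minimal sketch of how such a dotted path resolves against a cluster object; this is a simplified lookup, not SRO's actual JSONPath implementation, and the label value is a made-up example:

```python
# Sketch (assumption): resolve a "$.a.b.c" style watch path against a
# ManagedCluster object represented as nested dicts. The openshiftVersion
# value here is hypothetical.
managed_cluster = {
    "apiVersion": "cluster.open-cluster-management.io/v1",
    "kind": "ManagedCluster",
    "metadata": {"name": "spoke1", "labels": {"openshiftVersion": "4.11.0"}},
}

def resolve(obj, path):
    """Walk a $.a.b.c path through nested dicts, key by key."""
    value = obj
    for key in path.lstrip("$.").split("."):
        value = value[key]
    return value

print(resolve(managed_cluster, "$.metadata.labels.openshiftVersion"))  # 4.11.0
```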
Create the special resource module by running the following command:

```
$ oc apply -f acm-simple-kmod.yaml
```
Verification
Check the status of the build pods by running the following command:

```
$ KUBECONFIG=~/hub/auth/kubeconfig oc get pod -n acm-simple-kmod
```

Example output

```
NAME                                                   READY   STATUS      RESTARTS   AGE
acm-simple-kmod-4-18-0-305-34-2-el8-4-x86-64-1-build   0/1     Completed   0          42m
```

Check that the policies were created by running the following command:

```
$ KUBECONFIG=~/hub/auth/kubeconfig oc get placementrules,placementbindings,policies -n acm-simple-kmod
```

Example output

```
NAME                                                                      AGE   REPLICAS
placementrule.apps.open-cluster-management.io/acm-simple-kmod-placement   40m

NAME                                                                         AGE
placementbinding.policy.open-cluster-management.io/acm-simple-kmod-binding   40m

NAME                                                                 REMEDIATION ACTION   COMPLIANCE STATE   AGE
policy.policy.open-cluster-management.io/policy-acm-simple-kmod-ds   enforce              Compliant          40m
```

Check that the resources were reconciled by running the following command:

```
$ KUBECONFIG=~/hub/auth/kubeconfig oc get specialresourcemodule acm-simple-kmod -o json | jq -r '.status'
```

Example output

```
{
  "versions": {
    "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3330ef5a178435721ff4efdde762261a9c55212e9b4534385e04037693fbe4": {
      "complete": true
    }
  }
}
```

Check that the resources are running in the spoke by running the following command:

```
$ KUBECONFIG=~/spoke1/kubeconfig oc get ds,pod -n acm-simple-kmod
```

Example output

```
NAME                                                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64   3         3         3       3            3           <none>          26m

NAME                                                     READY   STATUS    RESTARTS   AGE
pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-brw78   1/1     Running   0          26m
pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-fqh5h   1/1     Running   0          26m
pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-m9sfd   1/1     Running   0          26m
```
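For automated monitoring, the reconciliation check above can be done without `jq` by evaluating the same status JSON in a small script. A sketch using the example status output, where every watched version must report `complete` (the `all_versions_complete` helper is illustrative):

```python
# Sketch: evaluate the specialresourcemodule .status JSON shown in the
# verification step; the image digest key is taken from the example output.
import json

STATUS = json.loads("""
{
  "versions": {
    "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3330ef5a178435721ff4efdde762261a9c55212e9b4534385e04037693fbe4": {
      "complete": true
    }
  }
}
""")

def all_versions_complete(status):
    """True when every watched cluster version has been fully reconciled."""
    versions = status.get("versions", {})
    return bool(versions) and all(v.get("complete") for v in versions.values())

print(all_versions_complete(STATUS))  # True
```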