2.7. SiteConfig advanced topics
The SiteConfig operator provides additional capabilities, such as creating custom templates or scaling worker nodes in and out, that extend the standard operations covering most use cases. See the following documentation for advanced SiteConfig operator topics:
2.7.1. Creating custom templates with the SiteConfig operator
Create user-defined templates that are not provided in the default set of templates.
Required access: cluster administrator
Complete the following steps to create a custom template:
Create a YAML file named my-custom-secret.yaml that contains the cluster-level template in a ConfigMap resource:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-secret
  namespace: rhacm
data:
  MySecret: |-
    apiVersion: v1
    kind: Secret
    metadata:
      name: "{{ .Spec.ClusterName }}-my-custom-secret-key"
      namespace: "clusters"
      annotations:
        siteconfig.open-cluster-management.io/sync-wave: "1" 1
    type: Opaque
    data:
      key: <key>
1. The siteconfig.open-cluster-management.io/sync-wave annotation controls the order in which manifests are created, updated, or deleted.
Apply the custom template on the hub cluster by running the following command:

oc apply -f my-custom-secret.yaml
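The key field under data in the templated Secret must hold a base64-encoded value. As a minimal sketch, you can produce one in a shell; the value my-secret-value here is an illustrative placeholder, not a value from this procedure:

```shell
# Base64-encode a placeholder secret value for the data.key field.
# "my-secret-value" is illustrative only; substitute your own value.
printf 'my-secret-value' | base64
# prints bXktc2VjcmV0LXZhbHVl
```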
Reference your template in a ClusterInstance custom resource named clusterinstance-my-custom-secret.yaml:

spec:
  ...
  templateRefs:
    - name: ai-cluster-templates-v1
      namespace: rhacm
    - name: my-custom-secret
      namespace: rhacm
  ...
Apply the ClusterInstance custom resource by running the following command:

oc apply -f clusterinstance-my-custom-secret.yaml
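The template shown earlier names the rendered Secret "{{ .Spec.ClusterName }}-my-custom-secret-key", where the field expands to the cluster name from the ClusterInstance spec. As a sketch of that expansion only, for a hypothetical cluster named sno-1:

```shell
# Illustrative expansion of {{ .Spec.ClusterName }} for a hypothetical
# cluster name; the SiteConfig operator performs this rendering itself.
cluster_name='sno-1'
echo "${cluster_name}-my-custom-secret-key"
# prints sno-1-my-custom-secret-key
```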
2.7.2. Scaling in a single-node OpenShift cluster with the SiteConfig operator
Scale in a managed cluster that was installed by the SiteConfig operator. You can scale in a cluster by removing worker nodes.
Required access: cluster administrator
2.7.2.1. Prerequisites
- If you use GitOps ZTP, you have configured the GitOps ZTP environment. To configure the environment, see Preparing the hub cluster for GitOps ZTP.
- You have the default installation templates. To familiarize yourself with the default templates, see Default set of templates.
- You have installed a cluster with the SiteConfig operator. To install a cluster with the SiteConfig operator, see Installing single-node OpenShift clusters with the SiteConfig operator.
2.7.2.2. Annotating the worker node
Annotate the worker node that you want to remove.
Complete the following steps to annotate a worker node in the managed cluster:
Add the annotation in the extraAnnotations field of the worker node entry in the ClusterInstance custom resource that you used to provision the cluster:

spec:
  ...
  nodes:
    - hostName: "worker-node2.example.com"
      role: "worker"
      ironicInspect: ""
      extraAnnotations:
        BareMetalHost:
          bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: "true"
  ...
Apply the changes. See the following options:

- If you use Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:

  oc apply -f <clusterinstance>.yaml

- If you use GitOps ZTP, push the change to your Git repository and wait for Argo CD to sync it.
Verify that the annotation is applied to the BareMetalHost worker resource by running the following command on the hub cluster:

oc get bmh -n <clusterinstance_namespace> worker-node2.example.com -ojsonpath='{.metadata.annotations}' | jq
Example output of a successfully applied annotation:

{
  "baremetalhost.metal3.io/detached": "assisted-service-controller",
  "bmac.agent-install.openshift.io/hostname": "worker-node2.example.com",
  "bmac.agent-install.openshift.io/remove-agent-and-node-on-delete": "true",
  "bmac.agent-install.openshift.io/role": "master",
  "inspect.metal3.io": "disabled",
  "siteconfig.open-cluster-management.io/sync-wave": "1"
}
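If you want to check a single annotation rather than read the whole object, a jq filter like the following works; this sketch runs against a saved, reduced copy of the annotations object (the file path and its contents are hypothetical, taken from the example output above):

```shell
# Save a reduced, hypothetical copy of the annotations object, then
# extract the remove-on-delete annotation with jq.
cat > /tmp/bmh-annotations.json <<'EOF'
{
  "bmac.agent-install.openshift.io/hostname": "worker-node2.example.com",
  "bmac.agent-install.openshift.io/remove-agent-and-node-on-delete": "true"
}
EOF
jq -r '."bmac.agent-install.openshift.io/remove-agent-and-node-on-delete"' /tmp/bmh-annotations.json
# prints true
```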
2.7.2.3. Deleting the BareMetalHost resource of the worker node

Delete the BareMetalHost resource of the worker node that you want to remove.
Complete the following steps to remove a worker node from the managed cluster:
Update the node object that you want to delete in the existing ClusterInstance custom resource with the following configuration:

...
spec:
  ...
  nodes:
    - hostName: "worker-node2.example.com"
      ...
      pruneManifests:
        - apiVersion: metal3.io/v1alpha1
          kind: BareMetalHost
...
Apply the changes. See the following options:

- If you use Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:

  oc apply -f <clusterinstance>.yaml

- If you use GitOps ZTP, push the change to your Git repository and wait for Argo CD to sync it.
Verify that the BareMetalHost resource is removed by running the following command on the hub cluster:

oc get bmh -n <clusterinstance_namespace> --watch --kubeconfig <hub_cluster_kubeconfig_filename>
See the following example output:

NAME                       STATE                        CONSUMER   ONLINE   ERROR   AGE
master-node1.example.com   provisioned                             true             81m
worker-node2.example.com   deprovisioning                          true             44m
worker-node2.example.com   powering off before delete              true             20h
worker-node2.example.com   deleting                                true             50m
Verify that the Agent resource is removed by running the following command on the hub cluster:

oc get agents -n <clusterinstance_namespace> --kubeconfig <hub_cluster_kubeconfig_filename>
See the following example output:

NAME                       CLUSTER                  APPROVED   ROLE     STAGE
master-node1.example.com   <managed_cluster_name>   true       master   Done
master-node2.example.com   <managed_cluster_name>   true       master   Done
master-node3.example.com   <managed_cluster_name>   true       master   Done
worker-node1.example.com   <managed_cluster_name>   true       worker   Done
Verify that the Node resource is removed by running the following command on the managed cluster:

oc get nodes --kubeconfig <managed_cluster_kubeconfig_filename>
See the following example output:

NAME                       STATUS                        ROLES                  AGE   VERSION
worker-node2.example.com   NotReady,SchedulingDisabled   worker                 19h   v1.30.5
worker-node1.example.com   Ready                         worker                 19h   v1.30.5
master-node1.example.com   Ready                         control-plane,master   19h   v1.30.5
master-node2.example.com   Ready                         control-plane,master   19h   v1.30.5
master-node3.example.com   Ready                         control-plane,master   19h   v1.30.5
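To spot the draining node at a glance, you can filter entries whose STATUS is not exactly Ready. This sketch runs against a saved, hypothetical copy of the output above rather than against a live cluster:

```shell
# Save a reduced, hypothetical copy of the oc get nodes output, then
# print the names of nodes whose STATUS column is not exactly "Ready".
cat > /tmp/nodes.txt <<'EOF'
NAME                       STATUS                        ROLES                  AGE   VERSION
worker-node2.example.com   NotReady,SchedulingDisabled   worker                 19h   v1.30.5
worker-node1.example.com   Ready                         worker                 19h   v1.30.5
master-node1.example.com   Ready                         control-plane,master   19h   v1.30.5
EOF
awk 'NR > 1 && $2 != "Ready" {print $1}' /tmp/nodes.txt
# prints worker-node2.example.com
```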
After the BareMetalHost object of the worker node is successfully deleted, remove the associated worker node definition from the spec.nodes section of the ClusterInstance resource.
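Taken together, the two scale-in edits from the procedures above might look like the following sketch of the worker node entry; this combined excerpt is assembled from the earlier examples for illustration and is not an additional required step:

```yaml
spec:
  nodes:
    - hostName: "worker-node2.example.com"
      role: "worker"
      ironicInspect: ""
      extraAnnotations:
        BareMetalHost:
          bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: "true"
      pruneManifests:
        - apiVersion: metal3.io/v1alpha1
          kind: BareMetalHost
```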
2.7.3. Scaling out a single-node OpenShift cluster with the SiteConfig operator
Scale out a managed cluster that was installed by the SiteConfig operator. You can scale out a cluster by adding worker nodes.
Required access: cluster administrator
2.7.3.1. Prerequisites
- If you use GitOps ZTP, you have configured the GitOps ZTP environment. To configure the environment, see Preparing the hub cluster for GitOps ZTP.
- You have the default installation templates. To familiarize yourself with the default templates, see Default set of templates.
- You have installed a cluster with the SiteConfig operator. To install a cluster with the SiteConfig operator, see Installing single-node OpenShift clusters with the SiteConfig operator.
2.7.3.2. Adding the worker node
Add a worker node by updating the ClusterInstance custom resource that you used to provision the cluster.
Complete the following steps to add a worker node to the managed cluster:
Define the new node object in the existing ClusterInstance custom resource:

spec:
  ...
  nodes:
    - hostName: "<host_name>"
      role: "worker"
      templateRefs:
        - name: ai-node-templates-v1
          namespace: rhacm
      bmcAddress: "<bmc_address>"
      bmcCredentialsName:
        name: "<bmc_credentials_name>"
      bootMACAddress: "<boot_mac_address>"
  ...
Apply the changes. See the following options:

- If you use Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:

  oc apply -f <clusterinstance>.yaml

- If you use GitOps ZTP, push the change to your Git repository and wait for Argo CD to sync it.
Verify that the new BareMetalHost resource is added by running the following command on the hub cluster:

oc get bmh -n <clusterinstance_namespace> --watch --kubeconfig <hub_cluster_kubeconfig_filename>
See the following example output:

NAME                       STATE          CONSUMER   ONLINE   ERROR   AGE
master-node1.example.com   provisioned               true             81m
worker-node2.example.com   provisioning              true             44m
Verify that the new Agent resource is added by running the following command on the hub cluster:

oc get agents -n <clusterinstance_namespace> --kubeconfig <hub_cluster_kubeconfig_filename>
See the following example output:

NAME                       CLUSTER                  APPROVED   ROLE     STAGE
master-node1.example.com   <managed_cluster_name>   true       master   Done
master-node2.example.com   <managed_cluster_name>   true       master   Done
master-node3.example.com   <managed_cluster_name>   true       master   Done
worker-node1.example.com   <managed_cluster_name>   false      worker
worker-node2.example.com   <managed_cluster_name>   true       worker   Starting installation
worker-node2.example.com   <managed_cluster_name>   true       worker   Installing
worker-node2.example.com   <managed_cluster_name>   true       worker   Writing image to disk
worker-node2.example.com   <managed_cluster_name>   true       worker   Waiting for control plane
worker-node2.example.com   <managed_cluster_name>   true       worker   Rebooting
worker-node2.example.com   <managed_cluster_name>   true       worker   Joined
worker-node2.example.com   <managed_cluster_name>   true       worker   Done
Verify that the new Node resource is added by running the following command on the managed cluster:

oc get nodes --kubeconfig <managed_cluster_kubeconfig_filename>
See the following example output:

NAME                       STATUS   ROLES                  AGE   VERSION
worker-node2.example.com   Ready    worker                 1h    v1.30.5
worker-node1.example.com   Ready    worker                 19h   v1.30.5
master-node1.example.com   Ready    control-plane,master   19h   v1.30.5
master-node2.example.com   Ready    control-plane,master   19h   v1.30.5
master-node3.example.com   Ready    control-plane,master   19h   v1.30.5
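The bmcCredentialsName field in the node definition above refers to a Secret that holds the BMC credentials for the new host. A minimal sketch, assuming a Secret in the ClusterInstance namespace with base64-encoded username and password keys (the standard layout for bare-metal host credentials; the placeholder names are from the example above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: "<bmc_credentials_name>"
  namespace: "<clusterinstance_namespace>"
type: Opaque
data:
  username: <base64_encoded_username>
  password: <base64_encoded_password>
```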
2.7.4. Mirroring images for disconnected environments
You can deploy clusters with the SiteConfig operator by using the Image Based Install Operator as the underlying operator. If you deploy clusters with the Image Based Install Operator in a disconnected environment, you must provide your image mirrors as extra manifests in the ClusterInstance custom resource.
Required access: cluster administrator
Complete the following steps to mirror images for a disconnected environment:
Create a YAML file named idms-configmap.yaml for your ImageDigestMirrorSet object that contains your mirror registry location:

kind: ConfigMap
apiVersion: v1
metadata:
  name: "idms-configmap"
  namespace: "example-sno"
data:
  99-example-idms.yaml: |
    apiVersion: config.openshift.io/v1
    kind: ImageDigestMirrorSet
    metadata:
      name: example-idms
    spec:
      imageDigestMirrors:
        - mirrors:
            - mirror.registry.example.com/image-repo/image
          source: registry.example.com/image-repo/image
Important: Define the ConfigMap resource that contains the extra manifests in the same namespace as the ClusterInstance resource.
Create the resource by running the following command on the hub cluster:

oc apply -f idms-configmap.yaml
Reference the ImageDigestMirrorSet object in the ClusterInstance custom resource:

apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  ...
  extraManifestsRefs:
    - name: idms-configmap
  ...