Chapter 5. Manually installing a single-node OpenShift cluster with GitOps ZTP


You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service.

Note

If you are creating multiple managed clusters, use the SiteConfig method described in "Deploying far edge sites with ZTP".

Important

The target bare-metal host must meet the networking, firmware, and hardware requirements listed in "Recommended cluster configuration for vDU application workloads".

5.1. Generating GitOps ZTP installation and configuration CRs manually

Use the generator entrypoint of the ztp-site-generate container to generate the site installation and configuration custom resources (CRs) for a cluster, based on SiteConfig and PolicyGenerator CRs.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in to the hub cluster as a user with cluster-admin privileges.

Procedure

  1. Create an output folder by running the following command:

    $ mkdir -p ./out
  2. Export the argocd directory from the ztp-site-generate container image:

    $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./out

    The ./out directory contains the reference PolicyGenerator and SiteConfig CRs in the out/argocd/example/ folder.

    Example output

    out
     └── argocd
          └── example
               ├── acmpolicygenerator
               │     ├── {policy-prefix}common-ranGen.yaml
               │     ├── {policy-prefix}example-sno-site.yaml
               │     ├── {policy-prefix}group-du-sno-ranGen.yaml
               │     ├── {policy-prefix}group-du-sno-validator-ranGen.yaml
               │     ├── ...
               │     ├── kustomization.yaml
               │     └── ns.yaml
               └── siteconfig
                      ├── example-sno.yaml
                      ├── KlusterletAddonConfigOverride.yaml
                      └── kustomization.yaml

  3. Create an output folder for the site installation CRs:

    $ mkdir -p ./site-install
  4. Modify the example SiteConfig CR for the type of cluster that you want to install. Copy example-sno.yaml to site-1-sno.yaml and modify the CR to match the details of the site and the bare-metal host that you want to install, for example:

    # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno
    ---
    apiVersion: ran.openshift.io/v1
    kind: SiteConfig
    metadata:
      name: "example-sno"
      namespace: "example-sno"
    spec:
      baseDomain: "example.com"
      pullSecretRef:
        name: "assisted-deployment-pull-secret"
      clusterImageSetNameRef: "openshift-4.16"
      sshPublicKey: "ssh-rsa AAAA..."
      clusters:
        - clusterName: "example-sno"
          networkType: "OVNKubernetes"
          # installConfigOverrides is a generic way of passing install-config
          # parameters through the siteConfig.  The 'capabilities' field configures
          # the composable openshift feature.  In this 'capabilities' setting, we
          # remove all the optional set of components.
          # Notes:
          # - OperatorLifecycleManager is needed for 4.15 and later
          # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier
          # - Ingress is needed for 4.16 and later
          installConfigOverrides: |
            {
              "capabilities": {
                "baselineCapabilitySet": "None",
                "additionalEnabledCapabilities": [
                  "NodeTuning",
                  "OperatorLifecycleManager",
                  "Ingress"
                ]
              }
            }
          # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+.
          # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo, i.e. sno-extra-manifest.
          # extraManifestPath: sno-extra-manifest
          clusterLabels:
            # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples
            du-profile: "latest"
            # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates:
            # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true'
            common: true
            # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""'
            group-du-sno: ""
            # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"'
            # Normally this should match or contain the cluster name so it only applies to a single cluster
            sites: "example-sno"
          clusterNetwork:
            - cidr: 1001:1::/48
              hostPrefix: 64
          machineNetwork:
            - cidr: 1111:2222:3333:4444::/64
          serviceNetwork:
            - 1001:2::/112
          additionalNTPSources:
            - 1111:2222:3333:4444::2
          # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate
          # please see Workload Partitioning Feature for a complete guide.
          cpuPartitioningMode: AllNodes
          # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster:
          #crTemplates:
          #  KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml"
          nodes:
            - hostName: "example-node1.example.com"
              role: "master"
              # Optionally; This can be used to configure desired BIOS setting on a host:
              #biosConfigRef:
              #  filePath: "example-hw.profile"
              bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
              bmcCredentialsName:
                name: "example-node1-bmh-secret"
              bootMACAddress: "AA:BB:CC:DD:EE:11"
              # Use UEFISecureBoot to enable secure boot
              bootMode: "UEFI"
              rootDeviceHints:
                deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
              # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details
              ignitionConfigOverride: |
                {
                  "ignition": {
                    "version": "3.2.0"
                  },
                  "storage": {
                    "disks": [
                      {
                        "device": "/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62",
                        "partitions": [
                          {
                            "label": "var-lib-containers",
                            "sizeMiB": 0,
                            "startMiB": 250000
                          }
                        ],
                        "wipeTable": false
                      }
                    ],
                    "filesystems": [
                      {
                        "device": "/dev/disk/by-partlabel/var-lib-containers",
                        "format": "xfs",
                        "mountOptions": [
                          "defaults",
                          "prjquota"
                        ],
                        "path": "/var/lib/containers",
                        "wipeFilesystem": true
                      }
                    ]
                  },
                  "systemd": {
                    "units": [
                      {
                        "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target",
                        "enabled": true,
                        "name": "var-lib-containers.mount"
                      }
                    ]
                  }
                }
              nodeNetwork:
                interfaces:
                  - name: eno1
                    macAddress: "AA:BB:CC:DD:EE:11"
                config:
                  interfaces:
                    - name: eno1
                      type: ethernet
                      state: up
                      ipv4:
                        enabled: false
                      ipv6:
                        enabled: true
                        address:
                          # For SNO sites with static IP addresses, the node-specific,
                          # API and Ingress IPs should all be the same and configured on
                          # the interface
                          - ip: 1111:2222:3333:4444::aaaa:1
                            prefix-length: 64
                  dns-resolver:
                    config:
                      search:
                        - example.com
                      server:
                        - 1111:2222:3333:4444::2
                  routes:
                    config:
                      - destination: ::/0
                        next-hop-interface: eno1
                        next-hop-address: 1111:2222:3333:4444::1
                        table-id: 254
    Note

    After extracting the reference CR configuration files from the out/extra-manifest directory of the ztp-site-generate container, you can use extraManifests.searchPaths to include the path to the git directory that contains those files. This allows the GitOps ZTP pipeline to apply the CR files during cluster installation. If you configure a searchPaths directory, the GitOps ZTP pipeline does not fetch manifests from the ztp-site-generate container during site installation.
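
    For example, a minimal sketch of setting extraManifests.searchPaths in the SiteConfig CR, assuming the extracted files are committed to illustrative sno-extra-manifest/ and custom-manifests/ directories in the same git repository:

      spec:
        clusters:
          - clusterName: "example-sno"
            extraManifests:
              searchPaths:
                - sno-extra-manifest/ # reference CRs extracted from the container
                - custom-manifests/   # additional user-provided CRs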

  5. Generate the Day 0 installation CRs by processing the modified SiteConfig CR, site-1-sno.yaml, with the following command:

    $ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 generator install site-1-sno.yaml /output

    Example output

    site-install
    └── site-1-sno
        ├── site-1-sno_agentclusterinstall_example-sno.yaml
        ├── site-1-sno_baremetalhost_example-node1.example.com.yaml
        ├── site-1-sno_clusterdeployment_example-sno.yaml
        ├── site-1-sno_configmap_example-sno.yaml
        ├── site-1-sno_infraenv_example-sno.yaml
        ├── site-1-sno_klusterletaddonconfig_example-sno.yaml
        ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
        ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
        ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
        ├── site-1-sno_managedcluster_example-sno.yaml
        ├── site-1-sno_namespace_example-sno.yaml
        └── site-1-sno_nmstateconfig_example-node1.example.com.yaml

  6. Optional: Process the reference SiteConfig CR with the -E option to generate just the Day 0 MachineConfig installation CRs for a particular cluster type. For example, run the following commands:

    1. Create an output folder for the MachineConfig CRs:

      $ mkdir -p ./site-machineconfig
    2. Generate the MachineConfig installation CRs:

      $ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 generator install -E site-1-sno.yaml /output

      Example output

      site-machineconfig
      └── site-1-sno
          ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
          ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
          └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml

  7. Generate and export the Day 2 configuration CRs by using the reference PolicyGenerator CRs from the previous step. Run the following commands:

    1. Create an output folder for the Day 2 CRs:

      $ mkdir -p ./ref
    2. Generate and export the Day 2 configuration CRs:

      $ podman run -it --rm -v `pwd`/out/argocd/example/acmpolicygenerator:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 generator config -N . /output

      The command generates example group-level and site-specific PolicyGenerator CRs for single-node OpenShift, three-node clusters, and standard clusters in the ./ref folder.

      Example output

      ref
       └── customResource
            ├── common
            ├── example-multinode-site
            ├── example-sno
            ├── group-du-3node
            ├── group-du-3node-validator
            │    └── Multiple-validatorCRs
            ├── group-du-sno
            ├── group-du-sno-validator
            ├── group-du-standard
            └── group-du-standard-validator
                 └── Multiple-validatorCRs

  8. Use the generated CRs as the basis for the CRs that you use to install the cluster. You can apply the installation CRs to the hub cluster as described in "Installing a single managed cluster" and as shown in the sketch below. The configuration CRs can be applied to the cluster after cluster installation is complete.
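
    For example, a minimal sketch of applying the generated Day 0 installation CRs to the hub cluster, assuming the ./site-install/site-1-sno output folder from step 5 and an oc session that is logged in to the hub cluster:

      $ oc apply -f ./site-install/site-1-sno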

Verification

  • Verify that the custom roles and labels are applied after you deploy the node:

    $ oc describe node example-node.example.com

Example output

Name:   example-node.example.com
Roles:  control-plane,example-label,master,worker
Labels: beta.kubernetes.io/arch=amd64
        beta.kubernetes.io/os=linux
        custom-label/parameter1=true
        kubernetes.io/arch=amd64
        kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com
        kubernetes.io/os=linux
        node-role.kubernetes.io/control-plane=
        node-role.kubernetes.io/example-label= 1
        node-role.kubernetes.io/master=
        node-role.kubernetes.io/worker=
        node.openshift.io/os_id=rhcos

1
The custom label is applied to the node.
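
The custom role and labels shown in this output are typically defined at install time in the SiteConfig CR. A minimal sketch, assuming that your SiteConfig version supports the nodeLabels field under the node definition (the label names here are illustrative):

    nodes:
      - hostName: "example-node1.example.com"
        role: "master"
        nodeLabels:
          node-role.kubernetes.io/example-label: ""  # surfaces as an additional node role
          custom-label/parameter1: "true"            # applied as a plain node label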