6.5. Creating a distributed control plane


Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:

  • Create a distributed control plane.
  • Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.

The following procedure creates a control plane in which the service pods are spread across all zones by default. The storage services override the default service pod placement to schedule the cinderVolumes, cinderBackup, glanceAPI, and manilaShares service pods in specific zones. You can use this control plane to troubleshoot issues and test the environment before you add all the customizations that you require. You can add service customizations to the deployed environment. For more information about how to customize the control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

Note
  • The following service examples use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP addresses that you created in Preparing RHOCP for RHOSO network VIPs for BGP.
  • You can use the topologyRef field to place the pods of each service in a specific zone. If you do not specify it, pods are automatically distributed evenly across the zones.

Prerequisites

Procedure

  1. Create a file named distributed_control_plane.yaml on your workstation to define the OpenStackControlPlane CR:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: distributed-control-plane
      namespace: openstack
  2. Specify that service pods are spread across the zones of the distributed zone environment when they are created:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: distributed-control-plane
      namespace: openstack
    spec:
      topologyRef:
        name: spread-pods
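    • The spread-pods topology referenced here must exist as a Topology CR before you create the control plane. The following is a minimal sketch of such a CR, assuming the standard topology.kubernetes.io/zone node label; adjust the label key and constraints to match your cluster:

      apiVersion: topology.openstack.org/v1beta1
      kind: Topology
      metadata:
        name: spread-pods
        namespace: openstack
      spec:
        # Spread pods evenly across zones (maxSkew 1), but still
        # schedule them if a zone cannot accept more pods.
        topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway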
  3. Specify the Secret CR that you created in Providing secure access to the Red Hat OpenStack Services on OpenShift services to provide secure access to the RHOSO service pods:

    spec:
      ...
      secret: osp-secret
  4. Specify the storageClass that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

    spec:
      ...
      storageClass: <RHOCP_storage_class>
    • Replace <RHOCP_storage_class> with the storage class that you created for your RHOCP cluster storage back end. For more information about storage classes, see Creating a storage class.
  5. If you use Red Hat Ceph Storage, define the zones and services that require access to the Red Hat Ceph Storage secrets:

     extraMounts:
      - extraVol:
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/ceph
            name: ceph-az1
            readOnly: true
          propagation:
          - az1
          - CinderBackup
          volumes:
          - name: ceph-az1
            projected:
              sources:
              - secret:
                  name: ceph-conf-files-az1
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/ceph
            name: ceph-az2
            readOnly: true
          propagation:
          - az2
          volumes:
          - name: ceph-az2
            projected:
              sources:
              - secret:
                  name: ceph-conf-files-az2
        - extraVolType: Ceph
          mounts:
          - mountPath: /etc/ceph
            name: ceph-az3
            readOnly: true
          propagation:
          - az3
          volumes:
          - name: ceph-az3
            projected:
              sources:
              - secret:
                  name: ceph-conf-files-az3
  6. Configure the Block Storage service (cinder).

    • If you use the Block Storage service with Red Hat Ceph Storage, add the following configuration:

        cinder:
          template:
            customServiceConfig: |
              [DEFAULT]
              storage_availability_zone = az1,az2,az3
            databaseInstance: openstack
            secret: osp-secret
            cinderAPI:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            cinderScheduler:
              replicas: 1
            cinderBackup:
              customServiceConfig: |
                [DEFAULT]
                backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
                backup_ceph_conf = /etc/ceph/az1.conf
                backup_ceph_pool = backups
                backup_ceph_user = openstack
              networkAttachments:
              - storage
              replicas: 1
              topologyRef:
                name: zone1-node-affinity
            cinderVolumes:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = ceph
                  glance_api_servers = https://glance-az1-internal.openstack.svc:9292
                  [ceph]
                  volume_backend_name = ceph
                  volume_driver = cinder.volume.drivers.rbd.RBDDriver
                  rbd_ceph_conf = /etc/ceph/az1.conf
                  rbd_user = openstack
                  rbd_pool = volumes
                  rbd_flatten_volume_from_snapshot = False
                  rbd_secret_uuid = 7fd06e2a-a3e4-49fa-980c-1ed0865d2886
                  rbd_cluster_name = az1
                  backend_availability_zone = az1
                topologyRef:
                  name: zone1-node-affinity
                networkAttachments:
                - storage
                replicas: 1
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = ceph
                  glance_api_servers = https://glance-az2-internal.openstack.svc:9292
                  [ceph]
                  volume_backend_name = ceph
                  volume_driver = cinder.volume.drivers.rbd.RBDDriver
                  rbd_ceph_conf = /etc/ceph/az2.conf
                  rbd_user = openstack
                  rbd_pool = volumes
                  rbd_flatten_volume_from_snapshot = False
                  rbd_secret_uuid = d4596afb-43f2-4e5d-a204-bc38a3485dc3
                  rbd_cluster_name = az2
                  backend_availability_zone = az2
                topologyRef:
                  name: zone2-node-affinity
                networkAttachments:
                - storage
                replicas: 1
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = ceph
                  glance_api_servers = https://glance-az3-internal.openstack.svc:9292
                  [ceph]
                  volume_backend_name = ceph
                  volume_driver = cinder.volume.drivers.rbd.RBDDriver
                  rbd_ceph_conf = /etc/ceph/az3.conf
                  rbd_user = openstack
                  rbd_pool = volumes
                  rbd_flatten_volume_from_snapshot = False
                  rbd_secret_uuid = 1c348616-c493-4369-91f2-a55e4f404fbe
                  rbd_cluster_name = az3
                  backend_availability_zone = az3
                topologyRef:
                  name: zone3-node-affinity
                networkAttachments:
                - storage
                replicas: 1
    • If you use the Block Storage service with third-party storage, add the following configuration. This example uses NetApp to configure cinderVolumes in three availability zones (AZs):

      cinder:
        apiOverride:
          route:
            haproxy.router.openshift.io/timeout: 60s
        template:
          customServiceConfig: |
            [DEFAULT]
            storage_availability_zone = az1
          cinderAPI:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            replicas: 3
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              # TODO:
            networkAttachments:
            - storage
            replicas: 0
            topologyRef:
              name: azone-node-affinity
          cinderScheduler:
            replicas: 1
          cinderVolumes:
            ontap-iscsi-az1:
              customServiceConfig: |
                [DEFAULT]
                glance_api_servers = https://glance-az1-internal.openstack.svc:9292
                [ontap-az1]
                backend_availability_zone = az1
                volume_backend_name=ontap-az1
                volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
                netapp_server_hostname=10.0.0.5
                netapp_server_port=80
                netapp_storage_protocol=iscsi
                netapp_storage_family=ontap_cluster
                consistencygroup_support=True
              customServiceConfigSecrets:
              - cinder-volume-secrets-az1
              topologyRef:
                name: azone-node-affinity
            ontap-iscsi-az2:
              customServiceConfig: |
                [DEFAULT]
                glance_api_servers = https://glance-az2-internal.openstack.svc:9292
                [ontap-az2]
                backend_availability_zone = az2
                volume_backend_name=ontap-az2
                volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
                netapp_server_hostname=10.0.0.6
                netapp_server_port=80
                netapp_storage_protocol=iscsi
                netapp_storage_family=ontap_cluster
                consistencygroup_support=True
              customServiceConfigSecrets:
              - cinder-volume-secrets-az2
              topologyRef:
                name: bzone-node-affinity
            ontap-iscsi-az3:
              customServiceConfig: |
                [DEFAULT]
                glance_api_servers = https://glance-az3-internal.openstack.svc:9292
                [ontap-az3]
                backend_availability_zone = az3
                volume_backend_name=ontap-az3
                volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
                netapp_server_hostname=10.0.0.7
                netapp_server_port=80
                netapp_storage_protocol=iscsi
                netapp_storage_family=ontap_cluster
                consistencygroup_support=True
              customServiceConfigSecrets:
              - cinder-volume-secrets-az3
              topologyRef:
                name: czone-node-affinity
          databaseInstance: openstack
          preserveJobs: false
          secret: osp-secret
    • In this example, the Block Storage backup service under cinderBackup is disabled. If you require the Block Storage backup service:

      • Increase the replica count.
      • Set topologyRef to run the backup service in the relevant AZ.
      • Set customServiceConfig for the appropriate back end, as described in Storage back ends in Configuring persistent storage.
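      • Applied to the NetApp example above, those changes look roughly like the following fragment. The backup driver shown is only an illustration; replace it with the settings for your back end:

          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              # Example driver only; use the settings for your back end,
              # as described in Configuring persistent storage
              backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
            networkAttachments:
            - storage
            replicas: 1             # increased from 0 to enable the service
            topologyRef:
              name: azone-node-affinity   # pin to the relevant AZ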
  7. Configure the Compute service (nova):

      nova:
        template:
          apiServiceTemplate:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          metadataServiceTemplate:
            replicas: 3
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          schedulerServiceTemplate:
            replicas: 3
          cellTemplates:
            cell0:
              cellDatabaseAccount: nova-cell0
              cellDatabaseInstance: openstack
              cellMessageBusInstance: rabbitmq
              hasAPIAccess: true
            cell1:
              cellDatabaseAccount: nova-cell1
              cellDatabaseInstance: openstack-cell1
              cellMessageBusInstance: rabbitmq-cell1
              conductorServiceTemplate:
                replicas: 1
              noVNCProxyServiceTemplate:
                enabled: true
                networkAttachments:
                - ctlplane
              hasAPIAccess: true
          secret: osp-secret
  8. Configure the DNS service for the data plane:

      dns:
        template:
          options:
          - key: server
            values:
            - 192.168.122.1
          - key: server
            values:
            - 192.168.122.2
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: ctlplane
                  metallb.universe.tf/allow-shared-ip: ctlplane
                  metallb.universe.tf/loadBalancerIPs: 192.168.122.80
              spec:
                type: LoadBalancer
          replicas: 2
    • template.options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, two key-value pairs are defined because two DNS servers are configured to forward requests to.
    • template.options.key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instances. Set to one of the following valid values:

      • server
      • rev-server
      • srv-host
      • txt-record
      • ptr-record
      • rebind-domain-ok
      • naptr-record
      • CNAME
      • host-record
      • caa-record
      • dns-rr
      • auth-zone
      • synth-domain
      • no-negcache
      • local
    • template.options.values: Specifies the values for the dnsmasq parameter. You can specify a generic DNS server as a value, for example 1.1.1.1, or a DNS server for a specific domain, for example /google.com/8.8.8.8.
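    • For example, the following options fragment forwards queries for google.com to 8.8.8.8 and all other queries to a generic resolver (the addresses are illustrative):

        options:
        # Domain-specific forwarder: queries for google.com go to 8.8.8.8
        - key: server
          values:
          - /google.com/8.8.8.8
        # Generic forwarder for all other queries
        - key: server
          values:
          - 1.1.1.1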
  9. Configure the Identity service (keystone):

      keystone:
        template:
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          databaseInstance: openstack
          secret: osp-secret
          replicas: 3
  10. Configure the Image service (glance).

    • If you use the Image service with Red Hat Ceph Storage, add the following configuration:

        glance:
          template:
            databaseInstance: openstack
            glanceAPIs:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  enabled_backends = az1:rbd
                  [glance_store]
                  default_backend = az1
                  [az1]
                  rbd_store_ceph_conf = /etc/ceph/az1.conf
                  store_description = "az1 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.81
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: zone1-node-affinity
                type: edge
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  enabled_backends = az1:rbd,az2:rbd
                  [glance_store]
                  default_backend = az2
                  [az2]
                  rbd_store_ceph_conf = /etc/ceph/az2.conf
                  store_description = "az2 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                  [az1]
                  rbd_store_ceph_conf = /etc/ceph/az1.conf
                  store_description = "az1 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.82
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: zone2-node-affinity
                type: edge
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  enabled_backends = az1:rbd,az3:rbd
                  [glance_store]
                  default_backend = az3
                  [az3]
                  rbd_store_ceph_conf = /etc/ceph/az3.conf
                  store_description = "az3 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                  [az1]
                  rbd_store_ceph_conf = /etc/ceph/az1.conf
                  store_description = "az1 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.83
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: zone3-node-affinity
                type: edge
              default:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  enabled_backends = az1:rbd,az2:rbd,az3:rbd
                  [glance_store]
                  default_backend = az1
                  [az1]
                  rbd_store_ceph_conf = /etc/ceph/az1.conf
                  store_description = "az1 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                  [az2]
                  rbd_store_ceph_conf = /etc/ceph/az2.conf
                  store_description = "az2 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                  [az3]
                  rbd_store_ceph_conf = /etc/ceph/az3.conf
                  store_description = "az3 RBD backend"
                  rbd_store_pool = images
                  rbd_store_user = openstack
                  rbd_thin_provisioning = True
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                      spec:
                        type: LoadBalancer
                replicas: 3
                type: split
            keystoneEndpoint: default
            storage:
              storageClass: local-storage
              storageRequest: 10G
          uniquePodNames: true
    • If you use the Image service with third-party storage, add the following configuration. In this example, you have three AZs and you use the Block Storage service as the back end for the Image service. You create a separate set of glanceAPI servers for each AZ, which use multistore to write to multiple Block Storage service back ends. If you configure multistore to use the Block Storage service as its back end, complete step 25 when you finish creating the control plane, because you must create volume types for the cinder-volume service when you create the control plane.

      This example uses the Block Storage service with iSCSI to store Image service images, which can support multipathing, together with multipath options such as cinder_use_multipath and cinder_do_extend_attached. Do not use these options if the storage protocol you are using, or your environment, does not support multipathing.

            glanceAPIs:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = az1:cinder
                  enabled_import_methods = [web-download,copy-image,glance-direct]
                  [glance_store]
                  default_backend = az1
                  [az1]
                  store_description = AZ1 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az1
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.81
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: azone-node-affinity
                type: edge
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = az1:cinder,az2:cinder
                  [glance_store]
                  default_backend = az2
                  [az2]
                  store_description = AZ2 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az2
                  [az1]
                  store_description = AZ1 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az1
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.82
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: bzone-node-affinity
                type: edge
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = az1:cinder,az3:cinder
                  [glance_store]
                  default_backend = az3
                  [az3]
                  store_description = AZ3 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az3
                  [az1]
                  store_description = AZ1 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az1
                networkAttachments:
                - storage
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.83
                      spec:
                        type: LoadBalancer
                replicas: 2
                topologyRef:
                  name: czone-node-affinity
                type: edge
              default:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_backends = az1:cinder,az2:cinder,az3:cinder
                  [glance_store]
                  default_backend = az1
                  [az1]
                  store_description = AZ1 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az1
                  [az2]
                  store_description = AZ2 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az2
                  [az3]
                  store_description = AZ3 NetApp iscsi cinder backend
                  cinder_store_auth_address = {{ .KeystoneInternalURL }}
                  cinder_store_user_name = {{ .ServiceUser }}
                  cinder_store_password = {{ .ServicePassword }}
                  cinder_store_project_name = service
                  cinder_catalog_info = volumev3::internalURL
                  cinder_use_multipath = true
                  cinder_do_extend_attached = true
                  cinder_volume_type = glance-ontap-az3
  11. Configure the Key Management service (barbican):

      barbican:
        template:
          databaseInstance: openstack
          secret: osp-secret
          barbicanAPI:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          barbicanWorker:
            replicas: 3
          barbicanKeystoneListener:
            replicas: 1
  12. Configure the Networking service (neutron):

      neutron:
        template:
          customServiceConfig: |
            [DEFAULT]
            vlan_transparent = true
            debug = true
            [ovs]
            igmp_snooping_enable = true
          databaseInstance: openstack
          networkAttachments:
          - internalapi
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          replicas: 3
          secret: osp-secret
  13. Ensure that the Object Storage service (swift) is disabled:

      swift:
        enabled: false
  14. Configure OVN:

      ovn:
        template:
          ovnController:
            external-ids:
              enable-chassis-as-gateway: false
            networkAttachment: tenant
            nicMappings:
              datacentre: ospbr
          ovnDBCluster:
            ovndbcluster-nb:
              dbType: NB
              networkAttachment: internalapi
              replicas: 3
              storageRequest: 10G
            ovndbcluster-sb:
              dbType: SB
              networkAttachment: internalapi
              replicas: 3
              storageRequest: 10G
          ovnNorthd:
            networkAttachment: internalapi
  15. Configure the Placement service (placement):

      placement:
        template:
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          databaseInstance: openstack
          replicas: 3
          secret: osp-secret
  16. Configure the Telemetry service (ceilometer, prometheus):

      telemetry:
        enabled: true
        template:
          metricStorage:
            enabled: true
            dashboardsEnabled: true
            dataplaneNetwork: ctlplane
            networkAttachments:
              - ctlplane
            monitoringStack:
              alertingEnabled: true
              scrapeInterval: 30s
              storage:
                strategy: persistent
                retention: 24h
                persistent:
                  pvcStorageRequest: 20G
          autoscaling:
            enabled: false
            aodh:
              databaseAccount: aodh
              databaseInstance: openstack
              passwordSelector:
                aodhService: AodhPassword
              rabbitMqClusterName: rabbitmq
              serviceUser: aodh
              secret: osp-secret
            heatInstance: heat
          ceilometer:
            template:
              passwordSelector:
                service: CeilometerPassword
              secret: osp-secret
              serviceUser: ceilometer
          logging:
            enabled: false
    • metricStorage.dataplaneNetwork: Defines the network that is used to scrape the data plane node_exporter endpoints.
    • metricStorage.networkAttachments: Lists the networks that each service pod is attached to, specified by NetworkAttachmentDefinition resource name. A NIC is configured for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, the default pod network is used. You must create a networkAttachment that matches the network specified as the dataplaneNetwork so that Prometheus can scrape data from the data plane nodes.
    • autoscaling: The autoscaling field must be present even when autoscaling is disabled. For more information about autoscaling, see Autoscaling for instances.
  17. Configure the Shared File Systems service (manila).

    • If you use the Shared File Systems service with Red Hat Ceph Storage, add the following configuration:

        manila:
          enabled: true
          template:
            manilaAPI:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_protocols=nfs,cephfs
              networkAttachments:
              - internalapi
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              replicas: 3
            manilaScheduler:
              replicas: 3
            manilaShares:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = cephfs_az1
                  enabled_share_protocols = cephfs
                  [cephfs_az1]
                  driver_handles_share_servers = False
                  share_backend_name = cephfs_az1
                  share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
                  cephfs_conf_path = /etc/ceph/az1.conf
                  cephfs_cluster_name = az1
                  cephfs_auth_id=openstack
                  cephfs_volume_mode = 0755
                  cephfs_protocol_helper_type = CEPHFS
                  backend_availability_zone = az1
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: zone1-node-affinity
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = cephfs_az2
                  enabled_share_protocols = cephfs
                  [cephfs_az2]
                  driver_handles_share_servers = False
                  share_backend_name = cephfs_az2
                  share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
                  cephfs_conf_path = /etc/ceph/az2.conf
                  cephfs_cluster_name = az2
                  cephfs_auth_id=openstack
                  cephfs_volume_mode = 0755
                  cephfs_protocol_helper_type = CEPHFS
                  backend_availability_zone = az2
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: zone2-node-affinity
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = cephfs_az3
                  enabled_share_protocols = cephfs
                  [cephfs_az3]
                  driver_handles_share_servers = False
                  share_backend_name = cephfs_az3
                  share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
                  cephfs_conf_path = /etc/ceph/az3.conf
                  cephfs_cluster_name = az3
                  cephfs_auth_id=openstack
                  cephfs_volume_mode = 0755
                  cephfs_protocol_helper_type = CEPHFS
                  backend_availability_zone = az3
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: zone3-node-affinity
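      The zone1-node-affinity, zone2-node-affinity, and zone3-node-affinity names referenced by topologyRef point to Topology CRs that pin each manilaShares pod to its zone. A minimal sketch of one such CR, assuming your worker nodes carry the label topology.kubernetes.io/zone=az1, might look like the following:

        # Sketch only: assumes nodes are labeled topology.kubernetes.io/zone=az1.
        apiVersion: topology.openstack.org/v1beta1
        kind: Topology
        metadata:
          name: zone1-node-affinity
          namespace: openstack
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                    - az1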
    • If you use the Shared File Systems service with third-party storage, configure the following. In this example, the Shared File Systems service is configured in three AZs by using NetApp storage.

      In the following example, driver_handles_share_servers is set to True. If you do not have this option, set the driver_handles_share_servers field to False. For more information, see Verifying the distributed control plane and Configuring the Shared File Systems service (manila) in Configuring persistent storage.

        manila:
          apiOverride:
            route:
              haproxy.router.openshift.io/timeout: 60s
          enabled: true
          template:
            manilaAPI:
              customServiceConfig: |
                [DEFAULT]
                storage_availability_zone = az1,az2,az3
                default_share_type = nfs-multiaz
                enabled_share_protocols=nfs
                debug = true
              networkAttachments:
              - internalapi
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              replicas: 3
            manilaScheduler:
              replicas: 3
            manilaShares:
              az1:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = nfs_az1
                  enabled_share_protocols = nfs
                  [nfs_az1]
                  driver_handles_share_servers = True
                  share_backend_name = nfs_az
                  backend_availability_zone = az1
                  share_driver=manila.share.drivers.netapp.common.NetAppDriver
                  netapp_storage_family=ontap_cluster
                  netapp_transport_type=http
                customServiceConfigSecrets:
                - osp-secret-manila-az1
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: azone-node-affinity
              az2:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = nfs_az2
                  enabled_share_protocols = nfs
                  [nfs_az2]
                  driver_handles_share_servers = True
                  share_backend_name = nfs_az
                  backend_availability_zone = az2
                  share_driver=manila.share.drivers.netapp.common.NetAppDriver
                  netapp_storage_family=ontap_cluster
                  netapp_transport_type=http
                customServiceConfigSecrets:
                - osp-secret-manila-az2
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: bzone-node-affinity
              az3:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_backends = nfs_az3
                  enabled_share_protocols = nfs
                  [nfs_az3]
                  driver_handles_share_servers = True
                  share_backend_name = nfs_az
                  backend_availability_zone = az3
                  share_driver=manila.share.drivers.netapp.common.NetAppDriver
                  netapp_storage_family=ontap_cluster
                  netapp_transport_type=http
                customServiceConfigSecrets:
                - osp-secret-manila-az3
                networkAttachments:
                - storage
                replicas: 1
                topologyRef:
                  name: czone-node-affinity
            preserveJobs: false
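      Each customServiceConfigSecrets entry above, such as osp-secret-manila-az1, holds the sensitive part of the back-end configuration. A hedged sketch of such a Secret follows; the hostname, vserver, and credentials are placeholders that you must replace with your NetApp details:

        # Sketch only: hostname, vserver, and credentials are placeholders.
        apiVersion: v1
        kind: Secret
        metadata:
          name: osp-secret-manila-az1
          namespace: openstack
        type: Opaque
        stringData:
          netapp.conf: |
            [nfs_az1]
            netapp_server_hostname = netapp-az1.example.com
            netapp_vserver = svm_az1
            netapp_login = admin
            netapp_password = <password>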
  18. Disable the Load-balancing service (octavia), because it is not supported when enable-chassis-as-gateway is disabled in the OVN service. For more information, see OSPRH-10766:

      octavia:
        enabled: false
  19. Configure the Orchestration service (heat):

      heat:
        cnfAPIOverride:
          route: {}
        enabled: true
        template:
          databaseInstance: openstack
          heatAPI:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            replicas: 3
          heatEngine:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            replicas: 3
          secret: osp-secret
  20. Configure the Dashboard service (horizon):

      horizon:
        enabled: true
        template:
          replicas: 2
          secret: osp-secret
  21. Add the following service configurations for high availability (HA):

    • A MariaDB Galera cluster for all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

        galera:
          enabled: true
          templates:
            openstack:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
            openstack-cell1:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
    • A single memcached cluster that contains three memcached servers:

        memcached:
          templates:
            memcached:
              replicas: 3
    • A RabbitMQ cluster for all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

        rabbitmq:
          templates:
            rabbitmq:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.85
                  spec:
                    type: LoadBalancer
            rabbitmq-cell1:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.86
                  spec:
                    type: LoadBalancer
      Note

      You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, you must use different IP addresses.

  22. Create the control plane:

    $ oc create -f distributed_control_plane.yaml -n openstack
  23. Wait for RHOCP to create the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                        STATUS    MESSAGE
    distributed-control-plane   Unknown   Setup started

    The OpenStackControlPlane resource is created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

    Note

    Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

    $ oc rsh -n openstack openstackclient
  24. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

  25. If you use the Image service with third-party storage and you configured multiple stores to use the Block Storage service as their back end, each glance store specifies a unique cinder_volume_type. You can create the volume types with the --private option to prevent them from being exposed to cloud users. To allow the Image service to access these volume types, set the --project of each type to the service project ID.

    1. Open a remote shell connection to the OpenStackClient pod:

      $ oc rsh -n openstack openstackclient
    2. Identify the service project ID:

      sh-5.1$ openstack project list
      +----------------------------------+---------+
      | ID                               | Name    |
      +----------------------------------+---------+
      | 439f0ee839144b4c8640b9153a596a30 | admin   |
      | 7a8946c6ec7c4d0488c592f0306eaa35 | service |
      +----------------------------------+---------+
    3. Store the service user project ID for the Image service in a shell variable:

      $ SERVICE_PROJECT_ID=$(openstack project show service -c id -f value)
    4. Create a type for each of the three cinder-volume services, one in each AZ. Set their project ID and availability zone. Use the spec key RESKEY:availability_zones to pass the AZ to the Block Storage service, because the Image service passes only the volume type when it requests a volume:

      $ openstack volume type create --private --project "$SERVICE_PROJECT_ID" --property "RESKEY:availability_zones=az1" glance-ontap-az1
      $ openstack volume type create --private --project "$SERVICE_PROJECT_ID" --property "RESKEY:availability_zones=az2" glance-ontap-az2
      $ openstack volume type create --private --project "$SERVICE_PROJECT_ID" --property "RESKEY:availability_zones=az3" glance-ontap-az3
    5. Exit the openstackclient pod:

      $ exit
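    The volume types created above map to glance stores through the cinder_volume_type option of the glance cinder store driver. A hedged sketch of the corresponding store section in the Image service customServiceConfig follows; the store name az1 is illustrative and must match your glance store layout:

      # Sketch only: the store name and description are illustrative.
      [az1]
      store_description = "az1 cinder store"
      cinder_volume_type = glance-ontap-az1
      cinder_store_auth_address = {{ .KeystoneInternalURL }}
      cinder_store_user_name = {{ .ServiceUser }}
      cinder_store_password = {{ .ServicePassword }}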