Chapter 5. Deploying the HCI nodes
For DCN sites, you can deploy a hyper-converged infrastructure (HCI) stack that uses Compute and Ceph Storage on a single node. For example, the following diagram shows two DCN stacks named dcn0 and dcn1, each in their own availability zone (AZ). Each DCN stack has its own Ceph cluster and Compute services:
The procedures in Configuring the distributed compute node (DCN) environment files and Deploying HCI nodes to the distributed compute node (DCN) site describe this deployment method. These procedures demonstrate how to add a new DCN stack to your deployment and reuse the configuration from the existing heat stack to create new environment files. In the example procedures, the first heat stack deploys an overcloud within a centralized data center. Another heat stack is then created to deploy a batch of Compute nodes to a remote physical location.
5.1. Configuring the distributed compute node environment files
This procedure retrieves the metadata of your central site and then generates the configuration files that the distributed compute node (DCN) sites require:
Procedure
Export stack information from the central stack. You must deploy the control-plane stack before running this command:

    openstack overcloud export \
      --config-download-dir /var/lib/mistral/central \
      --stack central \
      --output-file ~/dcn-common/control-plane-export.yaml
This procedure creates a new control-plane-export.yaml environment file and uses the passwords in the plan-environment.yaml from the overcloud. Because the control-plane-export.yaml file contains sensitive security data, remove the file when you no longer require it to improve security.
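As an illustrative sketch only (the permission step and the timing of the cleanup are assumptions, not part of the documented procedure), you can restrict access to the exported file while it is needed and delete it afterwards:

```shell
# Sketch: the exported file contains passwords, so keep it owner-only
# while in use and remove it when no longer required. The path matches
# the export command above; the chmod step is an assumption.
EXPORT_FILE="$HOME/dcn-common/control-plane-export.yaml"
if [ -f "$EXPORT_FILE" ]; then
  chmod 600 "$EXPORT_FILE"   # owner-only access while the file is in use
fi
# After the DCN deployments that consume the file are complete:
rm -f "$EXPORT_FILE"
```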
5.2. Deploying HCI nodes to the distributed compute node site
This procedure uses the DistributedComputeHCI role to deploy HCI nodes to an availability zone (AZ) named dcn0. This role is used specifically for distributed compute HCI nodes.
CephMon runs on the HCI nodes and cannot run on the central Controller node. Additionally, the central Controller node is deployed without Ceph.
Procedure
Review the overrides for the distributed compute node (DCN) site in dcn0/overrides.yaml.

Review the proposed Ceph configuration in dcn0/ceph.yaml. Replace the values for the following parameters with values that suit your environment. For more information, see the Deploying an overcloud with containerized Red Hat Ceph and Hyperconverged Infrastructure guides.
- CephAnsibleExtraConfig
- DistributedComputeHCIParameters
- CephPoolDefaultPgNum
- CephPoolDefaultSize
- DistributedComputeHCIExtraConfig
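The extracted page omits the contents of dcn0/ceph.yaml, so the following is a hypothetical sketch only: every value is a placeholder, and the nesting shown for the role-specific parameters is illustrative, not authoritative. Replace all of it with values that suit your environment.

```shell
# Hypothetical sketch of dcn0/ceph.yaml -- all values are placeholders.
mkdir -p dcn0
cat > dcn0/ceph.yaml <<'EOF'
parameter_defaults:
  CephPoolDefaultPgNum: 128            # placeholder placement-group count
  CephPoolDefaultSize: 3               # placeholder replica count
  CephAnsibleExtraConfig:
    is_hci: true                       # placeholder ceph-ansible setting
  DistributedComputeHCIParameters: {}  # placeholder role parameters
  DistributedComputeHCIExtraConfig: {} # placeholder role extra config
EOF
```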
Create a new file called nova-az.yaml with the following contents:

    resource_registry:
      OS::TripleO::Services::NovaAZConfig: /usr/share/openstack-tripleo-heat-templates/deployment/nova/nova-az-config.yaml
    parameter_defaults:
      NovaComputeAvailabilityZone: dcn0
      RootStackName: central

Provided that the overcloud can access the endpoints that are listed in the centralrc file created by the central deployment, this configuration creates an AZ called dcn0 and adds the new HCI Compute nodes to that AZ during deployment.

Run the deploy.sh deployment script for dcn0.

When the overcloud deployment finishes, see the post-deployment configuration steps and checks in Chapter 6, Post-deployment configuration.
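The contents of deploy.sh are not shown on this page. As a rough sketch, such a script typically wraps openstack overcloud deploy with the stack name and the environment files assembled in this chapter; every file path below is an assumption for illustration, not the documented script:

```shell
# Hypothetical deploy.sh for the dcn0 stack; all paths are assumptions
# based on the files created earlier in this chapter.
cat > deploy.sh <<'EOF'
#!/bin/bash
set -euo pipefail
openstack overcloud deploy \
  --stack dcn0 \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -e ~/dcn-common/control-plane-export.yaml \
  -e ~/dcn0/nova-az.yaml \
  -e ~/dcn0/ceph.yaml \
  -e ~/dcn0/overrides.yaml
EOF
chmod +x deploy.sh
```

The --stack option names the new heat stack so that it is created alongside, rather than replacing, the existing central stack.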