Chapter 5. Installing the central location
When you deploy Red Hat OpenStack Platform with a distributed compute node (DCN) architecture, you must decide your storage strategy in advance. If you deploy Red Hat OpenStack Platform without Red Hat Ceph Storage at the central location, you cannot deploy any of your edge sites with Red Hat Ceph Storage. Additionally, you do not have the option of adding Red Hat Ceph Storage to the central location later by redeploying.
When you deploy the central location for distributed compute node (DCN) architecture, you can deploy the cluster:
- With or without Compute nodes
- With or without Red Hat Ceph Storage
5.1. Deploying the central controllers without edge storage
You can deploy a distributed compute node cluster without Block storage at edge sites if you use the Object Storage service (swift) as a back end for the Image service (glance) at the central location. A site deployed without block storage cannot be updated later to have block storage due to the differing role and networking profiles for each architecture.
Important: The following procedure uses lvm as the back end for Cinder, which is not supported for production. You must deploy a certified block storage solution as a back end for Cinder.
Deploy the central controller cluster in a similar way to a typical overcloud deployment. This cluster does not require any Compute nodes, so you can set the Compute count to 0 to override the default of 1. The central controller has particular storage and Oslo configuration requirements. Use the following procedure to address these requirements.
Prerequisites
- You must create network_data.yaml and vip_data.yaml files specific to your environment. You can find sample files in /usr/share/openstack-tripleo-heat-templates/network-data-samples.
- You must create an overcloud-baremetal-deploy.yaml file specific to your environment. For more information see Provisioning bare metal nodes for the overcloud.
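If you do not yet have these files, one way to start is to copy samples from the directory above into your home directory and adapt them. The sample file names below are assumptions; list the directory first to confirm which samples match your topology:

ls /usr/share/openstack-tripleo-heat-templates/network-data-samples
cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation.yaml \
   ~/network_data.yaml
cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/vip-data-default-network-isolation.yaml \
   ~/vip_data.yaml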
Procedure
The following procedure outlines the steps for the initial deployment of the central location.
The following steps detail the deployment commands and environment files associated with an example DCN deployment without glance multistore. These steps do not include unrelated, but necessary, aspects of configuration, such as networking.
- Log in to the undercloud as the stack user.
Source the stackrc file:
[stack@director ~]$ source /home/stack/stackrc
Generate an environment file:
sudo openstack tripleo container image prepare \
  -e containers.yaml \
  --output-env-file /home/stack/central/central-images-env.yaml
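The containers.yaml file passed with -e is a container image prepare parameter file. A minimal sketch follows; the registry namespace, tag, and credential placeholders are illustrative values for this example, not values taken from this document:

parameter_defaults:
  ContainerImagePrepare:
    - push_destination: true
      set:
        namespace: registry.redhat.io/rhosp-rhel9   # placeholder registry namespace
        name_prefix: openstack-
        tag: '17.1'                                  # placeholder release tag
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      <service_account_username>: <service_account_password>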
In the home directory, create directories for each stack that you plan to deploy. Move the network_data.yaml, vip_data.yaml, and overcloud-baremetal-deploy.yaml templates for the central location to /home/stack/central/.

mkdir /home/stack/central
mkdir /home/stack/dcn0
mkdir /home/stack/dcn1
mv network_data.yaml /home/stack/central
mv vip_data.yaml /home/stack/central
mv overcloud-baremetal-deploy.yaml /home/stack/central
Provision networks for the overcloud. This command takes a definition file for overcloud networks as input. You must use the output file in your command to deploy the overcloud:
(undercloud)$ openstack overcloud network provision \
  --output /home/stack/central/overcloud-networks-deployed.yaml \
  /home/stack/central/network_data.yaml
Provision virtual IPs for the overcloud. This command takes a definition file for virtual IPs as input. You must use the output file in your command to deploy the overcloud:
(undercloud)$ openstack overcloud network vip provision \
  --stack central \
  --output /home/stack/central/overcloud-vip-deployed.yaml \
  /home/stack/central/vip_data.yaml
Provision bare metal instances. This command takes a definition file for bare metal nodes as input. You must use the output file in your command to deploy the overcloud:
(undercloud)$ openstack overcloud node provision \
  --stack central \
  --network-config \
  -o /home/stack/central/deployed_metal.yaml \
  /home/stack/central/overcloud-baremetal-deploy.yaml
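The node counts that the explanation below refers to (three Controller nodes, zero Compute nodes) are defined in the overcloud-baremetal-deploy.yaml file used in this step. A minimal sketch of such a definition; the hostname format is an illustrative value:

- name: Controller
  count: 3
  hostname_format: controller-%index%
- name: Compute
  count: 0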
Create a file called central/overrides.yaml with settings similar to the following:

parameter_defaults:
  NtpServer:
    - 0.pool.ntp.org
    - 1.pool.ntp.org
  GlanceBackend: swift
- ControllerCount: 3 specifies that three nodes will be deployed. These will use swift for glance, lvm for cinder, and host the control-plane services for edge compute nodes.
- ComputeCount: 0 is an optional parameter to prevent Compute nodes from being deployed with the central Controller nodes.
- GlanceBackend: swift uses Object Storage (swift) as the Image Service (glance) back end.

The resulting configuration interacts with the distributed compute nodes (DCNs) in the following ways: the Image service on the DCN creates a cached copy of the image it receives from the central Object Storage back end. The Image service uses HTTP to copy the image from Object Storage to the local disk cache.

Note: The central Controller node must be able to connect to the distributed compute node (DCN) site. The central Controller node can use a routed layer 3 connection.
Configure the naming conventions for your site in the site-name.yaml environment file. The Nova availability zone and the Cinder storage availability zone must match:

cat > /home/stack/central/site-name.yaml << EOF
parameter_defaults:
  NovaComputeAvailabilityZone: central
  ControllerExtraConfig:
    nova::availability_zone::default_schedule_zone: central
  NovaCrossAZAttach: false
EOF
Deploy the central Controller node. For example, you can use a deploy.sh file with the following contents:

openstack overcloud deploy \
  --deployed-server \
  --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -n /home/stack/central/network_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
  -e /home/stack/central/overcloud-networks-deployed.yaml \
  -e /home/stack/central/overcloud-vip-deployed.yaml \
  -e /home/stack/central/deployed_metal.yaml
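Environment files only take effect when they are passed to the deploy command, so if you want the overrides.yaml and site-name.yaml files created earlier in this example to apply, append them as additional -e arguments (and add a line-continuation backslash to the line that precedes them). A sketch with the paths used above:

  -e /home/stack/central/overrides.yaml \
  -e /home/stack/central/site-name.yaml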
You must include heat templates for the configuration of networking in your openstack overcloud deploy command. Designing for edge architecture requires spine and leaf networking. See Spine Leaf Networking for more details.
5.2. Deploying the central site with storage
To deploy the Image service with multiple stores and Ceph Storage as the back end, complete the following steps:
Prerequisites
- You must create network_data.yaml and vip_data.yaml files specific to your environment. You can find sample files in /usr/share/openstack-tripleo-heat-templates/network-data-samples.
- You must create an overcloud-baremetal-deploy.yaml file specific to your environment. For more information see Provisioning bare metal nodes for the overcloud.
- You have hardware for a Ceph cluster at the central location and in each availability zone, or in each geographic location where storage services are required.
- You have hardware for three Image Service (glance) servers at the central location and in each availability zone, or in each geographic location where storage services are required. At edge locations, the Image service is deployed to the DistributedComputeHCI nodes.
Procedure
Deploy the Red Hat OpenStack Platform central location so that the Image service (glance) can be used with multiple stores.
- Log in to the undercloud as the stack user.
Source the stackrc file:
[stack@director ~]$ source /home/stack/stackrc
Generate an environment file /home/stack/central/central-images-env.yaml:

sudo openstack tripleo container image prepare \
  -e containers.yaml \
  --output-env-file /home/stack/central/central-images-env.yaml
Generate roles for the central location using roles appropriate for your environment:
openstack overcloud roles generate Compute Controller CephStorage \
  -o /home/stack/central/central_roles.yaml
In the home directory, create directories for each stack that you plan to deploy. Move the network_data.yaml, vip_data.yaml, and overcloud-baremetal-deploy.yaml templates for the central location to /home/stack/central/.

mkdir /home/stack/central
mkdir /home/stack/dcn0
mkdir /home/stack/dcn1
mv network_data.yaml /home/stack/central
mv vip_data.yaml /home/stack/central
mv overcloud-baremetal-deploy.yaml /home/stack/central
Provision networks for the overcloud. This command takes a definition file for overcloud networks as input. You must use the output file in your command to deploy the overcloud:
openstack overcloud network provision \
  --output /home/stack/central/overcloud-networks-deployed.yaml \
  /home/stack/central/network_data.yaml
Provision virtual IPs for the overcloud. This command takes a definition file for virtual IPs as input. You must use the output file in your command to deploy the overcloud:
openstack overcloud network vip provision \
  --stack central \
  --output /home/stack/central/overcloud-vip-deployed.yaml \
  /home/stack/central/vip_data.yaml
Provision bare metal instances. This command takes a definition file for bare metal nodes as input. You must use the output file in your command to deploy the overcloud:
openstack overcloud node provision \
  --stack central \
  --network-config \
  -o /home/stack/central/deployed_metal.yaml \
  /home/stack/central/overcloud-baremetal-deploy.yaml
If you are deploying the central location with hyperconverged storage, you must create an initial-ceph.conf configuration file using the following parameters. For more information see Configuring the Red Hat Ceph Storage cluster for HCI:

[osd]
osd_memory_target_autotune = true
osd_numa_auto_affinity = true

[mgr]
mgr/cephadm/autotune_memory_target_ratio = 0.2
Use the deployed_metal.yaml file as input to the openstack overcloud ceph deploy command. The openstack overcloud ceph deploy command outputs a yaml file that describes the deployed Ceph cluster:

openstack overcloud ceph deploy \
  --stack central \
  /home/stack/central/deployed_metal.yaml \
  --config /home/stack/central/initial-ceph.conf \ 1
  --output /home/stack/central/deployed_ceph.yaml \
  --container-image-prepare /home/stack/containers.yaml \
  --network-data /home/stack/network-data.yaml \
  --cluster central \
  --roles-data /home/stack/central/central_roles.yaml
1 - Include initial-ceph.conf only when deploying hyperconverged infrastructure.
Verify a functional Ceph deployment before continuing. Use ssh to connect to a server running the ceph-mon service. In an HCI deployment, this is a controller node. Run the following command:

cephadm shell --config /etc/ceph/central.conf \
  --keyring /etc/ceph/central.client.admin.keyring
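Inside the cephadm shell opened by the command above, a quick health check before you continue might look like the following. The ceph orch host ls call assumes that the cephadm orchestrator manages this cluster:

ceph -s              # overall cluster status; look for HEALTH_OK
ceph orch host ls    # hosts enrolled in the cluster (assumes cephadm orchestration)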
Note: You must use the --config and --keyring parameters.

Configure the naming conventions for your site in the site-name.yaml environment file. The Nova availability zone and the Cinder storage availability zone must match:

parameter_defaults:
  NovaComputeAvailabilityZone: central
  ControllerExtraConfig:
    nova::availability_zone::default_schedule_zone: central
  NovaCrossAZAttach: false
  CinderStorageAvailabilityZone: central
  GlanceBackendID: central
Configure a glance.yaml template with contents similar to the following:
parameter_defaults:
  GlanceEnabledImportMethods: web-download,copy-image
  GlanceBackend: rbd
  GlanceStoreDescription: 'central rbd glance store'
  GlanceBackendID: central
  CephClusterName: central
Deploy the stack for the central location:
openstack overcloud deploy \
  --deployed-server \
  --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -r /home/stack/central/central_roles.yaml \
  -n ~/network-data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
  -e /home/stack/central/overcloud-networks-deployed.yaml \
  -e /home/stack/central/overcloud-vip-deployed.yaml \
  -e /home/stack/central/deployed_metal.yaml \
  -e /home/stack/central/deployed_ceph.yaml \
  -e ~/central/glance.yaml
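After the stack completes, one way to confirm that the RBD store is registered under the back-end ID configured in glance.yaml is the glance client. The credentials file name below is an assumption for a stack named central; adjust it to the rc file that your deployment generates:

source ~/centralrc       # assumed overcloud credentials file for the central stack
glance stores-info       # the output should list the 'central' store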
After you have deployed the overcloud for the central location, data that is needed as input for additional stack deployments for edge sites is exported and placed in the /home/stack/overcloud-deploy directory. Ensure that the central-export.yaml file is present:

stat /home/stack/overcloud-deploy/central/central-export.yaml
Export Ceph-specific data:

openstack overcloud export ceph \
  --stack central \
  --output-file /home/stack/dcn-common/central_ceph_external.yaml
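Both exported files are later passed as environment files when you deploy the edge stacks. A sketch of how a dcn0 deployment might consume them; the dcn0-specific templates shown here are placeholders, not part of this procedure:

openstack overcloud deploy \
  --stack dcn0 \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  ...
  -e /home/stack/overcloud-deploy/central/central-export.yaml \
  -e /home/stack/dcn-common/central_ceph_external.yaml \
  -e /home/stack/dcn0/overrides.yaml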
5.3. Integrating external Ceph
You can deploy the central location of a distributed compute node (DCN) architecture and integrate a pre-deployed Red Hat Ceph Storage solution. When you deploy Red Hat Ceph Storage without director, director does not have information about the Red Hat Ceph Storage in your environment. You cannot run the openstack overcloud export ceph command, and you must create the central_ceph_external.yaml file manually.
Prerequisites
- You must create network_data.yaml and vip_data.yaml files specific to your environment. You can find sample files in /usr/share/openstack-tripleo-heat-templates/network-data-samples.
- You must create an overcloud-baremetal-deploy.yaml file specific to your environment. For more information see Provisioning bare metal nodes for the overcloud.
- You have hardware for a Ceph cluster at the central location and in each availability zone, or in each geographic location where storage services are required.
The following is an example deployment of two or more stacks:
- One stack at the central location called central.
- One stack at an edge site called dcn0.
- Additional stacks deployed similarly to dcn0, such as dcn1, dcn2, and so on.
Procedure
You can install the central location so that it is integrated with a pre-existing Red Hat Ceph Storage solution by following the process documented in Integrating with an existing Red Hat Ceph Storage cluster. There are no special requirements for integrating Red Hat Ceph Storage with the central site of a DCN deployment; however, you must still complete DCN-specific steps before deploying the overcloud:
- Log in to the undercloud as the stack user.
Source the stackrc file:
[stack@director ~]$ source ~/stackrc
Generate an environment file ~/central/central-images-env.yaml:

sudo openstack tripleo container image prepare \
  -e containers.yaml \
  --output-env-file ~/central/central-images-env.yaml
In the home directory, create directories for each stack that you plan to deploy. Use these directories to separate templates designed for their respective sites. Move the network_data.yaml, vip_data.yaml, and overcloud-baremetal-deploy.yaml templates for the central location to /home/stack/central/.

mkdir /home/stack/central
mkdir /home/stack/dcn0
mkdir /home/stack/dcn1
mv network_data.yaml /home/stack/central
mv vip_data.yaml /home/stack/central
mv overcloud-baremetal-deploy.yaml /home/stack/central
Provision networks for the overcloud. This command takes a definition file for overcloud networks as input. You must use the output file in your command to deploy the overcloud:
openstack overcloud network provision \
  --output /home/stack/central/overcloud-networks-deployed.yaml \
  /home/stack/central/network_data.yaml
Provision virtual IPs for the overcloud. This command takes a definition file for virtual IPs as input. You must use the output file in your command to deploy the overcloud:
openstack overcloud network vip provision \
  --stack central \
  --output /home/stack/central/overcloud-vip-deployed.yaml \
  /home/stack/central/vip_data.yaml
Provision bare metal instances. This command takes a definition file for bare metal nodes as input. You must use the output file in your command to deploy the overcloud:
openstack overcloud node provision \
  --stack central \
  --network-config \
  -o /home/stack/central/deployed_metal.yaml \
  /home/stack/central/overcloud-baremetal-deploy.yaml
Configure the naming conventions for your site in the site-name.yaml environment file. The Compute (nova) availability zone and the Block Storage (cinder) availability zone must match:

cat > /home/stack/central/site-name.yaml << EOF
parameter_defaults:
  NovaComputeAvailabilityZone: central
  ControllerExtraConfig:
    nova::availability_zone::default_schedule_zone: central
  NovaCrossAZAttach: false
  CinderStorageAvailabilityZone: central
  GlanceBackendID: central
EOF
Configure an external-ceph.yaml template with contents similar to the following:

parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GlanceBackendID: central
  GlanceEnabledImportMethods: web-download,copy-image
  GlanceStoreDescription: 'central rbd glance store'
  CinderRbdPoolName: "openstack-cinder"
  NovaRbdPoolName: "openstack-nova"
  GlanceRbdPoolName: "openstack-images"
  CinderBackupRbdPoolName: "automation-backups"
  GnocchiRbdPoolName: "automation-metrics"
  CephClusterFSID: 38dd387e-837a-437c-891c-7fc69e17a3c
  CephClusterName: central
  CephExternalMonHost: 10.9.0.1,10.9.0.2,10.9.0.3
  CephClientKey: "AQAKtECeLemfiBBdQp7cjNYQRGW9y8GnhhFZg=="
  CephClientUserName: "openstack"
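The CephClusterFSID, CephExternalMonHost, and CephClientKey values come from the pre-deployed Red Hat Ceph Storage cluster. One way to collect them is to run the following commands on a node of that cluster with an admin keyring; the client.openstack user is assumed to exist there already:

ceph fsid                            # value for CephClusterFSID
ceph mon dump                        # monitor addresses for CephExternalMonHost
ceph auth get-key client.openstack   # value for CephClientKey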
Deploy the central location:
openstack overcloud deploy \
  --stack central \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  -r /home/stack/central/central_roles.yaml \
  -n /home/stack/central/network_data.yaml \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \
  -e /home/stack/central/overcloud-networks-deployed.yaml \
  -e /home/stack/central/overcloud-vip-deployed.yaml \
  -e /home/stack/central/deployed_metal.yaml \
  -e /home/stack/central/external-ceph.yaml
After you have deployed the overcloud for the central location, data that is needed as input for additional stack deployments for edge sites is exported and placed in the /home/stack/overcloud-deploy directory. Ensure that the central-export.yaml file is present:

stat /home/stack/overcloud-deploy/central/central-export.yaml
Create an environment file called central_ceph_external.yaml with details about the Red Hat Ceph Storage deployment. This file can be passed to additional stack deployments for edge sites.

parameter_defaults:
  CephExternalMultiConfig:
    - cluster: "central"
      fsid: "3161a3b4-e5ff-42a0-9f53-860403b29a33"
      external_cluster_mon_ips: "172.16.11.84, 172.16.11.87, 172.16.11.92"
      keys:
        - name: "client.openstack"
          caps:
            mgr: "allow *"
            mon: "profile rbd"
            osd: "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"
          key: "AQD29WteAAAAABAAphgOjFD7nyjdYe8Lz0mQ5Q=="
          mode: "0600"
      dashboard_enabled: false
      ceph_conf_overrides:
        client:
          keyring: /etc/ceph/central.client.openstack.keyring
The fsid parameter is the file system ID of your Ceph Storage cluster. This value is specified in the cluster configuration file in the [global] section:

[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
...
The key parameter is the ceph client key for the openstack account:

[root@ceph ~]# ceph auth list
...
[client.openstack]
    key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==
    caps mgr = "allow *"
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics"
...
For more information about the parameters shown in the sample central_ceph_external.yaml file, see Creating a custom environment file.
Additional resources