Chapter 3. Deploying the Undercloud
As a technician, you can deploy an undercloud, which provides users with the ability to deploy and manage overclouds with the Red Hat OpenStack Platform Director interface.
3.1. Prerequisites
- Have a valid Red Hat Hyperconverged Infrastructure for Cloud subscription.
- Have access to Red Hat’s software repositories through Red Hat’s Content Delivery Network (CDN).
3.2. Understanding Ironic’s disk cleaning between deployments
Enabling Ironic’s disk cleaning feature will permanently delete all data from all the disks on a node before that node becomes available again for deployment.
There are two facts that you should consider before enabling Ironic’s disk cleaning feature:
- When director deploys Ceph, it uses the ceph-disk command to prepare each OSD. Before ceph-disk prepares an OSD, it checks whether the disk that will host the new OSD contains data from an older OSD. If it does, ceph-disk fails the disk preparation rather than overwrite that data. This is a safety feature to prevent data loss.
- If a deployment attempt with director fails and is repeated after the overcloud is deleted, then by default the data from the previous deployment is still on the server disks. Because of how the ceph-disk command behaves, this leftover data can cause the repeated deployment to fail.
If an overcloud node is accidentally deleted and disk cleaning is enabled, then the data will be removed and can only be put back into the environment by rebuilding the node with Red Hat OpenStack Platform Director.
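Ironic’s cleaning is implemented with the wipefs utility (the bare metal provisioning service runs wipefs --force --all, as noted in the disk cleaning configuration section). As a minimal sketch, its effect can be previewed safely on a throwaway image file instead of a real disk; the disk.img file and the mkswap signature below are purely illustrative:

```shell
# Preview what Ironic's disk cleaning does, using a scratch image file
# instead of a real disk (disk.img is a throwaway file, safe to run anywhere).
truncate -s 1M disk.img        # stand-in for a node's /dev/sdX
mkswap disk.img                # give it a filesystem-style signature
wipefs disk.img                # lists the signatures wipefs can see
wipefs --force --all disk.img  # what cleaning runs against every disk on a node
wipefs disk.img                # prints nothing: all signatures are erased
rm -f disk.img                 # discard the scratch file
```

After cleaning, no filesystem, partition table, or OSD signature survives on the device, which is why ceph-disk no longer refuses to prepare it.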
3.3. Installing the undercloud
Several steps must be completed to install the undercloud. This procedure installs the Red Hat OpenStack Platform director (RHOSP-d) as the undercloud. Here is a summary of the installation steps:
- Create an installation user.
- Create directories for templates and images.
- Verify/Set the RHOSP-d node name.
- Register the RHOSP-d node.
- Install the RHOSP-d software.
- Configure the RHOSP-d software.
- Obtain and import disk images for the overcloud.
- Set a DNS server on the undercloud’s subnet.
Prerequisites
- Have access to Red Hat’s software repositories through Red Hat’s Content Delivery Network (CDN).
- Have root access to the Red Hat OpenStack Platform director (RHOSP-d) node.
Procedure
The RHOSP-d installation requires a non-root user with sudo privileges to do the installation.

Create a user named stack:

[root@director ~]# useradd stack

Set a password for stack. When prompted, enter the new password:

[root@director ~]# passwd stack

Configure sudo access for the stack user:

[root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@director ~]# chmod 0440 /etc/sudoers.d/stack

Switch to the stack user:

[root@director ~]# su - stack

The RHOSP-d installation will be done as the stack user.

Create two new directories in the stack user’s home directory, one named templates and the other named images:

[stack@director ~]$ mkdir ~/images
[stack@director ~]$ mkdir ~/templates

These directories will organize the system image files and Heat template files used to create the overcloud environment later.
The installing and configuring process requires a fully qualified domain name (FQDN), along with an entry in the /etc/hosts file.

Verify the RHOSP-d node’s host name:

[stack@director ~]$ hostname -f

If needed, set the host name:

sudo hostnamectl set-hostname FQDN_HOST_NAME
sudo hostnamectl set-hostname --transient FQDN_HOST_NAME

Replace…

- FQDN_HOST_NAME with the fully qualified domain name (FQDN) of the RHOSP-d node.

Example

[stack@director ~]$ sudo hostnamectl set-hostname director.example.com
[stack@director ~]$ sudo hostnamectl set-hostname --transient director.example.com

Add an entry for the RHOSP-d node name to the /etc/hosts file. Append the following line to the /etc/hosts file, using tee so that the write happens with root privileges:

echo "127.0.0.1 FQDN_HOST_NAME SHORT_HOST_NAME localhost localhost.localdomain localhost4 localhost4.localdomain4" | sudo tee -a /etc/hosts

Replace…

- FQDN_HOST_NAME with the fully qualified domain name of the RHOSP-d node.
- SHORT_HOST_NAME with the short host name of the RHOSP-d node.

Example

[stack@director ~]$ echo "127.0.0.1 director.example.com director localhost localhost.localdomain localhost4 localhost4.localdomain4" | sudo tee -a /etc/hosts
Register the RHOSP-d node on the Red Hat Content Delivery Network (CDN), and enable the required Red Hat software repositories using the Red Hat Subscription Manager.

Register the RHOSP-d node:

[stack@director ~]$ sudo subscription-manager register

When prompted, enter an authorized Customer Portal user name and password.

Look up the valid Pool ID for the RHOSP entitlement:

[stack@director ~]$ sudo subscription-manager list --available --all --matches="*Hyperconverged*"

Using the Pool ID from the previous step, attach the RHOSP entitlement:

[stack@director ~]$ sudo subscription-manager attach --pool=POOL_ID

Replace…

- POOL_ID with the valid pool id from the previous step.

Example

[stack@director ~]$ sudo subscription-manager attach --pool=a1b2c3d4e5f6g7h8i9

Disable the default software repositories, and enable the required software repositories:

[stack@director ~]$ sudo subscription-manager repos --disable='*'
[stack@director ~]$ sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-13-rpms

If needed, update the base system software to the latest package versions, and reboot the RHOSP-d node:

[stack@director ~]$ sudo yum update
[stack@director ~]$ sudo reboot

Wait for the node to be completely up and running before continuing to the next step.

Install all the RHOSP-d software packages:

[stack@director ~]$ sudo yum install python-tripleoclient ceph-ansible

Configure the RHOSP-d software.
Red Hat provides a basic undercloud configuration template to use. Copy the undercloud.conf.sample file to the stack user’s home directory, naming it undercloud.conf:

[stack@director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

The undercloud configuration template contains two sections: [DEFAULT] and [auth]. Open the undercloud.conf file for editing. Set undercloud_hostname to the RHOSP-d node name. Uncomment the following parameters under the [DEFAULT] section in the undercloud.conf file by deleting the # before each parameter, and edit the parameter values as required for this solution’s network configuration:

Parameter               | Network      | Edit Value? | Example Value
local_ip                | Provisioning | Yes         | 192.0.2.1/24
network_gateway         | Provisioning | Yes         | 192.0.2.1
undercloud_public_vip   | Provisioning | Yes         | 192.0.2.2
undercloud_admin_vip    | Provisioning | Yes         | 192.0.2.3
local_interface         | Provisioning | Yes         | eth1
network_cidr            | Provisioning | Yes         | 192.0.2.0/24
masquerade_network      | Provisioning | Yes         | 192.0.2.0/24
dhcp_start              | Provisioning | Yes         | 192.0.2.5
dhcp_end                | Provisioning | Yes         | 192.0.2.24
inspection_interface    | Provisioning | No          | br-ctlplane
inspection_iprange      | Provisioning | Yes         | 192.0.2.100,192.0.2.120
inspection_extras       | N/A          | Yes         | true
inspection_runbench     | N/A          | Yes         | false
inspection_enable_uefi  | N/A          | Yes         | true

Save the changes after editing the undercloud.conf file. See the Undercloud configuration parameters for detailed descriptions of these configuration parameters.

Note: Consider enabling Ironic’s disk cleaning feature, if overcloud nodes are going to be repurposed again. See the Understanding Ironic’s disk cleaning between deployments section for more details.
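Taken together, the example values in the table would look like the following fragment of the undercloud.conf file. All values here are the sample values for this solution’s network, not required values; director.example.com is the sample host name used earlier in this procedure:

```ini
[DEFAULT]
undercloud_hostname = director.example.com
local_ip = 192.0.2.1/24
network_gateway = 192.0.2.1
undercloud_public_vip = 192.0.2.2
undercloud_admin_vip = 192.0.2.3
local_interface = eth1
network_cidr = 192.0.2.0/24
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
inspection_iprange = 192.0.2.100,192.0.2.120
inspection_extras = true
inspection_runbench = false
inspection_enable_uefi = true
```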
Run the RHOSP-d configuration script:

[stack@director ~]$ openstack undercloud install

Note: This script will take several minutes to complete. It installs additional software packages and generates two files:

- undercloud-passwords.conf: A list of all passwords for the director’s services.
- stackrc: A set of initialization variables to help you access the director’s command line tools.

Verify that the configuration script started and enabled all of the RHOSP services:

[stack@director ~]$ sudo systemctl list-units openstack-*

The configuration script gives the stack user access to all the container management commands. Refresh the stack user’s permissions:

[stack@director ~]$ exec su -l stack

Initialize the stack user’s environment to use the RHOSP-d command-line tools:

[stack@director ~]$ source ~/stackrc

The command-line prompt will change, which indicates that OpenStack commands will authenticate and execute against the undercloud:

Example

(undercloud) [stack@director ~]$
The RHOSP-d requires several disk images for provisioning the overcloud nodes.

Obtain these disk images by installing the rhosp-director-images and rhosp-director-images-ipa software packages:

(undercloud) [stack@director ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa

Extract the archive files to the images directory in the stack user’s home directory:

(undercloud) [stack@director ~]$ cd ~/images
(undercloud) [stack@director ~]$ for x in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar ; do tar -xvf $x ; done

Import the disk images into the RHOSP-d:

(undercloud) [stack@director ~]$ openstack overcloud image upload --image-path /home/stack/images/

To view a list of imported disk images, execute the following command:

(undercloud) [stack@director ~]$ openstack image list

Image Name             | Image Type | Image Description
bm-deploy-kernel       | Deployment | Kernel file used for provisioning and deploying systems.
bm-deploy-ramdisk      | Deployment | RAM disk file used for provisioning and deploying systems.
overcloud-full-vmlinuz | Overcloud  | Kernel file used for the base system, which is written to the node’s disk.
overcloud-full-initrd  | Overcloud  | RAM disk file used for the base system, which is written to the node’s disk.
overcloud-full         | Overcloud  | The rest of the software needed for the base system, which is written to the node’s disk.

Note: The openstack image list command will not display the introspection PXE disk images. The introspection PXE disk images are copied to the /httpboot/ directory:

(undercloud) [stack@director images]$ ls -l /httpboot
total 341460
-rwxr-xr-x. 1 root root 5153184 Mar 31 06:58 agent.kernel
-rw-r--r--. 1 root root 344491465 Mar 31 06:59 agent.ramdisk
-rw-r--r--. 1 ironic-inspector ironic-inspector 337 Mar 31 06:23 inspector.ipxe
Set the DNS server so that it resolves the overcloud node host names.

List the subnets:

(undercloud) [stack@director ~]$ openstack subnet list

Define the name server using the undercloud’s neutron subnet:

openstack subnet set --dns-nameserver DNS_NAMESERVER_IP SUBNET_NAME_or_ID

Replace…

- DNS_NAMESERVER_IP with the IP address of the DNS server.
- SUBNET_NAME_or_ID with the neutron subnet name or ID.

Example

(undercloud) [stack@director ~]$ openstack subnet set --dns-nameserver 192.0.2.4 local-subnet

Note: To define more than one name server, repeat the --dns-nameserver DNS_NAMESERVER_IP option for each one.

Verify the DNS server by viewing the subnet details:

(undercloud) [stack@director ~]$ openstack subnet show SUBNET_NAME_or_ID

Replace…

- SUBNET_NAME_or_ID with the neutron subnet name or ID.
Additional Resources

- For more information on all the undercloud configuration parameters located in the undercloud.conf file, see the Configuring the Director section in the RHOSP Director Installation and Usage Guide.
3.4. Configuring the undercloud to clean the disks before deploying the overcloud
Update the undercloud configuration file to clean the disks before deploying the overcloud.

Warning: Enabling this feature will destroy all data on all disks before they are provisioned in the overcloud deployment.
Prerequisites
Procedure
There are two options, automatic or manual, for cleaning the disks before deploying the overcloud:

The first option is to clean the disks automatically by editing the undercloud.conf file and adding the following line:

clean_nodes = True

Note: The bare metal provisioning service runs a wipefs --force --all command to accomplish the cleaning.

Warning: Enabling this feature will destroy all data on all disks before they are provisioned in the overcloud deployment. It also triggers an additional power cycle after the first introspection and before each deployment.

The second option is to keep automatic cleaning off and run the following commands for each Ceph node:

[stack@director ~]$ openstack baremetal node manage NODE
[stack@director ~]$ openstack baremetal node clean NODE --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'
[stack@director ~]$ openstack baremetal node provide NODE

Replace…

- NODE with the Ceph node’s host name.
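When several Ceph nodes need manual cleaning, the three commands can be wrapped in a loop. A minimal sketch; the node names ceph-0 through ceph-2 are hypothetical, so substitute the names reported by openstack baremetal node list:

```
# Node names below are examples only; replace them with your own
# baremetal node names from `openstack baremetal node list`.
for NODE in ceph-0 ceph-1 ceph-2; do
    openstack baremetal node manage "$NODE"
    openstack baremetal node clean "$NODE" \
        --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'
    openstack baremetal node provide "$NODE"
done
```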