Chapter 5. Deploying Red Hat Hyperconverged Infrastructure for Cloud using the command-line interface
As a technician, you can deploy and manage the Red Hat Hyperconverged Infrastructure for Cloud solution using the command-line interface.
5.1. Prerequisites
- Verify that all the requirements are met.
- Installing the undercloud
5.2. Preparing the nodes before deploying the overcloud using the command-line interface
As a technician, before you can deploy the overcloud, the undercloud needs to understand the hardware being used in the environment.
The Red Hat OpenStack Platform director (RHOSP-d) is also known as the undercloud.
5.2.1. Prerequisites
- Verify that all the requirements are met.
- Installing the undercloud
5.2.2. Registering and introspecting the hardware
The Red Hat OpenStack Platform director (RHOSP-d) runs an introspection process on each node and collects data about the node’s hardware. This introspection data is stored on the RHOSP-d node, and is used for various purposes, such as benchmarking and root disk assignments.
Prerequisites
- Complete the software installation of the RHOSP-d node.
- The MAC addresses for the network interface cards (NICs).
- IPMI User name and password
Procedure
Do the following steps on the RHOSP-d node, as the stack user:
Create the `osd-compute` flavor:

```
[stack@director ~]$ openstack flavor create --id auto --ram 2048 --disk 40 --vcpus 2 osd-compute
[stack@director ~]$ openstack flavor set --property "capabilities:boot_option"="local" --property "capabilities:profile"="osd-compute" osd-compute
```

Create and populate a host definition file for the Ironic service to manage the nodes.

Create the `instackenv.json` host definition file:

```
[stack@director ~]$ touch ~/instackenv.json
```

Add a definition block for each node between the `nodes` stanza square brackets (`{"nodes": []}`), using this template:

```
{
  "pm_password": "IPMI_USER_PASSWORD",
  "name": "NODE_NAME",
  "pm_user": "IPMI_USER_NAME",
  "pm_addr": "IPMI_IP_ADDR",
  "pm_type": "pxe_ipmitool",
  "mac": [
    "NIC_MAC_ADDR"
  ],
  "arch": "x86_64",
  "capabilities": "node:NODE_ROLE-INSTANCE_NUM,boot_option:local"
},
```

Replace…
- IPMI_USER_PASSWORD with the IPMI password.
- NODE_NAME with a descriptive name of the node. This is an optional parameter.
- IPMI_USER_NAME with the IPMI user name that has access to power the node on or off.
- IPMI_IP_ADDR with the IPMI IP address.
- NIC_MAC_ADDR with the network card MAC address handling the PXE boot.
- NODE_ROLE-INSTANCE_NUM with the node’s role, along with a node number. This solution uses two roles: `control` and `osd-compute`.

Example

```
{
  "nodes": [
    {
      "pm_password": "AbC1234",
      "name": "m630_slot1",
      "pm_user": "ipmiadmin",
      "pm_addr": "10.19.143.61",
      "pm_type": "pxe_ipmitool",
      "mac": [
        "c8:1f:66:65:33:41"
      ],
      "arch": "x86_64",
      "capabilities": "node:control-0,boot_option:local"
    },
    {
      "pm_password": "AbC1234",
      "name": "m630_slot2",
      "pm_user": "ipmiadmin",
      "pm_addr": "10.19.143.62",
      "pm_type": "pxe_ipmitool",
      "mac": [
        "c8:1f:66:65:33:42"
      ],
      "arch": "x86_64",
      "capabilities": "node:osd-compute-0,boot_option:local"
    },
    ... Continue adding node definition blocks for each node in the initial deployment here.
  ]
}
```

Note: The `osd-compute` role is a custom role that is created in a later step. To predictably control node placement, add these nodes in order. For example:

```
[stack@director ~]$ grep capabilities ~/instackenv.json
    "capabilities": "node:control-0,boot_option:local"
    "capabilities": "node:control-1,boot_option:local"
    "capabilities": "node:control-2,boot_option:local"
    "capabilities": "node:osd-compute-0,boot_option:local"
    "capabilities": "node:osd-compute-1,boot_option:local"
    "capabilities": "node:osd-compute-2,boot_option:local"
```
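Because the node definition blocks follow a mechanical pattern, larger deployments may find it easier to generate `instackenv.json` from an inventory list. The following Python sketch is illustrative only; the inventory data and the `build_nodes` helper are hypothetical, not part of this guide. It numbers each role in order so that node placement stays predictable:

```python
import json

# Hypothetical inventory: (name, IPMI address, PXE NIC MAC, role) per node.
# The addresses and credentials below are illustrative placeholders.
inventory = [
    ("m630_slot1", "10.19.143.61", "c8:1f:66:65:33:41", "control"),
    ("m630_slot2", "10.19.143.62", "c8:1f:66:65:33:42", "osd-compute"),
]

def build_nodes(inventory, ipmi_user, ipmi_password):
    """Build the instackenv.json node list, numbering each role in order
    (control-0, control-1, ..., osd-compute-0, ...) for predictable placement."""
    counters = {}
    nodes = []
    for name, ipmi_addr, mac, role in inventory:
        index = counters.get(role, 0)
        counters[role] = index + 1
        nodes.append({
            "name": name,
            "pm_user": ipmi_user,
            "pm_password": ipmi_password,
            "pm_addr": ipmi_addr,
            "pm_type": "pxe_ipmitool",
            "mac": [mac],
            "arch": "x86_64",
            "capabilities": "node:%s-%d,boot_option:local" % (role, index),
        })
    return {"nodes": nodes}

print(json.dumps(build_nodes(inventory, "ipmiadmin", "AbC1234"), indent=2))
```

Writing the printed output to `~/instackenv.json` produces a file in the same shape as the example above.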
Import the nodes into the Ironic database:

```
[stack@director ~]$ openstack baremetal import ~/instackenv.json
```

Verify that the `openstack baremetal import` command populated the Ironic database with all the nodes:

```
[stack@director ~]$ openstack baremetal node list
```

Assign the bare metal boot kernel and RAMdisk images to all the nodes:

```
[stack@director ~]$ openstack baremetal configure boot
```

Start the nodes, collect their hardware data, and store the information in the Ironic database:

```
[stack@director ~]$ openstack baremetal introspection bulk start
```

Note: Bulk introspection can take a long time to complete, depending on the number of nodes imported. Setting the `inspection_runbench` value to `false` in the `~/undercloud.conf` file speeds up bulk introspection, but the `sysbench` and `fio` benchmark data, which can be useful to the RHOSP-d, will not be collected.

Verify that the introspection process completed without errors for all the nodes:

```
[stack@director ~]$ openstack baremetal introspection bulk status
```
Additional Resources
- For more information on assigning node identification parameters, see the Controlling Node Placement chapter of the RHOSP Advanced Overcloud Customization Guide.
5.2.3. Setting the root device
The Red Hat OpenStack Platform director (RHOSP-d) must identify the root disk to provision the nodes. By default, Ironic images the first block device, which is typically /dev/sda. Follow this procedure to change the root disk device according to the disk configuration of the Compute/OSD nodes.
This procedure will use the following Compute/OSD node disk configuration as an example:
- OSD: 12 x 1TB SAS disks presented as `/dev/[sda, sdb, …, sdl]` block devices
- OSD Journal: 3 x 400GB SATA SSD disks presented as `/dev/[sdm, sdn, sdo]` block devices
- Operating System: 2 x 250GB SAS disks configured in RAID1 presented as the `/dev/sdp` block device
Since an OSD will use /dev/sda, Ironic must instead use /dev/sdp, the RAID 1 disk, as the root disk. During the hardware introspection process, Ironic stores the world-wide number (WWN) and size of each block device.
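The effect of the `smallest` root-device strategy used in the procedure below can be sketched as follows. The device data here is hypothetical; the real selection is performed by Ironic against the WWN and size values it stored during introspection:

```python
# Hypothetical block devices as recorded during introspection (name, size, WWN).
disks = [
    {"name": "/dev/sda", "size_gb": 1000, "wwn": "0x5000c5008e3a0001"},
    {"name": "/dev/sdm", "size_gb": 400,  "wwn": "0x5000c5008e3a000d"},
    {"name": "/dev/sdp", "size_gb": 250,  "wwn": "0x5000c5008e3a0010"},
]

def smallest_root_device(disks):
    # Pick the smallest disk; Ironic persists the hint against the stored WWN,
    # so the selection survives device renaming across reboots.
    return min(disks, key=lambda d: d["size_gb"])

print(smallest_root_device(disks)["name"])
```

With the example disk configuration above, the 250GB RAID 1 operating system disk (`/dev/sdp`) is the smallest device, which is why the `smallest` strategy selects it as the root disk.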
Prerequisites
- Complete the hardware introspection procedure.
Procedure
Run one of the following commands on the RHOSP-d node.
Configure the root disk device to use the smallest root device:

```
[stack@director ~]$ openstack baremetal configure boot --root-device=smallest
```

or

Configure the root disk device to use the disk’s `by-path` name:

```
[stack@director ~]$ openstack baremetal configure boot --root-device=disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0
```
Ironic will apply this root device directive to all nodes within Ironic’s database.
Verify that the correct root disk device was set:

```
openstack baremetal introspection data save NODE_NAME_or_UUID | jq .
```

Replace…
- NODE_NAME_or_UUID with the host name or UUID of the node.
Additional Resources
- For more information, see the Defining the Root Disk for Nodes section in the RHOSP Director Installation and Usage Guide.
5.2.4. Verifying that Ironic’s disk cleaning is working
To verify that Ironic’s disk cleaning feature is working, toggle the node’s state, then observe whether the node enters a cleaning state.
Prerequisites
- Installing the undercloud.
Procedure
Set the node’s state to manage:

```
openstack baremetal node manage NODE_NAME
```

Example

```
[stack@director ~]$ openstack baremetal node manage osdcompute-0
```

Set the node’s state to provide:

```
openstack baremetal node provide NODE_NAME
```

Example

```
[stack@director ~]$ openstack baremetal node provide osdcompute-0
```

Check the node status:

```
openstack baremetal node list
```
5.2.5. Additional Resources
- For more information, see the RHOSP-d Installation and Usage Guide.
5.3. Configuring a container image source
As a technician, you can containerize the overcloud, but this first requires access to a registry with the required container images. Here you can find information on how to prepare the registry and the overcloud configuration to use container images for Red Hat OpenStack Platform.
There are several methods for configuring the overcloud to use a registry, based on the use case.
5.3.1. Registry methods
Red Hat Hyperconverged Infrastructure for Cloud supports the following registry types. Choose one of the following methods:
- Remote Registry: The overcloud pulls container images directly from `registry.access.redhat.com`. This method is the easiest for generating the initial configuration. However, each overcloud node pulls each image directly from the Red Hat Container Catalog, which can cause network congestion and slower deployment. In addition, all overcloud nodes require internet access to the Red Hat Container Catalog.
- Local Registry: Create a local registry on the undercloud, synchronize the images from `registry.access.redhat.com`, and have the overcloud pull the container images from the undercloud. This method allows you to store a registry internally, which can speed up the deployment and decrease network congestion. However, the undercloud only acts as a basic registry and provides limited life cycle management for container images.
5.3.2. Including additional container images for Red Hat OpenStack Platform services
The Red Hat Hyperconverged Infrastructure for Cloud uses additional services besides the core Red Hat OpenStack Platform services. These additional services require additional container images, and you enable these services with their corresponding environment file. These environment files enable the composable containerized services in the overcloud and the director needs to know these services are enabled to prepare their images.
Prerequisites
- A running undercloud.
Procedure
As the `stack` user on the undercloud node, use the `openstack overcloud container image prepare` command to include the additional services.

Include the following environment file using the `-e` option:

- Ceph Storage Cluster: `/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml`

Include the following `--set` options for Red Hat Ceph Storage:

- `--set ceph_namespace` - Defines the namespace for the Red Hat Ceph Storage container image.
- `--set ceph_image` - Defines the name of the Red Hat Ceph Storage container image. Use the image name `rhceph-3-rhel7`.
- `--set ceph_tag` - Defines the tag to use for the Red Hat Ceph Storage container image. When `--tag-from-label` is specified, the versioned tag is discovered starting from this tag.

Run the image prepare command:

Example

```
[stack@director ~]$ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  --set ceph_namespace=registry.access.redhat.com/rhceph \
  --set ceph_image=rhceph-3-rhel7 \
  --tag-from-label {version}-{release} \
  ...
```

Note: These options are passed in addition to any other options (…) that need to be passed to the `openstack overcloud container image prepare` command.
5.3.3. Using the Red Hat registry as a remote registry source
Red Hat hosts the overcloud container images on registry.access.redhat.com. Pulling the images from a remote registry is the simplest method because the registry is already set up and all you require is the URL and namespace of the image you aim to pull.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud 10 environment.
- Access to the Internet.
Procedure
To pull the images directly from `registry.access.redhat.com` in the overcloud deployment, an environment file is required to specify the image parameters. The following command automatically creates this environment file:

```
(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/templates/overcloud_images.yaml
```

Note: Use the `-e` option to include any environment files for optional services.

This creates an `overcloud_images.yaml` environment file, which contains the image locations, on the undercloud. Include this file with all future upgrade and deployment operations.
Additional Resources
- For more details, see the Including additional container images for Red Hat OpenStack Platform services section in the Red Hat Hyperconverged Infrastructure for Cloud Deployment Guide.
5.3.4. Using the undercloud as a local registry
You can configure a local registry on the undercloud to store overcloud container images. This method involves the following:

- The director pulls each image from `registry.access.redhat.com`.
- The director creates the overcloud.
- During the overcloud creation, the nodes pull the relevant images from the undercloud.
Prerequisites
- A running Red Hat Hyperconverged Infrastructure for Cloud environment.
- Access to the Internet.
Procedure
Create a template to pull the images to the local registry:

```
(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-images-file /home/stack/local_registry_images.yaml
```

Use the `-e` option to include any environment files for optional services.

Note: This version of the `openstack overcloud container image prepare` command targets the registry on `registry.access.redhat.com` to generate an image list. It uses different values than the `openstack overcloud container image prepare` command used in a later step.

This creates a file called `local_registry_images.yaml` with the container image information. Pull the images using the `local_registry_images.yaml` file:

```
(undercloud) [stack@director ~]$ sudo openstack overcloud container image upload \
  --config-file /home/stack/local_registry_images.yaml \
  --verbose
```

Note: The container images consume approximately 10 GB of disk space.

Find the namespace of the local images. The namespace uses the following pattern:

```
<REGISTRY_IP_ADDRESS>:8787/rhosp13
```

Use the IP address of the undercloud, which you previously set with the `local_ip` parameter in the `undercloud.conf` file. Alternatively, you can obtain the full namespace with the following command:

```
(undercloud) [stack@director ~]$ docker images | grep -v redhat.com | grep -o '^.*rhosp13' | sort -u
```

Create a template for using the images in the local registry on the undercloud. For example:

```
(undercloud) [stack@director ~]$ openstack overcloud container image prepare \
  --namespace=192.168.24.1:8787/rhosp13 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/templates/overcloud_images.yaml
```

- Use the `-e` option to include any environment files for optional services.
- If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location: `--set ceph_namespace`, `--set ceph_image`, and `--set ceph_tag`.
Note: This version of the `openstack overcloud container image prepare` command targets the registry on the undercloud. It uses different values than the `openstack overcloud container image prepare` command used in a previous step.

This creates an `overcloud_images.yaml` environment file, which contains the image locations on the undercloud. Include this file with all future upgrade and deployment operations.
Additional Resources
- See the Including additional container images for Red Hat OpenStack Platform services section in the Red Hat Hyperconverged Infrastructure for Cloud Deployment Guide for more information.
Next Steps
- Prepare the overcloud for an upgrade.
5.3.5. Additional Resources
- See Section 4.2 in the Red Hat OpenStack Platform Fast Forward Upgrades Guide for more information.
5.4. Isolating resources and tuning the overcloud using the command-line interface
Resource contention between Red Hat OpenStack Platform (RHOSP) and Red Hat Ceph Storage (RHCS) might cause a degradation of either service. Therefore, isolating system resources is important in the Red Hat Hyperconverged Infrastructure for Cloud solution.
Likewise, tuning the overcloud is equally important for a more predictable performance outcome for a given workload.
To isolate resources and tune the overcloud, you will continue to refine the custom templates created previously.
5.4.1. Prerequisites
- Build the overcloud foundation by defining the overcloud.
5.4.2. Reserving CPU and memory resources for hyperconverged nodes
By default, the Nova Compute service parameters do not take into account the colocation of Ceph OSD services on the same node. Hyperconverged nodes need to be tuned in order to maintain stability and maximize the number of possible instances. Using a plan environment file allows you to set resource constraints for the Nova Compute service on hyperconverged nodes. Plan environment files define workflows, and the Red Hat OpenStack Platform director (RHOSP-d) executes the plan file with the OpenStack Workflow (Mistral) service.
The RHOSP-d also provides a default plan environment file specifically for configuring resource constraints on hyperconverged nodes:
/usr/share/openstack-tripleo-heat-templates/plan-samples/plan-environment-derived-params.yaml
Using the -p parameter invokes a plan environment file during the overcloud deployment.
This plan environment file will direct the OpenStack Workflow to:
- Retrieve hardware introspection data.
- Calculate optimal CPU and memory constraints for Compute on hyper-converged nodes based on that data.
- Autogenerate the necessary parameters to configure those constraints.
In the plan-environment-derived-params.yaml plan environment file, the hci_profile_config option defines several CPU and memory allocation workload profiles. The hci_profile parameter sets which workload profile is enabled.
Here is the default hci_profile:
Default Example

```
hci_profile: default
hci_profile_config:
  default:
    average_guest_memory_size_in_mb: 2048
    average_guest_cpu_utilization_percentage: 50
  many_small_vms:
    average_guest_memory_size_in_mb: 1024
    average_guest_cpu_utilization_percentage: 20
  few_large_vms:
    average_guest_memory_size_in_mb: 4096
    average_guest_cpu_utilization_percentage: 80
  nfv_default:
    average_guest_memory_size_in_mb: 8192
    average_guest_cpu_utilization_percentage: 90
```
The above example assumes that the average guest uses 2 GB of memory and 50% of its CPUs.
You can create a custom workload profile for the environment by adding a new profile to the hci_profile_config section. You can enable this custom workload profile by setting the hci_profile parameter to the profile’s name.
Custom Example

```
hci_profile: my_workload
hci_profile_config:
  default:
    average_guest_memory_size_in_mb: 2048
    average_guest_cpu_utilization_percentage: 50
  many_small_vms:
    average_guest_memory_size_in_mb: 1024
    average_guest_cpu_utilization_percentage: 20
  few_large_vms:
    average_guest_memory_size_in_mb: 4096
    average_guest_cpu_utilization_percentage: 80
  nfv_default:
    average_guest_memory_size_in_mb: 8192
    average_guest_cpu_utilization_percentage: 90
  my_workload:
    average_guest_memory_size_in_mb: 131072
    average_guest_cpu_utilization_percentage: 100
```
The my_workload profile assumes that the average guest will use 128 GB of RAM and 100% of the CPUs allocated to the guest.
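As a rough illustration of the kind of derivation the OpenStack Workflow performs from these profile values, the sketch below computes a reserved host memory value and a CPU allocation ratio for a hypothetical node. The per-OSD and per-guest overhead constants are assumptions for illustration only, not the workflow's exact values:

```python
# Illustrative sketch of a derived-parameters calculation for a hyperconverged
# node. The overhead constants (3 GB and 1 core per OSD, 0.5 GB per guest)
# are assumed values, not the Mistral workflow's actual constants.
def derive_constraints(total_mem_gb, total_vcpus, num_osds,
                       avg_guest_mem_gb, avg_guest_cpu_util):
    gb_per_osd = 3           # assumed RAM set aside per Ceph OSD
    cores_per_osd = 1        # assumed cores set aside per Ceph OSD
    guest_overhead_gb = 0.5  # assumed per-guest hypervisor overhead

    # Memory left for guests after the Ceph OSDs take their share:
    left_over_mem = total_mem_gb - gb_per_osd * num_osds
    num_guests = int(left_over_mem / (avg_guest_mem_gb + guest_overhead_gb))
    # Memory Nova must not hand out to guests, in MB:
    reserved_host_memory_mb = int(1024 * (total_mem_gb - avg_guest_mem_gb * num_guests))
    # vCPUs left for guests, scaled by the expected per-guest utilization:
    guest_vcpus = (total_vcpus - cores_per_osd * num_osds) / (avg_guest_cpu_util / 100.0)
    cpu_allocation_ratio = guest_vcpus / total_vcpus
    return reserved_host_memory_mb, cpu_allocation_ratio

# A 256 GB node with 56 vCPUs and 12 OSDs, using the "default" profile
# (2 GB guests at 50% CPU utilization):
mem_mb, ratio = derive_constraints(256, 56, 12, 2, 50)
print(mem_mb, ratio)
```

This shows why the profile values matter: a larger average guest size reduces the guest count and raises the memory Nova must reserve, while a higher expected CPU utilization lowers the derived allocation ratio.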
Additional Resources
- See the Red Hat OpenStack Platform Hyper-converged Infrastructure Guide for more information.
5.4.3. Reserving CPU resources for Ceph
With hyperconverged deployments there can be contention between the Nova compute and Ceph processes for CPU resources. By default, ceph-ansible limits each OSD to one vCPU by using the `--cpu-quota` option on the `docker run` command. The `ceph_osd_docker_cpu_limit` option overrides this default limit, allowing you to use more vCPUs for each Ceph OSD process, for example:

```
CephAnsibleExtraConfig:
  ceph_osd_docker_cpu_limit: 2
```

Red Hat recommends setting the `ceph_osd_docker_cpu_limit` value to 2 as a starting point, and then adjusting this value based on the hardware being used and the workload being run on this hyperconverged environment. This configuration option can be set in the `~/templates/ceph.yaml` file.
Always test the workload before running it in a production environment.
Additional Resources
- See the Setting the Red Hat Ceph Storage parameters section for more details on the `~/templates/ceph.yaml` file.
- See the Recommended minimum hardware for containerized Ceph clusters section in the Red Hat Ceph Storage Hardware Selection Guide for more information.
- See the Setting Dedicated Resources for Collocated Daemons in the Red Hat Ceph Storage Container Guide for more information.
5.4.4. Reserving memory resources for Ceph
With hyperconverged deployments there can be contention between the Nova compute and Ceph processes for memory resources. Deployments of the Red Hat Hyperconverged Infrastructure for Cloud solution will use ceph-ansible to automatically tune Ceph’s memory settings to reduce memory contention between collocated processes. The BlueStore object store is the recommended backend for hyperconverged deployments because of its better memory-handling features.
The ceph_osd_docker_memory_limit option is automatically set to the maximum memory size of the node as discovered by Ansible, regardless of the Ceph object store backend used, either FileStore or BlueStore.
Red Hat recommends not overriding the ceph_osd_docker_memory_limit option.
The osd_memory_target option is the preferred way to reduce memory growth by the Ceph OSD processes. The osd_memory_target option is automatically set if the is_hci option is set to true, for example:
```
CephAnsibleExtraConfig:
  is_hci: true
```
These configuration options can be set in the ~/templates/ceph.yaml file.
The osd_memory_target option was introduced with the BlueStore object store feature starting with Red Hat Ceph Storage 3.2.
Additional Resources
- See the Setting the Red Hat Ceph Storage parameters section for more details on the `~/templates/ceph.yaml` file.
- See the Recommended minimum hardware for containerized Ceph clusters section in the Red Hat Ceph Storage Hardware Selection Guide for more information.
- See the Setting Dedicated Resources for Collocated Daemons in the Red Hat Ceph Storage Container Guide for more information.
5.4.5. Tuning the backfilling and recovery operations for Ceph
Ceph uses a backfill and recovery process to rebalance the storage cluster whenever an OSD is removed, in order to keep multiple copies of the data according to the placement group policy. These two operations consume system resources, so when a Ceph storage cluster is under load, its performance drops as Ceph diverts resources to backfill and recovery. To maintain acceptable Ceph storage performance when an OSD is removed, reduce the priority of the backfill and recovery operations. The trade-off for reducing the priority is that there are fewer data replicas for a longer period of time, which puts the data at a slightly greater risk.
The three variables to modify are:
- `osd_recovery_max_active` - The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but the requests place an increased load on the cluster.
- `osd_max_backfills` - The maximum number of backfills allowed to or from a single OSD.
- `osd_recovery_op_priority` - The priority set for recovery operations. It is relative to the OSD client op priority.
Since the osd_recovery_max_active and osd_max_backfills parameters are set to the correct values already, there is no need to add them to the ceph.yaml file. If you want to overwrite the default values of 3 and 1 respectively, then add them to the ceph.yaml file.
Additional Resources
- For more information on the OSD configurable parameters, see the Red Hat Ceph Storage Configuration Guide.
5.4.6. Additional Resources
- See Table 5.2 Deployment Parameters in the Red Hat OpenStack Platform 10 Director Installation and Usage Guide for more information on the overcloud parameters.
- See Customizing Virtual Machine Settings for more information.
- See Section 5.6.4, “Running the deploy command” for details on running the `openstack overcloud deploy` command.
- For mapping Ceph OSDs to a disk layout on non-homogeneous nodes, see Mapping the Disk Layout to Non-Homogeneous Ceph Storage Nodes in the Deploying an Overcloud with Containerized Red Hat Ceph guide.
5.5. Defining the overcloud using the command-line interface
As a technician, you can create a customizable set of TripleO Heat templates which defines the overcloud.
5.5.1. Prerequisites
- Verify that all the requirements are met.
- Deploy the Red Hat OpenStack Platform director, also known as the undercloud.
The high-level steps for defining the Red Hat Hyperconverged Infrastructure for Cloud overcloud:
- Creating a Directory for Custom Templates
- Configuring the Overcloud Networks
- Creating the Controller and ComputeHCI Roles
- Configuring Red Hat Ceph Storage for the overcloud
- Configuring the Overcloud Node Profile Layouts
5.5.2. Creating a directory for the custom templates
The installation of the Red Hat OpenStack Platform director (RHOSP-d) creates a set of TripleO Heat templates. These TripleO Heat templates are located in the /usr/share/openstack-tripleo-heat-templates/ directory. Red Hat recommends copying these templates before customizing them.
Prerequisites
- Deploy the undercloud.
Procedure
Do the following step on the command-line interface of the RHOSP-d node.
Create new directories for the custom templates:
[stack@director ~]$ mkdir -p ~/templates/nic-configs
5.5.3. Configuring the overcloud networks
This procedure customizes the network configuration files to isolate the networks and assign them to the Red Hat OpenStack Platform (RHOSP) services.
Prerequisites
- Verify that all the network requirements are met.
Procedure
Do the following steps on the RHOSP director node, as the stack user.
Choose the Compute NIC configuration template applicable to the environment:

- `/usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml`
- `/usr/share/openstack-tripleo-heat-templates/network/config/single-nic-linux-bridge-vlans/compute.yaml`
- `/usr/share/openstack-tripleo-heat-templates/network/config/multiple-nics/compute.yaml`
- `/usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/compute.yaml`

Note: See the `README.md` in each template’s respective directory for details about the NIC configuration.
Create a new directory within the `~/templates/` directory:

```
[stack@director ~]$ mkdir -p ~/templates/nic-configs
```

Copy the chosen template to the `~/templates/nic-configs/` directory and rename it to `compute-hci.yaml`:

Example

```
[stack@director ~]$ cp /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/compute.yaml ~/templates/nic-configs/compute-hci.yaml
```

Add the following definition, if it does not already exist, to the `parameters:` section of the `~/templates/nic-configs/compute-hci.yaml` file:

```
StorageMgmtNetworkVlanID:
  default: 40
  description: Vlan ID for the storage mgmt network traffic.
  type: number
```

Map `StorageMgmtNetworkVlanID` to a specific NIC on each node. For example, if you chose to trunk VLANs to a single NIC (`single-nic-vlans/compute.yaml`), then add the following entry to the `network_config:` section of `~/templates/nic-configs/compute-hci.yaml`:

```
type: vlan
device: em2
mtu: 9000
use_dhcp: false
vlan_id: {get_param: StorageMgmtNetworkVlanID}
addresses:
  - ip_netmask: {get_param: StorageMgmtIpSubnet}
```

Important: Red Hat recommends setting the `mtu` to `9000` when mapping a NIC to `StorageMgmtNetworkVlanID`. This MTU setting provides a measurable improvement to the performance of Red Hat Ceph Storage. For more details, see Configuring Jumbo Frames in the Red Hat OpenStack Platform Advanced Overcloud Customization guide.

Create a new file in the custom templates directory:

```
[stack@director ~]$ touch ~/templates/network.yaml
```

Open and edit the `network.yaml` file.

Add the `resource_registry` section:

```
resource_registry:
```

Add the following two lines under the `resource_registry:` section:

```
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller-nics.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute-nics.yaml
```

These two lines point the RHOSP services to the network configurations of the Controller/Monitor and Compute/OSD nodes respectively.

Add the `parameter_defaults` section:

```
parameter_defaults:
```

Add the following default parameters for the Neutron bridge mappings for the tenant network:

```
  NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-tenant'
  NeutronNetworkType: 'vxlan'
  NeutronTunnelType: 'vxlan'
  NeutronExternalNetworkBridge: "''"
```

This defines the bridge mappings assigned to the logical networks and enables the tenants to use `vxlan`.

The two TripleO Heat templates referenced in the `resource_registry` section require parameters to define each network. Under the `parameter_defaults` section, add the following lines:

```
  # Internal API used for private OpenStack Traffic
  InternalApiNetCidr: IP_ADDR_CIDR
  InternalApiAllocationPools: [{'start': 'IP_ADDR_START', 'end': 'IP_ADDR_END'}]
  InternalApiNetworkVlanID: VLAN_ID

  # Tenant Network Traffic - will be used for VXLAN over VLAN
  TenantNetCidr: IP_ADDR_CIDR
  TenantAllocationPools: [{'start': 'IP_ADDR_START', 'end': 'IP_ADDR_END'}]
  TenantNetworkVlanID: VLAN_ID

  # Public Storage Access - Nova/Glance <--> Ceph
  StorageNetCidr: IP_ADDR_CIDR
  StorageAllocationPools: [{'start': 'IP_ADDR_START', 'end': 'IP_ADDR_END'}]
  StorageNetworkVlanID: VLAN_ID

  # Private Storage Access - Ceph cluster/replication
  StorageMgmtNetCidr: IP_ADDR_CIDR
  StorageMgmtAllocationPools: [{'start': 'IP_ADDR_START', 'end': 'IP_ADDR_END'}]
  StorageMgmtNetworkVlanID: VLAN_ID

  # External Networking Access - Public API Access
  ExternalNetCidr: IP_ADDR_CIDR
  # Leave room for floating IPs in the External allocation pool (if required)
  ExternalAllocationPools: [{'start': 'IP_ADDR_START', 'end': 'IP_ADDR_END'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: IP_ADDRESS

  # Gateway router for the provisioning network (or undercloud IP)
  ControlPlaneDefaultRoute: IP_ADDRESS
  # The IP address of the EC2 metadata server, this is typically the IP of the undercloud
  EC2MetadataIp: IP_ADDRESS
  # Define the DNS servers (maximum 2) for the Overcloud nodes
  DnsServers: ["DNS_SERVER_IP","DNS_SERVER_IP"]
```

Replace…
- IP_ADDR_CIDR with the appropriate IP address and net mask (CIDR).
- IP_ADDR_START with the appropriate starting IP address.
- IP_ADDR_END with the appropriate ending IP address.
- IP_ADDRESS with the appropriate IP address.
- VLAN_ID with the appropriate VLAN identification number for the corresponding network.
- DNS_SERVER_IP with the appropriate IP addresses for defining two DNS servers, separated by a comma (`,`).

See the appendix for an example `network.yaml` file.
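Because a typo in these network values only surfaces at deployment time, it can be worth sanity-checking them beforehand. The following is a minimal sketch, using hypothetical example values that are not taken from this guide, which verifies that each allocation pool falls inside its network's CIDR:

```python
import ipaddress

def pool_in_cidr(cidr, start, end):
    # A pool is valid only if both endpoints fall inside the network's CIDR
    # and the start address does not come after the end address.
    net = ipaddress.ip_network(cidr)
    start_ip = ipaddress.ip_address(start)
    end_ip = ipaddress.ip_address(end)
    return start_ip in net and end_ip in net and start_ip <= end_ip

# Hypothetical InternalApi values: pool inside the CIDR.
print(pool_in_cidr("172.16.1.0/24", "172.16.1.10", "172.16.1.100"))  # True
# A mistyped pool that falls outside the CIDR.
print(pool_in_cidr("172.16.1.0/24", "172.16.2.10", "172.16.2.100"))  # False
```

Running a check like this against each `*NetCidr`/`*AllocationPools` pair before deploying catches transposed octets early.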
Additional Resources
- For more information on Isolating Networks, see the Red Hat OpenStack Platform Advance Overcloud Customization Guide.
5.5.4. Creating the Controller and ComputeHCI roles
The overcloud has five default roles: Controller, Compute, BlockStorage, ObjectStorage, and CephStorage. These roles contain a list of services. You can mix these services to create a custom deployable role.
Prerequisites
- Deploy the Red Hat OpenStack Platform director, also known as the undercloud.
- Create a Directory for Custom Templates.
Procedure
Do the following step on the Red Hat OpenStack Platform director node, as the stack user.
Generate a custom roles_data_custom.yaml file that includes the Controller and the ComputeHCI roles:

[stack@director ~]$ openstack overcloud roles generate -o ~/custom-templates/roles_data_custom.yaml Controller ComputeHCI
Additional Resources
- See the Deploying the overcloud using the command line in the Red Hat Hyperconverged Infrastructure for Cloud Deployment Guide for more information on using these custom roles.
5.5.5. Setting the Red Hat Ceph Storage parameters
This procedure defines what Red Hat Ceph Storage (RHCS) OSD parameters to use.
Prerequisites
- Deploy the Red Hat OpenStack Platform director, also known as the undercloud.
- Create a Directory for Custom Templates.
Procedure
Do the following steps on the Red Hat OpenStack Platform director node, as the stack user.
Open the ~/templates/ceph.yaml file for editing.

To use the BlueStore object store backend, update the following lines under the CephAnsibleExtraConfig section:

Example

CephAnsibleExtraConfig:
  osd_scenario: lvm
  osd_objectstore: bluestore

Update the following options under the parameter_defaults section:

Example

parameter_defaults:
  CephPoolDefaultSize: 3
  CephPoolDefaultPgNum: NUM
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/nvme0n1
      - /dev/sde
      - /dev/sdf
      - /dev/sdg
      - /dev/nvme1n1
  CephAnsibleExtraConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    ceph_osd_docker_cpu_limit: 2
    is_hci: true
  CephConfigOverrides:
    osd_recovery_op_priority: 3
    osd_recovery_max_active: 3
    osd_max_backfills: 1

- Replace…
NUM with the calculated values from the Ceph PG calculator.
For this example, the following Compute/OSD node disk configuration is being used:
- OSD: 12 x 1TB SAS disks presented as /dev/[sda, sdb, …, sdg] block devices
- OSD WAL and DB devices: 2 x 400GB NVMe SSD disks presented as /dev/[nvme0n1, nvme1n1] block devices
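The value that replaces NUM comes from the Ceph PG calculator, which applies the commonly cited rule of thumb: total PGs ≈ (OSD count × target PGs per OSD) / replica count, rounded up to the nearest power of two. A hedged sketch of that arithmetic (the function name is ours, not part of the calculator):

```python
def pg_count(num_osds, replicas=3, target_pgs_per_osd=100):
    """Rule-of-thumb PG count: (OSDs * target) / replicas,
    rounded up to the nearest power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# Three Compute/OSD nodes with 12 OSDs each, as in the example layout above.
print(pg_count(36))
```

Treat the result as a starting point; the official calculator also divides the total across pools, so always confirm the final pg_num values there.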
Additional Resources
- For more details on tuning Ceph OSD parameters, see the Red Hat Ceph Storage Storage Strategies Guide.
- For more details on using the BlueStore object store, see the Red Hat Ceph Storage Administration Guide.
- For examples of the LVM scenario, see the LVM simple and LVM advanced sections in the Red Hat Ceph Storage Installation Guide.
5.5.6. Configuring the overcloud nodes layout
The overcloud node layout defines how many nodes of each type to deploy, which pool of IP addresses to assign, and other parameters.
Prerequisites
- Deploy the Red Hat OpenStack Platform director, also known as the undercloud.
- Create a Directory for Custom Templates.
Procedure
Do the following steps on the Red Hat OpenStack Platform director node, as the stack user.
Create the layout.yaml file in the custom templates directory:

[stack@director ~]$ touch ~/templates/layout.yaml

Open the layout.yaml file for editing.

Add the resource registry section by adding the following line:

resource_registry:

Add the following lines under the resource_registry section to configure the Controller and ComputeHCI roles to use a pool of IP addresses:

OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml

Add a new section for the parameter defaults called parameter_defaults and include the following parameters underneath this section:

parameter_defaults:
  NtpServer: NTP_IP_ADDR
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHCIHostnameFormat: 'compute-hci-%index%'
  ControllerCount: 3
  ComputeHCICount: 3
  OvercloudComputeFlavor: compute
  OvercloudComputeHCIFlavor: osd-compute

- Replace…

- NTP_IP_ADDR with the IP address of the NTP source. Time synchronization is very important!

Example

parameter_defaults:
  NtpServer: 10.5.26.10
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHCIHostnameFormat: 'compute-hci-%index%'
  ControllerCount: 3
  ComputeHCICount: 3
  OvercloudComputeFlavor: compute
  OvercloudComputeHCIFlavor: osd-compute

The value of 3 for the ControllerCount and ComputeHCICount parameters means three Controller/Monitor nodes and three Compute/OSD nodes will be deployed.
Under the parameter_defaults section, add two scheduler hints, one called ControllerSchedulerHints and the other called ComputeHCISchedulerHints. Under each scheduler hint, add the node name format for predictable node placement, as follows:

ControllerSchedulerHints:
  'capabilities:node': 'control-%index%'
ComputeHCISchedulerHints:
  'capabilities:node': 'osd-compute-%index%'

Under the parameter_defaults section, add the required IP addresses for each node profile, for example:

Example

ControllerIPs:
  internal_api:
    - 192.168.2.200
    - 192.168.2.201
    - 192.168.2.202
  tenant:
    - 192.168.3.200
    - 192.168.3.201
    - 192.168.3.202
  storage:
    - 172.16.1.200
    - 172.16.1.201
    - 172.16.1.202
  storage_mgmt:
    - 172.16.2.200
    - 172.16.2.201
    - 172.16.2.202
ComputeHCIIPs:
  internal_api:
    - 192.168.2.203
    - 192.168.2.204
    - 192.168.2.205
  tenant:
    - 192.168.3.203
    - 192.168.3.204
    - 192.168.3.205
  storage:
    - 172.16.1.203
    - 172.16.1.204
    - 172.16.1.205
  storage_mgmt:
    - 172.16.2.203
    - 172.16.2.204
    - 172.16.2.205

From this example, node control-0 would have the following IP addresses: 192.168.2.200, 192.168.3.200, 172.16.1.200, and 172.16.2.200.
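The per-role IP lists pair with %index% in order: node 0 takes the first address from every network list, node 1 the second, and so on. A small sketch of that mapping, reusing the example Controller addresses above (the helper function is illustrative only):

```python
# Example Controller addresses from the layout.yaml sample above.
controller_ips = {
    'internal_api': ['192.168.2.200', '192.168.2.201', '192.168.2.202'],
    'tenant':       ['192.168.3.200', '192.168.3.201', '192.168.3.202'],
    'storage':      ['172.16.1.200', '172.16.1.201', '172.16.1.202'],
    'storage_mgmt': ['172.16.2.200', '172.16.2.201', '172.16.2.202'],
}

def ips_for_index(role_ips, index):
    """Collect the address each network assigns to the node at <index>."""
    return {net: addrs[index] for net, addrs in role_ips.items()}

print(ips_for_index(controller_ips, 0))
```

This makes the predictable-placement contract explicit: the scheduler hint pins control-0 to a specific machine, and the IP lists pin that same index to specific addresses on every isolated network.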
5.5.7. Additional Resources
- See the Red Hat OpenStack Platform Advanced Overcloud Customization Guide for more information.
5.6. Deploying the overcloud using the command-line interface
As a technician, you can deploy the overcloud nodes so the Nova Compute and the Ceph OSD services are colocated on the same node.
5.6.1. Prerequisites
5.6.2. Verifying the available nodes for Ironic
Before deploying the overcloud nodes, verify that the nodes are powered off and available.
The nodes cannot be in maintenance mode.
Prerequisites
- Having the stack user available on the Red Hat OpenStack Platform director node.
Procedure
Run the following command to verify that all nodes are powered off and available:
[stack@director ~]$ openstack baremetal node list
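The same checks can be scripted by parsing the JSON output of the list command (openstack baremetal node list -f json) and confirming every node is powered off, available, and not in maintenance mode. A minimal sketch; the sample data below is illustrative, though the column names match the standard OpenStackClient output:

```python
import json

def nodes_ready(listing_json):
    """True only if every node is powered off, available,
    and not in maintenance mode."""
    nodes = json.loads(listing_json)
    return all(
        n['Power State'] == 'power off'
        and n['Provisioning State'] == 'available'
        and not n['Maintenance']
        for n in nodes
    )

# Illustrative output of: openstack baremetal node list -f json
sample = '''[
  {"Name": "control-0", "Power State": "power off",
   "Provisioning State": "available", "Maintenance": false},
  {"Name": "osd-compute-0", "Power State": "power off",
   "Provisioning State": "available", "Maintenance": false}
]'''

print(nodes_ready(sample))
```

A check like this is easy to wire into a pre-deployment script so the deploy command is only run when all registered nodes are in the expected state.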
5.6.3. Configuring the controller for Pacemaker fencing
Isolating a node in a cluster so that data corruption cannot happen is called fencing. Fencing protects the integrity of the cluster and its resources.
Prerequisites
- An IPMI user and password.
- Having the stack user available on the Red Hat OpenStack Platform director node.
Procedure
Generate the fencing Heat environment file:
[stack@director ~]$ openstack overcloud generate fencing --ipmi-lanplus instackenv.json --output fencing.yaml

Include the fencing.yaml file with the openstack overcloud deploy command.
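The generate fencing command reads the IPMI credentials from instackenv.json, so it is worth confirming that every node entry carries a pm_user and pm_password before running it. A small hedged sketch (the helper name and sample fragment are illustrative, and the pm_user/pm_password keys follow the instackenv.json node format):

```python
import json

def missing_ipmi_credentials(instackenv):
    """Return the names of node entries lacking IPMI credentials."""
    data = json.loads(instackenv)
    return [
        node.get('name', '<unnamed>')
        for node in data.get('nodes', [])
        if not node.get('pm_user') or not node.get('pm_password')
    ]

# Illustrative instackenv.json fragment -- the second node lacks a password.
sample = '''{"nodes": [
  {"name": "control-0", "pm_user": "admin", "pm_password": "secret"},
  {"name": "osd-compute-0", "pm_user": "admin"}
]}'''

print(missing_ipmi_credentials(sample))
```

An empty list means every node has credentials and fencing generation should have the data it needs.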
Additional Resources
- For more information, see the Deploying Red Hat Enterprise Linux OpenStack Platform 7 with Red Hat OpenStack Platform director.
5.6.4. Running the deploy command
After all the customization and tuning, it is time to deploy the overcloud.
The overcloud deployment can take a long time to finish, depending on the size of the deployment.
Prerequisites
- Preparing the nodes
- Configure a container image source
- Define the overcloud
- Isolating Resources and tuning
- Having the stack user available on the Red Hat OpenStack Platform director node.
Procedure
Run the following command:
[stack@director ~]$ time openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  --stack overcloud \
  -p /usr/share/openstack-tripleo-heat-templates/plan-samples/plan-environment-derived-params.yaml \
  -r /home/stack/templates/roles_data_custom.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e ~/templates/network.yaml \
  -e ~/templates/ceph.yaml \
  -e ~/templates/layout.yaml \
  -e /home/stack/fencing.yaml

- Command Details
- The time command is used to tell you how long the deployment takes.
- The openstack overcloud deploy command does the actual deployment.
- Replace $NTP_IP_ADDR with the IP address of the NTP source. Time synchronization is very important!
- The --templates argument uses the default directory (/usr/share/openstack-tripleo-heat-templates/) containing the TripleO Heat templates to deploy.
- The -p argument points to the plan environment file for HCI deployments. See the Reserving CPU and memory resources for hyperconverged nodes section for more details.
- The -r argument points to the roles file and overrides the default roles_data.yaml file.
- The -e argument points to an explicit template file to use during the deployment.
- The puppet-pacemaker.yaml file configures the controller node services in a highly available pacemaker cluster.
- The storage-environment.yaml file configures Ceph as a storage backend, whose parameter_defaults are passed by the custom template, ceph.yaml.
- The network-isolation.yaml file configures network isolation for different services, whose parameters are passed by the custom template, network.yaml. This file will be created automatically when the deployment starts.
- The network.yaml file is explained in the Configuring the overcloud networks section.
- The ceph.yaml file is explained in the Setting the Red Hat Ceph Storage parameters section.
- The compute.yaml file is explained in the Changing Nova reserved memory and CPU allocations section.
- The layout.yaml file is explained in the Configuring the overcloud node profile layouts section.
- The fencing.yaml file is explained in the Configuring the controller for Pacemaker fencing section.

Important
The order of the arguments matters. The custom template files will override the default template files.

Note
Optionally, add the --rhel-reg, --reg-method, --reg-org options, if you want to use the Red Hat OpenStack Platform director (RHOSP-d) node as a software repository for package installations.
- Wait for the overcloud deployment to finish.
Additional Resources
- See Table 5.2 Deployment Parameters in the Red Hat OpenStack Platform 13 Director Installation and Usage Guide for more information on the overcloud parameters.
5.6.5. Verifying a successful overcloud deployment
It is important to verify that the overcloud deployment was successful.
Prerequisites
- Having the stack user available on the Red Hat OpenStack Platform director node.
Procedure
Watch the deployment process and look for failures:
[stack@director ~]$ heat resource-list -n5 overcloud | egrep -i 'fail|progress'

Example output from a successful overcloud deployment:
2016-12-20 23:25:04Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE
Started Mistral Workflow. Execution ID: aeca4d71-56b4-4c72-a980-022623487c05
/home/stack/.ssh/known_hosts updated. Original contents retained as /home/stack/.ssh/known_hosts.old
Overcloud Endpoint: http://10.19.139.46:5000/v2.0
Overcloud Deployed

After the deployment finishes, view the IP addresses for the overcloud nodes:
[stack@director ~]$ openstack server list