Chapter 3. Integrating an Existing Ceph Storage Cluster with an Overcloud
This chapter describes how to create an Overcloud and integrate it with an existing Ceph Storage Cluster. For instructions on how to create both the Overcloud and the Ceph Storage Cluster, see Chapter 2, Creating an Overcloud with Ceph Storage Nodes instead.
The scenario described in this chapter consists of six nodes in the Overcloud:
- Three Controller nodes with high availability.
- Three Compute nodes.
The director will integrate a separate Ceph Storage Cluster with its own nodes into the Overcloud. You manage this cluster independently from the Overcloud. For example, you scale the Ceph Storage cluster using the Ceph management tools and not through the OpenStack Platform director.
All OpenStack machines are bare metal systems using IPMI for power management. These nodes do not require an operating system because the director copies a Red Hat Enterprise Linux 7 image to each node.
The director communicates to the Controller and Compute nodes through the Provisioning network during the introspection and provisioning processes. All nodes connect to this network through the native VLAN. For this example, we use 192.0.2.0/24 as the Provisioning subnet with the following IP address assignments:
Node Name | IP Address | MAC Address | IPMI IP Address
---|---|---|---
Director | 192.0.2.1 | aa:aa:aa:aa:aa:aa |
Controller 1 | DHCP defined | b1:b1:b1:b1:b1:b1 | 192.0.2.205
Controller 2 | DHCP defined | b2:b2:b2:b2:b2:b2 | 192.0.2.206
Controller 3 | DHCP defined | b3:b3:b3:b3:b3:b3 | 192.0.2.207
Compute 1 | DHCP defined | c1:c1:c1:c1:c1:c1 | 192.0.2.208
Compute 2 | DHCP defined | c2:c2:c2:c2:c2:c2 | 192.0.2.209
Compute 3 | DHCP defined | c3:c3:c3:c3:c3:c3 | 192.0.2.210
3.1. Configuring the Existing Ceph Storage Cluster
Create the following pools in your Ceph cluster relevant to your environment:
- volumes: Storage for OpenStack Block Storage (cinder)
- images: Storage for OpenStack Image Storage (glance)
- vms: Storage for instances
- backups: Storage for OpenStack Block Storage Backup (cinder-backup)
- metrics: Storage for OpenStack Telemetry Metrics (gnocchi)

Use the following commands as a guide:

[root@ceph ~]# ceph osd pool create volumes PGNUM
[root@ceph ~]# ceph osd pool create images PGNUM
[root@ceph ~]# ceph osd pool create vms PGNUM
[root@ceph ~]# ceph osd pool create backups PGNUM
[root@ceph ~]# ceph osd pool create metrics PGNUM
Replace PGNUM with the number of placement groups. We recommend approximately 100 placement groups per OSD: multiply the total number of OSDs by 100 and divide by the number of replicas (the osd pool default size setting). You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.
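For example, for a hypothetical cluster with 30 OSDs and the default replica count of 3, the calculation gives 30 × 100 / 3 = 1000, which you would typically round up to the nearest power of two:

[root@ceph ~]# ceph osd pool create volumes 1024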
Create a client.openstack user in your Ceph cluster with the following capabilities:

- cap_mon: allow r
- cap_osd: allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=backups, allow rwx pool=metrics

Use the following command as a guide:

[root@ceph ~]# ceph auth add client.openstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=backups, allow rwx pool=metrics'
Next, note the Ceph client key created for the client.openstack user:

[root@ceph ~]# ceph auth list
...
client.openstack
	key: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=backups, allow rwx pool=metrics
...
The key value here (AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==) is your Ceph client key.

Finally, note the file system ID of your Ceph Storage cluster. This value is specified with the fsid setting in the configuration file of your cluster (under the [global] section):

[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
...
Note: For more information about the Ceph Storage cluster configuration file, see Configuration Reference (from the Red Hat Ceph Storage Configuration Guide).
The Ceph client key and file system ID will both be used later in Section 3.6, “Enabling Integration with the Existing Ceph Storage Cluster”.
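If you prefer to capture these two values directly rather than reading them from the auth listing and the configuration file, the ceph auth get-key and ceph fsid commands print just the client key and the file system ID, respectively (run them on a host with Ceph admin credentials):

[root@ceph ~]# ceph auth get-key client.openstack
[root@ceph ~]# ceph fsid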
3.2. Initializing the Stack User
Log into the director host as the stack user and run the following command to initialize your director configuration:
$ source ~/stackrc
This sets up environment variables containing authentication details to access the director’s CLI tools.
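To confirm that the credentials are loaded, you can check for the OpenStack environment variables (the exact set of variables depends on your director version):

$ env | grep OS_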
3.3. Registering Nodes
A node definition template (instackenv.json) is a JSON format file and contains the hardware and power management details for registering nodes. For example:
{ "nodes":[ { "mac":[ "bb:bb:bb:bb:bb:bb" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.205" }, { "mac":[ "cc:cc:cc:cc:cc:cc" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.206" }, { "mac":[ "dd:dd:dd:dd:dd:dd" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.207" }, { "mac":[ "ee:ee:ee:ee:ee:ee" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.208" } { "mac":[ "ff:ff:ff:ff:ff:ff" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.209" } { "mac":[ "gg:gg:gg:gg:gg:gg" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.210" } ] }
After creating the template, save the file to the stack user’s home directory (/home/stack/instackenv.json), then import it into the director. Use the following command to accomplish this:
$ openstack baremetal import --json ~/instackenv.json
This imports the template and registers each node from the template into the director.
Assign the kernel and ramdisk images to all nodes:
$ openstack baremetal configure boot
The nodes are now registered and configured in the director.
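To confirm that all six nodes registered successfully, you can list them with the bare metal client used elsewhere in this chapter and check that each node appears (the exact provisioning and power states shown can vary with the director version):

$ ironic node-list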
3.4. Inspecting the Hardware of Nodes
After registering the nodes, inspect the hardware attributes of each node by running the following command:
$ openstack baremetal introspection bulk start
Make sure this process runs to completion; it usually takes 15 minutes for bare metal nodes.
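If you want to check progress from another terminal, the ironic-inspector client can report the status of the bulk introspection (assuming your client version provides this subcommand):

$ openstack baremetal introspection bulk status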
3.5. Manually Tagging the Nodes
After registering and inspecting the hardware of each node, tag them into specific profiles. These profile tags match your nodes to flavors, and in turn the flavors are assigned to a deployment role.
Retrieve a list of your nodes to identify their UUIDs:
$ ironic node-list
To manually tag a node to a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag three nodes to use a controller profile and three nodes to use a compute profile, use the following commands:
$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'
The addition of the profile option tags the nodes into their respective profiles.
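To verify a tag, display the node details and check that properties/capabilities now contains the profile you assigned, for example:

$ ironic node-show 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0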
As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.
3.6. Enabling Integration with the Existing Ceph Storage Cluster
Copy /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml to a directory in the stack user’s home directory:

[stack@director ~]$ mkdir templates
[stack@director ~]$ cp /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml ~/templates/.
Edit the file and set the following parameters:
- Set the CephExternal resource definition to an absolute path:

  OS::TripleO::Services::CephExternal: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-external.yaml
- Set the following three parameters using your Ceph Storage environment details:

  - CephClientKey: the Ceph client key of your Ceph Storage cluster. This is the value of key you retrieved earlier in Section 3.1, “Configuring the Existing Ceph Storage Cluster” (for example, AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==).
  - CephExternalMonHost: a comma-delimited list of the IPs of all MON hosts in your Ceph Storage cluster.
  - CephClusterFSID: the file system ID of your Ceph Storage cluster. This is the value of fsid in your Ceph Storage cluster configuration file, which you retrieved earlier in Section 3.1, “Configuring the Existing Ceph Storage Cluster” (for example, 4b5c8c0a-ff60-454b-a1b4-9747aa737d19).
- If necessary, also set the names of the OpenStack pools and the client user using the following parameters and values (a combined example of the edited file is shown after this list):

  - CephClientUserName: openstack
  - NovaRbdPoolName: vms
  - CinderRbdPoolName: volumes
  - GlanceRbdPoolName: images
  - CinderBackupRbdPoolName: backups
  - GnocchiRbdPoolName: metrics
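As a rough sketch, the edited ~/templates/puppet-ceph-external.yaml might contain entries similar to the following. The exact layout of the file in your version may differ, and the MON host IPs shown here are placeholders for your own environment; the key, fsid, and pool names are the example values used in this chapter:

resource_registry:
  OS::TripleO::Services::CephExternal: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-external.yaml

parameter_defaults:
  CephClientKey: "AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=="
  CephExternalMonHost: "172.16.1.7,172.16.1.8,172.16.1.9"
  CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
  CephClientUserName: openstack
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  GlanceRbdPoolName: images
  CinderBackupRbdPoolName: backups
  GnocchiRbdPoolName: metrics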
3.7. Backwards Compatibility with Older Versions of Red Hat Ceph Storage
If you are integrating Red Hat OpenStack Platform with an external Ceph Storage Cluster from an earlier version (that is, Red Hat Ceph Storage 1.3), you need to enable backwards compatibility. To do so, first create an environment file in /home/stack/templates/ containing the following:
parameter_defaults:
  ExtraConfig:
    ceph::conf::args:
      client/rbd_default_features:
        value: "1"
Include this file in your overcloud deployment, described in Section 3.8, “Creating the Overcloud”.
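For example, assuming you saved the file under a hypothetical name such as /home/stack/templates/ceph-backwards-compat.yaml, you would include it with an additional -e option when you run the deployment command in the next section:

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/puppet-ceph-external.yaml \
  -e /home/stack/templates/ceph-backwards-compat.yaml \
  --control-scale 3 --compute-scale 3 --ceph-storage-scale 0 \
  --control-flavor control --compute-flavor compute \
  --neutron-network-type vxlan --ntp-server pool.ntp.org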
3.8. Creating the Overcloud
The creation of the Overcloud requires additional arguments for the openstack overcloud deploy command. For example:
$ openstack overcloud deploy --templates \
  -e /home/stack/templates/puppet-ceph-external.yaml \
  --control-scale 3 --compute-scale 3 --ceph-storage-scale 0 \
  --control-flavor control --compute-flavor compute \
  --neutron-network-type vxlan --ntp-server pool.ntp.org
The above command uses the following options:
- --templates - Creates the Overcloud from the default Heat template collection.
- -e /home/stack/templates/puppet-ceph-external.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is the storage environment file containing the configuration for the existing Ceph Storage Cluster.
- --control-scale 3 - Scales the Controller nodes to three.
- --compute-scale 3 - Scales the Compute nodes to three.
- --ceph-storage-scale 0 - Scales the Ceph Storage nodes to zero. This ensures the director does not create any Ceph Storage nodes.
- --control-flavor control - Uses a specific flavor for the Controller nodes.
- --compute-flavor compute - Uses a specific flavor for the Compute nodes.
- --neutron-network-type vxlan - Sets the neutron networking type.
- --ntp-server pool.ntp.org - Sets the NTP server.
For a full list of options, run:
$ openstack help overcloud deploy
For more information, see Setting Overcloud Parameters in the Director Installation and Usage guide.
The Overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the Overcloud creation, open a separate terminal as the stack user and run:
$ source ~/stackrc
$ heat stack-list --show-nested
This configures the Overcloud to use your external Ceph Storage cluster. Note that you manage this cluster independently from the Overcloud. For example, you scale the Ceph Storage cluster using the Ceph management tools and not through the OpenStack Platform director.
3.9. Accessing the Overcloud
The director generates a script to configure and help authenticate interactions with your Overcloud from the director host. The director saves this file (overcloudrc) in your stack user’s home directory. Run the following command to use this file:
$ source ~/overcloudrc
This loads the necessary environment variables to interact with your Overcloud from the director host’s CLI. To return to interacting with the director’s host, run the following command:
$ source ~/stackrc
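For example, after sourcing overcloudrc you can confirm that the Overcloud responds by listing its instances (an empty list is expected on a fresh deployment):

$ source ~/overcloudrc
$ openstack server list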