Chapter 2. Preparing overcloud nodes
The overcloud deployment used to demonstrate integration with a Red Hat Ceph Storage cluster consists of highly available Controller nodes and Compute nodes that host workloads. The Red Hat Ceph Storage cluster has its own nodes that you manage independently from the overcloud by using the Ceph management tools, not through director. For more information, see the product documentation for Red Hat Ceph Storage.
2.1. Configuring the existing Red Hat Ceph Storage cluster
To configure your Red Hat Ceph Storage cluster, you create object storage daemon (OSD) pools, define capabilities, and create keys and IDs directly on the Ceph Storage cluster. You can execute commands from any machine that can reach the Ceph Storage cluster and has the Ceph command line client installed.
Procedure
- Log in to the external Ceph admin node.
- Open an interactive shell to access Ceph commands:
[user@ceph ~]$ sudo cephadm shell
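The following check is not part of the documented procedure, but it is a quick, read-only way to confirm that the shell can reach the cluster before you create pools:
$ ceph -s        # prints the cluster status, including health, monitors, and OSDs
$ ceph health    # prints only the health summary, for example HEALTH_OK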
- Create the following RADOS Block Device (RBD) pools in your Ceph Storage cluster, as relevant to your environment (you can verify the result with the check shown after this list):
Storage for OpenStack Block Storage (cinder):
$ ceph osd pool create volumes <pgnum>
Storage for OpenStack Image Storage (glance):
$ ceph osd pool create images <pgnum>
Storage for instances:
$ ceph osd pool create vms <pgnum>
Storage for OpenStack Block Storage Backup (cinder-backup):
$ ceph osd pool create backups <pgnum>
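As an optional check that is not part of the documented procedure, you can list the pools to confirm that they were created with the expected placement group counts:
$ ceph osd pool ls detail    # shows each pool with its pg_num and other settings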
- If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 5 (Ceph package 16) or later, you do not need to create data and metadata pools for CephFS. Instead, you can create a CephFS file system volume. For more information, see Management of MDS service using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide.
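The following is a minimal sketch of creating a file system volume with the Ceph Orchestrator. The volume name cephfs is an assumed example and must match the name that the rest of your deployment expects:
# "cephfs" is an example volume name; the command creates the data and metadata pools
# and, when an orchestrator backend is available, schedules the MDS daemons
$ ceph fs volume create cephfs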
- Create a client.openstack user in your Ceph Storage cluster with the following capabilities:
  - cap_mgr: allow *
  - cap_mon: profile rbd
  - cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups
$ ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups'
- Note the Ceph client key created for the client.openstack user:
$ ceph auth list
...
[client.openstack]
    key = <AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==>
    caps mgr = "allow *"
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups"
...
The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
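If you prefer to retrieve only the key rather than scanning the full ceph auth list output, the following equivalent check prints just the key for the client.openstack user:
$ ceph auth get-key client.openstack    # prints only the key string for the user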
- If your overcloud deploys the Shared File Systems service with CephFS, create the client.manila user in your Ceph Storage cluster. The capabilities required for the client.manila user depend on whether your deployment exposes CephFS shares through the native CephFS protocol or through the NFS protocol.
If you expose CephFS shares through the native CephFS protocol, the following capabilities are required:
  - cap_mgr: allow rw
  - cap_mon: allow r
$ ceph auth add client.manila mgr 'allow rw' mon 'allow r'
If you expose CephFS shares through the NFS protocol, the following capabilities are required:
  - cap_mgr: allow rw
  - cap_mon: allow r
  - cap_osd: allow rw pool=manila_data
The specified pool name must be the value set for the ManilaCephFSDataPoolName parameter, which defaults to manila_data.
$ ceph auth add client.manila mgr 'allow rw' mon 'allow r' osd 'allow rw pool=manila_data'
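As an optional check that is not part of the documented procedure, you can confirm the capabilities that were granted to the client.manila user:
$ ceph auth get client.manila    # prints the key and the mgr, mon, and osd caps for the user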
- Note the manila client name and the key value to use in overcloud deployment templates:
$ ceph auth get-key client.manila
<AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>
- Note the file system ID of your Ceph Storage cluster. This value is specified in the fsid field, under the [global] section of the configuration file for your cluster:
[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
...
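If you do not have direct access to the cluster configuration file, you can retrieve the same value from the running cluster:
$ ceph fsid    # prints the cluster file system ID, for example 4b5c8c0a-ff60-454b-a1b4-9747aa737d19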
- Use the Ceph client key and file system ID, and the Shared File Systems service client IDs and key, when you create the custom environment file.
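You can collect these values in one place with the following sketch, which uses only standard Ceph commands; the shell variable names are illustrative and are not part of any template:
$ CEPH_FSID=$(ceph fsid)                                  # cluster file system ID
$ CEPH_CLIENT_KEY=$(ceph auth get-key client.openstack)   # Ceph client key
$ MANILA_CLIENT_KEY=$(ceph auth get-key client.manila)    # Shared File Systems service client key
$ echo "$CEPH_FSID $CEPH_CLIENT_KEY $MANILA_CLIENT_KEY"   # record these for the custom environment file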