Chapter 15. Scaling the Ceph Storage cluster
You can scale the size of your Ceph Storage cluster by adding or removing storage nodes.
15.1. Scaling up the Ceph Storage cluster
As capacity and performance requirements change, you can scale up your Ceph Storage cluster to meet increased demands. Before doing so, ensure that you have enough nodes for the updated deployment. Then you can register and tag the new nodes in your Red Hat OpenStack Platform (RHOSP) environment.
This procedure results in the following actions:
- The storage networks and firewall rules are configured on the new CephStorage nodes.
- The ceph-admin user is created on the new CephStorage nodes.
- The ceph-admin user public SSH key is distributed to the new CephStorage nodes so that cephadm can use SSH to add extra nodes.
- If a new CephMon or CephMgr node is added, the ceph-admin private SSH key is also distributed to that node.
- The updated Ceph specification is applied and cephadm schedules the new nodes to join the Ceph cluster.
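
For context, the Ceph specification that is applied at the end of this procedure contains a cephadm host entry for each new node. The following is a minimal sketch of such an entry; the address and label values are illustrative and are not generated by this procedure:

  ---
  service_type: host
  addr: 192.168.24.13
  hostname: ceph-3
  labels:
  - osd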
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:

  $ source ~/stackrc
- Modify the ~/overcloud-baremetal-deploy.yaml file to add the CephStorage nodes to the deployment.

  The following example file represents an original deployment with three CephStorage nodes:
  - name: CephStorage
    count: 3
    instances:
    - hostname: ceph-0
      name: ceph-0
    - hostname: ceph-1
      name: ceph-1
    - hostname: ceph-2
      name: ceph-2
  The following example modifies this file to add three additional nodes:
  - name: CephStorage
    count: 6
    instances:
    - hostname: ceph-0
      name: ceph-0
    - hostname: ceph-1
      name: ceph-1
    - hostname: ceph-2
      name: ceph-2
    - hostname: ceph-3
      name: ceph-3
    - hostname: ceph-4
      name: ceph-4
    - hostname: ceph-5
      name: ceph-5
- Use the openstack overcloud node provision command with the updated ~/overcloud-baremetal-deploy.yaml file.

  $ openstack overcloud node provision \
      --stack overcloud \
      --network-config \
      --output ~/overcloud-baremetal-deployed.yaml \
      ~/overcloud-baremetal-deploy.yaml
  Note: This command provisions the configured nodes and outputs an updated copy of ~/overcloud-baremetal-deployed.yaml. The new version updates the CephStorage role, and the DeployedServerPortMap and HostnameMap also contain the new storage nodes.

- Use the openstack overcloud ceph spec command to generate a Ceph specification file.

  $ openstack overcloud ceph spec ~/overcloud-baremetal-deployed.yaml \
      --osd-spec osd_spec.yaml \
      --roles-data roles_data.yaml \
      -o ceph_spec.yaml
  Note: The files used in the openstack overcloud ceph spec command should already be available for use. They are created in the following locations:

  - The overcloud-baremetal-deployed.yaml file was created in the previous step of this procedure.
  - The osd_spec.yaml file was created in Configuring advanced OSD specifications. Providing the OSD specification with the --osd-spec parameter is optional.
  - The roles_data.yaml file was created in Designating nodes for Red Hat Ceph Storage. It is assumed that the new nodes are assigned to one of the roles in this file.

  The output of this command is the ceph_spec.yaml file.
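
  For orientation, the OSD portion of the generated ceph_spec.yaml reflects the OSD specification passed with --osd-spec. The following is a minimal sketch, assuming an osd_spec.yaml that consumes all available data devices; the service_id value is illustrative:

  service_type: osd
  service_id: default_drive_group
  placement:
    hosts:
    - ceph-0
    - ceph-1
    - ceph-2
    - ceph-3
    - ceph-4
    - ceph-5
  spec:
    data_devices:
      all: true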
file.-
The
Use the
openstack overcloud ceph user enable
command to create theceph-admin
user on all nodes in the cluster. Theceph-admin
user must be present on all nodes to enable SSH access to a node by the Ceph orchestrator.$ openstack overcloud ceph user enable ceph_spec.yaml
  Note: Use the ceph_spec.yaml file created in the previous step.

- Use Deploying the Ceph daemons using the service specification in the Red Hat Ceph Storage Operations Guide to apply the specification file to the Red Hat Ceph Storage cluster. This specification file now describes the operational state of the cluster with the new nodes added (see the sketch below).
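
  As a rough sketch only, and assuming the ceph_spec.yaml file has been copied to a node that holds the Ceph admin keyring, applying the specification might look like the following; refer to the linked guide for the supported steps:

  $ sudo cephadm shell --mount ceph_spec.yaml:/mnt/ceph_spec.yaml
  [ceph: root@controller-0 /]# ceph orch apply -i /mnt/ceph_spec.yaml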
- Use the openstack overcloud deploy command with the updated ~/overcloud-baremetal-deployed.yaml file.

  $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
      -e deployed_ceph.yaml -e overcloud-baremetal-deploy.yaml
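
After the deployment completes, one way to confirm that the new nodes joined the cluster is to inspect the orchestrator inventory from a cephadm shell on a Controller node. A minimal sketch, with illustrative prompt and host names:

  $ sudo cephadm shell
  [ceph: root@controller-0 /]# ceph orch host ls   # the new CephStorage hosts are listed
  [ceph: root@controller-0 /]# ceph osd tree       # the new OSDs report up
  [ceph: root@controller-0 /]# ceph -s             # overall cluster health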
15.2. Scaling down and replacing Red Hat Ceph Storage nodes
In some cases, you might need to scale down your Red Hat Ceph Storage cluster or replace a Red Hat Ceph Storage node. In either situation, you must disable and rebalance the Red Hat Ceph Storage nodes that you want to remove from the overcloud to prevent data loss.
Do not proceed with this procedure if the Red Hat Ceph Storage cluster does not have the capacity to lose OSDs.
Procedure

- Log in to the overcloud Controller node as the tripleo-admin user.
- Use the sudo cephadm shell command to start a Ceph shell.
- Use the ceph osd tree command to identify the OSDs to remove, by server.

  In the following example, we want to identify the OSDs of the ceph-2 host.

  [ceph: root@oc0-controller-0 /]# ceph osd tree
  ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
  -1         0.58557  root default
  -7         0.19519      host ceph-2
   5    hdd  0.04880          osd.5        up   1.00000  1.00000
   7    hdd  0.04880          osd.7        up   1.00000  1.00000
   9    hdd  0.04880          osd.9        up   1.00000  1.00000
  11    hdd  0.04880          osd.11       up   1.00000  1.00000
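
  Before scheduling the removal, it can be useful to confirm that the cluster can tolerate losing these OSDs. A minimal sketch using standard Ceph commands; the OSD IDs match the example above:

  [ceph: root@oc0-controller-0 /]# ceph df                       # confirm that enough free capacity remains
  [ceph: root@oc0-controller-0 /]# ceph osd ok-to-stop 5 7 9 11  # check whether these OSDs can be stopped safely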
- Export the Ceph cluster specification to a YAML file.

  [ceph: root@oc0-controller-0 /]# ceph orch ls --export > spec.yml
- Edit the exported specification file so that the applicable hosts are removed from the hosts list under placement in the service_type: osd specification, and from any other placement: hosts values, as shown in the sketch below.
- Save the edited file.
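
  A minimal sketch of this edit, assuming the exported file contains an OSD specification similar to the one used at deployment time; the service_id and host names are illustrative:

  # Before the edit: ceph-2 is listed in the OSD placement
  service_type: osd
  service_id: default_drive_group
  placement:
    hosts:
    - ceph-0
    - ceph-1
    - ceph-2
  spec:
    data_devices:
      all: true

  # After the edit: ceph-2 is removed from the placement list
  service_type: osd
  service_id: default_drive_group
  placement:
    hosts:
    - ceph-0
    - ceph-1
  spec:
    data_devices:
      all: true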
- Apply the modified Ceph specification file.

  [ceph: root@oc0-controller-0 /]# ceph orch apply -i spec.yml
  Important: If you do not export and edit the Ceph specification file before removing the OSDs, the Ceph Manager will attempt to recreate the OSDs.
- Use the ceph orch osd rm --zap <osd_list> command to remove the OSDs.

  [ceph: root@oc0-controller-0 /]# ceph orch osd rm --zap 5 7 9 11
  Scheduled OSD(s) for removal
  [ceph: root@oc0-controller-0 /]# ceph orch osd rm status
  OSD_ID  HOST    STATE     PG_COUNT  REPLACE  FORCE  DRAIN_STARTED_AT
  7       ceph-2  draining  27        False    False  2021-04-23 21:35:51.215361
  9       ceph-2  draining  8         False    False  2021-04-23 21:35:49.111500
  11      ceph-2  draining  14        False    False  2021-04-23 21:35:50.243762
- Use the ceph orch osd rm status command to check the status of the OSD removal.

  [ceph: root@oc0-controller-0 /]# ceph orch osd rm status
  OSD_ID  HOST    STATE     PG_COUNT  REPLACE  FORCE  DRAIN_STARTED_AT
  7       ceph-2  draining  34        False    False  2021-04-23 21:35:51.215361
  11      ceph-2  draining  14        False    False  2021-04-23 21:35:50.243762
  Warning: Do not proceed to the next step until this command returns no results.
- Use the ceph orch host drain <HOST> command to drain any remaining daemons.

  [ceph: root@oc0-controller-0 /]# ceph orch host drain ceph-2
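
  To confirm that the drain completed, the daemons still placed on the host can be listed, for example:

  [ceph: root@oc0-controller-0 /]# ceph orch ps ceph-2   # returns no daemons once the drain is complete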
- Use the ceph orch host rm <HOST> command to remove the host.

  [ceph: root@oc0-controller-0 /]# ceph orch host rm ceph-2
  Note: This node is no longer used by the Ceph cluster but is still managed by director as a bare-metal node.
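
  To confirm that the host was removed from the cluster, the host list can be checked again, for example:

  [ceph: root@oc0-controller-0 /]# ceph orch host ls   # ceph-2 no longer appears in the list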
- End the Ceph shell session.
  Note: If scaling down the Ceph cluster is temporary and the removed nodes will be restored later, the scale-up action can increment the count and set provisioned: true on nodes that were previously set to provisioned: false. If the node will never be reused, it can be set to provisioned: false indefinitely and the scale-up action can specify a new instances entry.

  The following file sample provides some examples of each case:
  - name: Compute
    count: 2
    instances:
    - hostname: overcloud-compute-0
      name: node10
      # Removed from deployment due to disk failure
      provisioned: false
    - hostname: overcloud-compute-1
      name: node11
    - hostname: overcloud-compute-2
      name: node12
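
  Applied to Red Hat Ceph Storage nodes, a later scale-up could either set provisioned: true again on a restored node or add a new instances entry. A hypothetical sketch; the node names are illustrative:

  - name: CephStorage
    count: 3
    instances:
    - hostname: ceph-0
      name: ceph-0
    - hostname: ceph-1
      name: ceph-1
    # Previously removed node restored to the deployment
    - hostname: ceph-2
      name: ceph-2
      provisioned: true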
- To remove the node from director, see Scaling down bare-metal nodes in Installing and managing Red Hat OpenStack Platform with director.