Chapter 11. Scaling the Ceph Cluster
11.1. Scaling Up the Ceph Cluster
You can scale up the number of Ceph Storage nodes in your overcloud by re-running the deployment with the number of Ceph Storage nodes you need.
Before doing so, ensure that you have enough nodes for the updated deployment. These nodes must be registered with the director and tagged accordingly.
Registering New Ceph Storage Nodes
To register new Ceph storage nodes with the director, follow these steps:
Log in to the director host as the stack user and initialize your director configuration:
$ source ~/stackrc
Define the hardware and power management details for the new nodes in a new node definition template; for example,
instackenv-scale.json. Import this file to the OpenStack director:
$ openstack overcloud node import ~/instackenv-scale.json
Importing the node definition template registers each node defined in it with the director.
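Before importing, it can help to confirm the new definition file is well-formed JSON. The following sketch writes a minimal placeholder node entry and validates it; the file content is an illustration only, not a complete template, and python3 is assumed to be available on the director host.

```shell
# Sketch: a minimal placeholder node definition and a JSON well-formedness
# check, as you might run before `openstack overcloud node import`.
cat > /tmp/instackenv-scale.json <<'EOF'
{
  "nodes": [
    {
      "name": "ceph-storage-3",
      "pm_type": "ipmi",
      "pm_addr": "192.168.24.207",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "capabilities": "profile:ceph-storage,boot_option:local"
    }
  ]
}
EOF
# Exits nonzero (and prints an error) if the JSON is malformed.
python3 -m json.tool /tmp/instackenv-scale.json > /dev/null && echo "valid JSON"
```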
Assign the kernel and ramdisk images to all nodes:
$ openstack overcloud node configure
For more information about registering new nodes, see Section 3.2, “Registering Nodes”.
Manually Tagging New Nodes
After registering each node, you will need to inspect the hardware and tag the node into a specific profile. Profile tags match your nodes to flavors, and in turn the flavors are assigned to a deployment role.
To inspect and tag new nodes, follow these steps:
Trigger hardware introspection to retrieve the hardware attributes of each node:
$ openstack overcloud node introspect --all-manageable --provide
The --all-manageable option introspects only nodes in a manageable state. In this example, that is all of them. The --provide option resets all nodes to an active state after introspection.
Important: Make sure this process runs to completion. It usually takes 15 minutes for bare metal nodes.
Retrieve a list of your nodes to identify their UUIDs:
$ openstack baremetal node list
Add a profile option to the properties/capabilities parameter of each node to manually tag it to a specific profile. For example, the following commands tag three additional nodes with the ceph-storage profile:
$ ironic node-update 551d81f5-4df2-4e0f-93da-6c5de0b868f7 add properties/capabilities='profile:ceph-storage,boot_option:local'
$ ironic node-update 5e735154-bd6b-42dd-9cc2-b6195c4196d7 add properties/capabilities='profile:ceph-storage,boot_option:local'
$ ironic node-update 1a2b090c-299d-4c20-a25d-57dd21a7085b add properties/capabilities='profile:ceph-storage,boot_option:local'
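When tagging a larger batch of nodes, the repeated node-update commands can be generated in a loop. This dry-run sketch only echoes the commands (remove the echo to execute them); the UUIDs are the example values from this section and must be replaced with your own.

```shell
# Dry-run sketch: tag a batch of nodes with the ceph-storage profile.
# The UUIDs are the example values from this section; remove `echo`
# to actually run the ironic commands.
for uuid in 551d81f5-4df2-4e0f-93da-6c5de0b868f7 \
            5e735154-bd6b-42dd-9cc2-b6195c4196d7 \
            1a2b090c-299d-4c20-a25d-57dd21a7085b; do
  echo ironic node-update "$uuid" add \
       properties/capabilities='profile:ceph-storage,boot_option:local'
done
```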
If the nodes you just tagged and registered use multiple disks, you can set the director to use a specific root disk on each node. See Section 3.4, “Defining the root disk” for instructions on how to do so.
Re-deploying the Overcloud with Additional Ceph Storage Nodes
After registering and tagging the new nodes, you can now scale up the number of Ceph Storage nodes by re-deploying the overcloud. When you do, set the CephStorageCount parameter in the parameter_defaults of your environment file (in this case, ~/templates/storage-config.yaml). In Section 8.1, “Assigning Nodes and Flavors to Roles”, the overcloud is configured to deploy with 3 Ceph Storage nodes. To scale it up to 6 nodes instead, use:
parameter_defaults:
  ControllerCount: 3
  OvercloudControlFlavor: control
  ComputeCount: 3
  OvercloudComputeFlavor: compute
  CephStorageCount: 6
  OvercloudCephStorageFlavor: ceph-storage
  CephMonCount: 3
  OvercloudCephMonFlavor: ceph-mon
Upon re-deployment with this setting, the overcloud should now have 6 Ceph Storage nodes instead of 3.
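The re-deployment command itself can be assembled as in the following dry-run sketch, which only echoes the command. The environment file path repeats this chapter's example and is an assumption for your setup; the real command must run on the director host.

```shell
# Dry-run sketch: the scale-up re-deployment command. The environment file
# path follows this chapter's example and is an assumption for your setup.
ENV_FILE=~/templates/storage-config.yaml
DEPLOY_CMD="openstack overcloud deploy --templates -e $ENV_FILE"
echo "$DEPLOY_CMD"
```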
11.2. Scaling Down and Replacing Ceph Storage Nodes
In some cases, you may need to scale down your Ceph cluster, or even replace a Ceph Storage node (for example, if a Ceph Storage node is faulty). In either situation, you need to disable and rebalance any Ceph Storage node you are removing from the Overcloud to ensure no data loss. This procedure explains the process for replacing a Ceph Storage node.
This procedure uses steps from the Red Hat Ceph Storage Administration Guide to manually remove Ceph Storage nodes. For more in-depth information about manual removal of Ceph Storage nodes, see Administering Ceph clusters that run in Containers and Removing a Ceph OSD using the command-line interface.
Log in to a Controller node as the heat-admin user. The director’s stack user has an SSH key to access the heat-admin user. List the OSD tree and find the OSDs for your node. For example, the node you want to remove might contain the following OSDs:
-2 0.09998 host overcloud-cephstorage-0
 0 0.04999         osd.0                 up  1.00000          1.00000
 1 0.04999         osd.1                 up  1.00000          1.00000
Disable the OSDs on the Ceph Storage node. In this case, the OSD IDs are 0 and 1.
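On a node hosting many OSDs, the IDs can be pulled out of the ceph osd tree listing mechanically. This sketch runs against the sample output above, inlined here as a string so the example is self-contained:

```shell
# Sketch: extract the OSD IDs for a host from `ceph osd tree`-style output.
# The sample output is inlined from the example above for illustration.
tree_output='-2 0.09998 host overcloud-cephstorage-0
 0 0.04999 osd.0 up 1.00000 1.00000
 1 0.04999 osd.1 up 1.00000 1.00000'
# Field 3 is the entry name; keep only the osd.N entries and strip the prefix.
osd_ids=$(echo "$tree_output" | awk '$3 ~ /^osd\./ {sub(/^osd\./, "", $3); print $3}')
echo "$osd_ids"
```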
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd out 0
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd out 1
The Ceph Storage cluster begins rebalancing. Wait for this process to complete. You can follow the status with the following command:
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph -w
After the Ceph cluster completes rebalancing, log in to the Ceph Storage node you are removing (in this case, overcloud-cephstorage-0) as the heat-admin user and disable the OSD services:
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl disable ceph-osd@0
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl disable ceph-osd@1
Stop the OSDs:
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@0
[heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@1
While logged in to the Controller node, remove the OSDs from the CRUSH map so that they no longer receive data:
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd crush remove osd.0
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd crush remove osd.1
Remove the OSD authentication keys:
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph auth del osd.0
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph auth del osd.1
Remove the OSDs from the cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd rm 0
[heat-admin@overcloud-controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd rm 1
Leave the node and return to the director host as the stack user:
[heat-admin@overcloud-controller-0 ~]$ exit
[stack@director ~]$
Disable the Ceph Storage node so that the director does not reprovision it:
[stack@director ~]$ openstack baremetal node list
[stack@director ~]$ openstack baremetal node maintenance set UUID
Removing a Ceph Storage node requires an update to the
overcloud stack in the director using the local template files. First identify the UUID of the overcloud stack:
$ openstack stack list
Identify the UUID of the Ceph Storage node you want to delete:
$ openstack server list
Run the following command to delete the node from the stack and update the plan accordingly:
$ openstack overcloud node delete --stack overcloud NODE_UUID
Important: If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information, see Modifying the Overcloud Environment (from Director Installation and Usage).
Wait until the stack completes its update. Monitor the stack update using the heat stack-list --show-nested command.
Add new nodes to the director’s node pool and deploy them as Ceph Storage nodes. Use the CephStorageCount parameter in the parameter_defaults of your environment file (in this case, ~/templates/storage-config.yaml) to define the total number of Ceph Storage nodes in the Overcloud. For example:
parameter_defaults:
  ControllerCount: 3
  OvercloudControlFlavor: control
  ComputeCount: 3
  OvercloudComputeFlavor: compute
  CephStorageCount: 3
  OvercloudCephStorageFlavor: ceph-storage
  CephMonCount: 3
  OvercloudCephMonFlavor: ceph-mon
Note: See Section 8.1, “Assigning Nodes and Flavors to Roles” for details on how to define the number of nodes per role.
After you update your environment file, re-deploy the overcloud as normal:
$ openstack overcloud deploy --templates -e ENVIRONMENT_FILES
The director provisions the new node and updates the entire stack with the new node’s details.
Log in to a Controller node as the heat-admin user and check the status of the Ceph Storage node. For example:
[heat-admin@overcloud-controller-0 ~]$ sudo ceph status
Confirm that the value in the osdmap section matches the desired number of nodes in your cluster. The Ceph Storage node you removed has now been replaced with a new node.
11.3. Adding an OSD to a Ceph Storage node
This procedure demonstrates how to add an OSD to a node.
Procedure
The following heat template deploys Ceph Storage with three OSD devices:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    osd_scenario: lvm
    osd_objectstore: bluestore
To add an OSD, update the node disk layout as described in Section 6.1, “Mapping the Ceph Storage Node Disk Layout”. In this example, add /dev/sde to the template:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
    osd_scenario: lvm
    osd_objectstore: bluestore
Run openstack overcloud deploy to update the overcloud.
This example assumes that all hosts with OSDs have a new device called /dev/sde. If you do not want all nodes to have the new device, update the heat template as shown and see Section 6.3, “Mapping the Disk Layout to Non-Homogeneous Ceph Storage Nodes” for information about how to define hosts with a differing devices list.
11.4. Removing an OSD from a Ceph Storage node
This procedure demonstrates how to remove an OSD from a node. It assumes the following about the environment:
- A server (ceph-storage0) has an OSD (ceph-osd@4) running on /dev/sde.
- The Ceph monitor service (ceph-mon) is running on controller0.
- There are enough available OSDs to ensure the storage cluster is not at its near-full ratio.
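The near-full assumption can be sanity-checked before starting. In this sketch, ceph_df is a stub standing in for the real command (docker exec ceph-mon-controller0 ceph df) with an assumed sample usage figure, so the example is self-contained:

```shell
# Sketch: verify the cluster is safely below the default near-full ratio (85%)
# before removing an OSD. `ceph_df` stubs the real command
# (docker exec ceph-mon-controller0 ceph df) with an assumed sample value.
ceph_df() { echo "RAW STORAGE used: 40%"; }
used=$(ceph_df | grep -o '[0-9]*%' | tr -d '%')
if [ "$used" -lt 85 ]; then
  echo "below near-full ratio: safe to proceed"
fi
```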
Procedure
SSH into ceph-storage0 and log in as root. Disable and stop the OSD service:
[root@ceph-storage0 ~]# systemctl disable ceph-osd@4
[root@ceph-storage0 ~]# systemctl stop ceph-osd@4
Disconnect from ceph-storage0.
SSH into controller0 and log in as root. Identify the name of the Ceph monitor container:
[root@controller0 ~]# docker ps | grep ceph-mon
ceph-mon-controller0
[root@controller0 ~]#
Use the Ceph monitor container to mark the undesired OSD as out:
[root@controller0 ~]# docker exec ceph-mon-controller0 ceph osd out 4
Note: This command causes Ceph to rebalance the storage cluster and copy data to other OSDs in the cluster. The cluster temporarily leaves the active+clean state until rebalancing is complete.
Run the following command and wait for the storage cluster state to become active+clean:
[root@controller0 ~]# docker exec ceph-mon-controller0 ceph -w
Remove the OSD from the CRUSH map so that it no longer receives data:
[root@controller0 ~]# docker exec ceph-mon-controller0 ceph osd crush remove osd.4
Remove the OSD authentication key:
[root@controller0 ~]# docker exec ceph-mon-controller0 ceph auth del osd.4
Remove the OSD:
[root@controller0 ~]# docker exec ceph-mon-controller0 ceph osd rm 4
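The out, crush remove, auth del, and rm steps above follow the same per-OSD pattern, which can be sketched as a single dry-run loop. The echo keeps the commands from executing; the container name and OSD ID follow this section's example and are assumptions for your cluster.

```shell
# Dry-run sketch: the per-OSD removal sequence as one loop.
# Remove `echo` to execute. In practice, wait for rebalancing to finish
# after `ceph osd out` before running the remaining steps.
MON_CONTAINER=ceph-mon-controller0
for id in 4; do
  echo docker exec "$MON_CONTAINER" ceph osd out "$id"
  echo docker exec "$MON_CONTAINER" ceph osd crush remove "osd.$id"
  echo docker exec "$MON_CONTAINER" ceph auth del "osd.$id"
  echo docker exec "$MON_CONTAINER" ceph osd rm "$id"
done
```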
Disconnect from controller0.
SSH into the undercloud as the stack user and locate the heat environment file in which you defined the CephAnsibleDisksConfig parameter. Note that the heat template contains four OSDs:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
    osd_scenario: lvm
    osd_objectstore: bluestore
Modify the template to remove /dev/sde:
parameter_defaults:
  CephAnsibleDisksConfig:
    devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    osd_scenario: lvm
    osd_objectstore: bluestore
Run openstack overcloud deploy to update the overcloud.
Note: This example assumes that you removed the /dev/sde device from all hosts with OSDs. If you do not remove the same device from all nodes, update the heat template as shown and see Section 6.3, “Mapping the Disk Layout to Non-Homogeneous Ceph Storage Nodes” for information about how to define hosts with a differing devices list.
11.5. Handling disk failure
If a disk fails, see Handling a Disk Failure in the Red Hat Ceph Storage Operations Guide.