Chapter 11. Scaling the Ceph Storage cluster


11.1. Scaling up the Ceph Storage cluster

You can scale up the number of Ceph Storage nodes in your overcloud by re-running the deployment with the number of Ceph Storage nodes you need.

Before doing so, ensure that you have enough nodes for the updated deployment. These nodes must be registered with the director and tagged accordingly.

Registering New Ceph Storage Nodes

To register new Ceph Storage nodes with the director, follow these steps:

  1. Log in to the undercloud as the stack user and initialize your director configuration:

    $ source ~/stackrc
  2. Define the hardware and power management details for the new nodes in a new node definition template, for example, instackenv-scale.json. A sample entry is shown after this procedure.
  3. Import this file to the OpenStack director:

    $ openstack overcloud node import ~/instackenv-scale.json

    Importing the node definition template registers each node defined there to the director.

  4. Assign the kernel and ramdisk images to all nodes:

    $ openstack overcloud node configure
Note

For more information about registering new nodes, see Section 2.2, “Registering nodes”.
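
As a reference for step 2, the following is a minimal sketch of a node definition template that contains a single IPMI-managed node. All values are placeholders, and the exact fields depend on your power management driver and release; see Section 2.2, “Registering nodes” for the full format.

{
  "nodes": [
    {
      "name": "ceph-storage-new-0",
      "mac": ["<MAC_ADDRESS>"],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64",
      "pm_type": "ipmi",
      "pm_user": "<PM_USER>",
      "pm_password": "<PM_PASSWORD>",
      "pm_addr": "<PM_IP_ADDRESS>"
    }
  ]
}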

Manually Tagging New Nodes

After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles.

To inspect and tag new nodes, complete the following steps:

  1. Trigger hardware introspection to retrieve the hardware attributes of each node:

    $ openstack overcloud node introspect --all-manageable --provide
    • The --all-manageable option introspects only the nodes that are in a manageable state. In this example, all nodes are in a manageable state.
    • The --provide option resets all nodes to an available state after introspection.

      Important

      Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes.

  2. Retrieve a list of your nodes to identify their UUIDs:

    $ openstack baremetal node list
  3. Add a profile option to the properties/capabilities parameter of each node to manually tag the node to a specific profile.

    Note

    As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.

    For example, the following commands tag three additional nodes with the ceph-storage profile:

    $ openstack baremetal node set 551d81f5-4df2-4e0f-93da-6c5de0b868f7 --property capabilities="profile:ceph-storage,boot_option:local"
    $ openstack baremetal node set 5e735154-bd6b-42dd-9cc2-b6195c4196d7 --property capabilities="profile:ceph-storage,boot_option:local"
    $ openstack baremetal node set 1a2b090c-299d-4c20-a25d-57dd21a7085b --property capabilities="profile:ceph-storage,boot_option:local"
Tip

If the nodes you just tagged and registered use multiple disks, you can set the director to use a specific root disk on each node. See Section 2.4, “Defining the root disk for multi-disk clusters” for instructions on how to do so.
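
For example, the following sketch pins the root disk of a node by its serial number. The node UUID and serial value are placeholders; Section 2.4 describes how to discover disk serial numbers and the other supported root device hints.

$ openstack baremetal node set <NODE_UUID> --property root_device='{"serial": "<SERIAL_NUMBER>"}'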

Re-deploying the Overcloud with Additional Ceph Storage Nodes

After registering and tagging the new nodes, you can now scale up the number of Ceph Storage nodes by re-deploying the overcloud. When you do, set the CephStorageCount parameter in the parameter_defaults of your environment file (in this case, ~/templates/storage-config.yaml). In Section 7.1, “Assigning nodes and flavors to roles”, the overcloud is configured to deploy with 3 Ceph Storage nodes. To scale it up to 6 nodes instead, use:

parameter_defaults:
  ControllerCount: 3
  OvercloudControlFlavor: control
  ComputeCount: 3
  OvercloudComputeFlavor: compute
  CephStorageCount: 6
  OvercloudCephStorageFlavor: ceph-storage
  CephMonCount: 3
  OvercloudCephMonFlavor: ceph-mon

Upon re-deployment with this setting, the overcloud should now have 6 Ceph Storage nodes instead of 3.
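
For example, the redeploy command might look like the following sketch. Pass every environment file that you used for the initial deployment; only the storage configuration file is shown explicitly here, and <OTHER_ENVIRONMENT_FILES> stands for the rest.

$ openstack overcloud deploy --templates \
  -e <OTHER_ENVIRONMENT_FILES> \
  -e ~/templates/storage-config.yaml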

11.2. Scaling down and replacing Ceph Storage nodes

In some cases, you might need to scale down your Ceph cluster, or even replace a Ceph Storage node, for example, if a Ceph Storage node is faulty. In either situation, you must disable and rebalance any Ceph Storage node that you want to remove from the overcloud to avoid data loss.

Note

This procedure uses steps from the Red Hat Ceph Storage Administration Guide to manually remove Ceph Storage nodes. For more in-depth information about manual removal of Ceph Storage nodes, see Starting, stopping, and restarting Ceph daemons that run in containers and Removing a Ceph OSD using the command-line interface.

Procedure

  1. Log in to a Controller node as the heat-admin user. The director’s stack user has an SSH key to access the heat-admin user.
  2. List the OSD tree and find the OSDs for your node. For example, the node you want to remove might contain the following OSDs:

    -2 0.09998     host overcloud-cephstorage-0
    0 0.04999         osd.0                         up  1.00000          1.00000
    1 0.04999         osd.1                         up  1.00000          1.00000
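
    To produce a listing like this, you can run the ceph osd tree command from the Ceph MON container on the Controller node, for example (the MON hostname is a placeholder, as elsewhere in this procedure):

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd tree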
  3. Mark the OSDs on the Ceph Storage node as out. In this case, the OSD IDs are 0 and 1.

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd out 0
    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd out 1
  4. The Ceph Storage cluster begins rebalancing. Wait for this process to complete. You can monitor the progress by using the following command:

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph -w
  5. After the Ceph cluster completes rebalancing, log in to the Ceph Storage node that you are removing, in this case overcloud-cephstorage-0, as the heat-admin user, and disable the OSD services on the node.

    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl disable ceph-osd@0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl disable ceph-osd@1
  6. Stop the OSDs.

    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@1
  7. While logged in to the Controller node, remove the OSDs from the CRUSH map so that they no longer receive data.

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd crush remove osd.0
    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd crush remove osd.1
  8. Remove the OSD authentication key.

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph auth del osd.0
    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph auth del osd.1
  9. Remove the OSD from the cluster.

    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd rm 0
    [heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd rm 1
  10. Leave the node and return to the undercloud as the stack user.

    [heat-admin@overcloud-controller-0 ~]$ exit
    [stack@director ~]$
  11. Disable the Ceph Storage node so that director does not reprovision it.

    [stack@director ~]$ openstack baremetal node list
    [stack@director ~]$ openstack baremetal node maintenance set <NODE_UUID>
  12. Removing a Ceph Storage node requires an update to the overcloud stack in director with the local template files. First identify the UUID of the overcloud stack:

    $ openstack stack list
  13. Identify the UUID of the Ceph Storage node that you want to delete:

    $ openstack server list
  14. Delete the node from the stack and update the plan accordingly:

    $ openstack overcloud node delete --stack overcloud <NODE_UUID>
    Important

    If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information, see Modifying the overcloud environment in the Director Installation and Usage guide.
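
    For example, if ~/templates/storage-config.yaml is the only extra environment file in your deployment, the command might look like the following sketch:

    $ openstack overcloud node delete --stack overcloud -e ~/templates/storage-config.yaml <NODE_UUID>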

  15. Wait until the stack completes its update. Use the heat stack-list --show-nested command to monitor the stack update.
  16. Add new nodes to the director node pool and deploy them as Ceph Storage nodes. Use the CephStorageCount parameter in the parameter_defaults of your environment file, in this case, ~/templates/storage-config.yaml, to define the total number of Ceph Storage nodes in the overcloud.

    parameter_defaults:
      ControllerCount: 3
      OvercloudControlFlavor: control
      ComputeCount: 3
      OvercloudComputeFlavor: compute
      CephStorageCount: 3
      OvercloudCephStorageFlavor: ceph-storage
      CephMonCount: 3
      OvercloudCephMonFlavor: ceph-mon
    Note

    For more information about how to define the number of nodes per role, see Section 7.1, “Assigning nodes and flavors to roles”.

  17. After you update your environment file, redeploy the overcloud:

    $ openstack overcloud deploy --templates -e <ENVIRONMENT_FILE>

    Director provisions the new node and updates the entire stack with the details of the new node.

  18. Log in to a Controller node as the heat-admin user and check the status of the Ceph Storage node:

    [heat-admin@overcloud-controller-0 ~]$ sudo ceph status
  19. Confirm that the value in the osdmap section matches the number of OSDs that you expect in your cluster. The Ceph Storage node that you removed is replaced with a new node.

11.3. Adding an OSD to a Ceph Storage node

This procedure demonstrates how to add an OSD to a node. For more information about Ceph OSDs, see Ceph OSDs in the Red Hat Ceph Storage Operations Guide.

Procedure

  1. Notice that the following heat template deploys Ceph Storage with three OSD devices:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
        osd_scenario: lvm
        osd_objectstore: bluestore
  2. To add an OSD, update the node disk layout as described in Section 5.3, “Mapping the Ceph Storage node disk layout”. In this example, add /dev/sde to the template:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
          - /dev/sde
        osd_scenario: lvm
        osd_objectstore: bluestore
  3. Run openstack overcloud deploy to update the overcloud.
Note

This example assumes that all hosts with OSDs have a new device called /dev/sde. If you do not want all nodes to have the new device, update the heat template. For more information about how to define hosts with a differing devices list, see Section 5.5, “Mapping the disk layout to non-homogeneous Ceph Storage nodes”.
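
After the deployment completes, you can verify that the additional OSDs are up and in, for example by querying the Ceph MON container on a Controller node (a sketch; the MON hostname is a placeholder):

[heat-admin@overcloud-controller-0 ~]$ sudo podman exec ceph-mon-<HOSTNAME> ceph osd stat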

11.4. Removing an OSD from a Ceph Storage node

This procedure demonstrates how to remove an OSD from a node. It assumes the following about the environment:

  • A server (ceph-storage0) has an OSD (ceph-osd@4) running on /dev/sde.
  • The Ceph monitor service (ceph-mon) is running on controller0.
  • There are enough available OSDs to ensure the storage cluster is not at its near-full ratio.

For more information about Ceph OSDs, see Ceph OSDs in the Red Hat Ceph Storage Operations Guide.
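
To confirm that the cluster has enough spare capacity before you remove the OSD, you can check utilization from the Ceph MON container, for example (a sketch that uses the container name identified later in this procedure):

[root@controller0 ~]# podman exec ceph-mon-controller0 ceph df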

Procedure

  1. SSH into ceph-storage0 and log in as root.
  2. Disable and stop the OSD service:

    [root@ceph-storage0 ~]# systemctl disable ceph-osd@4
    [root@ceph-storage0 ~]# systemctl stop ceph-osd@4
  3. Disconnect from ceph-storage0.
  4. SSH into controller0 and log in as root.
  5. Identify the name of the Ceph monitor container:

    [root@controller0 ~]# podman ps | grep ceph-mon
    ceph-mon-controller0
    [root@controller0 ~]#
  6. Use the Ceph monitor container to mark the undesired OSD as out:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph osd out 4
    Note

    This command causes Ceph to rebalance the storage cluster and copy data to other OSDs in the cluster. The cluster temporarily leaves the active+clean state until rebalancing is complete.

  7. Run the following command and wait for the storage cluster state to become active+clean:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph -w
  8. Remove the OSD from the CRUSH map so that it no longer receives data:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph osd crush remove osd.4
  9. Remove the OSD authentication key:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph auth del osd.4
  10. Remove the OSD:

    [root@controller0 ~]# podman exec ceph-mon-controller0 ceph osd rm 4
  11. Disconnect from controller0.
  12. SSH into the undercloud as the stack user and locate the heat environment file in which you defined the CephAnsibleDisksConfig parameter.
  13. Notice that the heat template contains four OSD devices:

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
          - /dev/sde
        osd_scenario: lvm
        osd_objectstore: bluestore
  14. Modify the template to remove /dev/sde.

    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/sdb
          - /dev/sdc
          - /dev/sdd
        osd_scenario: lvm
        osd_objectstore: bluestore
  15. Run openstack overcloud deploy to update the overcloud.

    Note

    This example assumes that you removed the /dev/sde device from all hosts with OSDs. If you do not remove the same device from all nodes, update the heat template. For more information about how to define hosts with a differing devices list, see Section 5.5, “Mapping the disk layout to non-homogeneous Ceph Storage nodes”.
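
After the deployment completes, you can confirm that osd.4 no longer appears in the OSD tree, for example by querying the Ceph MON container (a sketch):

[root@controller0 ~]# podman exec ceph-mon-controller0 ceph osd tree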
