
8.5. Replacing Ceph Storage Nodes


A situation might occur when a Ceph Storage node fails. In this situation, you must disable and rebalance the faulty node before removing it from the Overcloud so that no data is lost. This procedure explains how to replace a Ceph Storage node.

Note

This procedure uses steps from the Red Hat Ceph Storage Administration Guide to manually remove Ceph Storage nodes. For more in-depth information about manual removal of Ceph Storage nodes, see Chapter 15. Removing OSDs (Manual) from the Red Hat Ceph Storage Administration Guide.
  1. Log into either a Controller node or a Ceph Storage node as the heat-admin user. The director's stack user has an SSH key to access the heat-admin user.
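    For example, from the director host, where [NODE_IP] is a placeholder for the node's IP address on the provisioning network:
    [stack@director ~]$ ssh heat-admin@[NODE_IP]
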
  2. List the OSD tree and find the OSDs for your node. For example, the node to remove might contain the following OSDs:
    -2 0.09998     host overcloud-cephstorage-0
    0 0.04999         osd.0                         up  1.00000          1.00000
    1 0.04999         osd.1                         up  1.00000          1.00000
    
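    For reference, you can print this tree with the following command, run as the heat-admin user on a node with access to the Ceph cluster:
    [heat-admin@overcloud-controller-0 ~]$ sudo ceph osd tree
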
  3. Disable the OSDs on the Ceph Storage node. In this case, the OSD IDs are 0 and 1.
    [heat-admin@overcloud-controller-0 ~]$ sudo ceph osd out 0
    [heat-admin@overcloud-controller-0 ~]$ sudo ceph osd out 1
    
    The Ceph Storage cluster begins rebalancing. Wait for this process to complete. You can follow the status using the following command:
    [heat-admin@overcloud-controller-0 ~]$ sudo ceph -w
    
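    As an additional quick check, the cluster health summary reports HEALTH_OK once rebalancing has finished, assuming no other issues are present:
    [heat-admin@overcloud-controller-0 ~]$ sudo ceph health
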
  4. Once the Ceph cluster completes rebalancing, log into the faulty Ceph Storage node as the heat-admin user and stop its OSDs.
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo /etc/init.d/ceph stop osd.0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo /etc/init.d/ceph stop osd.1
    
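    If the node manages its Ceph daemons with systemd rather than the sysvinit script shown above, stopping the corresponding ceph-osd units achieves the same result, for example:
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo systemctl stop ceph-osd@1
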
  5. Remove the Ceph Storage node from the CRUSH map so that it no longer receives data.
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd crush remove osd.0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd crush remove osd.1
    
  6. Remove the OSD authentication keys.
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph auth del osd.0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph auth del osd.1
    
  7. Remove the OSDs from the cluster.
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd rm 0
    [heat-admin@overcloud-cephstorage-0 ~]$ sudo ceph osd rm 1
    
  8. Log out of the node and return to the director host as the stack user.
    [heat-admin@overcloud-cephstorage-0 ~]$ exit
    [stack@director ~]$
    
  9. Disable the Ceph Storage node so the director does not reprovision it.
    [stack@director ~]$ ironic node-list
    [stack@director ~]$ ironic node-set-maintenance [UUID] true
    
  10. Removing a Ceph Storage node requires an update to the Overcloud stack in the director using the local template files. First, identify the UUID of the Overcloud stack:
    $ heat stack-list
    
    Identify the UUIDs of the Ceph Storage nodes to delete:
    $ nova list
    
    Run the following command to delete the nodes from the stack and update the plan accordingly:
    $ openstack overcloud node delete --stack [STACK_UUID] --templates -e [ENVIRONMENT_FILE] [NODE1_UUID] [NODE2_UUID] [NODE3_UUID]
    

    Important

    If you passed any extra environment files when you created the Overcloud, pass them again here using the -e or --environment-file option to avoid making undesired changes to the Overcloud.
    Wait until the stack completes its update. Monitor the stack update using the heat stack-list --show-nested command.
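    For example:
    $ heat stack-list --show-nested
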
  11. Follow the procedure in Section 8.1, “Adding Compute or Ceph Storage Nodes” to add new nodes to the director's node pool and deploy them as Ceph Storage nodes. Use the --ceph-storage-scale option to define the total number of Ceph Storage nodes in the Overcloud. For example, if you removed a faulty node from a three-node cluster and you want to replace it, use --ceph-storage-scale 3 to return the number of Ceph Storage nodes to its original value:
    $ openstack overcloud deploy --templates --ceph-storage-scale 3 -e [ENVIRONMENT_FILES]
    

    Important

    If you passed any extra environment files when you created the Overcloud, pass them again here using the -e or --environment-file option to avoid making undesired changes to the Overcloud.
    The director provisions the new node and updates the entire stack with the new node's details.
  12. Log into a Controller node as the heat-admin user and check the status of the Ceph Storage cluster. For example:
    [heat-admin@overcloud-controller-0 ~]$ sudo ceph status
    
    Confirm that the value in the osdmap section matches the desired number of OSDs in your cluster.
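    For illustration, on releases whose status output includes an osdmap line, the relevant summary looks similar to the following, where [X] stands in for the OSD count:
    osdmap e[N]: [X] osds: [X] up, [X] in
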
The failed Ceph Storage node has now been replaced with a new node.