
Chapter 3. Upgrading the Storage Cluster


To keep your administration server and your Ceph Storage cluster running optimally, upgrade them when Red Hat provides bug fixes or delivers major updates.

There is only one supported upgrade path to the latest 1.3 version: upgrading from 1.3.x to 1.3.3, as described in the following section.

Note

If your cluster nodes run Ubuntu Precise 12.04, you must upgrade the operating system to Ubuntu Trusty 14.04, because Red Hat Ceph Storage 1.3 is supported only on Ubuntu Trusty. If your cluster is running on Ubuntu Precise, see the separate Upgrade Ceph Cluster on Ubuntu Precise to Ubuntu Trusty document.
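
To check which release a node is running before you begin, you can use lsb_release, which is part of the standard Ubuntu base system; on Ubuntu Trusty 14.04 the command prints trusty:

$ lsb_release -sc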

3.1. Upgrading 1.3.x to 1.3.3

There are two ways to upgrade Red Hat Ceph Storage 1.3.x to 1.3.3, depending on how the cluster was installed:

  • CDN or online-based installations
  • ISO-based installations

Whether you use the online repositories or an ISO-based installation, Red Hat recommends upgrading the nodes in the following order:

  • Administration Node
  • Monitor Nodes
  • OSD Nodes
  • Object Gateway Nodes
Important

Due to changes in the encoding of the OSD map in ceph package version 0.94.7, upgrading Monitor nodes to Red Hat Ceph Storage 1.3.3 before OSD nodes can lead to serious performance issues on large clusters that contain hundreds of OSDs.

To work around this issue, upgrade the OSD nodes before the Monitor nodes when upgrading to Red Hat Ceph Storage 1.3.3 from previous versions.
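
To confirm which version a daemon is actually running during the rolling upgrade, one option is to query its admin socket on the local node. This is a quick check, assuming the default admin socket location and, in this example, an OSD with the ID 0 running on the node:

$ sudo ceph daemon osd.0 version

The command returns the version of the running daemon as JSON, so you can verify that the OSD nodes are on the new version before you upgrade the Monitor nodes.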

3.1.1. Administration Node

Using the Online Repositories

To upgrade the administration node, remove the Calamari, Installer, and Tools repositories under /etc/apt/sources.list.d/, remove cephdeploy.conf from the working directory (for example, /home/example/ceph/), and remove .cephdeploy.conf from the home directory. Then set up the Installer (ceph-deploy) online repository, upgrade ceph-deploy, enable the Calamari and Tools online repositories, upgrade calamari-server and calamari-clients, re-initialize Calamari and Salt, and upgrade Ceph.

  1. Remove the existing Ceph repositories:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf Calamari.list Installer.list Tools.list
  2. Remove the existing cephdeploy.conf file from the Ceph working directory:

    Syntax

    $ rm -rf <directory>/cephdeploy.conf

    Example

    $ rm -rf /home/example/ceph/cephdeploy.conf

  3. Remove the existing .cephdeploy.conf file from the home directory:

    Syntax

    $ rm -rf <directory>/.cephdeploy.conf

    Example

    $ rm -rf /home/example/ceph/.cephdeploy.conf

  4. Set the Installer (ceph-deploy) repository, then use ceph-deploy to enable the Calamari and Tools repositories:

    $ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Installer $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Installer.list'
    $ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
    $ sudo apt-get update
    $ sudo apt-get install ceph-deploy
    $ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Calamari' Calamari `hostname -f`
    $ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Tools' Tools `hostname -f`
    $ sudo apt-get update
  5. Upgrade Calamari:

    $ sudo apt-get install calamari-server calamari-clients
  6. Re-initialize Calamari:

    $ sudo calamari-ctl initialize
  7. Update existing cluster nodes that report to Calamari:

    $ sudo salt '*' state.highstate
  8. Upgrade Ceph:

    $ ceph-deploy install --no-adjust-repos --cli <admin-node>
    $ sudo apt-get upgrade
    $ sudo restart ceph-all
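
After the final step completes, you can confirm that the administration node was upgraded before moving on to the other nodes, for example by checking the installed versions:

$ ceph-deploy --version
$ ceph --version
$ dpkg -l | grep calamari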

Using an ISO

To upgrade the administration node, remove the Calamari, Installer, and Tools repositories under /etc/apt/sources.list.d/, remove cephdeploy.conf from the working directory (for example, ceph-config), and remove .cephdeploy.conf from the home directory. Then download and mount the latest Ceph ISO, run ice_setup, re-initialize Calamari, and upgrade Ceph.

Important

To support upgrading the other Ceph daemons, you must upgrade the Administration node first.

  1. Remove the existing Ceph repositories:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf Calamari.list Installer.list Tools.list
  2. Remove the existing cephdeploy.conf file from the Ceph working directory:

    Syntax

    $ rm -rf <directory>/cephdeploy.conf

    Example

    $ rm -rf /home/example/ceph/cephdeploy.conf

  3. Remove the existing .cephdeploy.conf file from the home directory:

    Syntax

    $ rm -rf <directory>/.cephdeploy.conf

    Example

    $ rm -rf /home/example/ceph/.cephdeploy.conf

  4. Visit the Red Hat Customer Portal to obtain the Red Hat Ceph Storage ISO image file.
  5. Download the rhceph-1.3.3-ubuntu-x86_64-dvd.iso file.
  6. Using sudo, mount the image:

    $ sudo mount /<path_to_iso>/rhceph-1.3.3-ubuntu-x86_64-dvd.iso /mnt
  7. Using sudo, install the setup program:

    $ sudo dpkg -i /mnt/ice-setup_*.deb
    Note

    If you receive an error about missing python-pkg-resources, run sudo apt-get -f install to install the missing dependency.

  8. Navigate to the working directory:

    $ cd ~/ceph-config
  9. Using sudo, run the setup script in the working directory:

    $ sudo ice_setup -d /mnt

    The ice_setup program installs an upgraded version of ceph-deploy, calamari-server, and calamari-clients, creates new local repositories, and writes a .cephdeploy.conf file.

  10. Initialize Calamari and update existing cluster nodes that report to Calamari:

    $ sudo calamari-ctl initialize
    $ sudo salt '*' state.highstate
  11. Upgrade Ceph:

    $ ceph-deploy install --no-adjust-repos --cli <admin-node>
    $ sudo apt-get upgrade
    $ sudo restart ceph-all

3.1.2. Monitor Nodes

To upgrade a Monitor node, log in to the node and remove the ceph-mon repository under /etc/apt/sources.list.d/. From the admin node, set up the online Monitor repository, re-install Ceph, and reconnect the Monitor node to Calamari. Finally, upgrade and restart the Ceph Monitor daemon.

Important

Only upgrade one Monitor node at a time, and allow the Monitor to come up and rejoin the Monitor quorum before proceeding to upgrade the next Monitor.
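
One way to verify that an upgraded Monitor has rejoined the quorum is to query the quorum status from a node that has a client keyring, for example:

$ sudo ceph quorum_status --format json-pretty

The quorum_names field in the output lists the Monitors that are currently in quorum.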

Using the Online Repositories

  1. Remove the existing Ceph repositories on the Monitor node:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf ceph-mon.list
  2. From the admin node, set up the online Monitor repository on the Monitor node:

    $ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/MON' --gpg-url https://www.redhat.com/security/fd431d51.txt ceph-mon <monitor-node>
  3. Reinstall Ceph on the Monitor node from the admin node:

    $ ceph-deploy install --no-adjust-repos --mon <monitor-node>
    Note

    You need to specify --no-adjust-repos so that ceph-deploy does not create a ceph.list file on the Monitor node.

  4. Reconnect the Monitor node to Calamari. From the admin node, execute:

    $ ceph-deploy calamari connect --master '<FQDN for the Calamari admin node>' <monitor-node>
  5. Upgrade and restart the Ceph Monitor daemon. From the Monitor node, execute:

    $ sudo apt-get update
    $ sudo apt-get upgrade
    $ sudo restart ceph-mon id={hostname}

Using an ISO

To upgrade a Monitor node, log in to the node and remove the ceph-mon repository under /etc/apt/sources.list.d/. Then re-install Ceph from the administration node and reconnect the Monitor node to Calamari. Finally, upgrade and restart the Monitor daemon.

Important

Only upgrade one Monitor node at a time, and allow the Monitor to come up and rejoin the Monitor quorum before proceeding to upgrade the next Monitor.

  1. Execute on the Monitor node:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf ceph-mon.list
  2. From the administration node, execute:

    $ ceph-deploy repo ceph-mon <monitor-node>
    $ ceph-deploy install --no-adjust-repos --mon <monitor-node>
  3. Reconnect the Monitor node to Calamari. From the administration node, execute:

    $ ceph-deploy calamari connect --master '<FQDN for the Calamari admin node>' <monitor-node>
  4. Upgrade and restart the Ceph Monitor daemon. From the Monitor node, execute:

    $ sudo apt-get update
    $ sudo apt-get upgrade
    $ sudo restart ceph-mon id={hostname}
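
Because Ubuntu Trusty uses Upstart, you can also confirm that the Monitor daemon came back after the restart by running the matching status command on the Monitor node, with the same id value used in the restart command:

$ sudo status ceph-mon id={hostname}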

3.1.3. OSD Nodes

To upgrade a Ceph OSD node, reinstall the OSD daemon from the administration node and reconnect the OSD node to Calamari. Finally, upgrade the OSD node and restart the OSDs.

Important

Only upgrade one OSD node at a time, and preferably within a CRUSH hierarchy. Allow the OSDs to come up and in, and the cluster to achieve the active+clean state, before proceeding to upgrade the next OSD node.

Before starting the upgrade of the OSD nodes, set the noout and the norebalance flags:

# ceph osd set noout
# ceph osd set norebalance

Once all the OSD nodes are upgraded in the storage cluster, unset the noout and the norebalance flags:

# ceph osd unset noout
# ceph osd unset norebalance
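
Between OSD nodes, you can watch the placement group states until the cluster reports active+clean. Assuming an admin keyring is present on the node, either of the following works: ceph pg stat prints a one-line summary, and ceph -w streams status updates as they happen:

# ceph pg stat
# ceph -w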

Using the Online Repositories

  1. Remove the existing Ceph repositories on the OSD node:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf ceph-osd.list
  2. From the administration node, set up the online OSD repository on the OSD node:

    $ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/OSD' --gpg-url https://www.redhat.com/security/fd431d51.txt ceph-osd <osd-node>
  3. Reinstall Ceph on the OSD node from the administration node:

    $ ceph-deploy install --no-adjust-repos --osd <osd-node>
    Note

    You need to specify --no-adjust-repos so that ceph-deploy does not create a ceph.list file on the OSD node.

  4. Reconnect the OSD node to Calamari. From the administration node, execute:

    $ ceph-deploy calamari connect --master '<FQDN for the Calamari admin node>' <osd-node>
  5. Upgrade and restart the Ceph OSD daemon. From the OSD node, execute:

    $ sudo apt-get update
    $ sudo apt-get upgrade
    $ sudo restart ceph-osd id={id}
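
The restart command in the last step takes a numeric OSD ID. If you are unsure which IDs run on the node, note that in the default layout the OSD data directories under /var/lib/ceph/osd/ are named after them, so a sketch like the following restarts each local OSD in turn (it assumes the default cluster name, ceph):

$ for id in $(ls /var/lib/ceph/osd/ | sed 's/^ceph-//'); do sudo restart ceph-osd id=$id; done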

Using an ISO

To upgrade an OSD node, log in to the node and remove the ceph-osd repository under /etc/apt/sources.list.d/. Then re-install Ceph from the administration node and reconnect the OSD node to Calamari. Finally, upgrade and restart the OSD daemon(s).

  1. Execute on the OSD node:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf ceph-osd.list
  2. From the administration node, execute:

    $ ceph-deploy repo ceph-osd <osd-node>
    $ ceph-deploy install --no-adjust-repos --osd <osd-node>
  3. Reconnect the OSD node to Calamari. From the administration node, execute:

    $ ceph-deploy calamari connect --master '<FQDN_Calamari_admin_node>' <osd-node>
  4. Upgrade and restart the Ceph OSD daemon. From the OSD node, execute:

    $ sudo apt-get update
    $ sudo apt-get upgrade
    $ sudo restart ceph-osd id=<id>

3.1.4. Object Gateway Nodes

To upgrade a Ceph Object Gateway node, log in to the node and remove the ceph-mon or ceph-osd repository under /etc/apt/sources.list.d/, whichever was installed for the radosgw package in Red Hat Ceph Storage 1.3.0 or 1.3.1. Then set the online Tools repository from the administration node and re-install the Ceph Object Gateway daemon. Finally, upgrade and restart the Ceph Object Gateway.

Using the Online Repositories

  1. Remove the existing Ceph repository on the Object Gateway node:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf ceph-mon.list

    OR

    $ sudo rm -rf ceph-osd.list
    Note

    For Red Hat Ceph Storage v1.3.1, you had to install either the ceph-mon or the ceph-osd repository for the radosgw package. Remove the repository that was previously installed before setting the Tools repository for Red Hat Ceph Storage v1.3.3.

    If you are upgrading from Red Hat Ceph Storage v1.3.2, you can skip this step.

  2. Set the online Tools repository from the administration node:

    $ ceph-deploy repo --repo-url 'https://customername:customerpasswd@rhcs.download.redhat.com/ubuntu/1.3-updates/Tools' --gpg-url https://www.redhat.com/security/fd431d51.txt Tools <rgw-node>
  3. Reinstall the Object Gateway from the administration node:

    $ ceph-deploy install --no-adjust-repos --rgw <rgw-node>
  4. For federated deployments, from the Object Gateway node, execute:

    $ sudo apt-get install radosgw-agent
  5. Upgrade and restart the Object Gateway:

    $ sudo apt-get update
    $ sudo apt-get upgrade
    $ sudo service radosgw restart id=rgw.<short-hostname>
    Note

    If you have modified the ceph.conf file so that radosgw runs on port 80, run sudo service apache2 stop before restarting the gateway.
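
To check that the gateway is answering requests after the restart, you can send an anonymous request to the port it listens on; a running Ceph Object Gateway typically responds with an XML ListAllMyBuckets document. This example assumes the gateway listens on port 80 on the local host:

$ curl http://localhost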

Using an ISO

To upgrade a Ceph Object Gateway node, log in to the node, remove the ceph repository under /etc/apt/sources.list.d/, stop the Ceph Object Gateway daemon (radosgw), and stop the Apache/FastCGI instance. Then re-install the Ceph Object Gateway daemon from the administration node. Finally, restart the Ceph Object Gateway.

  1. Remove the existing Ceph repository on the Ceph Object Gateway node:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf ceph.list
  2. Stop Apache/Radosgw:

    $ sudo service apache2 stop
    $ sudo /etc/init.d/radosgw stop
  3. From the administration node, execute:

    $ ceph-deploy repo ceph-mon <rgw-node>
    $ ceph-deploy install --no-adjust-repos --rgw <rgw-node>
    Note

    Both the ceph-mon and the ceph-osd repositories contain the radosgw package, so you can use either of them for the Object Gateway upgrade.

  4. For federated deployments, from the Ceph Object Gateway node, execute:

    $ sudo apt-get install radosgw-agent
  5. Finally, from the Ceph Object Gateway node, restart the gateway:

    $ sudo service radosgw restart

To upgrade a Ceph Object Gateway node, log in to the node and remove the ceph-mon or ceph-osd repository under /etc/apt/sources.list.d/, whichever was previously installed for the radosgw package in Red Hat Ceph Storage 1.3.0. Then re-install the Ceph Object Gateway daemon from the administration node. Finally, upgrade and restart the Ceph Object Gateway.

  1. Remove the existing Ceph repository on the Ceph Object Gateway node:

    $ cd /etc/apt/sources.list.d/
    $ sudo rm -rf ceph-mon.list

    OR

    $ sudo rm -rf ceph-osd.list
    Note

    For Red Hat Ceph Storage v1.3.1, you had to install either the ceph-mon or the ceph-osd repository for the radosgw package. Remove the repository that was previously installed before setting the new repository for Red Hat Ceph Storage v1.3.3.

  2. From the administration node, execute:

    $ ceph-deploy repo ceph-mon <rgw-node>
    $ ceph-deploy install --no-adjust-repos --rgw <rgw-node>
    Note

    Both the ceph-mon and the ceph-osd repositories contain the radosgw package, so you can use either of them for the gateway upgrade.

  3. For federated deployments, from the Object Gateway node, execute:

    $ sudo apt-get install radosgw-agent
  4. Upgrade and restart the Object Gateway:

    $ sudo apt-get update
    $ sudo apt-get upgrade
    $ sudo service radosgw restart id=rgw.<short-hostname>
    Note

    If you have modified the ceph.conf file so that radosgw runs on port 80, run sudo service apache2 stop before restarting the gateway.

3.2. Reviewing CRUSH Tunables

If you have been using Ceph for a while and you are using an older CRUSH tunables setting such as bobtail, you should investigate and set your CRUSH tunables to optimal.

Note

Resetting your CRUSH tunables may result in significant rebalancing. See the Storage Strategies Guide, Chapter 9, Tunables for additional details on CRUSH tunables.

For example:

ceph osd crush tunables optimal
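
Before changing the tunables, you can inspect the profile the cluster currently uses; ceph osd crush show-tunables prints the active values along with the profile name:

ceph osd crush show-tunables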