Chapter 4. Upgrading the Storage Cluster
To keep your administration server and your Ceph Storage cluster running optimally, upgrade them when Red Hat provides bug fixes or delivers major updates.
There is only one supported upgrade path to the latest 1.3 version: from Red Hat Ceph Storage 1.3.2 to 1.3.3.
For all upgrade paths, SELinux must be in permissive or disabled mode in your cluster. With Red Hat Ceph Storage 1.3.2 or later, SELinux no longer needs to be in permissive or disabled mode and can run in its default mode, that is, enforcing. See Section 2.6, “Install ceph-selinux” for more information.
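Before upgrading a pre-1.3.2 cluster, you can verify the SELinux mode on each node. The sketch below is illustrative only: the `selinux_mode` function stubs the real `getenforce` command so the script can be run anywhere; on a live node, replace the stub with the real call.

```shell
# Sketch: check that SELinux is not enforcing before upgrading a
# pre-1.3.2 cluster. `selinux_mode` stubs `getenforce` for illustration;
# replace it with the real command on your nodes.
selinux_mode() {
    # real version: getenforce
    echo "Permissive"
}

case "$(selinux_mode)" in
    Permissive|Disabled) echo "SELinux mode OK for upgrade" ;;
    *) echo "run 'setenforce 0' before upgrading"; exit 1 ;;
esac
```

`setenforce 0` changes the mode only until the next reboot; to persist the setting, edit the SELINUX line in /etc/selinux/config.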
4.1. Upgrading 1.3.2 to 1.3.3
There are two ways to upgrade Red Hat Ceph Storage 1.3.2 to 1.3.3:
- CDN or online-based installations
- ISO-based installations
For upgrading Ceph with the CDN or an ISO-based installation method, Red Hat recommends upgrading in the following order:
- Administration Node
- Monitor Nodes
- OSD Nodes
- Object Gateway Nodes
Important: Due to changes in encoding of the OSD map in the ceph package version 0.94.7, upgrading Monitor nodes to Red Hat Ceph Storage 1.3.3 before OSD nodes can lead to serious performance issues on large clusters that contain hundreds of OSDs.
To work around this issue, upgrade the OSD nodes before the Monitor nodes when upgrading to Red Hat Ceph Storage 1.3.3 from previous versions.
4.1.1. Administration Node
Using CDN
To upgrade the administration node, reinstall the ceph-deploy package, upgrade the Calamari server, reinitialize the Calamari and Salt services, and upgrade Ceph. Finally, upgrade the administration node.
To do so, run the following commands as root:
# yum install ceph-deploy
# yum install calamari-server calamari-clients
# calamari-ctl initialize
# salt '*' state.highstate
# ceph-deploy install --cli <administration-node>
# yum update
The repositories are the same as in Red Hat Ceph Storage 1.3. These repositories include all the updated packages.
Using an ISO
For ISO-based upgrades, remove the Calamari, Installer, and Tools repositories from the /etc/yum.repos.d/ directory, remove the cephdeploy.conf file from the Ceph working directory, for example ~/ceph-config/, remove the .cephdeploy.conf file from the home directory, download and mount the latest Red Hat Ceph Storage 1.3 ISO image, run the ice_setup utility, reinitialize the Calamari service, and upgrade Ceph. Finally, upgrade the administration node.
Remove the Ceph-related repositories from /etc/yum.repos.d/. Execute the following commands as root:
# cd /etc/yum.repos.d
# rm -rf Calamari.repo Installer.repo Tools.repo
Remove the existing cephdeploy.conf file from the Ceph working directory:
Syntax
# rm -rf <directory>/cephdeploy.conf
Example
# rm -rf /home/example/ceph/cephdeploy.conf
Remove the existing .cephdeploy.conf file from the home directory:
Syntax
# rm -rf <directory>/.cephdeploy.conf
Example
# rm -rf /home/example/.cephdeploy.conf
To install the ceph-deploy utility and Calamari using an ISO image, visit the Software & Download Center on the Red Hat Customer Portal to download the latest Red Hat Ceph Storage installation ISO image file.
Mount the ISO by using the following command as root:
# mount /<path_to_iso>/rhceph-1.3.3-rhel-7-x86_64-dvd.iso /mnt
Install the setup program by running the following command as root:
# yum install /mnt/Installer/ice_setup-*.rpm
Change to the Ceph working directory. For example:
# cd /home/example/ceph
Run the ice_setup utility as root:
# ice_setup -d /mnt
The ice_setup program installs the upgraded version of the ceph-deploy utility and the Calamari server, creates new local repositories, and creates a new .cephdeploy.conf file.
Restart Calamari and set Salt to state.highstate by running the following commands as root:
# calamari-ctl initialize
# salt '*' state.highstate
Upgrade Ceph:
# ceph-deploy install --cli <administration-node>
Finally, update the administration node:
# yum update
Optionally, enable the Tools repository:
# subscription-manager repos --enable=rhel-7-server-rhceph-1.3-tools-rpms
Note: It is mandatory to enable the rhel-7-server-rhceph-1.3-tools-rpms repository to obtain the ceph-common package. The Red Hat Ceph Storage ISO image does not include the Tools repository packages.
You can also enable the administration node to receive online updates and publish them to the rest of the cluster with the ice_setup update command. To do so, execute the following commands as root:
# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhceph-1.3-calamari-rpms --enable=rhel-7-server-rhceph-1.3-installer-rpms --enable=rhel-7-server-rhceph-1.3-mon-rpms --enable=rhel-7-server-rhceph-1.3-osd-rpms --enable=rhel-7-server-rhceph-1.3-tools-rpms
# yum update
4.1.2. Monitor Nodes
To upgrade a Monitor node, reinstall the Monitor daemon from the administration node and reconnect the Monitor node to Calamari. Finally, upgrade and restart the Monitor daemon.
Only upgrade one Monitor node at a time, and allow the Monitor to come back up and rejoin the quorum before proceeding to upgrade the next Monitor.
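The quorum check above can be sketched in shell. This is illustrative only: the `mon_stat` function stubs the real `ceph mon stat` command, and the sample status line is invented, so the script can run anywhere; on a live cluster, replace the stub with the real call.

```shell
# Sketch: confirm a Monitor has rejoined the quorum before upgrading the
# next one. `mon_stat` stubs `ceph mon stat` for illustration; replace it
# with the real command on a live cluster.
mon_stat() {
    # real version: ceph mon stat
    echo "e3: 3 mons at {a=10.0.0.1:6789/0,b=10.0.0.2:6789/0,c=10.0.0.3:6789/0}, election epoch 12, quorum 0,1,2 a,b,c"
}

# Succeed when the given Monitor ID appears in the quorum member list.
in_quorum() {
    mon_stat | grep -q "quorum .*\b$1\b"
}

if in_quorum "b"; then
    echo "mon.b is back in quorum"
else
    echo "mon.b has not rejoined the quorum yet"
fi
```

You can also inspect quorum membership interactively with `ceph quorum_status` before moving on to the next Monitor node.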
Using CDN
From the administration node, execute:
# ceph-deploy install --mon <monitor-node>
# ceph-deploy calamari connect --master '<FQDN for the Calamari administration node>' <monitor-node>
Upgrade and restart the Monitor daemon. From the Monitor node, execute the following commands as root:
# yum update
# /etc/init.d/ceph [options] restart mon.[id]
The repositories are the same as in Red Hat Ceph Storage 1.3. These repositories include all the updated packages.
Using an ISO
To upgrade a Monitor node, log in to the node and stop the Monitor daemon. Remove the ceph-mon repository from the /etc/yum.repos.d/ directory. Then, reinstall the Ceph Monitor daemon from the administration node and reconnect the Monitor node with Calamari. Finally, update the Monitor node and restart the Monitor daemon.
Only upgrade one Monitor node at a time, and allow the Monitor to come back up and rejoin the quorum before proceeding to upgrade the next Monitor.
- On the Monitor node, execute the following commands as root:
# /etc/init.d/ceph [options] stop mon.[id]
# rm /etc/yum.repos.d/ceph-mon.repo
- From the administration node, execute:
# ceph-deploy install --repo --release=ceph-mon <monitor-node>
# ceph-deploy install --mon <monitor-node>
# ceph-deploy calamari connect --master '<FQDN for the Calamari administration node>' <monitor-node>
- From the Monitor node, update to the latest packages and restart the Ceph Monitor daemon. Run the following commands as root:
# yum update
# /etc/init.d/ceph [options] restart mon.[id]
4.1.3. OSD Nodes
To upgrade a Ceph OSD node, reinstall the OSD daemon from the administration node, and reconnect the OSD node to Calamari. Finally, upgrade the OSD node and restart the OSDs.
Only upgrade one OSD node at a time, and preferably within a CRUSH hierarchy. Allow the OSDs to come up and in, and the cluster to reach the active+clean state, before proceeding to upgrade the next OSD node.
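The active+clean check above can be sketched in shell. This is illustrative only: the `pg_summary` function stubs the real `ceph pg stat` command, and the sample status line is invented, so the script can run anywhere; on a live cluster, replace the stub with the real call.

```shell
# Sketch: wait until every placement group is active+clean before moving
# on to the next OSD node. `pg_summary` stubs `ceph pg stat` for
# illustration; replace it with the real command on a live cluster.
pg_summary() {
    # real version: ceph pg stat
    echo "v1024: 512 pgs: 512 active+clean; 10240 MB data, 30 GB used"
}

# Succeed only when the total PG count equals the active+clean count.
all_clean() {
    total=$(pg_summary | sed -n 's/.*: \([0-9][0-9]*\) pgs:.*/\1/p')
    clean=$(pg_summary | sed -n 's/.*pgs: \([0-9][0-9]*\) active+clean.*/\1/p')
    [ -n "$total" ] && [ "$total" = "$clean" ]
}

until all_clean; do
    echo "waiting for active+clean..."
    sleep 10
done
echo "cluster is active+clean"
```

On a live cluster, `ceph -s` gives the same placement group summary along with overall health, which is often easier to read interactively.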
Before starting the upgrade of the OSD nodes, set the noout and norebalance flags:
# ceph osd set noout
# ceph osd set norebalance
Once all the OSD nodes in the storage cluster are upgraded, unset the noout and norebalance flags:
# ceph osd unset noout
# ceph osd unset norebalance
Using CDN
From the administration node, execute:
# ceph-deploy install --osd <osd-node>
# ceph-deploy calamari connect --master '<FQDN for the Calamari administration node>' <osd-node>
Finally, update the OSD node and restart the OSD daemon by running the following commands as root:
# yum update
# /etc/init.d/ceph [options] restart
The repositories are the same as in Red Hat Ceph Storage 1.3. These repositories include all the updated packages.
Using an ISO
On the OSD node, execute the following commands as root:
# /etc/init.d/ceph [options] stop
# rm /etc/yum.repos.d/ceph-osd.repo
From the administration node, execute:
# ceph-deploy install --repo --release=ceph-osd <osd-node>
# ceph-deploy install --osd <osd-node>
# ceph-deploy calamari connect --master '<FQDN for the Calamari administration node>' <osd-node>
From the OSD node, update to the latest packages and start the Ceph OSD daemons by running the following commands as root:
# yum update
# /etc/init.d/ceph [options] start
4.1.4. Object Gateway Nodes
To upgrade a Ceph Object Gateway node, reinstall the Ceph Object Gateway daemon from the administration node, upgrade the Ceph Object Gateway node, and restart the gateway.
Using CDN
- From the administration node, execute:
# ceph-deploy install --rgw <rgw-node>
- For federated deployments, from the Ceph Object Gateway node, execute the following command as root:
# yum install radosgw-agent
- Upgrade the Ceph Object Gateway node and restart the gateway by running the following commands as root:
# yum update
# systemctl restart ceph-radosgw
The repositories are the same as in Red Hat Ceph Storage 1.3. These repositories include all the updated packages.
Using an ISO
The Ceph Object Gateway is not shipped with ISO installations. To upgrade the Ceph Object Gateway, the CDN-based installation must be used.