Chapter 8. Manually upgrading a Red Hat Ceph Storage cluster and operating system
Normally it is not possible, using ceph-ansible, to upgrade Red Hat Ceph Storage and Red Hat Enterprise Linux to a new major release at the same time. For example, if you use ceph-ansible on Red Hat Enterprise Linux 7, you must stay on Red Hat Enterprise Linux 7. As a system administrator, however, you can perform this upgrade manually.
Use this chapter to manually upgrade a Red Hat Ceph Storage cluster at version 4.1 or 3.3z6 running on Red Hat Enterprise Linux 7.9, to a Red Hat Ceph Storage cluster at version 4.2 running on Red Hat Enterprise Linux 8.4.
To upgrade a containerized Red Hat Ceph Storage cluster at version 3.x or 4.x to version 4.2, see the following three sections, Supported Red Hat Ceph Storage upgrade scenarios, Preparing for an upgrade, and Upgrading the storage cluster using Ansible, in the Red Hat Ceph Storage Installation Guide.
To migrate existing systemd templates, run the docker-to-podman playbook:
[user@admin ceph-ansible]$ ansible-playbook infrastructure-playbooks/docker-to-podman.yml -i hosts
Where user is the Ansible user.
If a node is colocated with more than one daemon, follow the relevant section in this chapter for each daemon colocated on the node. For example, for a node colocated with the Ceph Monitor daemon and the OSD daemon,
see Manually upgrading Ceph Monitor nodes and their operating systems and Manually upgrading Ceph OSD nodes and their operating systems.
Manually upgrading Ceph OSD nodes and their operating systems will not work with encrypted OSD partitions as the Leapp upgrade utility does not support upgrading with OSD encryption.
8.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
8.2. Manually upgrading Ceph Monitor nodes and their operating systems
As a system administrator, you can manually upgrade the Ceph Monitor software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.
Perform the procedure on only one Monitor node at a time. To prevent cluster access issues, ensure the current upgraded Monitor node has returned to normal operation prior to proceeding to the next node.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Stop the monitor service:
Syntax
systemctl stop ceph-mon@MONITOR_ID

Replace MONITOR_ID with the Monitor's ID number.
If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 repositories.
Disable the tools repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms

Disable the mon repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-mon-rpms
If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 repositories.
Disable the tools repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms

Disable the mon repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-mon-rpms
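The paired repository steps above follow the same pattern for both releases, so they can be scripted. This is a minimal sketch; the rhcs_disable_repos helper name is an assumption for illustration, and the commands are echoed as a dry run rather than executed:

```shell
# Hypothetical helper (not part of the product): print the subscription-manager
# commands that disable the RHEL 7 Ceph repositories for a given
# Red Hat Ceph Storage release (3 or 4). Echoed as a dry run; remove the
# "echo" to run them on a registered node.
rhcs_disable_repos() {
  release="$1"
  for repo in "rhel-7-server-rhceph-${release}-tools-rpms" \
              "rhel-7-server-rhceph-${release}-mon-rpms"; do
    echo subscription-manager repos --disable="$repo"
  done
}

rhcs_disable_repos 4
```

The same pattern works for the --enable steps later in this procedure.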
- Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
- Set PermitRootLogin yes in /etc/ssh/sshd_config.

Restart the OpenSSH SSH daemon:

[root@mon ~]# systemctl restart sshd.service

Remove the iSCSI module from the Linux kernel:

[root@mon ~]# modprobe -r iscsi

- Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the node.
Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.
Enable the tools repository:
[root@mon ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

Enable the mon repository:
[root@mon ~]# subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms
Install the ceph-mon package:

[root@mon ~]# dnf install ceph-mon

If the manager service is colocated with the monitor service, install the ceph-mgr package:

[root@mon ~]# dnf install ceph-mgr
- Restore the ceph-client-admin.keyring and ceph.conf files from a Monitor node which has not been upgraded yet, or from a node that has already had those files restored.

Switch any existing CRUSH buckets to the latest bucket type, straw2:

# ceph osd getcrushmap -o backup-crushmap
# ceph osd crush set-all-straw-buckets-to-straw2

Once all the daemons are updated after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, run the following steps:
Enable the messenger v2 protocol, msgr2:

ceph mon enable-msgr2

This instructs all Ceph Monitors that bind to the old default port of 6789 to also bind to the new port of 3300.
Important: Ensure all the Ceph Monitors are upgraded from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4 before performing any further Ceph Monitor configuration.
Verify the status of the monitor:
ceph mon dump

Note: Running Nautilus OSDs do not bind to their v2 address automatically. They must be restarted.
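To spot monitors that still lack a v2 address, you can filter the ceph mon dump output. The sample output below is an illustrative assumption, not taken from a real cluster; on a live node you would pipe ceph mon dump into the same awk filter:

```shell
# Sample "ceph mon dump" address lines (illustrative only). The third monitor
# advertises only a v1 address.
sample_dump='0: [v2:192.168.0.10:3300/0,v1:192.168.0.10:6789/0] mon.ceph4-mon
1: [v2:192.168.0.11:3300/0,v1:192.168.0.11:6789/0] mon.ceph4-mon2
2: v1:192.168.0.12:6789/0 mon.ceph4-mon3'

# Print monitors that still lack a v2 address (candidates for investigation).
printf '%s\n' "$sample_dump" | awk '!/v2:/ { print $NF }'
```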
For each host upgraded from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, update the ceph.conf file to either not specify any monitor port, or to reference both the v2 and v1 addresses and ports.

Import any configuration options in the ceph.conf file into the storage cluster's configuration database.

Example
[root@mon ~]# ceph config assimilate-conf -i /etc/ceph/ceph.conf

Check the storage cluster's configuration database.
Example
[root@mon ~]# ceph config dump

Optional: After upgrading to Red Hat Ceph Storage 4, create a minimal ceph.conf file for each host:

Example
[root@mon ~]# ceph config generate-minimal-conf > /etc/ceph/ceph.conf.new
[root@mon ~]# mv /etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
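If you want an extra safety check around that replacement, you can write the generated file to a temporary path, verify it, and only then move it into place. In this sketch the generator output and the target path are stand-ins (a here-doc and mktemp) so it can be rehearsed off-cluster; on a live node you would generate with ceph config generate-minimal-conf and move the file onto /etc/ceph/ceph.conf:

```shell
# Write the candidate conf to a temporary file. The here-doc stands in for
# "ceph config generate-minimal-conf" output; the fsid below is fabricated.
conf_new=$(mktemp)
cat > "$conf_new" <<'EOF'
[global]
fsid = 9b2c3d1e-0000-0000-0000-000000000000
mon_host = [v2:192.168.0.10:3300/0,v1:192.168.0.10:6789/0]
EOF

# Refuse to install a file that lacks a mon_host entry.
if grep -q '^mon_host' "$conf_new"; then
  target=$(mktemp)   # stand-in for /etc/ceph/ceph.conf in this sketch
  mv "$conf_new" "$target"
  echo "installed minimal conf at $target"
else
  echo "generated conf looks incomplete; keeping the old ceph.conf" >&2
fi
```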
Install the leveldb package:

[root@mon ~]# dnf install leveldb

Start the monitor service:

[root@mon ~]# systemctl start ceph-mon.target

If the manager service is colocated with the monitor service, start the manager service too:

[root@mon ~]# systemctl start ceph-mgr.target

Verify the monitor service came back up and is in quorum:

[root@mon ~]# ceph -s

On the mon: line under services:, ensure the node is listed as in quorum and not as out of quorum.
Example
mon: 3 daemons, quorum ceph4-mon,ceph4-mon2,ceph4-mon3 (age 2h)

If the manager service is colocated with the monitor service, verify it is up too:
[root@mon ~]# ceph -s

Look for the manager's node name on the mgr: line under services:.
Example
mgr: ceph4-mon(active, since 2h), standbys: ceph4-mon3, ceph4-mon2

- Repeat the above steps on all Monitor nodes until they have all been upgraded.
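The quorum check in this procedure can also be scripted against the mon: line of ceph -s. The in_quorum helper and the sample status text below are illustrative assumptions; on a live node you would substitute real ceph -s output:

```shell
# Sample "ceph -s" services block (illustrative only).
sample_status='  services:
    mon: 3 daemons, quorum ceph4-mon,ceph4-mon2,ceph4-mon3 (age 2h)
    mgr: ceph4-mon(active, since 2h), standbys: ceph4-mon3, ceph4-mon2'

# Return success if the named node appears in the quorum list.
in_quorum() {
  node="$1"
  printf '%s\n' "$sample_status" \
    | sed -n 's/.*quorum \([^ ]*\).*/\1/p' \
    | tr ',' '\n' | grep -qx "$node"
}

in_quorum ceph4-mon2 && echo "ceph4-mon2 is in quorum"
```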
8.3. Manually upgrading Ceph OSD nodes and their operating systems
As a system administrator, you can manually upgrade the Ceph OSD software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.
This procedure should be performed for each OSD node in the Ceph cluster, but typically only for one OSD node at a time. A maximum of one failure domain's worth of OSD nodes may be upgraded in parallel. For example, if per-rack replication is in use, one entire rack's OSD nodes can be upgraded in parallel. To prevent data access issues, ensure the current OSD node's OSDs have returned to normal operation and all of the cluster's PGs are in the active+clean state prior to proceeding to the next OSD.
This procedure will not work with encrypted OSD partitions as the Leapp upgrade utility does not support upgrading with OSD encryption.
If the OSDs were created using ceph-disk, and are still managed by ceph-disk, you must use ceph-volume to take over management of them. This is covered in an optional step below.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Set the OSD noout flag to prevent OSDs from getting marked down during the migration:

ceph osd set noout

Set the OSD nobackfill, norecover, norebalance, noscrub, and nodeep-scrub flags to avoid unnecessary load on the cluster and to avoid any data reshuffling when the node goes down for migration:

ceph osd set nobackfill
ceph osd set norecover
ceph osd set norebalance
ceph osd set noscrub
ceph osd set nodeep-scrub

Gracefully shut down all the OSD processes on the node:

[root@mon ~]# systemctl stop ceph-osd.target

If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 repositories.
Disable the tools repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms

Disable the osd repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-osd-rpms
If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 repositories.
Disable the tools repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms

Disable the osd repository:
[root@mon ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-osd-rpms
- Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
- Set PermitRootLogin yes in /etc/ssh/sshd_config.

Restart the OpenSSH SSH daemon:

[root@mon ~]# systemctl restart sshd.service

Remove the iSCSI module from the Linux kernel:

[root@mon ~]# modprobe -r iscsi

- Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the node.
Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.
Enable the tools repository:
[root@mon ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

Enable the osd repository:
[root@mon ~]# subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms
Install the ceph-osd package:

[root@mon ~]# dnf install ceph-osd

Install the leveldb package:

[root@mon ~]# dnf install leveldb

- Restore the ceph.conf file from a node which has not been upgraded yet, or from a node that has already had those files restored.

Unset the noout, nobackfill, norecover, norebalance, noscrub, and nodeep-scrub flags:

# ceph osd unset noout
# ceph osd unset nobackfill
# ceph osd unset norecover
# ceph osd unset norebalance
# ceph osd unset noscrub
# ceph osd unset nodeep-scrub

Switch any existing CRUSH buckets to the latest bucket type, straw2:

# ceph osd getcrushmap -o backup-crushmap
# ceph osd crush set-all-straw-buckets-to-straw2

Optional: If the OSDs were created using ceph-disk, and are still managed by ceph-disk, you must use ceph-volume to take over management of them.

Mount each object storage device:
Syntax
mount /dev/DRIVE /var/lib/ceph/osd/ceph-OSD_ID

Replace DRIVE with the storage device name and partition number.

Replace OSD_ID with the OSD ID.
Replace OSD_ID with the OSD ID.
Example
[root@mon ~]# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0

Verify the OSD_ID is correct.
Syntax
cat /var/lib/ceph/osd/ceph-OSD_ID/whoami

Replace OSD_ID with the OSD ID.
Example
[root@mon ~]# cat /var/lib/ceph/osd/ceph-0/whoami
0

Repeat the above steps for any additional object store devices.
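The per-device whoami check can be rehearsed as a small loop. The check_whoami helper and the scratch directory below are illustrative assumptions; on a live node you would point it at /var/lib/ceph/osd:

```shell
# Hypothetical helper: confirm that each ceph-* mount's "whoami" file matches
# the OSD ID in the directory name.
check_whoami() {
  base="$1"   # e.g. /var/lib/ceph/osd on a live node
  rc=0
  for dir in "$base"/ceph-*; do
    id="${dir##*-}"
    if [ "$(cat "$dir/whoami")" = "$id" ]; then
      echo "OK: osd.$id"
    else
      echo "MISMATCH in $dir" >&2
      rc=1
    fi
  done
  return $rc
}

# Rehearse on a throwaway tree with one fake OSD directory.
demo=$(mktemp -d)
mkdir -p "$demo/ceph-0"
echo 0 > "$demo/ceph-0/whoami"
check_whoami "$demo"
```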
Scan the newly mounted devices:
Syntax
ceph-volume simple scan /var/lib/ceph/osd/ceph-OSD_ID

Replace OSD_ID with the OSD ID.
Example
[root@mon ~]# ceph-volume simple scan /var/lib/ceph/osd/ceph-0
 stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
 stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
 stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
Running command: /usr/sbin/cryptsetup status /dev/sdb1
--> OSD 0 got scanned and metadata persisted to file: /etc/ceph/osd/0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba.json
--> To take over management of this scanned OSD, and disable ceph-disk and udev, run:
-->     ceph-volume simple activate 0 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba

Repeat the above step for any additional object store devices.
Activate the device:
Syntax
ceph-volume simple activate OSD_ID UUID

Replace OSD_ID with the OSD ID and UUID with the UUID printed in the scan output from earlier.
Example
[root@mon ~]# ceph-volume simple activate 0 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba
Running command: /usr/bin/ln -snf /dev/sdb2 /var/lib/ceph/osd/ceph-0/journal
Running command: /usr/bin/chown -R ceph:ceph /dev/sdb2
Running command: /usr/bin/systemctl enable ceph-volume@simple-0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@simple-0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
--> All ceph-disk systemd units have been disabled to prevent OSDs getting triggered by UDEV events
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
--> Successfully activated OSD 0 with FSID 0c9917f7-fce8-42aa-bdec-8c2cf2d536ba

Repeat the above step for any additional object store devices.
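The scan step persists metadata files named OSD_ID-UUID.json under /etc/ceph/osd, so the matching activate commands can be derived from those filenames. This sketch prints the commands as a dry run against a scratch directory; the print_activate_cmds helper is an illustrative assumption, and on a live node you would point it at /etc/ceph/osd and drop the echo:

```shell
# Hypothetical helper: derive "ceph-volume simple activate OSD_ID UUID" from
# the JSON metadata filenames left by "ceph-volume simple scan".
print_activate_cmds() {
  meta_dir="$1"
  for json in "$meta_dir"/*.json; do
    name=$(basename "$json" .json)
    osd_id="${name%%-*}"   # text before the first dash
    uuid="${name#*-}"      # text after the first dash
    echo ceph-volume simple activate "$osd_id" "$uuid"
  done
}

# Rehearse against a scratch directory with one fake metadata file.
meta=$(mktemp -d)
: > "$meta/0-0c9917f7-fce8-42aa-bdec-8c2cf2d536ba.json"
print_activate_cmds "$meta"
```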
Optional: If your OSDs were created with ceph-volume and you did not complete the previous step, start the OSD service now:

[root@mon ~]# systemctl start ceph-osd.target

Activate the OSDs:

BlueStore

[root@mon ~]# ceph-volume lvm activate --all

Verify that the OSDs are up and in, and that they are in the active+clean state:

[root@mon ~]# ceph -s

On the osd: line under services:, ensure that all OSDs are up and in:

Example

osd: 3 osds: 3 up (since 8s), 3 in (since 3M)

- Repeat the above steps on all OSD nodes until they have all been upgraded.
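The up/in verification can be scripted by parsing the osd: line of ceph -s. The sample line below is an illustrative assumption standing in for live output:

```shell
# Sample "ceph -s" osd summary line (illustrative only).
osd_line='    osd: 3 osds: 3 up (since 8s), 3 in (since 3M)'

# Pull out the total, up, and in counts with sed.
total=$(printf '%s\n' "$osd_line" | sed -n 's/.*osd: \([0-9]*\) osds.*/\1/p')
up=$(printf '%s\n'    "$osd_line" | sed -n 's/.* \([0-9]*\) up .*/\1/p')
in_=$(printf '%s\n'   "$osd_line" | sed -n 's/.* \([0-9]*\) in .*/\1/p')

# Report only when every OSD is both up and in.
if [ "$up" = "$total" ] && [ "$in_" = "$total" ]; then
  echo "all $total OSDs are up and in"
fi
```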
If upgrading from Red Hat Ceph Storage 3, disallow pre-Nautilus OSDs and enable the Nautilus-only functionality:
[root@mon ~]# ceph osd require-osd-release nautilus

Note: Failure to execute this step makes it impossible for OSDs to communicate after msgr2 is enabled.

Once all the daemons are updated after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, run the following steps:
Enable the messenger v2 protocol, msgr2:

[root@mon ~]# ceph mon enable-msgr2

This instructs all Ceph Monitors that bind to the old default port of 6789 to also bind to the new port of 3300.
On every node, import any configuration options in the ceph.conf file into the storage cluster's configuration database:

Example

[root@mon ~]# ceph config assimilate-conf -i /etc/ceph/ceph.conf

Note: When you assimilate configuration files into your Monitors, if different files set different values for the same options, the end result depends on the order in which the files are assimilated.
Check the storage cluster’s configuration database:
Example
[root@mon ~]# ceph config dump
8.4. Manually upgrading Ceph Object Gateway nodes and their operating systems
As a system administrator, you can manually upgrade the Ceph Object Gateway (RGW) software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.
This procedure should be performed for each RGW node in the Ceph cluster, but only for one RGW node at a time. Ensure the current upgraded RGW has returned to normal operation prior to proceeding to the next node to prevent any client access issues.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Stop the Ceph Object Gateway service:
# systemctl stop ceph-radosgw.target

If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 tools repository:

# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms

If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:

# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
- Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
- Set PermitRootLogin yes in /etc/ssh/sshd_config.

Restart the OpenSSH SSH daemon:

# systemctl restart sshd.service

Remove the iSCSI module from the Linux kernel:

# modprobe -r iscsi

- Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the node.
Enable the tools repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.
# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

Install the ceph-radosgw package:

# dnf install ceph-radosgw

- Optional: Install the packages for any Ceph services that are colocated on this node. Enable additional Ceph repositories, if needed.
Optional: Install the leveldb package, which is needed by other Ceph services:

# dnf install leveldb
Restore the
ceph-client-admin.keyringandceph.conffiles from a node which has not been upgraded yet or from a node that has already had those files restored. Start the RGW service:
# systemctl start ceph-radosgw.target

Switch any existing CRUSH buckets to the latest bucket type, straw2:

# ceph osd getcrushmap -o backup-crushmap
# ceph osd crush set-all-straw-buckets-to-straw2

Verify the daemon is active:
# ceph -s

There is an rgw: line under services:.
Example
rgw: 1 daemon active (jb-ceph4-rgw.rgw0)

- Repeat the above steps on all Ceph Object Gateway nodes until they have all been upgraded.
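The rgw: line can likewise be checked from a script. The sample line below is an illustrative assumption standing in for live ceph -s output:

```shell
# Sample "ceph -s" rgw summary line (illustrative only).
rgw_line='    rgw: 1 daemon active (jb-ceph4-rgw.rgw0)'

# Extract the active daemon count and require at least one.
active=$(printf '%s\n' "$rgw_line" | sed -n 's/.*rgw: \([0-9]*\) daemon.*/\1/p')
if [ "${active:-0}" -ge 1 ]; then
  echo "Ceph Object Gateway is active ($active daemon)"
fi
```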
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
8.5. Manually upgrading the Ceph Dashboard node and its operating system
As a system administrator, you can manually upgrade the Ceph Dashboard software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The node is running Red Hat Enterprise Linux 7.9.
- The node is running Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Uninstall the existing dashboard from the cluster.
Change to the /usr/share/cephmetrics-ansible directory:

# cd /usr/share/cephmetrics-ansible

Run the purge.yml Ansible playbook:

# ansible-playbook -v purge.yml
If using Red Hat Ceph Storage 3, disable the Red Hat Ceph Storage 3 tools repository:
# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms

If using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:

# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
- Install the leapp utility. See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. See Assessing upgradability from the command line.
- Set PermitRootLogin yes in /etc/ssh/sshd_config.

Restart the OpenSSH SSH daemon:

# systemctl restart sshd.service

Remove the iSCSI module from the Linux kernel:

# modprobe -r iscsi

- Perform the upgrade by following Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the node.
Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:
# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

Enable the Ansible repository:

# subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms
- Configure ceph-ansible to manage the cluster. It will install the dashboard. Follow the instructions in Installing Red Hat Ceph Storage using Ansible, including the prerequisites.
- After you run ansible-playbook site.yml as a part of the above procedures, the URL for the dashboard is printed. See Installing dashboard using Ansible in the Dashboard guide for more information on locating the URL and accessing the dashboard.
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
- See Installing dashboard using Ansible in the Dashboard guide for more information.
8.6. Manually upgrading Ceph Ansible nodes and reconfiguring settings
Manually upgrade the Ceph Ansible software on a Red Hat Ceph Storage cluster node and the Red Hat Enterprise Linux operating system to a new major release at the same time. This procedure applies to both bare-metal and container deployments, unless specified.
Before upgrading the host OS on the Ceph Ansible node, back up the group_vars directory and the hosts file. Use these backups when reconfiguring the Ceph Ansible node.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The node is running Red Hat Enterprise Linux 7.9.
- The node is running Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
Procedure
Enable the tools repository for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:
[root@dashboard ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

Enable the Ansible repository:

[root@dashboard ~]# subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms
- Configure ceph-ansible to manage the storage cluster. It will install the dashboard. Follow the instructions in Installing Red Hat Ceph Storage using Ansible, including the prerequisites.
- After you run ansible-playbook site.yml as a part of the above procedures, the URL for the dashboard is printed. See Installing dashboard using Ansible in the Dashboard guide for more information on locating the URL and accessing the dashboard.
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
- See Installing dashboard using Ansible in the Dashboard guide for more information.
8.7. Manually upgrading the Ceph File System Metadata Server nodes and their operating systems
You can manually upgrade the Ceph File System (CephFS) Metadata Server (MDS) software on a Red Hat Ceph Storage cluster and the Red Hat Enterprise Linux operating system to a new major release at the same time.
Before you upgrade the storage cluster, reduce the number of active MDS ranks to one per file system. This eliminates any possible version conflicts between multiple MDS. In addition, take all standby nodes offline before upgrading.
This is because the MDS cluster does not possess built-in versioning or file system flags. Without these features, multiple MDS might communicate using different versions of the MDS software, and could cause assertions or other faults to occur.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The nodes are running Red Hat Enterprise Linux 7.9.
- The nodes are using Red Hat Ceph Storage version 3.3z6 or 4.1.
- Access to the installation source for Red Hat Enterprise Linux 8.3.
- Root-level access to all nodes in the storage cluster.
The underlying XFS filesystem must be formatted with ftype=1 or with d_type support. Run the command xfs_info /var to ensure the ftype is set to 1. If the value of ftype is not 1, attach a new disk or create a volume. On top of this new device, create a new XFS filesystem and mount it on /var/lib/containers.
Starting with Red Hat Enterprise Linux 8, mkfs.xfs enables ftype=1 by default.
Procedure
Reduce the number of active MDS ranks to 1:
Syntax
ceph fs set FILE_SYSTEM_NAME max_mds 1

Example
[root@mds ~]# ceph fs set fs1 max_mds 1

Wait for the cluster to stop all of the MDS ranks. When all of the MDS have stopped, only rank 0 should be active. The rest should be in standby mode. Check the status of the file system:
[root@mds ~]# ceph status

Use systemctl to take all standby MDS offline:

[root@mds ~]# systemctl stop ceph-mds.target

Confirm that only one MDS is online, and that it has rank 0 for the file system:

[root@mds ~]# ceph status

Disable the tools repository for the operating system version:
If you are upgrading from Red Hat Ceph Storage 3 on RHEL 7, disable the Red Hat Ceph Storage 3 tools repository:
[root@mds ~]# subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms

If you are using Red Hat Ceph Storage 4, disable the Red Hat Ceph Storage 4 tools repository:
[root@mds ~]# subscription-manager repos --disable=rhel-7-server-rhceph-4-tools-rpms
- Install the leapp utility. For more information about leapp, refer to Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- Run through the leapp preupgrade checks. For more information, refer to Assessing upgradability from the command line.
- Edit /etc/ssh/sshd_config and set PermitRootLogin to yes.

Restart the OpenSSH SSH daemon:

[root@mds ~]# systemctl restart sshd.service

Remove the iSCSI module from the Linux kernel:

[root@mds ~]# modprobe -r iscsi

- Perform the upgrade. See Performing the upgrade from RHEL 7 to RHEL 8.
- Reboot the MDS node.
Enable the tools repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8:
[root@mds ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

Install the ceph-mds package:

[root@mds ~]# dnf install ceph-mds -y

- Optional: Install the packages for any Ceph services that are colocated on this node. Enable additional Ceph repositories, if needed.

Optional: Install the leveldb package, which is needed by other Ceph services:

[root@mds ~]# dnf install leveldb
- Restore the ceph-client-admin.keyring and ceph.conf files from a node that has not been upgraded yet, or from a node that has already had those files restored.

Switch any existing CRUSH buckets to the latest bucket type, straw2:

# ceph osd getcrushmap -o backup-crushmap
# ceph osd crush set-all-straw-buckets-to-straw2

Start the MDS service:

[root@mds ~]# systemctl restart ceph-mds.target

Verify that the daemon is active:

[root@mds ~]# ceph -s

- Follow the same processes for the standby daemons.
When you have finished restarting all of the MDS in standby, restore the previous value of max_mds for your cluster:

Syntax

ceph fs set FILE_SYSTEM_NAME max_mds ORIGINAL_VALUE

Example
[root@mds ~]# ceph fs set fs1 max_mds 5
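The "wait for the cluster to stop all of the MDS ranks" step earlier in this procedure can be sketched as a polling loop. Here the status query is stubbed with a function that returns a decreasing count so the sketch terminates; the wait_for_single_rank and poll_active_mds names are assumptions, and on a live node you would parse ceph status output instead of using the stub:

```shell
# Stub standing in for a real status query: each poll reports one fewer
# active MDS, simulating ranks stopping after "max_mds 1" is set.
count=3
poll_active_mds() {
  current=$count
  count=$((count - 1))
}

# Poll until a single active MDS rank remains, with a bounded retry count.
wait_for_single_rank() {
  tries=0
  while :; do
    poll_active_mds
    [ "$current" -le 1 ] && break
    tries=$((tries + 1))
    [ "$tries" -ge 60 ] && { echo "timed out" >&2; return 1; }
    # sleep 5   # on a live cluster, pause between polls
  done
  echo "only one MDS rank remains active"
}

wait_for_single_rank
```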
8.8. Recovering from an operating system upgrade failure on an OSD node
As a system administrator, if you have a failure when using the procedure Manually upgrading Ceph OSD nodes and their operating systems, you can recover from the failure using the following procedure. In this procedure, you perform a fresh install of Red Hat Enterprise Linux 8.4 on the node and recover the OSDs with no significant backfilling of data, other than the writes to the OSDs that occurred while they were down.
DO NOT touch the media backing the OSDs or their respective wal.db or block.db databases.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- An OSD node that failed to upgrade.
- Access to the installation source for Red Hat Enterprise Linux 8.4.
Procedure
Perform a standard installation of Red Hat Enterprise Linux 8.4 on the failed node and enable the Red Hat Enterprise Linux repositories.
Enable the repositories for Red Hat Ceph Storage 4 for Red Hat Enterprise Linux 8.
Enable the tools repository:
# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

Enable the osd repository:
# subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms
Install the ceph-osd package:

# dnf install ceph-osd

- Restore the ceph.conf file to /etc/ceph from a node which has not been upgraded yet, or from a node that has already had those files restored.

Start the OSD service:
# systemctl start ceph-osd.target

Activate the object store devices:
ceph-volume lvm activate --all

Watch the recovery of the OSDs and the cluster backfill writes to the recovered OSDs:
# ceph -w

Monitor the output until all PGs are in the active+clean state.
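The "all PGs active+clean" condition can be expressed as a small check on the pgs summary line of the status output. The pgs_clean helper and the sample lines are illustrative assumptions; a recovery watch loop would re-run the check against live output until it succeeds:

```shell
# Hypothetical helper: given the "pgs:" summary line from "ceph -s", return
# success only when no recovery-related states remain and active+clean is
# reported.
pgs_clean() {
  line="$1"
  case "$line" in
    *degraded*|*undersized*|*peering*|*remapped*|*backfill*) return 1 ;;
    *active+clean*) return 0 ;;
    *) return 1 ;;
  esac
}

pgs_clean '    pgs: 128 active+clean' && echo "cluster has recovered"
```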
Additional Resources
- See Manually upgrading a Red Hat Ceph Storage cluster and operating system in the Installation Guide for more information.
- See Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 for more information.
8.9. Additional Resources
- If you do not need to upgrade the operating system to a new major release, see Upgrading a Red Hat Ceph Storage cluster.