Chapter 9. Upgrading an overcloud with director-deployed Ceph deployments
If your environment includes a director-deployed Red Hat Ceph Storage deployment, with or without hyperconverged infrastructure (HCI) nodes, you must upgrade it to Red Hat Ceph Storage 5. After the upgrade to version 5, cephadm manages Red Hat Ceph Storage instead of ceph-ansible.
If you are using the Red Hat Ceph Storage Object Gateway (RGW), ensure that all RGW pools have the application label rgw as described in Why are the RGW services crashing after running the cephadm adoption playbook?.
Implementing this configuration change addresses a common issue encountered when upgrading from Red Hat Ceph Storage Release 4 to 5.
9.1. Installing ceph-ansible
If you deployed Red Hat Ceph Storage using director, you must complete this procedure. The ceph-ansible package is required to upgrade Red Hat Ceph Storage with Red Hat OpenStack Platform.
Procedure
Enable the Ceph 5 Tools repository:
    [stack@director ~]$ sudo subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms

Install the ceph-ansible package:

    [stack@director ~]$ sudo dnf install -y ceph-ansible
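Optionally, as a quick check, you can confirm that the package is installed before you continue:

    [stack@director ~]$ rpm -q ceph-ansible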
9.2. Downloading Red Hat Ceph Storage containers to the undercloud from Satellite
If the Red Hat Ceph Storage container image is hosted on a Red Hat Satellite Server, you must download a copy of the image from Satellite to the undercloud before you start the Red Hat Ceph Storage upgrade.
Prerequisite
- The required Red Hat Ceph Storage container image is hosted on the Satellite Server.
Procedure
- Log in to the undercloud node as the stack user.
- Download the Red Hat Ceph Storage container image from the Satellite Server:

    $ sudo podman pull <ceph_image_file>

  Replace <ceph_image_file> with the Red Hat Ceph Storage container image file hosted on the Satellite Server. The following is an example of this command:

    $ sudo podman pull satellite.example.com/container-images-osp-17_1-rhceph-5-rhel8:latest
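Optionally, you can confirm that the image is now present in the undercloud's local container storage; the rhceph filter below assumes the image name contains that string, as in the example above:

    $ sudo podman images | grep rhceph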
9.3. Upgrading to Red Hat Ceph Storage 5
Upgrade the following nodes from Red Hat Ceph Storage version 4 to version 5:
- Red Hat Ceph Storage nodes
- Hyperconverged infrastructure (HCI) nodes, which contain combined Compute and Ceph OSD services
For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact.
Red Hat Ceph Storage 5 uses Prometheus v4.10, which has the following known issue: If you enable Red Hat Ceph Storage dashboard, two data sources are configured on the dashboard. For more information about this known issue, see BZ#2054852.
Red Hat Ceph Storage 6 uses Prometheus v4.12, which does not include this known issue. Red Hat recommends upgrading from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 6 after the upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to 17.1 is complete. To upgrade from Red Hat Ceph Storage version 5 to version 6, begin with one of the following procedures for your environment:
- Director-deployed Red Hat Ceph Storage environments: Updating the cephadm client
- External Red Hat Ceph Storage cluster environments: Updating the Red Hat Ceph Storage container image
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:

    $ source ~/stackrc

- Run the Red Hat Ceph Storage external upgrade process with the ceph tag:

    $ openstack overcloud external-upgrade run \
      --skip-tags "ceph_ansible_remote_tmp" \
      --stack <stack> \
      --tags ceph,facts 2>&1
  - Replace <stack> with the name of your stack.
  - If you are running this command at a DCN deployed site, add cleanup_cephansible to the comma-separated list of values for the --skip-tags parameter.
- Run the ceph versions command to confirm that all Red Hat Ceph Storage daemons have been upgraded to version 5. This command is available in the Ceph Monitor container that is hosted by default on the Controller node.

  Important: The command in the previous step runs the ceph-ansible rolling_update.yaml playbook to update the cluster from version 4 to 5. It is important to confirm that all daemons have been updated before proceeding with this procedure.

  The following example demonstrates the use and output of this command. All daemons in your deployment should show a package version of 16.2.* and the keyword pacific.
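  An illustrative sketch of the expected output follows; the container name ceph-mon-controller-0 assumes the default ceph-ansible container naming, and the <build> string and daemon counts are placeholders for your deployment:

    [cloud-admin@controller-0 ~]$ sudo podman exec ceph-mon-controller-0 ceph versions
    {
        "mon": {
            "ceph version 16.2.* (<build>) pacific (stable)": 3
        },
        "mgr": {
            "ceph version 16.2.* (<build>) pacific (stable)": 3
        },
        "osd": {
            "ceph version 16.2.* (<build>) pacific (stable)": 9
        },
        "rgw": {
            "ceph version 16.2.* (<build>) pacific (stable)": 3
        },
        "overall": {
            "ceph version 16.2.* (<build>) pacific (stable)": 18
        }
    }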
  Note: The output of the command sudo podman ps | grep ceph on any server hosting Red Hat Ceph Storage should return a version 5 container.

- Create the ceph-admin user and distribute the appropriate keyrings:
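  A hedged sketch of this step, assuming the TripleO ceph-admin user playbook is used; the playbook path, log path, inventory location, and variable names are assumptions to verify against your release, and the --limit list mirrors the roles used in the package update step that follows:

    [stack@director ~]$ ANSIBLE_LOG_PATH=/home/stack/cephadm_enable_user_key.log \
      ANSIBLE_HOST_KEY_CHECKING=false \
      ansible-playbook -i /home/stack/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml \
        -e tripleo_admin_user=ceph-admin \
        -e distribute_private_key=true \
        --limit Undercloud,ceph_mon,ceph_mgr,ceph_rgw,ceph_mds,ceph_nfs,ceph_grafana,ceph_osd \
        /usr/share/ansible/tripleo-playbooks/ceph-admin-user-playbook.yml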
- Update the packages on the Red Hat Ceph Storage nodes:

    $ openstack overcloud upgrade run \
      --stack <stack> \
      --skip-tags ceph_ansible_remote_tmp \
      --tags setup_packages --limit Undercloud,ceph_mon,ceph_mgr,ceph_rgw,ceph_mds,ceph_nfs,ceph_grafana,ceph_osd \
      --playbook /home/stack/overcloud-deploy/<stack>/config-download/<stack>/upgrade_steps_playbook.yaml 2>&1

  If you are running this command at a DCN deployed site, add cleanup_cephansible to the comma-separated list of values for the --skip-tags parameter.

  Note: By default, the Ceph Monitor service (CephMon) runs on the Controller nodes unless you have used the composable roles feature to host it elsewhere. This command includes the ceph_mon tag, which also updates the packages on the nodes hosting the Ceph Monitor service (the Controller nodes by default).
- Configure the Red Hat Ceph Storage nodes to use cephadm:

    $ openstack overcloud external-upgrade run \
      --skip-tags ceph_ansible_remote_tmp \
      --stack <stack> \
      --tags cephadm_adopt 2>&1

  If you are running this command at a DCN deployed site, add cleanup_cephansible to the comma-separated list of values for the --skip-tags parameter.

  Note: The adoption of cephadm can cause downtime in the RGW and Alertmanager services. For more information about these issues, see Restarting Red Hat Ceph Storage 5 services.
- Run the ceph -s command to confirm that all processes are now managed by the Red Hat Ceph Storage orchestrator. This command is available in the Ceph Monitor container that is hosted by default on the Controller node.

  Important: The command in the previous step runs the ceph-ansible cephadm-adopt.yml playbook to move future management of the cluster from ceph-ansible to cephadm and the Red Hat Ceph Storage orchestrator. It is important to confirm that all processes are managed by the orchestrator before proceeding with this procedure.

  The following example demonstrates the use and output of this command. As demonstrated in this example, there are 63 daemons that are not managed by cephadm. This indicates there was a problem with the run of the ceph-ansible cephadm-adopt.yml playbook. Contact Red Hat Ceph Storage support to troubleshoot these errors before proceeding with the upgrade. When the adoption process has completed successfully, there should not be any warning about stray daemons not managed by cephadm.

    $ sudo cephadm shell -- ceph -s
      cluster:
        id:     f5a40da5-6d88-4315-9bb3-6b16df51d765
        health: HEALTH_WARN
                63 stray daemon(s) not managed by cephadm
- Modify the overcloud_upgrade_prepare.sh file to replace the ceph-ansible file with a cephadm heat environment file, as shown in the sketch that follows.

  Important: Do not include ceph-ansible environment or deployment files, for example, environments/ceph-ansible/ceph-ansible.yaml or deployment/ceph-ansible/ceph-grafana.yaml, in openstack deployment commands such as openstack overcloud upgrade prepare and openstack overcloud deploy. For more information about replacing ceph-ansible environment and deployment files with cephadm files, see Implications of upgrading to Red Hat Ceph Storage 5.
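  A hedged sketch of the change, assuming that overcloud_upgrade_prepare.sh wraps the openstack overcloud upgrade prepare command and that the original script passed the ceph-ansible environment file with -e; only the relevant lines are shown:

    # Before (remove this environment file from the script):
    #   -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
    # After (cephadm replacement):
        -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/<cephadm-file> \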
  where:

  - <cephadm-file>
    - If you deployed RGW in a previous RHOSP version, or if you plan to deploy RGW, use environments/cephadm/cephadm.yaml.
    - If you plan to deploy RBD only, use environments/cephadm/cephadm-rbd-only.yaml.
- Modify the overcloud_upgrade_prepare.sh file to remove the following environment file if you added it earlier when you ran the overcloud upgrade preparation:

    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/manila-cephfsganesha-config.yaml

- Save the file.
- Run the upgrade preparation command:
    $ source stackrc
    $ chmod 755 /home/stack/overcloud_upgrade_prepare.sh
    $ sh /home/stack/overcloud_upgrade_prepare.sh

- If your deployment includes HCI nodes, create a temporary hci.conf file in a cephadm container of a Controller node:

  - Log in to a Controller node:
      $ ssh cloud-admin@<controller_ip>

    Replace <controller_ip> with the IP address of the Controller node.
  - Retrieve a cephadm shell from the Controller node:

    Example:

      [cloud-admin@controller-0 ~]$ sudo cephadm shell

  - In the cephadm shell, create a temporary hci.conf file:
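    A hedged sketch of typical hci.conf contents for collocated Compute and Ceph OSD services; the keys shown enable OSD memory autotuning, and the 0.2 ratio is an illustrative value to adjust per the HCI guidance referenced below:

      [osd]
      osd_memory_target_autotune = true
      osd_numa_auto_affinity = true

      [mgr]
      mgr/cephadm/autotune_memory_target_ratio = 0.2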
  - Apply the configuration:

    Example:

      [ceph: root@edpm-controller-0 /]# ceph config assimilate-conf -i hci.conf

    For more information about adjusting the configuration of your HCI deployment, see Ceph configuration overrides for HCI in Deploying a hyperconverged infrastructure.
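    Optionally, you can confirm that the overrides were absorbed into the cluster configuration; this check assumes the keys from the sketch above:

      [ceph: root@edpm-controller-0 /]# ceph config dump | grep autotune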
- You must upgrade the operating system on all HCI nodes to RHEL 9. For more information about upgrading Compute and HCI nodes, see Upgrading Compute nodes to RHEL 9.2.
- If Red Hat Ceph Storage RADOS Gateway (RGW) is used for object storage, complete the steps in Ceph config overrides set for the RGWs on the RHCS 4.x does not get reflected after the Upgrade to RHCS 5.x to ensure that your Red Hat Ceph Storage 4 configuration is reflected completely in Red Hat Ceph Storage 5.
- If the Red Hat Ceph Storage Dashboard is installed, complete the steps in After FFU 16.2 to 17.1, Ceph Grafana dashboard failed to start due to incorrect dashboard configuration to ensure that it is properly configured.
9.4. Restarting Red Hat Ceph Storage 5 services
After the adoption from ceph-ansible to cephadm, the Alertmanager service (a component of the dashboard) or RGW might go offline. This is due to known issues related to cephadm adoption in Red Hat Ceph Storage 5.
You can restart these services immediately by following the procedures in this section, but doing so requires restarting the HAProxy service. Restarting the HAProxy service causes a brief service interruption of the Red Hat OpenStack Platform (RHOSP) control plane.
Do not perform the procedures in this section if any of the following statements are true:
- You do not deploy Red Hat Ceph Storage Dashboard or RGW.
- You do not immediately require the Alertmanager and RGW services.
- You do not want the control plane downtime caused by restarting HAProxy.
- You plan on upgrading to Red Hat Ceph Storage 6 before ending the maintenance window for the upgrade.
If any of these statements are true, proceed with the upgrade process as described in subsequent chapters and upgrade to Red Hat Ceph Storage 6 when you reach the section Upgrading Red Hat Ceph Storage 5 to 6. Complete all intervening steps in the upgrade process before attempting to upgrade to Release 6.
9.4.1. Restarting the Red Hat Ceph Storage 5 Object Gateway
After migrating from ceph-ansible to cephadm, you might have to restart the Red Hat Ceph Storage Object Gateway (RGW) before continuing with the process if it has not already restarted. RGW starts automatically when HAProxy is offline. After RGW is online, HAProxy can be started. This is because RGW currently checks whether ports are open for all IPs instead of only the IPs in use. When BZ#2356354 is resolved, RGW will only check the ports for the IPs in use.
Restarting the HAProxy service introduces downtime to the Red Hat OpenStack Platform control plane. The downtime lasts as long as is required for the HAProxy service to restart.
Procedure
Log in to the OpenStack Controller node.
Note: Confirm that you are logged in to a node that is running the Ceph Manager service. In default deployments, this is an OpenStack Controller node.
Determine the current health of the Red Hat Ceph Storage 5 deployment:
    # sudo cephadm shell -- ceph health detail

Observe the command output.
If the output displays no issues with the deployment health, you do not have to complete this procedure. If the following error is displayed in the command output, you must proceed with restarting HAProxy to restart RGW:
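The following is an illustrative sketch of the kind of error, assuming the RGW daemons failed to start because HAProxy still holds the frontend port; the exact health code, daemon names, and wording depend on your deployment:

    # sudo cephadm shell -- ceph health detail
    HEALTH_WARN 3 failed cephadm daemon(s)
    [WRN] CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s)
        daemon rgw.host42.host42.qfeedh on host42 is in error state
        daemon rgw.host43.host43.ykpwef on host43 is in error state
        daemon rgw.host44.host44.tsepgo on host44 is in error state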
Stop the HAProxy service:

    # pcs resource disable haproxy-bundle

Note: RGW should now restart automatically.
Confirm that RGW restarted:
    # sudo cephadm shell -- ceph orch ps

Observe the command output.
The following is an example of the command output confirming that all services are running:
    rgw.host42.host42.qfeedh  host42  10.0.42.20:8080  running (62s)  58s ago  62s  60.1M  -  16.2.10-275.el8cp  d7a74ab527fa  b60d550cdc91
    rgw.host43.host43.ykpwef  host43  10.0.42.21:8080  running (65s)  58s ago  64s  58.9M  -  16.2.10-275.el8cp  d7a74ab527fa  ddea7b33bfc9
    rgw.host44.host44.tsepgo  host44  10.0.42.22:8080  running (56s)  51s ago  55s  62.2M  -  16.2.10-275.el8cp  d7a74ab527fa  c1e87e8744ce

Start the HAProxy service:
    # pcs resource enable haproxy-bundle

Note: When BZ#2356354 is resolved, this procedure will no longer be necessary. Upgrading to Red Hat Ceph Storage 6 by using the procedures in Upgrading Red Hat Ceph Storage 5 to 6 will also correct this issue.
9.4.2. Restarting the Red Hat Ceph Storage 5 Alertmanager service
After migrating from ceph-ansible to cephadm, you can restart the Alertmanager service before continuing with the process. Restarting the Alertmanager service requires restarting the HAProxy service as well.
Restarting the HAProxy service introduces downtime to the Red Hat OpenStack Platform control plane. The downtime lasts as long as is required for the HAProxy service to restart.
Procedure
Log in to the OpenStack Controller node.
Note: Confirm that you are logged in to a node that is running the Ceph Manager service. In default deployments, this is an OpenStack Controller node.
View the current Alertmanager specification file:
    $ sudo cephadm shell -- ceph orch ls --export alertmanager

Create a specification file for the Alertmanager service based on the output from the previous step.
The following is an example of a specification file:
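This example is a hedged sketch: it assumes a single Alertmanager instance and uses 172.17.3.0/24 as an illustrative Storage network CIDR; base the real file on the ceph orch ls --export output from the previous step:

    networks:
    - 172.17.3.0/24
    placement:
      count: 1
    service_name: alertmanager
    service_type: alertmanager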
Note: The IP addresses in the networks list should correspond with the Storage/Ceph public networks in your environment.

Save the specification file as /root/alertmanager.spec.

Stop the HAProxy service:
    # pcs resource disable haproxy-bundle

Stop the Alertmanager service:
    # cephadm shell -k /etc/ceph/<stack>.client.admin.keyring -- ceph orch rm alertmanager

Replace <stack> with the name of your stack.
Start the Alertmanager service:
    # cephadm shell -k /etc/ceph/<stack>.client.admin.keyring -m /root/alertmanager.spec -- ceph orch apply -i /mnt/alertmanager.spec

Replace <stack> with the name of your stack.
Start the HAProxy service:
    # pcs resource enable haproxy-bundle
Perform this procedure again if the Alertmanager service does not restart, adding a port definition to the specification file. The following is the previous specification file example with a port definition added:
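This version is also a hedged sketch; the port value 4200 is illustrative only and must match your environment:

    networks:
    - 172.17.3.0/24
    placement:
      count: 1
    port: 4200
    service_name: alertmanager
    service_type: alertmanager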
In this example, port is a custom port definition. Use a port that corresponds to your deployment environment.
9.5. Implications of upgrading to Red Hat Ceph Storage 5
The Red Hat Ceph Storage cluster is now upgraded to version 5. This has the following implications:
- You no longer use ceph-ansible to manage Red Hat Ceph Storage. Instead, the Ceph Orchestrator manages the Red Hat Ceph Storage cluster. For more information about the Ceph Orchestrator, see The Ceph Operations Guide.
- You no longer need to perform stack updates to make changes to the Red Hat Ceph Storage cluster in most cases. Instead, you can run day two Red Hat Ceph Storage operations directly on the cluster as described in The Ceph Operations Guide. You can also scale Red Hat Ceph Storage cluster nodes up or down as described in Scaling the Ceph Storage cluster in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director.
- You can inspect the Red Hat Ceph Storage cluster’s health. For more information about monitoring your cluster’s health, see Monitoring Red Hat Ceph Storage nodes in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director.
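For example, a quick health check run from a Controller node uses the same cephadm shell commands shown earlier in this chapter:

    [cloud-admin@controller-0 ~]$ sudo cephadm shell -- ceph -s
    [cloud-admin@controller-0 ~]$ sudo cephadm shell -- ceph health detail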
Do not include environment files or deployment files, for example, environments/ceph-ansible/ceph-ansible.yaml or deployment/ceph-ansible/ceph-grafana.yaml, in openstack deployment commands such as openstack overcloud upgrade prepare and openstack overcloud deploy. If your deployment includes ceph-ansible environment or deployment files, replace them with one of the following options:

    Red Hat Ceph Storage deployment          Original ceph-ansible file                       Cephadm file replacement
    Ceph RADOS Block Device (RBD) only       environments/ceph-ansible/ceph-ansible.yaml      environments/cephadm/cephadm-rbd-only.yaml
    RBD and the Ceph Object Gateway (RGW)    environments/ceph-ansible/ceph-rgw.yaml          environments/cephadm/cephadm.yaml
    Ceph Dashboard                           environments/ceph-ansible/ceph-dashboard.yaml    Respective file in environments/cephadm/
    Ceph MDS                                 environments/ceph-ansible/ceph-mds.yaml          Respective file in environments/cephadm/