Chapter 6. Upgrading an overcloud with director-deployed Ceph deployments
If your environment includes a director-deployed Red Hat Ceph Storage deployment, with or without hyperconverged infrastructure (HCI) nodes, you must upgrade the deployment to Red Hat Ceph Storage 5. After the upgrade to version 5, cephadm manages Red Hat Ceph Storage instead of ceph-ansible.
If you are using the Red Hat Ceph Storage Object Gateway (RGW), ensure that all RGW pools have the application label rgw, as described in Why are the RGW services crashing after running the cephadm adoption playbook?. Implementing this configuration change addresses a common issue encountered when upgrading from Red Hat Ceph Storage Release 4 to 5.
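The following is a minimal sketch of how you might check and, if necessary, set the label from a Controller node that hosts a Ceph Monitor container. The container name pattern is taken from the verification steps later in this chapter, and <rgw_pool> is a placeholder for each RGW pool name, for example .rgw.root or default.rgw.control:
$ sudo podman exec ceph-mon-$(hostname -f) ceph osd pool application get <rgw_pool>
$ sudo podman exec ceph-mon-$(hostname -f) ceph osd pool application enable <rgw_pool> rgw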
6.1. Installing ceph-ansible
If you deployed Red Hat Ceph Storage using director, you must complete this procedure. The ceph-ansible
package is required to upgrade Red Hat Ceph Storage with Red Hat OpenStack Platform.
Procedure
Enable the Ceph 5 Tools repository:
[stack@director ~]$ sudo subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
Install the ceph-ansible package:
[stack@director ~]$ sudo dnf install -y ceph-ansible
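Optionally, you can confirm that the package installed correctly; this quick check is not part of the official procedure:
[stack@director ~]$ rpm -q ceph-ansible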
6.2. Downloading Red Hat Ceph Storage containers to the undercloud from Satellite
If the Red Hat Ceph Storage container image is hosted on a Red Hat Satellite Server, you must download a copy of the image to the undercloud before you start the Red Hat Ceph Storage upgrade.
Prerequisite
- The required Red Hat Ceph Storage container image is hosted on the Satellite Server.
Procedure
Log in to the undercloud node as the stack user.
Download the Red Hat Ceph Storage container image from the Satellite Server:
$ sudo podman pull <ceph_image_file>
Replace <ceph_image_file> with the Red Hat Ceph Storage container image file hosted on the Satellite Server. The following is an example of this command:
$ sudo podman pull satellite.example.com/container-images-osp-17_1-rhceph-5-rhel8:latest
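After the pull completes, one way to confirm that the image is available locally is to list the local images; the grep pattern assumes the image name contains rhceph, as in the example above:
$ sudo podman images | grep rhceph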
6.3. Upgrading to Red Hat Ceph Storage 5
Upgrade the following nodes from Red Hat Ceph Storage version 4 to version 5:
- Red Hat Ceph Storage nodes
- Hyperconverged infrastructure (HCI) nodes, which contain combined Compute and Ceph OSD services
For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact.
Red Hat Ceph Storage 5 uses Prometheus v4.10, which has the following known issue: If you enable the Red Hat Ceph Storage Dashboard, two data sources are configured on the dashboard. For more information about this known issue, see BZ#2054852.
Red Hat Ceph Storage 6 uses Prometheus v4.12, which does not include this known issue. Red Hat recommends upgrading from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 6 after the upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to 17.1 is complete. To upgrade from Red Hat Ceph Storage version 5 to version 6, begin with one of the following procedures for your environment:
- Director-deployed Red Hat Ceph Storage environments: Updating the cephadm client
- External Red Hat Ceph Storage cluster environments: Updating the Red Hat Ceph Storage container image
Procedure
Log in to the undercloud host as the stack user.
Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the Red Hat Ceph Storage external upgrade process with the ceph tag:
$ openstack overcloud external-upgrade run \
  --skip-tags "ceph_ansible_remote_tmp" \
  --stack <stack> \
  --tags ceph,facts 2>&1
- Replace <stack> with the name of your stack.
- If you are running this command at a DCN deployed site, add cleanup_cephansible to the comma-separated list of values for the --skip-tags parameter, as shown in the following example.
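For example, at a DCN deployed site the command from the previous step might look like the following sketch, with <stack> as before:
$ openstack overcloud external-upgrade run \
  --skip-tags "ceph_ansible_remote_tmp,cleanup_cephansible" \
  --stack <stack> \
  --tags ceph,facts 2>&1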
Run the ceph versions command to confirm that all Red Hat Ceph Storage daemons have been upgraded to version 5. This command is available in the ceph monitor container that is hosted by default on the Controller node.
Important: The command in the previous step runs the ceph-ansible rolling_update.yml playbook to update the cluster from version 4 to 5. It is important to confirm that all daemons have been updated before proceeding with this procedure.
The following example demonstrates the use and output of this command. As demonstrated in the example, all daemons in your deployment should show a package version of 16.2.* and the keyword pacific.
$ sudo podman exec ceph-mon-$(hostname -f) ceph versions
{
    "mon": {
        "ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)": 3
    },
    "osd": {
        "ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)": 180
    },
    "mds": {},
    "rgw": {
        "ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)": 189
    }
}
Note: The output of the command sudo podman ps | grep ceph on any server hosting Red Hat Ceph Storage should return a version 5 container.
Create the ceph-admin user and distribute the appropriate keyrings:
ANSIBLE_LOG_PATH=/home/stack/cephadm_enable_user_key.log \
ANSIBLE_HOST_KEY_CHECKING=false \
ansible-playbook -i /home/stack/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml \
  -b -e ansible_python_interpreter=/usr/libexec/platform-python /usr/share/ansible/tripleo-playbooks/ceph-admin-user-playbook.yml \
  -e tripleo_admin_user=ceph-admin \
  -e distribute_private_key=true \
  --limit Undercloud,ceph_mon,ceph_mgr,ceph_rgw,ceph_mds,ceph_nfs,ceph_grafana,ceph_osd
Update the packages on the Red Hat Ceph Storage nodes:
$ openstack overcloud upgrade run \
  --stack <stack> \
  --skip-tags ceph_ansible_remote_tmp \
  --tags setup_packages --limit Undercloud,ceph_mon,ceph_mgr,ceph_rgw,ceph_mds,ceph_nfs,ceph_grafana,ceph_osd \
  --playbook /home/stack/overcloud-deploy/<stack>/config-download/<stack>/upgrade_steps_playbook.yaml 2>&1
If you are running this command at a DCN deployed site, add cleanup_cephansible to the comma-separated list of values for the --skip-tags parameter.
Note: By default, the Ceph Monitor service (CephMon) runs on the Controller nodes unless you have used the composable roles feature to host it elsewhere. This command includes ceph_mon in the --limit list, which also updates the packages on the nodes hosting the Ceph Monitor service (the Controller nodes by default).
Configure the Red Hat Ceph Storage nodes to use cephadm:
$ openstack overcloud external-upgrade run \
  --skip-tags ceph_ansible_remote_tmp \
  --stack <stack> \
  --tags cephadm_adopt 2>&1
If you are running this command at a DCN deployed site, add cleanup_cephansible to the comma-separated list of values for the --skip-tags parameter.
Note: The adoption of cephadm can cause downtime in the RGW and Alertmanager services. For more information about these issues, see Restarting Red Hat Ceph Storage 5 services.
Run the ceph -s command to confirm that all processes are now managed by the Red Hat Ceph Storage orchestrator. This command is available in the ceph monitor container that is hosted by default on the Controller node.
Important: The command in the previous step runs the ceph-ansible cephadm-adopt.yml playbook to move future management of the cluster from ceph-ansible to cephadm and the Red Hat Ceph Storage orchestrator. It is important to confirm that all processes are managed by the orchestrator before proceeding with this procedure.
The following example demonstrates the use and output of this command. As demonstrated in this example, there are 63 daemons that are not managed by cephadm. This indicates that there was a problem with the run of the ceph-ansible cephadm-adopt.yml playbook. Contact Red Hat Ceph Storage support to troubleshoot these errors before proceeding with the upgrade. When the adoption process has completed successfully, there should not be any warning about stray daemons not managed by cephadm.
$ sudo cephadm shell -- ceph -s
  cluster:
    id:     f5a40da5-6d88-4315-9bb3-6b16df51d765
    health: HEALTH_WARN
            63 stray daemon(s) not managed by cephadm
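If stray daemons are reported, a quick way to see which daemons the orchestrator is flagging before you contact support is to query the health detail; this command is also used later in this chapter:
$ sudo cephadm shell -- ceph health detail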
Modify the overcloud_upgrade_prepare.sh file to replace the ceph-ansible file with a cephadm heat environment file:
Important: Do not include ceph-ansible environment or deployment files, for example, environments/ceph-ansible/ceph-ansible.yaml or deployment/ceph-ansible/ceph-grafana.yaml, in openstack deployment commands such as openstack overcloud upgrade prepare and openstack overcloud deploy. For more information about replacing ceph-ansible environment and deployment files with cephadm files, see Implications of upgrading to Red Hat Ceph Storage 5.
#!/bin/bash
openstack overcloud upgrade prepare --yes \
  --timeout 460 \
  --templates /usr/share/openstack-tripleo-heat-templates \
  --ntp-server 192.168.24.1 \
  --stack <stack> \
  -r /home/stack/roles_data.yaml \
  -e /home/stack/templates/internal.yaml \
  …
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml \
  -e ~/containers-prepare-parameter.yaml
Note: This example uses the environments/cephadm/cephadm-rbd-only.yaml file because RGW is not deployed. If you plan to deploy RGW, use environments/cephadm/cephadm.yaml after you finish upgrading your RHOSP environment, and then run a stack update.
Modify the overcloud_upgrade_prepare.sh file to remove the following environment file if you added it earlier when you ran the overcloud upgrade preparation:
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/manila-cephfsganesha-config.yaml
- Save the file.
Run the upgrade preparation command:
$ source stackrc
$ chmod 755 /home/stack/overcloud_upgrade_prepare.sh
$ sh /home/stack/overcloud_upgrade_prepare.sh
If your deployment includes HCI nodes, create a temporary hci.conf file in a cephadm container of a Controller node:
Log in to a Controller node:
$ ssh cloud-admin@<controller_ip>
Replace <controller_ip> with the IP address of the Controller node.
Retrieve a cephadm shell from the Controller node:
Example
[cloud-admin@controller-0 ~]$ sudo cephadm shell
In the cephadm shell, create a temporary hci.conf file:
Example
[ceph: root@edpm-controller-0 /]# cat <<EOF > hci.conf
[osd]
osd_memory_target_autotune = true
osd_numa_auto_affinity = true
[mgr]
mgr/cephadm/autotune_memory_target_ratio = 0.2
EOF
…
Apply the configuration:
Example
[ceph: root@edpm-controller-0 /]# ceph config assimilate-conf -i hci.conf
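Optionally, you can confirm that the overrides were absorbed into the cluster configuration database. This check is a sketch; run it inside the same cephadm shell, using one of the option names set above:
[ceph: root@edpm-controller-0 /]# ceph config get osd osd_memory_target_autotune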
For more information about adjusting the configuration of your HCI deployment, see Ceph configuration overrides for HCI in Deploying a hyperconverged infrastructure.
You must upgrade the operating system on all HCI nodes to RHEL 9. For more information on upgrading Compute and HCI nodes, see Upgrading Compute nodes to RHEL 9.2.
If Red Hat Ceph Storage Rados Gateway (RGW) is used for object storage, complete the steps in Ceph config overrides set for the RGWs on the RHCS 4.x does not get reflected after the Upgrade to RHCS 5.x to ensure your Red Hat Ceph Storage 4 configuration is reflected completely in Red Hat Ceph Storage 5.
If the Red Hat Ceph Storage Dashboard is installed, complete the steps in After FFU 16.2 to 17.1, Ceph Grafana dashboard failed to start due to incorrect dashboard configuration to ensure it is properly configured.
6.4. Restarting Red Hat Ceph Storage 5 services
After the adoption from ceph-ansible to cephadm, the Alertmanager service (a component of the dashboard) or RGW might go offline. This is due to known issues related to cephadm adoption in Red Hat Ceph Storage 5.
You can restart these services immediately by following the procedures in this section, but doing so requires restarting the HAProxy service. Restarting the HAProxy service causes a brief service interruption of the Red Hat OpenStack Platform (RHOSP) control plane.
Do not perform the procedures in this section if any of the following statements are true:
- You do not deploy Red Hat Ceph Storage Dashboard or RGW.
- You do not immediately require the Alertmanager and RGW services.
- You do not want the control plane downtime caused by restarting HAProxy.
- You plan on upgrading to Red Hat Ceph Storage 6 before ending the maintenance window for the upgrade.
If any of these statements are true, proceed with the upgrade process as described in subsequent chapters and upgrade to Red Hat Ceph Storage 6 when you reach the section Upgrading Red Hat Ceph Storage 5 to 6. Complete all intervening steps in the upgrade process before attempting to upgrade to Release 6.
6.4.1. Restarting the Red Hat Ceph Storage 5 Object Gateway
After migrating from ceph-ansible to cephadm, you might have to restart the Red Hat Ceph Storage Object Gateway (RGW) before continuing with the process if it has not already restarted. RGW starts automatically only when HAProxy is offline; after RGW is online, HAProxy can be started again. This is because RGW currently checks whether its ports are open on all IP addresses instead of only the IP addresses in use. When BZ#2356354 is resolved, RGW will check the ports only for the IP addresses in use.
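If you want to see what currently holds the RGW port on a Controller node, one way is to list the listening sockets. This is an optional check; port 8080 matches the example health output later in this section, and your deployment might use a different port:
$ sudo ss -tlnp | grep 8080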
Restarting the HAProxy service introduces downtime to the Red Hat OpenStack Platform control plane. The downtime lasts as long as is required for the HAProxy service to restart.
Procedure
Log in to the OpenStack Controller node.
Note: Confirm that you are logged in to a node that is hosting the Ceph Manager service. In default deployments, this is an OpenStack Controller node.
Determine the current health of the Red Hat Ceph Storage 5 deployment:
# sudo cephadm shell -- ceph health detail
Observe the command output.
If the output displays no issues with the deployment health, you do not have to complete this procedure. If the following error is displayed in the command output, you must proceed with restarting HAProxy to restart RGW:
HEALTH_WARN Failed to place 1 daemon(s); 3 failed cephadm daemon(s)
[WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)
    Failed while placing rgw.host42.host42.foo on host42: cephadm exited with an error code: 1, stderr:Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-5ffc7906-2722-4602-9478-e2fe6ad3ff49-rgw-host42-host42-foo
    /bin/podman: stderr Error: error inspecting object: no such container ceph-5ffc7906-2722-4602-9478-e2fe6ad3ff49-rgw-host42-host42-foo
    Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-5ffc7906-2722-4602-9478-e2fe6ad3ff49-rgw.host42.host42.foo
    /bin/podman: stderr Error: error inspecting object: no such container ceph-5ffc7906-2722-4602-9478-e2fe6ad3ff49-rgw.host42.host42.foo
    Deploy daemon rgw.host42.host42.foo ...
    Verifying port 8080 ...
    Cannot bind to IP 0.0.0.0 port 8080: [Errno 98] Address already in use
    ERROR: TCP Port(s) '8080' required for rgw already in use
Stop the HAProxy service:
# pcs resource disable haproxy-bundle
Note: RGW should now restart automatically.
Confirm that RGW restarted:
# sudo cephadm shell -- ceph orch ps
Observe the command output.
The following is an example of the command output confirming that all services are running:
rgw.host42.host42.qfeedh  host42  10.0.42.20:8080  running (62s)  58s ago  62s  60.1M  -  16.2.10-275.el8cp  d7a74ab527fa  b60d550cdc91
rgw.host43.host43.ykpwef  host43  10.0.42.21:8080  running (65s)  58s ago  64s  58.9M  -  16.2.10-275.el8cp  d7a74ab527fa  ddea7b33bfc9
rgw.host44.host44.tsepgo  host44  10.0.42.22:8080  running (56s)  51s ago  55s  62.2M  -  16.2.10-275.el8cp  d7a74ab527fa  c1e87e8744ce
Start the HAProxy service:
# pcs resource enable haproxy-bundle
Note: When BZ#2356354 is resolved, this procedure will no longer be necessary. Upgrading to Red Hat Ceph Storage 6 using the procedures in Upgrading Red Hat Ceph Storage 5 to 6 will also correct this issue.
6.4.2. Restarting the Red Hat Ceph Storage 5 Alertmanager service
After migrating from ceph-ansible to cephadm, you can restart the Alertmanager service before continuing with the process. Restarting the Alertmanager service requires restarting the HAProxy service as well.
Restarting the HAProxy service introduces downtime to the Red Hat OpenStack Platform control plane. The downtime lasts as long as is required for the HAProxy service to restart.
Procedure
Log in to the OpenStack Controller node.
Note: Confirm that you are logged in to a node that is hosting the Ceph Manager service. In default deployments, this is an OpenStack Controller node.
View the current Alertmanager specification file:
$ sudo cephadm shell -- ceph orch ls --export alertmanager
Create a specification file for the Alertmanager service based on the output from the previous step.
The following is an example of a specification file:
service_type: alertmanager
service_name: alertmanager
placement:
  count: 3
  label: monitoring
networks:
- 10.10.10.0/24
- 10.10.11.0/24
Note: The IP addresses in the networks list should correspond with the Storage/Ceph public networks in your environment.
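If you are not sure which networks to list, one way to look up the Ceph public network from the cluster configuration is the following check; the output depends on how the network was originally configured:
$ sudo cephadm shell -- ceph config get mon public_network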
Save the specification file as /root/alertmanager.spec.
Stop the HAProxy service:
# pcs resource disable haproxy-bundle
Stop the Alertmanager service:
# cephadm shell -k /etc/ceph/<stack>.client.admin.keyring -- ceph orch rm alertmanager
Replace <stack> with the name of your stack.
Start the Alertmanager service:
# cephadm shell -k /etc/ceph/<stack>.client.admin.keyring -m /root/alertmanager.spec -- ceph orch apply -i /mnt/alertmanager.spec
Replace <stack> with the name of your stack.
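Optionally, before you re-enable HAProxy, you can check that the orchestrator accepted the specification and is redeploying the service:
# cephadm shell -- ceph orch ls alertmanager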
Start the HAProxy service:
# pcs resource enable haproxy-bundle
If the Alertmanager service does not restart, perform this procedure again, adding a port definition to the specification file. The following is the previous specification file example with a port definition added:
service_type: alertmanager
service_name: alertmanager
placement:
  count: 3
  label: monitoring
networks:
- 10.10.10.0/24
- 10.10.11.0/24
spec:
  port: 4200
The port value is a custom port definition. Use a port that corresponds to your deployment environment.
6.5. Implications of upgrading to Red Hat Ceph Storage 5
The Red Hat Ceph Storage cluster is now upgraded to version 5. This has the following implications:
- You no longer use ceph-ansible to manage Red Hat Ceph Storage. Instead, the Ceph Orchestrator manages the Red Hat Ceph Storage cluster. For more information about the Ceph Orchestrator, see The Ceph Operations Guide.
- In most cases, you no longer need to perform stack updates to make changes to the Red Hat Ceph Storage cluster. Instead, you can run day two Red Hat Ceph Storage operations directly on the cluster, as described in The Ceph Operations Guide; see the example commands after this list. You can also scale Red Hat Ceph Storage cluster nodes up or down as described in Scaling the Ceph Storage cluster in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director.
- You can inspect the Red Hat Ceph Storage cluster’s health. For more information about monitoring your cluster’s health, see Monitoring Red Hat Ceph Storage nodes in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director.
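For example, the following read-only commands are standard Ceph Orchestrator day two operations that you can run from a node that hosts the Ceph Manager service:
$ sudo cephadm shell -- ceph orch host ls
$ sudo cephadm shell -- ceph orch ls
$ sudo cephadm shell -- ceph -s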
Do not include ceph-ansible environment files or deployment files, for example, environments/ceph-ansible/ceph-ansible.yaml or deployment/ceph-ansible/ceph-grafana.yaml, in openstack deployment commands such as openstack overcloud upgrade prepare and openstack overcloud deploy. If your deployment includes ceph-ansible environment or deployment files, replace them with one of the following options:

Red Hat Ceph Storage deployment          Original ceph-ansible file                       Cephadm file replacement
Ceph RADOS Block Device (RBD) only       environments/ceph-ansible/ceph-ansible.yaml      environments/cephadm/cephadm-rbd-only.yaml
RBD and the Ceph Object Gateway (RGW)    environments/ceph-ansible/ceph-rgw.yaml          environments/cephadm/cephadm.yaml
Ceph Dashboard                           environments/ceph-ansible/ceph-dashboard.yaml    Respective file in environments/cephadm/
Ceph MDS                                 environments/ceph-ansible/ceph-mds.yaml          Respective file in environments/cephadm/