OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 10. Restoring the monitor pods in OpenShift Container Storage
Restore the monitor pods if all three of them go down and OpenShift Container Storage cannot recover them automatically.
Procedure
Scale down the rook-ceph-operator and ocs-operator deployments.

# oc scale deployment rook-ceph-operator --replicas=0 -n openshift-storage
# oc scale deployment ocs-operator --replicas=0 -n openshift-storage

Create a backup of all deployments in the openshift-storage namespace.

# mkdir backup
# cd backup
# oc project openshift-storage
# for d in $(oc get deployment|awk -F' ' '{print $1}'|grep -v NAME); do echo $d;oc get deployment $d -o yaml > oc_get_deployment.${d}.yaml; done
Patch the OSD deployments to remove the livenessProbe parameter, and run them with the command parameter as sleep.

# for i in $(oc get deployment -l app=rook-ceph-osd -oname);do oc patch ${i} -n openshift-storage --type='json' -p '[{"op":"remove", "path":"/spec/template/spec/containers/0/livenessProbe"}]' ; oc patch ${i} -n openshift-storage -p '{"spec": {"template": {"spec": {"containers": [{"name": "osd", "command": ["sleep", "infinity"], "args": []}]}}}}' ; done
Retrieve the monstore cluster map from all the OSDs.

Create the recover_mon.sh script.

Run the recover_mon.sh script.

# chmod +x recover_mon.sh
# ./recover_mon.sh
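The contents of recover_mon.sh are not preserved in this extract. The following is a sketch reconstructed from the upstream Rook monitor-store recovery procedure; the BlueStore `--type` and the `ceph_daemon_id` pod label are assumptions about this cluster. It is wrapped in a function here so it can be inspected safely; save the body as recover_mon.sh to run it.

```shell
# Sketch of recover_mon.sh: pull monitor-store data from every OSD into
# /tmp/monstore, carrying the accumulated store from one OSD to the next.
recover_mon() {
  ms=/tmp/monstore
  rm -rf $ms && mkdir $ms
  for osd_pod in $(oc get po -l app=rook-ceph-osd -oname -n openshift-storage); do
    podname=$(echo $osd_pod | sed 's/pod\///g')
    echo "Starting with pod: $podname"
    # Ship the current (possibly empty) monstore to the OSD pod.
    oc exec $osd_pod -- rm -rf $ms
    oc cp $ms $podname:$ms
    rm -rf $ms && mkdir $ms
    # Rebuild monitor data from this OSD into the monstore
    # (assumes BlueStore OSDs and the ceph_daemon_id label).
    oc exec $osd_pod -- ceph-objectstore-tool --type bluestore \
      --data-path /var/lib/ceph/osd/ceph-$(oc get $osd_pod -ojsonpath='{ .metadata.labels.ceph_daemon_id }') \
      --op update-mon-db --no-mon-config --mon-store-path $ms
    # Pull the accumulated monstore back locally for the next iteration.
    oc cp $podname:$ms $ms
    echo "Finished pulling monstore data from pod: $podname"
  done
}
```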
Patch the MON deployments, and run them with the command parameter as sleep.

Edit the MON deployments.

# for i in $(oc get deployment -l app=rook-ceph-mon -oname);do oc patch ${i} -n openshift-storage -p '{"spec": {"template": {"spec": {"containers": [{"name": "mon", "command": ["sleep", "infinity"], "args": []}]}}}}'; done

Patch the MON deployments to increase the initialDelaySeconds.

# oc get deployment rook-ceph-mon-a -o yaml | sed "s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g" | oc replace -f -
# oc get deployment rook-ceph-mon-b -o yaml | sed "s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g" | oc replace -f -
# oc get deployment rook-ceph-mon-c -o yaml | sed "s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g" | oc replace -f -
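The sed substitution above rewrites only the probe delay before the YAML is fed back to oc replace. A minimal local check of the pattern, using an illustrative livenessProbe fragment:

```shell
# Apply the same substitution to a sample probe fragment and confirm
# it changes only the delay value, leaving other fields untouched.
sample='        initialDelaySeconds: 10
        timeoutSeconds: 5'
patched=$(printf '%s\n' "$sample" | sed "s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g")
printf '%s\n' "$patched"
```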
Copy the previously retrieved monstore to the mon-a pod.

# oc cp /tmp/monstore/ $(oc get po -l app=rook-ceph-mon,mon=a -oname |sed 's/pod\///g'):/tmp/

Navigate into the MON pod and change the ownership of the retrieved monstore.

# oc rsh $(oc get po -l app=rook-ceph-mon,mon=a -oname)
# chown -R ceph:ceph /tmp/monstore
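The `sed 's/pod\///g'` in the oc cp command above is needed because `oc get -oname` prints resource-prefixed names such as `pod/rook-ceph-mon-a-…`, while oc cp expects the bare pod name. A local check of the transformation, using a hypothetical pod name:

```shell
# Strip the "pod/" resource prefix from an illustrative -oname result.
name=$(echo "pod/rook-ceph-mon-a-64f9ddcb8-x2x2x" | sed 's/pod\///g')
echo "$name"
```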
Copy the keyring template file before rebuilding the mon db.

# oc rsh $(oc get po -l app=rook-ceph-mon,mon=a -oname)
# cp /etc/ceph/keyring-store/keyring /tmp/keyring

Identify the keyring of all the other Ceph daemons (MGR, MDS, RGW, Crash, CSI and CSI provisioners) from their respective secrets.

Example keyring file, /etc/ceph/ceph.client.admin.keyring:
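The example keyring content itself was not preserved in this extract. A Ceph keyring file of this kind typically looks like the following minimal sketch; the keys and the exact set of sections are illustrative placeholders, not values from this cluster.

```ini
# Illustrative only: keys below are placeholders.
[mon.]
	key = AQDmonKeyPlaceholder==
	caps mon = "allow *"
[client.admin]
	key = AQDadminKeyPlaceholder==
	caps mds = "allow *"
	caps mon = "allow *"
	caps mgr = "allow *"
	caps osd = "allow *"
```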
Important
- For the client.csi related keyrings, refer to the previous keyring file output and add the default caps after fetching the key from its respective OpenShift Container Storage secret.
- The OSD keyring is added automatically post recovery.
Navigate into the mon-a pod, and verify that the monstore has a monmap.

Navigate into the mon-a pod.

# oc rsh $(oc get po -l app=rook-ceph-mon,mon=a -oname)

Verify that the monstore has a monmap.

# ceph-monstore-tool /tmp/monstore get monmap -- --out /tmp/monmap
# monmaptool /tmp/monmap --print
Optional: If the monmap is missing, create a new monmap.

# monmaptool --create --add <mon-a-id> <mon-a-ip> --add <mon-b-id> <mon-b-ip> --add <mon-c-id> <mon-c-ip> --enable-all-features --clobber /root/monmap --fsid <fsid>

<mon-a-id> - Is the ID of the mon-a pod.
<mon-a-ip> - Is the IP address of the mon-a pod.
<mon-b-id> - Is the ID of the mon-b pod.
<mon-b-ip> - Is the IP address of the mon-b pod.
<mon-c-id> - Is the ID of the mon-c pod.
<mon-c-ip> - Is the IP address of the mon-c pod.
<fsid> - Is the file system ID.
Verify the monmap.

# monmaptool /root/monmap --print

Import the monmap.

Important: Use the previously created keyring file.

# ceph-monstore-tool /tmp/monstore rebuild -- --keyring /tmp/keyring --monmap /root/monmap
# chown -R ceph:ceph /tmp/monstore
Create a backup of the old store.db file.

# mv /var/lib/ceph/mon/ceph-a/store.db /var/lib/ceph/mon/ceph-a/store.db.corrupted
# mv /var/lib/ceph/mon/ceph-b/store.db /var/lib/ceph/mon/ceph-b/store.db.corrupted
# mv /var/lib/ceph/mon/ceph-c/store.db /var/lib/ceph/mon/ceph-c/store.db.corrupted

Copy the rebuilt store.db file to the monstore directory.

# mv /tmp/monstore/store.db /var/lib/ceph/mon/ceph-a/store.db
# chown -R ceph:ceph /var/lib/ceph/mon/ceph-a/store.db
After rebuilding the monstore directory, copy the store.db file from local to the rest of the MON pods.

# oc cp $(oc get po -l app=rook-ceph-mon,mon=a -oname | sed 's/pod\///g'):/var/lib/ceph/mon/ceph-a/store.db /tmp/store.db
# oc cp /tmp/store.db $(oc get po -l app=rook-ceph-mon,mon=<id> -oname | sed 's/pod\///g'):/var/lib/ceph/mon/ceph-<id>

<id> - Is the ID of the MON pod.

Navigate into the rest of the MON pods and change the ownership of the copied monstore.

# oc rsh $(oc get po -l app=rook-ceph-mon,mon=<id> -oname)
# chown -R ceph:ceph /var/lib/ceph/mon/ceph-<id>/store.db

<id> - Is the ID of the MON pod.
Revert the patched changes.

For MON deployments:

# oc replace --force -f <mon-deployment.yaml>

<mon-deployment.yaml> - Is the MON deployment yaml file.

For OSD deployments:

# oc replace --force -f <osd-deployment.yaml>

<osd-deployment.yaml> - Is the OSD deployment yaml file.

For MGR deployments:

# oc replace --force -f <mgr-deployment.yaml>

<mgr-deployment.yaml> - Is the MGR deployment yaml file.

Important: Ensure that the MON, MGR and OSD pods are up and running.

Scale up the rook-ceph-operator and ocs-operator deployments.

# oc -n openshift-storage scale deployment ocs-operator --replicas=1
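The Important note above asks you to confirm that the MON, MGR and OSD pods are up and running. A small helper sketch for that check follows; it is defined as a function only (it needs a live cluster and the oc CLI to actually run), and the app label values match those used by the commands in this procedure.

```shell
# check_ceph_pods: list any MON, MGR or OSD pod that is not in the
# Running state. Prints nothing when all pods are healthy.
check_ceph_pods() {
  for label in rook-ceph-mon rook-ceph-mgr rook-ceph-osd; do
    oc -n openshift-storage get pods -l app=$label --no-headers \
      | awk -v l=$label '$3 != "Running" {print l": "$1" is "$3}'
  done
}
```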
Verification steps

Check the Ceph status to confirm that CephFS is running.

# ceph -s

Important: If the filesystem is offline or the MDS service is missing, you need to restore the CephFS. For more information, see Section 10.1, “Restoring the CephFS”.
Check the Multicloud Object Gateway (MCG) status. It should be active, and the backingstore and bucketclass should be in Ready state.

noobaa status -n openshift-storage

Important: If the MCG is not in the active state, and the backingstore and bucketclass are not in the Ready state, you need to restart all the MCG related pods. For more information, see Section 10.2, “Restoring the Multicloud Object Gateway”.
10.1. Restoring the CephFS
If the filesystem is offline or the MDS service is missing, you need to restore the CephFS.
Procedure
Scale down the rook-ceph-operator and ocs-operator deployments.

# oc scale deployment rook-ceph-operator --replicas=0 -n openshift-storage
# oc scale deployment ocs-operator --replicas=0 -n openshift-storage

Patch the MDS deployments to remove the livenessProbe parameter and run them with the command parameter as sleep.

# for i in $(oc get deployment -l app=rook-ceph-mds -oname);do oc patch ${i} -n openshift-storage --type='json' -p '[{"op":"remove", "path":"/spec/template/spec/containers/0/livenessProbe"}]' ; oc patch ${i} -n openshift-storage -p '{"spec": {"template": {"spec": {"containers": [{"name": "mds", "command": ["sleep", "infinity"], "args": []}]}}}}' ; done
Recover the CephFS.

# ceph fs reset ocs-storagecluster-cephfilesystem --yes-i-really-mean-it

If the reset command fails, force create the default filesystem with the data and metadata pools, and then reset it.

Note: The reset command might fail if the cephfilesystem is missing.

# ceph fs new ocs-storagecluster-cephfilesystem ocs-storagecluster-cephfilesystem-metadata ocs-storagecluster-cephfilesystem-data0 --force
# ceph fs reset ocs-storagecluster-cephfilesystem --yes-i-really-mean-it

Replace the MDS deployments.

# oc replace --force -f oc_get_deployment.rook-ceph-mds-ocs-storagecluster-cephfilesystem-a.yaml
# oc replace --force -f oc_get_deployment.rook-ceph-mds-ocs-storagecluster-cephfilesystem-b.yaml
Scale up the rook-ceph-operator and ocs-operator deployments.

# oc scale deployment ocs-operator --replicas=1 -n openshift-storage

Check the CephFS status.

# ceph fs status

The status should be active.
If the application pods attached to the deployments that were using the CephFS Persistent Volume Claims (PVCs) get stuck in the CreateContainerError state after restoring the CephFS, restart the application pods.

# oc -n <namespace> delete pods <cephfs-app-pod>

<namespace> - Is the project namespace.
<cephfs-app-pod> - Is the name of the CephFS application pod.

If new CephFS or RBD PVCs are not getting bound, restart all the pods related to Ceph CSI.
10.2. Restoring the Multicloud Object Gateway
If the Multicloud Object Gateway (MCG) is not in the active state, and the backingstore and bucketclass are not in the Ready state, you need to restart all the MCG related pods, and check the MCG status to confirm that the MCG is back up and running.
Procedure
Restart all the pods related to the MCG.

# oc delete pods <noobaa-operator> -n openshift-storage
# oc delete pods <noobaa-core> -n openshift-storage
# oc delete pods <noobaa-endpoint> -n openshift-storage
# oc delete pods <noobaa-db> -n openshift-storage

<noobaa-operator> - Is the name of the MCG operator.
<noobaa-core> - Is the name of the MCG core pod.
<noobaa-endpoint> - Is the name of the MCG endpoint.
<noobaa-db> - Is the name of the MCG db pod.
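The individual deletions above can also be sketched as a single name-based restart. This is an alternative sketch, not the documented procedure: it assumes every MCG pod name starts with "noobaa", and it is defined as a function only because it needs a live cluster to run.

```shell
# restart_mcg_pods: delete every pod whose name starts with "noobaa" so
# that the owning deployments/statefulsets recreate them.
restart_mcg_pods() {
  oc -n openshift-storage get pods -o name \
    | grep -E '^pod/noobaa' \
    | xargs -r oc -n openshift-storage delete
}
```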
If the RADOS Object Gateway (RGW) is configured, restart the pod.

# oc delete pods <rgw-pod> -n openshift-storage

<rgw-pod> - Is the name of the RGW pod.