Chapter 3. Using odf-cli command
The odf-cli command and its subcommands help reduce repetitive tasks and provide a better experience. You can download the odf-cli tool from the customer portal.
3.1. Subcommands of odf get command
- odf get recovery-profile
Displays the recovery-profile value set for the OSD. By default, an empty value is displayed if the value is not set using the odf set recovery-profile command. After the value is set, the appropriate value is displayed.
Example:
$ odf get recovery-profile
# high_recovery_ops
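The odf set recovery-profile command referenced above is the counterpart that writes this value. A minimal sketch, assuming high_recovery_ops (the value shown in the example output) is an accepted profile name and that odf-cli uses the current logged-in cluster context:

```shell
# Set the OSD recovery profile; high_recovery_ops is taken from the
# example output above (other accepted values are not listed here).
odf set recovery-profile high_recovery_ops

# Read the value back; it should now be non-empty.
odf get recovery-profile
```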
- odf get health
Checks the health of the Ceph cluster and common configuration issues. This command checks for the following:
- At least three mon pods are running on different nodes
- Mon quorum and Ceph health details
- At least three OSD pods are running on different nodes
- The 'Running' status of all pods
- Placement group status
- At least one MGR pod is running
Example:
$ odf get health
Info: Checking if at least three mon pods are running on different nodes
rook-ceph-mon-a-7fb76597dc-98pxz Running openshift-storage ip-10-0-69-145.us-west-1.compute.internal
rook-ceph-mon-b-885bdc59c-4vvcm Running openshift-storage ip-10-0-64-239.us-west-1.compute.internal
rook-ceph-mon-c-5f59bb5dbc-8vvlg Running openshift-storage ip-10-0-30-197.us-west-1.compute.internal
Info: Checking mon quorum and ceph health details
Info: HEALTH_OK
[...]
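The pod-placement checks that odf get health automates can be approximated by hand with oc; a rough sketch, assuming the default openshift-storage namespace shown in the example output:

```shell
# List mon, OSD, and mgr pods with their nodes, to eyeball the same
# conditions odf get health verifies: three mons on distinct nodes,
# three OSDs on distinct nodes, and at least one running mgr pod.
oc get pods -n openshift-storage -o wide | grep rook-ceph-mon
oc get pods -n openshift-storage -o wide | grep rook-ceph-osd
oc get pods -n openshift-storage -o wide | grep rook-ceph-mgr
```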
- odf get dr-health
In mirroring-enabled clusters, fetches the connection status of a cluster from another cluster. The cephblockpool is queried with mirroring enabled; if none is found, the command exits with relevant logs.
Example:
$ odf get dr-health
Info: fetching the cephblockpools with mirroring enabled
Info: found "ocs-storagecluster-cephblockpool" cephblockpool with mirroring enabled
Info: running ceph status from peer cluster
Info: cluster:
  id: 9a2e7e55-40e1-4a79-9bfa-c3e4750c6b0f
  health: HEALTH_OK
[...]
- odf get dr-prereq
Checks and fetches the status of all the prerequisites for enabling Disaster Recovery on a pair of clusters. The command takes the peer cluster name as an argument and uses it to compare the current cluster configuration with the peer cluster configuration. Based on the comparison, the status of the prerequisites is shown.
Example:
$ odf get dr-prereq peer-cluster-1
Info: Submariner is installed.
Info: Globalnet is required.
Info: Globalnet is enabled.
- odf get mon-endpoints
Displays the mon endpoints.
3.2. Subcommands of odf operator command
- odf operator rook set
Sets the provided property value in the rook-ceph-operator-config configmap.
Example:
$ odf operator rook set ROOK_LOG_LEVEL DEBUG
configmap/rook-ceph-operator-config patched
where ROOK_LOG_LEVEL can be DEBUG, INFO, or WARNING.
- odf operator rook restart
Restarts the Rook-Ceph operator
Example:
$ odf operator rook restart
deployment.apps/rook-ceph-operator restarted
- odf restore mon-quorum
Restores the mon quorum when the majority of mons are not in quorum and the cluster is down. When the majority of mons are lost permanently, the quorum needs to be restored to a remaining good mon in order to bring the Ceph cluster up again.
Example:
$ odf restore mon-quorum c
- odf restore deleted <crd>
Restores a deleted Rook CR when there is still data left for the components: CephClusters, CephFilesystems, and CephBlockPools. Generally, when a Rook CR is deleted but there is leftover data, the Rook operator does not remove the finalizer on the CR, to ensure the data is not lost. As a result, the CR is stuck in the Deleting state, cluster health is not ensured, and upgrades are blocked. This command helps to repair the CR without cluster downtime.
Note: A warning message seeking confirmation to restore appears. After confirming, enter continue to start the operator and expand to the full mon quorum again.
Example:
$ odf restore deleted cephclusters
Info: Detecting which resources to restore for crd "cephclusters"
Info: Restoring CR my-cluster
Warning: The resource my-cluster was found deleted. Do you want to restore it? yes | no
[...]
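To double-check the change made by odf operator rook set earlier in this section, the patched configmap can be inspected directly with oc; a minimal sketch, assuming the configmap lives in the openshift-storage namespace:

```shell
# Print the ROOK_LOG_LEVEL key from the configmap patched by
# `odf operator rook set` (the namespace here is an assumption).
oc get configmap rook-ceph-operator-config -n openshift-storage \
  -o jsonpath='{.data.ROOK_LOG_LEVEL}'
```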
3.3. Configuring debug verbosity of Ceph components
You can configure the verbosity of Ceph components by enabling or increasing log debugging for a specific Ceph subsystem from OpenShift Data Foundation. For information about the Ceph subsystems and the log levels that can be updated, see Ceph subsystems default logging level values.
Procedure
Set log level for Ceph daemons:
$ odf set ceph log-level <ceph-subsystem1> <ceph-subsystem2> <log-level>
where ceph-subsystem can be osd, mds, mon, or mgr.
For example:
$ odf set ceph log-level osd crush 20
$ odf set ceph log-level mds crush 20
$ odf set ceph log-level mon crush 20
$ odf set ceph log-level mgr crush 20
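A level set this way can be cross-checked from the Ceph side; a hedged sketch, assuming a rook-ceph toolbox pod is deployed and labeled app=rook-ceph-tools (the pod label and the debug_<subsystem> key naming are assumptions based on upstream Rook and Ceph conventions, not taken from this document):

```shell
# Find the toolbox pod, then query the effective crush debug level for
# OSDs; Ceph exposes per-subsystem verbosity as debug_<subsystem>.
TOOLS_POD=$(oc get pods -n openshift-storage \
  -l app=rook-ceph-tools -o name | head -n 1)
oc exec -n openshift-storage "$TOOLS_POD" -- ceph config get osd debug_crush
```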