Chapter 9. Management of MDS service using the Ceph Orchestrator
As a storage administrator, you can use Ceph Orchestrator with Cephadm in the backend to deploy the MDS service. By default, a Ceph File System (CephFS) uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons.
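For example, on a file system that is already created, you can increase the number of active MDS daemons by raising the max_mds setting. A minimal sketch, assuming a file system named test:
Example
[ceph: root@host01 /]# ceph fs set test max_mds 2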
This section covers the following administrative tasks:
- Deploying the MDS service using the command line interface
- Deploying the MDS service using the service specification
- Removing the MDS service using the Ceph Orchestrator
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- All manager, monitor, and OSD daemons are deployed.
9.1. Deploying the MDS service using the command line interface
Using the Ceph Orchestrator, you can deploy the Metadata Server (MDS) service using a placement specification in the command line interface. A Ceph File System (CephFS) requires one or more MDS daemons.
Ensure you have at least two pools, one for Ceph File System (CephFS) data and one for CephFS metadata.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor, and OSD daemons are deployed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
There are two ways of deploying MDS daemons using a placement specification:
Method 1
Use the ceph fs volume command to create the MDS daemons. This creates the CephFS volume and the pools associated with the CephFS, and also starts the MDS service on the hosts.
Syntax
ceph fs volume create FILESYSTEM_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
Note: By default, replicated pools are created for this command.
Example
[ceph: root@host01 /]# ceph fs volume create test --placement="2 host01 host02"
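The placement specification also accepts a host label instead of explicit host names. As a sketch, assuming the target hosts carry an mds label (applied, for example, with ceph orch host label add host01 mds):
Example
[ceph: root@host01 /]# ceph fs volume create test --placement="label:mds"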
Method 2
Create the pools and the CephFS, and then deploy the MDS service using a placement specification:
Create the pools for CephFS:
Syntax
ceph osd pool create DATA_POOL [PG_NUM]
ceph osd pool create METADATA_POOL [PG_NUM]
Example
[ceph: root@host01 /]# ceph osd pool create cephfs_data 64
[ceph: root@host01 /]# ceph osd pool create cephfs_metadata 64
Typically, the metadata pool can start with a conservative number of placement groups (PGs) because it generally has far fewer objects than the data pool. You can increase the number of PGs later if needed; pool sizes typically range from 64 PGs to 512 PGs. Size the data pool in proportion to the number and size of the files you expect in the file system.
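If a pool needs more PGs later, you can raise pg_num in place, or rely on the pg_autoscaler manager module, which is enabled by default, to tune it for you. A minimal sketch, assuming the pools created above:
Example
[ceph: root@host01 /]# ceph osd pool set cephfs_data pg_num 128
[ceph: root@host01 /]# ceph osd pool autoscale-status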
Important: For the metadata pool, consider using:
- A higher replication level because any data loss to this pool can make the whole file system inaccessible.
- Storage with lower latency such as Solid-State Drive (SSD) disks because this directly affects the observed latency of file system operations on clients.
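As a sketch of both recommendations, assuming the cluster contains OSDs with the ssd device class and using an arbitrary rule name replicated_ssd, you can raise the replication level of the metadata pool and pin it to SSD-backed OSDs with a CRUSH rule:
Example
[ceph: root@host01 /]# ceph osd pool set cephfs_metadata size 4
[ceph: root@host01 /]# ceph osd crush rule create-replicated replicated_ssd default host ssd
[ceph: root@host01 /]# ceph osd pool set cephfs_metadata crush_rule replicated_ssd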
Create the file system using the data and metadata pools:
Syntax
ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL
Example
[ceph: root@host01 /]# ceph fs new test cephfs_metadata cephfs_data
Deploy the MDS service using the ceph orch apply command:
Syntax
ceph orch apply mds FILESYSTEM_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
Example
[ceph: root@host01 /]# ceph orch apply mds test --placement="2 host01 host02"
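To review the service specification that the Orchestrator generated from the placement string, you can export it:
Example
[ceph: root@host01 /]# ceph orch ls mds --export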
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
Check the CephFS status:
Example
[ceph: root@host01 /]# ceph fs ls
[ceph: root@host01 /]# ceph fs status
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=mds
9.2. Deploying the MDS service using the service specification
Using the Ceph Orchestrator, you can deploy the MDS service using the service specification.
Ensure you have at least two pools, one for the Ceph File System (CephFS) data and one for the CephFS metadata.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor, and OSD daemons are deployed.
Procedure
Create the mds.yaml file:
Example
[root@host01 ~]# touch mds.yaml
Edit the mds.yaml file to include the following details:
Syntax
service_type: mds
service_id: FILESYSTEM_NAME
placement:
  hosts:
  - HOST_NAME_1
  - HOST_NAME_2
  - HOST_NAME_3
Example
service_type: mds
service_id: fs_name
placement:
  hosts:
  - host01
  - host02
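The placement section also accepts a daemon count or a host label instead of explicit host names. A minimal sketch, assuming the target hosts carry an mds label:
service_type: mds
service_id: fs_name
placement:
  count: 2
  label: mds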
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/mds/
Deploy the MDS service using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 mds]# ceph orch apply -i mds.yaml
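Optionally, you can preview the planned placement without deploying anything by adding the --dry-run flag to the same command:
Example
[ceph: root@host01 mds]# ceph orch apply -i mds.yaml --dry-run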
Once the MDS service is deployed and functional, create the CephFS:
Syntax
ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL
Example
[ceph: root@host01 /]# ceph fs new test metadata_pool data_pool
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=mds
9.3. Removing the MDS service using the Ceph Orchestrator
You can remove the MDS service using the ceph orch rm command. Alternatively, you can remove the file system and the associated pools.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- At least one MDS daemon deployed on the hosts.
Procedure
There are two ways of removing MDS daemons from the cluster:
Method 1
Remove the CephFS volume, associated pools, and the services:
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Set the configuration parameter mon_allow_pool_delete to true:
Example
[ceph: root@host01 /]# ceph config set mon mon_allow_pool_delete true
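You can confirm that the setting took effect before attempting the removal:
Example
[ceph: root@host01 /]# ceph config get mon mon_allow_pool_delete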
Remove the file system:
Syntax
ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it
Example
[ceph: root@host01 /]# ceph fs volume rm cephfs-new --yes-i-really-mean-it
This command removes the file system and its data and metadata pools. It also tries to remove the MDS daemons using the enabled ceph-mgr Orchestrator module.
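Afterwards, you can confirm that the file system and its pools are gone:
Example
[ceph: root@host01 /]# ceph fs ls
[ceph: root@host01 /]# ceph osd lspools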
Method 2
Use the ceph orch rm command to remove the MDS service from the entire cluster:
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
Remove the service:
Syntax
ceph orch rm SERVICE_NAME
Example
[ceph: root@host01 /]# ceph orch rm mds.test
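Note that removing the service this way leaves the file system and its pools in place. If you also want to delete the file system itself, a sketch assuming it is named test:
Example
[ceph: root@host01 /]# ceph fs fail test
[ceph: root@host01 /]# ceph fs rm test --yes-i-really-mean-it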
Verification
List the hosts, daemons, and processes:
Syntax
ceph orch ps
Example
[ceph: root@host01 /]# ceph orch ps