Chapter 9. Management of MDS service using the Ceph Orchestrator


As a storage administrator, you can use Ceph Orchestrator with Cephadm in the backend to deploy the MDS service. By default, a Ceph File System (CephFS) uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons.
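
For example, to allow more than one active MDS daemon on a file system, you can raise the max_mds setting for that file system after it is created. This is a minimal illustration; the file system name test is only an example, and the cluster must have enough standby MDS daemons deployed to fill the additional active ranks:

Example

[ceph: root@host01 /]# ceph fs set test max_mds 2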

This section covers the following administrative tasks:

  • Deploying the MDS service using the command line interface
  • Deploying the MDS service using the service specification
  • Removing the MDS service using the Ceph Orchestrator

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts are added to the cluster.
  • All manager, monitor, and OSD daemons are deployed.

9.1. Deploying the MDS service using the command line interface

Using the Ceph Orchestrator, you can deploy the Metadata Server (MDS) service by using a placement specification on the command line interface. A Ceph File System (CephFS) requires one or more MDS daemons.

Note

Ensure you have at least two pools, one for the Ceph File System (CephFS) data and one for the CephFS metadata.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor, and OSD daemons are deployed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. There are two ways of deploying MDS daemons using placement specification:

Method 1

  • Use the ceph fs volume command to create the MDS daemons. This creates the CephFS volume, the pools associated with the CephFS, and also starts the MDS service on the hosts.

    Syntax

    ceph fs volume create FILESYSTEM_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"

    Note

    By default, replicated pools are created for this command.

    Example

    [ceph: root@host01 /]# ceph fs volume create test --placement="2 host01 host02"
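
    To confirm that the volume, its pools, and the MDS service were created, you can list the file systems and pools. This is an optional check; the volume name test comes from the example above:

    Example

    [ceph: root@host01 /]# ceph fs ls
    [ceph: root@host01 /]# ceph osd lspools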

Method 2

  • Create the pools and the CephFS, and then deploy the MDS service by using a placement specification:

    1. Create the pools for CephFS:

      Syntax

      ceph osd pool create DATA_POOL [PG_NUM]
      ceph osd pool create METADATA_POOL [PG_NUM]

      Example

      [ceph: root@host01 /]# ceph osd pool create cephfs_data 64
      [ceph: root@host01 /]# ceph osd pool create cephfs_metadata 64

      Typically, the metadata pool can start with a conservative number of Placement Groups (PGs) because it generally has far fewer objects than the data pool. You can increase the number of PGs later if needed, as shown in the example after the following note. Typical pool sizes range from 64 PGs to 512 PGs. Size the data pool in proportion to the number and size of the files that you expect in the file system.

      Important

      For the metadata pool, consider using:

      • A higher replication level because any data loss to this pool can make the whole file system inaccessible.
      • Storage with lower latency such as Solid-State Drive (SSD) disks because this directly affects the observed latency of file system operations on clients.
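
      If the initial PG count turns out to be too low, you can raise it later. The following is a minimal sketch, assuming that the pg_autoscaler module is not already managing the pool and using the cephfs_metadata pool name from the example above:

      Example

      [ceph: root@host01 /]# ceph osd pool set cephfs_metadata pg_num 128
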
    2. Create the file system for the data and metadata pools:

      Syntax

      ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL

      Example

      [ceph: root@host01 /]# ceph fs new test cephfs_metadata cephfs_data

    3. Deploy the MDS service by using the ceph orch apply command:

      Syntax

      ceph orch apply mds FILESYSTEM_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"

      Example

      [ceph: root@host01 /]# ceph orch apply mds test --placement="2 host01 host02"

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • Check the CephFS status:

    Example

    [ceph: root@host01 /]# ceph fs ls
    [ceph: root@host01 /]# ceph fs status

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=mds

9.2. Deploying the MDS service using the service specification

You can use the Ceph Orchestrator to deploy the MDS service by using a service specification.

Note

Ensure you have at least two pools, one for the Ceph File System (CephFS) data and one for the CephFS metadata.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.
  • All manager, monitor, and OSD daemons are deployed.

Procedure

  1. Create the mds.yaml file:

    Example

    [root@host01 ~]# touch mds.yaml

  2. Edit the mds.yaml file to include the following details:

    Syntax

    service_type: mds
    service_id: FILESYSTEM_NAME
    placement:
      hosts:
      - HOST_NAME_1
      - HOST_NAME_2
      - HOST_NAME_3

    Example

    service_type: mds
    service_id: fs_name
    placement:
      hosts:
      - host01
      - host02
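
    The placement can also be expressed in other forms that the orchestrator supports, such as a daemon count or a host label. The following variant is a sketch only; the mds label is an assumption and must already be applied to the intended hosts, for example with ceph orch host label add HOST_NAME mds:

    Example

    service_type: mds
    service_id: fs_name
    placement:
      label: mds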

  3. Mount the YAML file under a directory in the container:

    Example

    [root@host01 ~]# cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml

  4. Navigate to the directory:

    Example

    [ceph: root@host01 /]# cd /var/lib/ceph/mds/

  5. Deploy the MDS service by using the service specification:

    Syntax

    ceph orch apply -i FILE_NAME.yaml

    Example

    [ceph: root@host01 mds]# ceph orch apply -i mds.yaml

  6. After the MDS service is deployed and functional, create the CephFS:

    Syntax

    ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL

    Example

    [ceph: root@host01 /]# ceph fs new test metadata_pool data_pool

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls
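
  • Check the CephFS status (an additional check that mirrors the verification in Section 9.1; it applies once the file system is created):

    Example

    [ceph: root@host01 /]# ceph fs ls
    [ceph: root@host01 /]# ceph fs status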

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=mds

9.3. Removing the MDS service using the Ceph Orchestrator

You can remove the service using the ceph orch rm command. Alternatively, you can remove the file system and the associated pools.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts are added to the cluster.
  • At least one MDS daemon deployed on the hosts.

Procedure

  • There are two ways of removing MDS daemons from the cluster:

Method 1

  • Remove the CephFS volume, associated pools, and the services:

    1. Log into the Cephadm shell:

      Example

      [root@host01 ~]# cephadm shell

    2. Set the configuration parameter mon_allow_pool_delete to true:

      Example

      [ceph: root@host01 /]# ceph config set mon mon_allow_pool_delete true

    3. Remove the file system:

      Syntax

      ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it

      Example

      [ceph: root@host01 /]# ceph fs volume rm cephfs-new --yes-i-really-mean-it

      This command removes the file system and its data and metadata pools. It also tries to remove the MDS daemons by using the enabled ceph-mgr Orchestrator module.
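
      To confirm that the file system and its pools are gone, you can list the remaining file systems and pools; optionally, set mon_allow_pool_delete back to false as a safeguard. These checks are optional additions and are not required by the removal itself:

      Example

      [ceph: root@host01 /]# ceph fs ls
      [ceph: root@host01 /]# ceph osd pool ls
      [ceph: root@host01 /]# ceph config set mon mon_allow_pool_delete false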

Method 2

  • Use the ceph orch rm command to remove the MDS service from the entire cluster:

    1. List the service:

      Example

      [ceph: root@host01 /]# ceph orch ls

    2. Remove the service:

      Syntax

      ceph orch rm SERVICE_NAME

      Example

      [ceph: root@host01 /]# ceph orch rm mds.test

Verification

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps

    Example

    [ceph: root@host01 /]# ceph orch ps
