5.4. Creating Distributed Volumes


This type of volume spreads files across the bricks in the volume, so each file is stored in full on exactly one brick.

Figure 5.1. Illustration of a Distributed Volume: two servers each contribute one brick to a single distributed volume; two files reside on the server1 brick and one file on the server2 brick, and clients access the volume through a single mount point.

Warning

Distributed volumes can suffer significant data loss during a disk or server failure, because directory contents are spread randomly across the bricks in the volume. For this reason, an architecture review is required before using them in production.
Contact your Red Hat account team to arrange an architecture review if you intend to use distributed-only volumes.
Use distributed volumes only where redundancy is either not important or is provided by other hardware or software layers. In other cases, use one of the volume types that provide redundancy, such as distributed-replicated volumes; a brief sketch follows the list of limitations below.
Limitations of distributed-only volumes include:
  1. No in-service upgrades: distributed-only volumes must be taken offline during upgrades.
  2. Temporary inconsistencies of directory entries and inodes in the event of a node failure.
  3. I/O operations that block or fail when a node becomes unavailable or fails.
  4. Permanent loss of the data stored on a failed brick.
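
For comparison, the following is a minimal sketch of creating a distributed-replicated volume instead; the server names and brick paths are placeholders and assume six servers, each contributing one brick:

# gluster volume create glustervol-rep replica 3 server1:/rhgs/brick1 server2:/rhgs/brick1 server3:/rhgs/brick1 server4:/rhgs/brick1 server5:/rhgs/brick1 server6:/rhgs/brick1

Bricks are grouped into replica sets in the order they are listed, so each file is stored on three bricks and remains available if a disk or server in its replica set fails.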

Create a Distributed Volume

Use the gluster volume create command to create different types of volumes, and the gluster volume info command to verify successful volume creation.

Prerequisites

A trusted storage pool containing the storage servers must already exist, and the bricks must be prepared on each server.

Follow these steps to create the volume:

  1. Run the gluster volume create command to create the distributed volume.
    The syntax is gluster volume create NEW-VOLNAME [transport tcp | rdma (Deprecated) | tcp,rdma] NEW-BRICK...
    The default value for transport is tcp. Other options, such as auth.allow or auth.reject, can also be set; an example is shown after this procedure. See Section 11.1, “Configuring Volume Options” for a full list of parameters.
    Red Hat recommends disabling the performance.client-io-threads option on distributed volumes, as this option tends to worsen performance. Run the following command to disable performance.client-io-threads:
    # gluster volume set VOLNAME performance.client-io-threads off

    Example 5.1. Distributed Volume with Two Storage Servers

    # gluster v create glustervol server1:/rhgs/brick1 server2:/rhgs/brick1
    volume create: glustervol: success: please start the volume to access data

    Example 5.2. Distributed Volume over InfiniBand with Four Servers

    # gluster v create glustervol transport rdma server1:/rhgs/brick1 server2:/rhgs/brick1 server3:/rhgs/brick1 server4:/rhgs/brick1
    volume create: glustervol: success: please start the volume to access data
  2. Run the gluster volume start VOLNAME command to start the volume.
    # gluster v start glustervol
    volume start: glustervol: success
  3. Optionally, run the gluster volume info command to display the volume information.
    # gluster volume info
    Volume Name: glustervol
    Type: Distribute
    Status: Started
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: server1:/rhgs/brick1
    Brick2: server2:/rhgs/brick1
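
As noted in step 1, additional volume options such as auth.allow can be set once the volume exists, and clients access a distributed volume through a single mount point. The following sketch assumes a trusted client range of 192.168.100.* and a client mount point of /mnt/glustervol; adjust these values for your environment.

On a server, restrict access to trusted clients:

# gluster volume set glustervol auth.allow 192.168.100.*

On a client with the native FUSE client installed, create a mount point and mount the volume:

# mkdir -p /mnt/glustervol
# mount -t glusterfs server1:/glustervol /mnt/glustervol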