
12.2. Snapshot Commands


The commands available with the snapshot feature are described in the following section:
  • Creating Snapshot

    Before creating a snapshot ensure that the following prerequisites are met:

    • A Red Hat Storage volume must be present, and the volume must be in the Started state.
    • All the bricks of the volume must be on an independent thin logical volume (LV).
    • Snapshot names must be unique in the cluster.
    • All the bricks of the volume should be up and running, unless it is an n-way replicated volume where n >= 3. In that case quorum must be met. For more information see Chapter 12, Managing Snapshots.
    • No other volume operation, such as rebalance or add-brick, should be running on the volume.
    • The total number of snapshots in the volume must not have reached the effective snap-max-hard-limit. For more information see Configuring Snapshot Behavior.
    • If you have a geo-replication setup, pause the geo-replication session if it is running, by executing the following command:
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL pause
      For example,
      # gluster volume geo-replication master-vol example.com::slave-vol pause
      Pausing geo-replication session between master-vol example.com::slave-vol has been successful
      Ensure that you take the snapshot of the master volume first, and then take the snapshot of the slave volume.
    • If you have a Hadoop enabled Red Hat Storage volume, ensure that all the Hadoop services in Ambari are stopped.
    To create a snapshot of the volume, run the following command:
    # gluster snapshot create <snapname> VOLNAME(S) [description <description>] [force]
    where,
    • snapname - Name of the snapshot to be created. It must be unique in the entire cluster.
    • VOLNAME(S) - Name of the volume for which the snapshot will be created. Only creating a snapshot of a single volume is supported.
    • description - An optional field that can be used to provide a description of the snapshot; it is saved along with the snapshot.
    • force - By default, snapshot creation fails if any brick is down. In an n-way replicated Red Hat Storage volume where n >= 3, the force option allows a snapshot to be taken even if some of the bricks are down; in that case quorum is checked. Quorum is checked only when the force option is provided. Refer to the Overview section for more details on quorum.
    For Example:
    # gluster snapshot create snap1 vol1
    snapshot create: success: Snap snap1 created successfully
    Taking a snapshot of a Red Hat Storage volume creates a read-only Red Hat Storage volume. This volume has a configuration identical to that of the original (parent) volume. The bricks of the newly created snapshot are mounted as /var/run/gluster/snaps/<snap-volume-name>/brick<bricknumber>.
    For example, a snapshot with snap volume name 0888649a92ea45db8c00a615dfc5ea35 and having two bricks will have the following two mount points:
    /var/run/gluster/snaps/0888649a92ea45db8c00a615dfc5ea35/brick1
    /var/run/gluster/snaps/0888649a92ea45db8c00a615dfc5ea35/brick2
    These mounts can also be viewed using the df or mount command.
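    A minimal sketch of inspecting these mounts with standard tools (the grep pattern matches the snapshot mount prefix shown above; on a node with no activated snapshots the output is simply empty):

    ```shell
    # List snapshot brick mounts; the pattern matches the
    # /var/run/gluster/snaps/... prefix shown above.
    # `|| true` keeps the exit status clean when nothing matches.
    df -h | grep 'gluster/snaps' || true
    mount | grep 'gluster/snaps' || true
    ```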

    Note

    If you have a geo-replication setup, after creating the snapshot, resume the geo-replication session by running the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL resume
    For example,
    # gluster volume geo-replication master-vol example.com::slave-vol resume
    Resuming geo-replication session between master-vol example.com::slave-vol has been successful
  • Listing of Available Snapshots

    To list all the snapshots that are taken for a specific volume, run the following command:

    # gluster snapshot list [VOLNAME]
    where,
    • VOLNAME - An optional field; if provided, the snapshots of the specified volume are listed.
    For Example:
    # gluster snapshot list
    snap3
    # gluster snapshot list test_vol
    No snapshots present
  • Getting Information of all the Available Snapshots

    The following command provides the basic information of all the snapshots taken. By default the information of all the snapshots in the cluster is displayed:

    # gluster snapshot info [(<snapname> | volume VOLNAME)]
    where,
    • snapname - This is an optional field. If the snapname is provided then the information about the specified snap is displayed.
    • VOLNAME - This is an optional field. If the VOLNAME is provided the information about all the snaps in the specified volume is displayed.
    For Example:
    # gluster snapshot info snap3
    Snapshot                  : snap3
    Snap UUID                 : b2a391ce-f511-478f-83b7-1f6ae80612c8
    Created                   : 2014-06-13 09:40:57
    Snap Volumes:
    
         Snap Volume Name          : e4a8f4b70a0b44e6a8bff5da7df48a4d
         Origin Volume name        : test_vol1
         Snaps taken for test_vol1      : 1
         Snaps available for test_vol1  : 255
         Status                    : Started
  • Getting the Status of Available Snapshots

    This command displays the running status of the snapshot. By default the status of all the snapshots in the cluster is displayed. To check the status of all the snapshots that are taken for a particular volume, specify a volume name:

    # gluster snapshot status [(<snapname> | volume VOLNAME)]
    where,
    • snapname - This is an optional field. If the snapname is provided, the status of the specified snap is displayed.
    • VOLNAME - This is an optional field. If the VOLNAME is provided, the status of all the snaps in the specified volume is displayed.
    For Example:
    # gluster snapshot status snap3
    
    Snap Name : snap3
    Snap UUID : b2a391ce-f511-478f-83b7-1f6ae80612c8
    
         Brick Path        :
    10.70.42.248:/var/run/gluster/snaps/e4a8f4b70a0b44e6a8bff5da7df48a4d/brick1/brick1
         Volume Group      :   snap_lvgrp1
         Brick Running     :   Yes
         Brick PID         :   1640
         Data Percentage   :   1.54
         LV Size           :   616.00m
    
    
         Brick Path        :
    10.70.43.139:/var/run/gluster/snaps/e4a8f4b70a0b44e6a8bff5da7df48a4d/brick2/brick3
         Volume Group      :   snap_lvgrp1
         Brick Running     :   Yes
         Brick PID         :   3900
         Data Percentage   :   1.80
         LV Size           :   616.00m
    
    
         Brick Path        :
    10.70.43.34:/var/run/gluster/snaps/e4a8f4b70a0b44e6a8bff5da7df48a4d/brick3/brick4
         Volume Group      :   snap_lvgrp1
         Brick Running     :   Yes
         Brick PID         :   3507
         Data Percentage   :   1.80
         LV Size           :   616.00m
  • Configuring Snapshot Behavior

    The configurable parameters for snapshot are:

    • snap-max-hard-limit: If the snapshot count in a volume reaches this limit, no further snapshot creation is allowed. The range is from 1 to 256. Once this limit is reached, you have to remove snapshots before you can create new ones. This limit can be set for the system or per volume. If both the system limit and the volume limit are configured, the effective maximum limit is the lower of the two values.
    • snap-max-soft-limit: This is a percentage value, with a default of 90%. This configuration works along with the auto-delete feature. If auto-delete is enabled, the oldest snapshot is deleted when the snapshot count in a volume crosses this limit. When auto-delete is disabled, no snapshot is deleted, but a warning message is displayed to the user.
    • auto-delete: This enables or disables the auto-delete feature. By default auto-delete is disabled. When enabled, the oldest snapshot is deleted when the snapshot count in a volume crosses the snap-max-soft-limit. When disabled, no snapshot is deleted, but a warning message is displayed to the user.
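    The effective soft limit is derived from the effective hard limit and the soft-limit percentage. A quick sketch of the arithmetic, using the values from the sample `snapshot config` output in this section:

    ```shell
    # Effective snap-max-soft-limit = effective hard limit * soft-limit percent / 100
    hard_limit=256
    soft_percent=90
    echo $(( hard_limit * soft_percent / 100 ))   # prints 230, i.e. "230 (90%)"
    ```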
    • Displaying the Configuration Values

      To display the existing configuration values for a volume or the entire cluster, run the following command:

      # gluster snapshot config [VOLNAME]
      where:
      • VOLNAME: This is an optional field. The name of the volume for which the configuration values are to be displayed.
      If the volume name is not provided, the configuration values of all the volumes are displayed. System configuration details are displayed whether or not a volume name is specified.
      For Example:
      # gluster snapshot config
      
      Snapshot System Configuration:
      snap-max-hard-limit : 256
      snap-max-soft-limit : 90%
      auto-delete : disable
      
      Snapshot Volume Configuration:
      
      Volume : test_vol
      snap-max-hard-limit : 256
      Effective snap-max-hard-limit : 256
      Effective snap-max-soft-limit : 230 (90%)
      
      Volume : test_vol1
      snap-max-hard-limit : 256
      Effective snap-max-hard-limit : 256
      Effective snap-max-soft-limit : 230 (90%)
    • Changing the Configuration Values

      To change the existing configuration values, run the following command:

      # gluster snapshot config [VOLNAME] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])
      where:
      • VOLNAME: This is an optional field. The name of the volume for which the configuration values are to be changed. If the volume name is not provided, then running the command will set or change the system limit.
      • snap-max-hard-limit: Maximum hard limit for the system or the specified volume.
      • snap-max-soft-limit: Soft limit mark for the system.
      • auto-delete: This will enable or disable auto-delete feature. By default auto-delete is disabled.
      For Example:
      # gluster snapshot config test_vol snap-max-hard-limit 100
      Changing snapshot-max-hard-limit will lead to deletion of snapshots if
      they exceed the new limit.
      Do you want to continue? (y/n) y
      snapshot config: snap-max-hard-limit for test_vol set successfully
  • Activating and Deactivating a Snapshot

    Only activated snapshots are accessible. See the Accessing Snapshots section for more details. Since each snapshot is a Red Hat Storage volume, it consumes some resources; if snapshots are not needed, deactivate them and activate them again when required. To activate a snapshot run the following command:

    # gluster snapshot activate <snapname> [force]
    where:
    • snapname: Name of the snap to be activated.
    • force: If some of the bricks of the snapshot volume are down, use the force option to start them.
    For Example:
    # gluster snapshot activate snap1
    To deactivate a snapshot, run the following command:
    # gluster snapshot deactivate <snapname>
    where:
    • snapname: Name of the snap to be deactivated.
    For example:
    # gluster snapshot deactivate snap1
  • Deleting Snapshot

    Before deleting a snapshot ensure that the following prerequisites are met:

    • Snapshot with the specified name should be present.
    • Red Hat Storage nodes should be in quorum.
    • No volume operation (for example, add-brick or rebalance) should be running on the original (parent) volume of the snapshot.
    To delete a snapshot run the following command:
    # gluster snapshot delete <snapname>
    where,
    • snapname - The name of the snapshot to be deleted.
    For Example:
    # gluster snapshot delete snap2
    Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
    snapshot delete: snap2: snap removed successfully

    Note

    A Red Hat Storage volume cannot be deleted if any snapshot is associated with it. You must delete all the snapshots of a volume before issuing a volume delete.
  • Restoring Snapshot

    Before restoring a snapshot ensure that the following prerequisites are met:

    • The specified snapshot has to be present.
    • The original (parent) volume of the snapshot has to be in a stopped state.
    • Red Hat Storage nodes have to be in quorum.
    • If you have a Hadoop enabled Red Hat Storage volume, ensure that all the Hadoop services in Ambari are stopped.
    • No volume operation (for example, add-brick or rebalance) should be running on the original (parent) volume of the snapshot.
      To restore a snapshot, run the following command:
      # gluster snapshot restore <snapname>
      where,
      • snapname - The name of the snapshot to be restored.
      For Example:
      # gluster snapshot restore snap1
      Snapshot restore: snap1: Snap restored successfully
      After snapshot is restored and the volume is started, trigger a self-heal by running the following command:
      # gluster volume heal VOLNAME full
      If you have a Hadoop enabled Red Hat Storage volume, you must start all the Hadoop Services in Ambari.
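      Putting the steps above together, a sketch of a typical restore sequence (VOLNAME and snap1 are placeholders; the parent volume must be stopped before the restore and restarted afterwards):

      ```shell
      # Illustrative prompt-style transcript only, not a runnable script:
      # gluster volume stop VOLNAME
      # gluster snapshot restore snap1
      # gluster volume start VOLNAME
      # gluster volume heal VOLNAME full
      ```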

      Note

    • In the cluster, identify the nodes participating in the snapshot with the snapshot status command. For example:
       # gluster snapshot status snapname
      							
          Snap Name : snapname
          Snap UUID : bded7c02-8119-491b-a7e1-cc8177a5a1cd
      
          	Brick Path        :   10.70.43.46:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick2/brick2
          	Volume Group      :   snap_lvgrp
          	Brick Running     :   Yes
          	Brick PID         :   8303
          	Data Percentage   :   0.43
          	LV Size           :   2.60g
      
      
          	Brick Path        :   10.70.42.33:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick3/brick3
          	Volume Group      :   snap_lvgrp
          	Brick Running     :   Yes
          	Brick PID         :   4594
          	Data Percentage   :   42.63
          	LV Size           :   2.60g
      
      
          	Brick Path        :   10.70.42.34:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick4/brick4
          	Volume Group      :   snap_lvgrp
          	Brick Running     :   Yes
          	Brick PID         :   23557
          	Data Percentage   :   12.41
          	LV Size           :   2.60g
      
      • In the nodes identified above, check if the geo-replication repository is present in /var/lib/glusterd/snaps/snapname. If the repository is present in any of the nodes, ensure that the same is present in /var/lib/glusterd/snaps/snapname throughout the cluster. If the geo-replication repository is missing in any of the nodes in the cluster, copy it to /var/lib/glusterd/snaps/snapname in that node.
      • Restore snapshot of the volume using the following command:
        # gluster snapshot restore snapname
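      The repository check described above can be sketched as a small script run on each node identified by `snapshot status` (the snapshot name is the example from above; this is an illustrative sketch, not part of the product CLI):

      ```shell
      # Check whether the geo-replication repository for a snapshot
      # exists on this node; run the same check on every node listed
      # by the `snapshot status` command.
      SNAP=snapname                         # example snapshot name
      REPO="/var/lib/glusterd/snaps/$SNAP"
      if [ -d "$REPO" ]; then
          echo "repository present: $REPO"
      else
          echo "repository missing: $REPO - copy it from a node that has it"
      fi
      ```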
    Restoring Snapshot of a Geo-replication Volume

    If you have a geo-replication setup, then perform the following steps to restore snapshot:

    1. Stop the geo-replication session.
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
    2. Stop the slave volume and then the master volume.
      # gluster volume stop VOLNAME
    3. Restore snapshot of the slave volume and the master volume.
      # gluster snapshot restore snapname
    4. Start the slave volume first and then the master volume.
      # gluster volume start VOLNAME
    5. Start the geo-replication session.
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
      
    6. Resume the geo-replication session.
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL resume
      
  • Accessing Snapshots

    Snapshot of a Red Hat Storage volume can be accessed only via FUSE mount. Use the following command to mount the snapshot.

    # mount -t glusterfs <hostname>:/snaps/<snapname>/parent-VOLNAME /mount_point
    • parent-VOLNAME - Volume name for which we have created the snapshot.
      For example,
      # mount -t glusterfs myhostname:/snaps/snap1/test_vol /mnt
    Since the Red Hat Storage snapshot volume is read-only, no write operations are allowed on this mount. After mounting the snapshot, the entire snapshot content can be accessed in read-only mode.

    Note

    NFS and CIFS mounts of snapshot volumes are not supported.
    Snapshots can also be accessed via User Serviceable Snapshots. For more information see Section 12.3, “User Serviceable Snapshots”.

Warning

External snapshots, such as snapshots of a virtual machine or instance where Red Hat Storage Server is installed as a guest OS, or FC/iSCSI SAN snapshots, are not supported.