12.2. Snapshot Commands
- Creating Snapshot
Before creating a snapshot, ensure that the following prerequisites are met:
- The Red Hat Storage volume has to be present and the volume has to be in the Started state.
- All the bricks of the volume have to be on independent thin logical volumes (LVs). A minimal thin-LV layout sketch follows this list.
- Snapshot names must be unique in the cluster.
- All the bricks of the volume should be up and running, unless the volume is an n-way replicated volume where n >= 3; in that case quorum must be met. For more information see Chapter 12, Managing Snapshots.
- No other volume operation, such as rebalance or add-brick, should be running on the volume.
- The total number of snapshots in the volume should be less than the Effective snap-max-hard-limit. For more information see Configuring Snapshot Behavior.
- If you have a geo-replication setup, pause the geo-replication session if it is running, by executing the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL pause
For example:
# gluster volume geo-replication master-vol example.com::slave-vol pause
Pausing geo-replication session between master-vol example.com::slave-vol has been successful
Ensure that you take the snapshot of the master volume first, and then take a snapshot of the slave volume.
- If you have a Hadoop enabled Red Hat Storage volume, you must stop all the Hadoop services in Ambari.
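The thin-LV prerequisite can be met with standard LVM thin provisioning. The following is a minimal sketch only; the device, volume group, pool, and brick names and the sizes are placeholders, not values mandated by Red Hat Storage:
# pvcreate /dev/sdb
# vgcreate snap_lvgrp /dev/sdb
# lvcreate -L 10G -T snap_lvgrp/thinpool
# lvcreate -V 5G -T snap_lvgrp/thinpool -n brick1_lv
# mkfs.xfs /dev/snap_lvgrp/brick1_lv
# mkdir -p /rhs/brick1
# mount /dev/snap_lvgrp/brick1_lv /rhs/brick1
Repeat the thin LV creation for each brick, so that every brick sits on its own thin LV.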
To create a snapshot of the volume, run the following command:
# gluster snapshot create <snapname> VOLNAME(S) [description <description>] [force]
where:
- snapname: Name of the snapshot that will be created. It should be a unique name in the entire cluster.
- VOLNAME(S): Name of the volume for which the snapshot will be created. Only creating a snapshot of a single volume is supported.
- description: This is an optional field that can be used to provide a description of the snapshot, which is saved along with the snapshot.
- force: Snapshot creation will fail if any brick is down. In an n-way replicated Red Hat Storage volume where n >= 3, a snapshot is allowed even if some of the bricks are down; in that case quorum is checked. Quorum is checked only when the force option is provided; otherwise, by default, snapshot creation will fail if any brick is down. Refer to the Overview section for more details on quorum.
For Example:
# gluster snapshot create snap1 vol1
snapshot create: success: Snap snap1 created successfully
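The optional fields from the synopsis can be combined in one invocation; the snapshot name and description below are placeholders (output omitted):
# gluster snapshot create snap2 vol1 description "before upgrade" force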
Snapshot of a Red Hat Storage volume creates a read-only Red Hat Storage volume. This volume has a configuration identical to that of the original / parent volume. Bricks of the newly created snapshot are mounted as /var/run/gluster/snaps/<snap-volume-name>/brick<bricknumber>.
For example, a snapshot with the snap volume name 0888649a92ea45db8c00a615dfc5ea35 and two bricks will have the following two mount points:
/var/run/gluster/snaps/0888649a92ea45db8c00a615dfc5ea35/brick1
/var/run/gluster/snaps/0888649a92ea45db8c00a615dfc5ea35/brick2
These mounts can also be viewed using the df or mount command.
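For instance, the snapshot brick mounts can be picked out of the mount table by filtering on the path shown above:
# df -h | grep /var/run/gluster/snaps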
Note
If you have a geo-replication setup, after creating the snapshot, resume the geo-replication session by running the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL resume
For example:
# gluster volume geo-replication master-vol example.com::slave-vol resume
Resuming geo-replication session between master-vol example.com::slave-vol has been successful
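Taken together with the pause prerequisite, a geo-replication-aware snapshot run is a short sequence; the volume, host, and snapshot names below are placeholders:
# gluster volume geo-replication master-vol example.com::slave-vol pause
# gluster snapshot create snap1 master-vol
# gluster volume geo-replication master-vol example.com::slave-vol resume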
- Listing of Available Snapshots
To list all the snapshots that are taken for a specific volume, run the following command:
# gluster snapshot list [VOLNAME]
where:
- VOLNAME: This is an optional field; if provided, the command lists the names of all snapshots present in the volume.
For Example:
# gluster snapshot list
snap3
# gluster snapshot list test_vol
No snapshots present
- Getting Information of all the Available Snapshots
The following command provides the basic information of all the snapshots taken. By default the information of all the snapshots in the cluster is displayed:
# gluster snapshot info [(<snapname> | volume VOLNAME)]
where:
- snapname: This is an optional field. If the snapname is provided, the information about the specified snapshot is displayed.
- VOLNAME: This is an optional field. If the VOLNAME is provided, the information about all the snapshots in the specified volume is displayed.
For Example:
# gluster snapshot info snap3
Snapshot : snap3
Snap UUID : b2a391ce-f511-478f-83b7-1f6ae80612c8
Created : 2014-06-13 09:40:57
Snap Volumes:
	Snap Volume Name : e4a8f4b70a0b44e6a8bff5da7df48a4d
	Origin Volume name : test_vol1
	Snaps taken for test_vol1 : 1
	Snaps available for test_vol1 : 255
	Status : Started
- Getting the Status of Available Snapshots
This command displays the running status of the snapshot. By default the status of all the snapshots in the cluster is displayed. To check the status of all the snapshots that are taken for a particular volume, specify a volume name:
# gluster snapshot status [(<snapname> | volume VOLNAME)]
where:
- snapname: This is an optional field. If the snapname is provided, the status of the specified snapshot is displayed.
- VOLNAME: This is an optional field. If the VOLNAME is provided, the status of all the snapshots in the specified volume is displayed.
For Example:
# gluster snapshot status snap3
Snap Name : snap3
Snap UUID : b2a391ce-f511-478f-83b7-1f6ae80612c8

	Brick Path : 10.70.42.248:/var/run/gluster/snaps/e4a8f4b70a0b44e6a8bff5da7df48a4d/brick1/brick1
	Volume Group : snap_lvgrp1
	Brick Running : Yes
	Brick PID : 1640
	Data Percentage : 1.54
	LV Size : 616.00m

	Brick Path : 10.70.43.139:/var/run/gluster/snaps/e4a8f4b70a0b44e6a8bff5da7df48a4d/brick2/brick3
	Volume Group : snap_lvgrp1
	Brick Running : Yes
	Brick PID : 3900
	Data Percentage : 1.80
	LV Size : 616.00m

	Brick Path : 10.70.43.34:/var/run/gluster/snaps/e4a8f4b70a0b44e6a8bff5da7df48a4d/brick3/brick4
	Volume Group : snap_lvgrp1
	Brick Running : Yes
	Brick PID : 3507
	Data Percentage : 1.80
	LV Size : 616.00m
- Configuring Snapshot Behavior
The configurable parameters for snapshot are:
- snap-max-hard-limit: If the snapshot count in a volume reaches this limit, no further snapshot creation is allowed. The range is from 1 to 256. Once this limit is reached, you have to remove snapshots before you can create new ones. This limit can be set for the system or per volume. If both the system limit and the volume limit are configured, the effective maximum limit is the lower of the two values.
- snap-max-soft-limit: This is a percentage value. The default value is 90%. This configuration works along with the auto-delete feature. If auto-delete is enabled, the oldest snapshot is deleted when the snapshot count in a volume crosses this limit. When auto-delete is disabled, no snapshot is deleted, but a warning message is displayed to the user.
- auto-delete: This enables or disables the auto-delete feature. By default auto-delete is disabled. When enabled, the oldest snapshot is deleted when the snapshot count in a volume crosses the snap-max-soft-limit. When disabled, no snapshot is deleted, but a warning message is displayed to the user.
- Displaying the Configuration Values
To display the existing configuration values for a volume or the entire cluster, run the following command:
# gluster snapshot config [VOLNAME]
where:
- VOLNAME: This is an optional field. The name of the volume for which the configuration values are to be displayed.
If the volume name is not provided, the configuration values of all volumes are displayed. System configuration details are displayed irrespective of whether the volume name is specified.
For Example:
# gluster snapshot config
Snapshot System Configuration:
snap-max-hard-limit : 256
snap-max-soft-limit : 90%
auto-delete : disable

Snapshot Volume Configuration:
Volume : test_vol
snap-max-hard-limit : 256
Effective snap-max-hard-limit : 256
Effective snap-max-soft-limit : 230 (90%)

Volume : test_vol1
snap-max-hard-limit : 256
Effective snap-max-hard-limit : 256
Effective snap-max-soft-limit : 230 (90%)
Here, 230 is 90% of the effective snap-max-hard-limit of 256, rounded down.
- Changing the Configuration Values
To change the existing configuration values, run the following command:
# gluster snapshot config [VOLNAME] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])
where:
- VOLNAME: This is an optional field. The name of the volume for which the configuration values are to be changed. If the volume name is not provided, the command sets or changes the system limit.
- snap-max-hard-limit: Maximum hard limit for the system or the specified volume.
- snap-max-soft-limit: Soft limit mark for the system.
- auto-delete: Enables or disables the auto-delete feature. By default auto-delete is disabled.
For Example:
# gluster snapshot config test_vol snap-max-hard-limit 100
Changing snapshot-max-hard-limit will lead to deletion of snapshots if they exceed the new limit.
Do you want to continue? (y/n) y
snapshot config: snap-max-hard-limit for test_vol set successfully
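Similarly, the auto-delete feature described earlier can be enabled cluster-wide; this sketch follows the synopsis above, and the confirmation output (omitted here) may vary by version:
# gluster snapshot config auto-delete enable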
- Activating and Deactivating a Snapshot
Only activated snapshots are accessible. See the Accessing Snapshots section for more details. Since each snapshot is a Red Hat Storage volume, it consumes some resources; if snapshots are not needed, it is therefore good practice to deactivate them and activate them only when required. To activate a snapshot, run the following command:
# gluster snapshot activate <snapname> [force]
where:
- snapname: Name of the snapshot to be activated.
- force: If some of the bricks of the snapshot volume are down, use the force option to start them.
For Example:
# gluster snapshot activate snap1
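If some bricks of the snapshot volume are down, the force option from the synopsis can be appended to the same command:
# gluster snapshot activate snap1 force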
To deactivate a snapshot, run the following command:
# gluster snapshot deactivate <snapname>
where:
- snapname: Name of the snapshot to be deactivated.
For example:
# gluster snapshot deactivate snap1
- Deleting Snapshot
Before deleting a snapshot, ensure that the following prerequisites are met:
- Snapshot with the specified name should be present.
- Red Hat Storage nodes should be in quorum.
- No volume operation (for example, add-brick or rebalance) should be running on the original / parent volume of the snapshot.
To delete a snapshot, run the following command:
# gluster snapshot delete <snapname>
where:
- snapname: The name of the snapshot to be deleted.
For Example:
# gluster snapshot delete snap2
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap2: snap removed successfully
Note
A Red Hat Storage volume cannot be deleted if any snapshot is associated with it. You must delete all the snapshots before issuing a volume delete.
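Because of this, all snapshots of a volume must be removed before the volume itself can be deleted. A minimal sketch, assuming the gluster CLI's --mode=script option is available to suppress the interactive confirmation; the volume name is a placeholder:
# for snap in $(gluster snapshot list vol1); do gluster --mode=script snapshot delete "$snap"; done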
- Restoring Snapshot
Before restoring a snapshot, ensure that the following prerequisites are met:
- The specified snapshot has to be present.
- The original / parent volume of the snapshot has to be in a stopped state.
- Red Hat Storage nodes have to be in quorum.
- If you have a Hadoop enabled Red Hat Storage volume, you must stop all the Hadoop services in Ambari.
- No volume operation (for example, add-brick or rebalance) should be running on the origin or parent volume of the snapshot.
To restore a snapshot, run the following command:
# gluster snapshot restore <snapname>
where:
- snapname: The name of the snapshot to be restored.
For Example:
# gluster snapshot restore snap1
Snapshot restore: snap1: Snap restored successfully
After the snapshot is restored and the volume is started, trigger a self-heal by running the following command:
# gluster volume heal VOLNAME full
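Putting the restore steps together, a typical single-volume sequence looks like the following sketch; the volume and snapshot names are placeholders:
# gluster volume stop vol1
# gluster snapshot restore snap1
# gluster volume start vol1
# gluster volume heal vol1 full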
If you have a Hadoop enabled Red Hat Storage volume, you must start all the Hadoop services in Ambari.
Note
- The snapshot will be deleted once it is restored. To restore to the same point in time again, take a snapshot explicitly after restoring the snapshot.
- After a restore, the brick path of the original volume will change. If you are using fstab to mount the bricks of the origin volume, you have to fix the fstab entries after the restore. For more information see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/apcs04s07.html
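As an illustration only: if a brick was mounted through an fstab entry such as the hypothetical one below, the device path must be updated after the restore to the LV that actually backs the brick (as reported by lvs or gluster snapshot status):
/dev/snap_lvgrp/brick1_lv /rhs/brick1 xfs defaults 0 0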
- In the cluster, identify the nodes participating in the snapshot with the snapshot status command. For example:
# gluster snapshot status snapname
Snap Name : snapname
Snap UUID : bded7c02-8119-491b-a7e1-cc8177a5a1cd

	Brick Path : 10.70.43.46:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick2/brick2
	Volume Group : snap_lvgrp
	Brick Running : Yes
	Brick PID : 8303
	Data Percentage : 0.43
	LV Size : 2.60g

	Brick Path : 10.70.42.33:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick3/brick3
	Volume Group : snap_lvgrp
	Brick Running : Yes
	Brick PID : 4594
	Data Percentage : 42.63
	LV Size : 2.60g

	Brick Path : 10.70.42.34:/var/run/gluster/snaps/816e8403874f43a78296decd7c127205/brick4/brick4
	Volume Group : snap_lvgrp
	Brick Running : Yes
	Brick PID : 23557
	Data Percentage : 12.41
	LV Size : 2.60g
- In the nodes identified above, check whether the geo-replication repository is present in /var/lib/glusterd/snaps/snapname. If the repository is present in any of the nodes, ensure that it is present in /var/lib/glusterd/snaps/snapname throughout the cluster. If the geo-replication repository is missing in any node of the cluster, copy it to /var/lib/glusterd/snaps/snapname on that node. A sketch for copying the repository follows the next step.
- Restore the snapshot of the volume using the following command:
# gluster snapshot restore snapname
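A minimal sketch for propagating the repository to a node where it is missing, assuming passwordless SSH between the nodes; the host name and the exact geo-replication subdirectory layout are assumptions:
# scp -r /var/lib/glusterd/snaps/snapname/geo-replication node2:/var/lib/glusterd/snaps/snapname/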
Restoring Snapshot of a Geo-replication Volume
If you have a geo-replication setup, perform the following steps to restore a snapshot. A consolidated sketch of the whole sequence follows these steps.
- Stop the geo-replication session.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- Stop the slave volume and then the master volume.
# gluster volume stop VOLNAME
- Restore snapshot of the slave volume and the master volume.
# gluster snapshot restore snapname
- Start the slave volume first and then the master volume.
# gluster volume start VOLNAME
- Start the geo-replication session.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
- Resume the geo-replication session.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL resume
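As a consolidated sketch of the steps above; master-vol, slave-vol, and the snapshot names are placeholders, and the slave-side commands are run on the slave cluster:
# gluster volume geo-replication master-vol example.com::slave-vol stop
# gluster volume stop slave-vol
# gluster volume stop master-vol
# gluster snapshot restore slave-snap
# gluster snapshot restore master-snap
# gluster volume start slave-vol
# gluster volume start master-vol
# gluster volume geo-replication master-vol example.com::slave-vol start
# gluster volume geo-replication master-vol example.com::slave-vol resume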
- Accessing Snapshots
Snapshot of a Red Hat Storage volume can be accessed only via FUSE mount. Use the following command to mount the snapshot.
# mount -t glusterfs <hostname>:/snaps/<snapname>/parent-VOLNAME /mount_point
where:
- parent-VOLNAME: Volume name for which the snapshot was created.
For example:
# mount -t glusterfs myhostname:/snaps/snap1/test_vol /mnt
Since the Red Hat Storage snapshot volume is read-only, no write operations are allowed on this mount. After mounting the snapshot, the entire snapshot content can be accessed in read-only mode.
Note
NFS and CIFS mounts of a snapshot volume are not supported.
Snapshots can also be accessed via User Serviceable Snapshots. For more information see Section 12.3, “User Serviceable Snapshots”.