8.10. Restoring a Snapshot
Before restoring a snapshot, ensure that the following prerequisites are met (a quick verification sketch follows this list):
- The specified snapshot must exist.
- The origin (parent) volume of the snapshot must be in a stopped state.
- The Red Hat Gluster Storage nodes must be in quorum.
- No volume operation (for example, add-brick or rebalance) should be running on the origin volume of the snapshot.
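A quick way to verify these prerequisites from any node is shown below. This is a minimal sketch; vol1 is a placeholder for the origin volume name:
# gluster snapshot list
# gluster volume info vol1
# gluster peer status
gluster snapshot list confirms that the snapshot exists, gluster volume info should report the origin volume Status as Stopped, and gluster peer status should show every peer in the Connected state.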
To restore a snapshot, run the following command:
# gluster snapshot restore <snapname>
where snapname is the name of the snapshot to be restored.
For example:
# gluster snapshot restore snap1
Snapshot restore: snap1: Snap restored successfully
After the snapshot is restored and the volume is started, trigger a self-heal by running the following command:
# gluster volume heal VOLNAME full
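To monitor the progress of the self-heal, you can list the entries still pending heal; this is a standard gluster command, shown here as a usage hint:
# gluster volume heal VOLNAME info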
Note
- The snapshot is deleted once it is restored. To be able to restore to the same point again, take a new snapshot explicitly after restoring.
- After a restore, the brick paths of the original volume change. If you use fstab to mount the bricks of the origin volume, you must fix the fstab entries after the restore (see the sketch after this note). For more information, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/apcs04s07.html
- In the cluster, identify the nodes participating in the snapshot with the snapshot status command. For example:
# gluster snapshot status snapname
- On the nodes identified above, check whether the geo-replication repository is present in /var/lib/glusterd/snaps/snapname. If the repository is present on any of the nodes, ensure that it is present in /var/lib/glusterd/snaps/snapname throughout the cluster. If the geo-replication repository is missing on any node in the cluster, copy it to /var/lib/glusterd/snaps/snapname on that node (a copy sketch follows this note).
- Restore the snapshot of the volume using the following command:
# gluster snapshot restore snapname
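As an illustration of fixing fstab after a restore, the sketch below finds the device that currently backs a brick and the stale entry to update; the brick mount point /rhgs/brick1 is a hypothetical example:
# findmnt -n -o SOURCE /rhgs/brick1
# grep /rhgs/brick1 /etc/fstab
findmnt reports the device now backing the brick; edit the matching /etc/fstab entry so that it references that device.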
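A minimal sketch for copying a missing geo-replication repository to a node, assuming node2 is the node where it is missing; the geo-replication subdirectory name under /var/lib/glusterd/snaps/snapname is an assumption here and should be matched to what you find on the nodes that have the repository:
# ssh node2 ls /var/lib/glusterd/snaps/snapname/
# rsync -a /var/lib/glusterd/snaps/snapname/geo-replication node2:/var/lib/glusterd/snaps/snapname/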
Restoring a Snapshot of a Geo-replicated Volume
If you have a geo-replication setup, perform the following steps to restore a snapshot (a consolidated sketch of the whole sequence follows these steps):
- Stop the geo-replication session.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- Stop the slave volume and then the master volume.
# gluster volume stop VOLNAME
- Restore the snapshot of the slave volume and the master volume.
# gluster snapshot restore snapname
- Start the slave volume first, and then the master volume.
# gluster volume start VOLNAME
- Start the geo-replication session.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
- Resume the geo-replication session.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL resume
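Putting the steps together, the following is a minimal end-to-end sketch. mastervol, slavevol, slave.example.com, snap_master, and snap_slave are all placeholder names; run the slave volume and snapshot commands on the slave cluster, the master volume and snapshot commands on the master cluster, and the geo-replication commands on the master:
# gluster volume geo-replication mastervol slave.example.com::slavevol stop
# gluster volume stop slavevol
# gluster volume stop mastervol
# gluster snapshot restore snap_slave
# gluster snapshot restore snap_master
# gluster volume start slavevol
# gluster volume start mastervol
# gluster volume geo-replication mastervol slave.example.com::slavevol start
# gluster volume geo-replication mastervol slave.example.com::slavevol resume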