16.7. Detaching a Tier from a Volume
To detach a tier, perform the following steps:
- Start the detach tier by executing the following command:
# gluster volume tier VOLNAME detach start
For example:
# gluster volume tier test-volume detach start
- Monitor the status of the detach tier operation until the status displays completed. A polling sketch follows the example output.
# gluster volume tier VOLNAME detach status
For example:
# gluster volume tier test-volume detach status
Node        Rebalanced-files   size     scanned   failures   skipped   status      run time in secs
---------   ----------------   ------   -------   --------   -------   ---------   ----------------
localhost   0                  0Bytes   0         0          0         completed   0.00
server1     0                  0Bytes   0         0          0         completed   1.00
server2     0                  0Bytes   0         0          0         completed   0.00
server1     0                  0Bytes   0         0          0         completed
server2     0                  0Bytes   0         0          0         completed
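Rather than re-running the status command by hand, you can poll it from the shell. The following is a minimal sketch, assuming the volume from the example (test-volume) and that nodes still migrating report a status of "in progress"; adjust the match string to the output of your version.
# while gluster volume tier test-volume detach status | grep -q "in progress"; do
>   sleep 60   # re-check once a minute until every node reports completed
> done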
Note
It is possible that some files are not migrated to the cold tier during a detach operation, for various reasons such as POSIX locks being held on them. Check for remaining files on the hot tier bricks (a sketch for this follows the note); you can either move the files manually, or turn off the applications (which would presumably unlock the files) and stop and restart the detach tier operation to retry.
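To look for files left behind on the hot tier, you can list regular files directly on each hot tier brick. This is a minimal sketch, assuming a hypothetical brick path of /rhgs/hot-brick1; run it on every server that hosts a hot tier brick, skipping GlusterFS's internal .glusterfs directory.
# find /rhgs/hot-brick1 -type f -not -path '*/.glusterfs/*' -ls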
- When the tier is detached successfully, as shown in the previous status command, run the following command to commit the tier detach:
# gluster volume tier VOLNAME detach commit
For example:
# gluster volume tier test-volume detach commit
Removing tier can result in data loss. Do you want to Continue? (y/n)
y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
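If data files remain on a detached brick, the message above advises copying them back in through a gluster mount point rather than writing to the brick directly. A minimal sketch, assuming hypothetical paths: /rhgs/hot-brick1 for the removed brick and /mnt/test-volume for a FUSE mount of the volume.
# mount -t glusterfs server1:/test-volume /mnt/test-volume
# cp -a /rhgs/hot-brick1/dir1/file1 /mnt/test-volume/dir1/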
Note
When you run tier detach commit or tier detach force, ongoing I/O operations may fail with a "Transport endpoint is not connected" error.
After the detach tier commit is completed, you can verify that the volume is no longer a tiered volume by running the gluster volume info command. A quick filter sketch follows.
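For a quick check, you can filter the volume information for tier-related fields. This is a minimal sketch, assuming that a tiered volume's gluster volume info output contains Hot Tier entries; if the command prints nothing, the volume is no longer tiered.
# gluster volume info test-volume | grep -i 'hot'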
16.7.1. Detaching a Tier of a Geo-replicated Volume
To detach the tier of a geo-replicated volume, perform the following steps:
- Start the detach tier by executing the following command:
# gluster volume tier VOLNAME detach start
For example:
# gluster volume tier test-volume detach start
- Monitor the status of the detach tier operation until the status displays completed.
# gluster volume tier VOLNAME detach status
For example:
# gluster volume tier test-volume detach status
Node        Rebalanced-files   size     scanned   failures   skipped   status      run time in secs
---------   ----------------   ------   -------   --------   -------   ---------   ----------------
localhost   0                  0Bytes   0         0          0         completed   0.00
server1     0                  0Bytes   0         0          0         completed   1.00
server2     0                  0Bytes   0         0          0         completed   0.00
server1     0                  0Bytes   0         0          0         completed
server2     0                  0Bytes   0         0          0         completed
Note
Some files might not be moved because they were locked by the user, which prevents them from migrating to the cold tier during the detach operation. Check for such files; if you find any, you can either move the files manually, or turn off the applications (which would presumably unlock the files) and stop and restart the detach tier operation to retry.
- Set a checkpoint on the geo-replication session to ensure that all the data in the cold tier is synced to the slave. For more information on geo-replication checkpoints, see Section 10.4.4.1, “Geo-replication Checkpoints”.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config checkpoint now
For example:
# gluster volume geo-replication Volume1 example.com::slave-vol config checkpoint now
- Use the following command to verify checkpoint completion for the geo-replication session; a polling sketch follows the command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status detail
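Checkpoint completion can take some time, so you may want to re-run the status command periodically rather than by hand. A minimal sketch using watch, assuming the session from the earlier examples and that the status detail output of your version includes checkpoint completion fields:
# watch -n 60 'gluster volume geo-replication Volume1 example.com::slave-vol status detail'
Wait until the output reports the checkpoint as completed before proceeding to the next step.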
- Stop geo-replication between the master and slave, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
For example:
# gluster volume geo-replication Volume1 example.com::slave-vol stop
- Commit the detach tier operation using the following command:
# gluster volume tier VOLNAME detach commit
For example:
# gluster volume tier test-volume detach commit
Removing tier can result in data loss. Do you want to Continue? (y/n)
y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
After the detach tier commit is completed, you can verify that the volume is no longer a tiered volume by running the gluster volume info command.
- Restart the geo-replication sessions, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
For example:
# gluster volume geo-replication Volume1 example.com::slave-vol start