Chapter 3. Live migration of images
As a storage administrator, you can live-migrate RBD images between different pools, or even within the same pool, within the same storage cluster.
You can migrate between different image formats and layouts and even from external data sources. When live migration is initiated, the source image is deep copied to the destination image, pulling all snapshot history while preserving the sparse allocation of data where possible.
Images with encryption support live migration.
Currently, the krbd kernel module does not support live migration.
Prerequisites
- A running Red Hat Ceph Storage cluster.
3.1. The live migration process
By default, during the live migration of RBD images within the same storage cluster, the source image is marked read-only. All clients redirect the Input/Output (I/O) to the new target image. Additionally, this mode can preserve the link to the source image’s parent to preserve sparseness, or it can flatten the image during the migration to remove the dependency on the source image’s parent. You can use the live migration process in an import-only mode, where the source image remains unmodified. You can link the target image to an external data source, such as a backup file, an HTTP(S) file, an S3 object, or an NBD export. The live migration copy process can safely run in the background while the new target image is being used.
The live migration process consists of three steps:
Prepare Migration: The first step is to create the new target image and link the target image to the source image. If the import-only mode is not configured, the source image is also linked to the target image and marked read-only. Attempts to read uninitialized data extents within the target image internally redirect the read to the source image, and writes to uninitialized extents within the target image internally deep copy the overlapping source image extents to the target image.
Execute Migration: This is a background operation that deep-copies all initialized blocks from the source image to the target. You can run this step when clients are actively using the new target image.
Finish Migration: Once the background migration process is completed, you can commit or abort the migration. Committing the migration removes the cross-links between the source and target images, and removes the source image if the import-only mode is not configured. Aborting the migration removes the cross-links and removes the target image.
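Taken together, the three steps map onto the rbd migration subcommands. The following is a minimal sketch of the default (same-cluster) flow; the pool and image names are placeholders, and each command is described in detail in the sections that follow.
# Prepare: create the target image and cross-link it to the source image.
rbd migration prepare sourcepool1/sourceimage1 targetpool1/sourceimage1
# Execute: deep-copy the initialized blocks in the background while clients use the target image.
rbd migration execute targetpool1/sourceimage1
# Finish: commit the migration, or abort it instead to restore the source image.
rbd migration commit targetpool1/sourceimage1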
3.2. Formats
The native, qcow, and raw formats are currently supported.
You can use the native format to describe a native RBD image within a Red Hat Ceph Storage cluster as the source image. The source-spec JSON document is encoded as:
Syntax
{ "type": "native", ["cluster_name": "<cluster-name>",] (specify if image in another cluster, requires<cluster-name>.conf
file) ["client_name": "<client-name>",] (for connecting to another cluster, default isclient.admin
) "pool_name": "POOL_NAME", ["pool_id": "POOL_ID",] (optional, alternative to "POOL_NAME" key) ["pool_namespace": "POOL_NAMESPACE",] (optional) "image_name": "IMAGE_NAME>", ["image_id": "IMAGE_ID",] (specify if image is in trash) "snap_name": "SNAP_NAME", ["snap_id": "SNAP_ID",] (optional, alternative to "SNAP_NAME" key) }
Note that the native format does not include the stream object since it utilizes native Ceph operations. For example, to import from the image rbd/ns1/image1@snap1, the source-spec could be encoded as:
Example
{ "type": "native", "pool_name": "rbd", "pool_namespace": "ns1", "image_name": "image1", "snap_name": "snap1" }
You can use the qcow format to describe a QEMU copy-on-write (QCOW) block device. Both the QCOW v1 and v2 formats are currently supported, with the exception of advanced features such as compression, encryption, backing files, and external data files. You can link the qcow format data to any supported stream source:
Example
{ "type": "qcow", "stream": { "type": "file", "file_path": "/mnt/image.qcow" } }
You can use the raw format to describe a thick-provisioned, raw block device export, that is, the output of rbd export --export-format 1 SNAP_SPEC. You can link the raw format data to any supported stream source:
Example
{ "type": "raw", "stream": { "type": "file", "file_path": "/mnt/image-head.raw" }, "snapshots": [ { "type": "raw", "name": "snap1", "stream": { "type": "file", "file_path": "/mnt/image-snap1.raw" } }, ] (optional oldest to newest ordering of snapshots) }
The inclusion of the snapshots array is optional and currently only supports thick-provisioned raw snapshot exports.
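For reference, such raw exports can be produced with the rbd export command; this is a sketch, and the source pool, image, and snapshot names are assumptions:
# Export a snapshot and the current head of a source image as thick-provisioned raw files.
rbd export --export-format 1 sourcepool1/sourceimage1@snap1 /mnt/image-snap1.raw
rbd export --export-format 1 sourcepool1/sourceimage1 /mnt/image-head.raw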
3.3. Streams
Currently, the file, HTTP, S3, and NBD streams are supported.
File stream
You can use the file stream to import from a locally accessible POSIX file source.
Syntax
{
<format unique parameters>
"stream": {
"type": "file",
"file_path": "FILE_PATH"
}
}
For example, to import a raw-format image from a file located at /mnt/image.raw, the source-spec JSON file is:
Example
{ "type": "raw", "stream": { "type": "file", "file_path": "/mnt/image.raw" } }
HTTP stream
You can use the HTTP stream to import from a remote HTTP or HTTPS web server.
Syntax
{
<format unique parameters>
"stream": {
"type": "http",
"url": "URL_PATH"
}
}
For example, to import a raw-format image from a file located at http://download.ceph.com/image.raw, the source-spec JSON file is:
Example
{ "type": "raw", "stream": { "type": "http", "url": "http://download.ceph.com/image.raw" } }
S3 stream
You can use the s3 stream to import from a remote S3 bucket.
Syntax
{ <format unique parameters> "stream": { "type": "s3", "url": "URL_PATH", "access_key": "ACCESS_KEY", "secret_key": "SECRET_KEY" } }
For example, to import a raw-format image from a file located at http://s3.ceph.com/bucket/image.raw, its source-spec JSON is encoded as follows:
Example
{ "type": "raw", "stream": { "type": "s3", "url": "http://s3.ceph.com/bucket/image.raw", "access_key": "NX5QOQKC6BH2IDN8HC7A", "secret_key": "LnEsqNNqZIpkzauboDcLXLcYaWwLQ3Kop0zAnKIn" } }
NBD stream
You can use the NBD stream to import from a remote NBD export.
Syntax
{ <format unique parameters> "stream": { "type": "nbd", "uri": "<nbd-uri>", } }
For example, to import a raw-format image from an NBD export located at nbd://nbd.ceph.com/image.raw, its source-spec JSON is encoded as follows:
Example
{ "type": "raw", "stream": { "type": "nbd", "uri": "nbd://nbd.ceph.com/image.raw", } }
The nbd-uri parameter must follow the NBD URI specification. The default NBD port is 10809.
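For testing, a raw image file can be served over NBD with any standards-compliant NBD server. The following sketch uses qemu-nbd, which is not part of Ceph; the export name and file path are assumptions.
# Serve /mnt/image.raw read-only on the default NBD port (10809) under the export name "image.raw".
qemu-nbd --read-only --persistent --format raw --export-name image.raw /mnt/image.raw
The corresponding stream URI would then take the form nbd://HOST/image.raw.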
3.4. Preparing the live migration process
You can prepare the default live migration process for RBD images within the same Red Hat Ceph Storage cluster. The rbd migration prepare command accepts all the same layout options as the rbd create command, which allows you to change the on-disk layout of the otherwise immutable image. If you only want to change the on-disk layout and keep the original image name, skip the migration_target argument. All clients using the source image must be stopped before preparing a live migration. The prepare step fails if it finds any running clients with the image open in read/write mode. You can restart the clients using the new target image once the prepare step is completed.
You cannot restart the clients using the source image as it will result in a failure.
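Before running the prepare step, you can verify that no clients have the source image open; a minimal sketch using placeholder names:
# Any watchers listed here indicate clients that still have the source image open.
rbd status sourcepool1/sourceimage1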
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Two block device pools.
- One block device image.
Cloned images are implicitly flattened during import (when using the --import-only parameter), and these images are disassociated from any parent chain in the source cluster when migrated to another Ceph cluster.
Procedure
Optional: If you are migrating the image from one Ceph cluster to another, copy the ceph.conf and ceph.client.admin.keyring of both clusters to a common node. This ensures the client node has access to both clusters for migration.
Example
Copying ceph.conf and ceph.client.admin.keyring of cluster c1 to a common node:
[root@rbd1-client /]# scp /etc/ceph/ceph.conf root@10.0.67.67:/etc/ceph/c1.conf
root@10.0.67.67's password:
ceph.conf                                    100%  263     1.2MB/s   00:00
[root@rbd1-client /]# scp /etc/ceph/ceph.client.admin.keyring root@10.0.67.67:/etc/ceph/c1.keyring
root@10.0.67.67's password:
ceph.client.admin.keyring
Copying ceph.conf and ceph.client.admin.keyring of cluster c2 to a common node:
[root@rbd2-client]# scp /etc/ceph/ceph.conf root@10.0.67.67:/etc/ceph/c2.conf
ceph.conf                                    100%  261   864.5KB/s   00:00
[root@rbd2-client]# scp /etc/ceph/ceph.client.admin.keyring root@10.0.67.67:/etc/ceph/c2.keyring
root@10.0.67.67's password:
ceph.client.admin.keyring
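As a quick sanity check (a sketch; the paths match the copies made above), you can confirm from the common node that both clusters are reachable with the copied configuration files and keyrings:
# Verify connectivity to each cluster using its copied configuration file and keyring.
ceph -c /etc/ceph/c1.conf --keyring /etc/ceph/c1.keyring -s
ceph -c /etc/ceph/c2.conf --keyring /etc/ceph/c2.keyring -s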
Prepare the live migration within the storage cluster:
Syntax
rbd migration prepare SOURCE_POOL_NAME/SOURCE_IMAGE_NAME TARGET_POOL_NAME/SOURCE_IMAGE_NAME
Example
[ceph: root@rbd-client /]# rbd migration prepare sourcepool1/sourceimage1 targetpool1/sourceimage1
OR
If you want to rename the source image:
Syntax
rbd migration prepare SOURCE_POOL_NAME/SOURCE_IMAGE_NAME TARGET_POOL_NAME/NEW_SOURCE_IMAGE_NAME
Example
[ceph: root@rbd-client /]# rbd migration prepare sourcepool1/sourceimage1 targetpool1/newsourceimage1
In the example, newsourceimage1 is the renamed source image.
You can check the current state of the live migration process with the following command:
Syntax
rbd status TARGET_POOL_NAME/SOURCE_IMAGE_NAME
Example
[ceph: root@rbd-client /]# rbd status targetpool1/sourceimage1
Watchers: none
Migration:
  source: sourcepool1/sourceimage1 (adb429cb769a)
  destination: targetpool1/sourceimage1 (add299966c63)
  state: prepared
Important
During the migration process, the source image is moved into the RBD trash to prevent mistaken usage.
Example
[ceph: root@rbd-client /]# rbd info sourceimage1
rbd: error opening image sourceimage1: (2) No such file or directory
Example
[ceph: root@rbd-client /]# rbd trash ls --all sourcepool1
adb429cb769a sourceimage1
3.5. Preparing import-only migration
You can initiate the import-only live migration process by running the rbd migration prepare command with the --import-only option and either the --source-spec or --source-spec-path option, passing a JSON document that describes how to access the source image data directly on the command line or from a file.
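For example, a minimal sketch of passing the document inline with --source-spec; the target pool, image name, and file path are placeholders:
# Hypothetical inline source-spec describing a local raw file.
rbd migration prepare --import-only \
  --source-spec '{"type": "raw", "stream": {"type": "file", "file_path": "/mnt/image.raw"}}' \
  targetpool1/newimage1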
Prerequisites
- A running Red Hat Ceph Storage cluster.
- A bucket and an S3 object are created.
Procedure
Create a JSON file:
Example
[ceph: root@rbd-client /]# cat testspec.json
{
  "type": "raw",
  "stream": {
    "type": "s3",
    "url": "http://10.74.253.18:80/testbucket1/image.raw",
    "access_key": "RLJOCP6345BGB38YQXI5",
    "secret_key": "oahWRB2ote2rnLy4dojYjDrsvaBADriDDgtSfk6o"
  }
}
Prepare the import-only live migration process:
Syntax
rbd migration prepare --import-only --source-spec-path "JSON_FILE" TARGET_POOL_NAME
Example
[ceph: root@rbd-client /]# rbd migration prepare --import-only --source-spec-path "testspec.json" targetpool1
Note
The rbd migration prepare command accepts all the same image options as the rbd create command.
You can check the status of the import-only live migration:
Example
[ceph: root@rbd-client /]# rbd status targetpool1/sourceimage1
Watchers: none
Migration:
  source: {"stream":{"access_key":"RLJOCP6345BGB38YQXI5","secret_key":"oahWRB2ote2rnLy4dojYjDrsvaBADriDDgtSfk6o","type":"s3","url":"http://10.74.253.18:80/testbucket1/image.raw"},"type":"raw"}
  destination: targetpool1/sourceimage1 (b13865345e66)
  state: prepared
The following example shows migrating data from Ceph cluster c1 to Ceph cluster c2:
Example
[ceph: root@rbd-client /]# cat /tmp/native_spec
{
  "cluster_name": "c1",
  "type": "native",
  "pool_name": "pool1",
  "image_name": "image1",
  "snap_name": "snap1"
}
[ceph: root@rbd-client /]# rbd migration prepare --import-only --source-spec-path /tmp/native_spec c2pool1/c2image1 --cluster c2
[ceph: root@rbd-client /]# rbd migration execute c2pool1/c2image1 --cluster c2
Image migration: 100% complete...done.
[ceph: root@rbd-client /]# rbd migration commit c2pool1/c2image1 --cluster c2
Commit image migration: 100% complete...done.
3.6. Executing the live migration process
After you prepare the live migration, you must copy the image blocks from the source image to the target image.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Two block device pools.
- One block device image.
Procedure
Execute the live migration:
Syntax
rbd migration execute TARGET_POOL_NAME/SOURCE_IMAGE_NAME
Example
[ceph: root@rbd-client /]# rbd migration execute targetpool1/sourceimage1
Image migration: 100% complete...done.
You can check the progress of the block deep-copy process with the following command:
Syntax
rbd status TARGET_POOL_NAME/SOURCE_IMAGE_NAME
Example
[ceph: root@rbd-client /]# rbd status targetpool1/sourceimage1
Watchers: none
Migration:
  source: sourcepool1/sourceimage1 (adb429cb769a)
  destination: targetpool1/sourceimage1 (add299966c63)
  state: executed
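If you want to wait for the background copy from a script, one approach (a sketch, not a built-in rbd option) is to poll the reported state until it reaches executed:
# Poll the migration state roughly every 10 seconds until the deep-copy has finished.
until rbd status targetpool1/sourceimage1 | grep -q 'state: executed'; do
  sleep 10
done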
3.7. Committing the live migration process
Once the live migration has completed deep-copying all data blocks from the source image to the target image, you can commit the migration.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Two block device pools.
- One block device image.
Procedure
Commit the migration once deep-copying is completed:
Syntax
rbd migration commit TARGET_POOL_NAME/SOURCE_IMAGE_NAME
Example
[ceph: root@rbd-client /]# rbd migration commit targetpool1/sourceimage1
Commit image migration: 100% complete...done.
Verification
Committing the live migration removes the cross-links between the source and target images and also removes the source image from the source pool:
Example
[ceph: root@rbd-client /]# rbd trash list --all sourcepool1
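You can also confirm that the target image opens normally and that no migration metadata remains; a minimal sketch using the names from the earlier examples:
# The target image should open normally, and rbd status should no longer report a migration.
rbd info targetpool1/sourceimage1
rbd status targetpool1/sourceimage1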
3.8. Aborting the live migration process
You can revert the live migration process. Aborting live migration reverts the prepare and execute steps.
You can abort only if you have not committed the live migration.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Two block device pools.
- One block device image.
Procedure
Abort the live migration process:
Syntax
rbd migration abort TARGET_POOL_NAME/SOURCE_IMAGE_NAME
Example
[ceph: root@rbd-client /]# rbd migration abort targetpool1/sourceimage1
Abort image migration: 100% complete...done.
Verification
When the live migration process is aborted, the target image is deleted and access to the original source image is restored in the source pool:
Example
[ceph: root@rbd-client /]# rbd ls sourcepool1
sourceimage1