Operations Guide
Operating a Red Hat Hyperconverged Infrastructure for Cloud Solution
Abstract
The Red Hat Hyperconverged Infrastructure (RHHI) Cloud solution has three basic operational tasks:
- Updating the overcloud configuration
- Adding nodes to the overcloud
- Removing nodes from the overcloud
Chapter 2. Updating the overcloud configuration
At times, you will need to update the Red Hat Hyperconverged Infrastructure (RHHI) for Cloud configuration to add new features or to change the way the overcloud functions.
Prerequisite
- A running RHHI for Cloud solution.
Procedure
Do the following step on the Red Hat OpenStack Platform director node, as the stack user.
Rerun the openstack overcloud deploy command with the same TripleO Heat templates from the initial overcloud deployment.

Note: If adding a new environment file to the overcloud, then add an additional -e argument to the openstack overcloud deploy command.
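The original example block is not reproduced on this page. As a hedged sketch, such a command might look like the following, where the environment file names are placeholders for whatever files were passed during the initial deployment, not values from this guide:

```shell
# Hedged sketch: rerun the deployment with the SAME templates and
# environment files used for the initial overcloud deployment.
# The file names below are placeholders; substitute your own.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  -e ~/templates/layout.yaml
```

Passing a different set of -e arguments than the initial deployment can remove previously applied configuration, so always reuse the full original list.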
Additional Resources
- The Red Hat Hyperconverged Infrastructure for Cloud Deployment Guide.
- The Red Hat OpenStack Platform 10 Director Installation and Usage Guide.
Chapter 3. Adding a node to the overcloud
The overcloud can grow to meet an increase in demand by adding a new Nova compute and Ceph OSD node to the overcloud.
Prerequisites
- A running RHHI for Cloud solution.
- The MAC addresses for the network interface cards (NICs).
- The IPMI user name and password.
Procedure
Do the following steps on the Red Hat OpenStack Platform director node, as the stack user.
Create and populate a host definition file for the Ironic service to manage the new node.
Create a new JSON host definition file:
[stack@director ~]$ touch ~/new_node.json

Add a definition block for the new node between the nodes stanza square brackets ({"nodes": []}), using this template:

- Replace…
- IPMI_USER_PASSWORD with the IPMI password.
- NODE_NAME with a descriptive name of the node. This is an optional parameter.
- IPMI_USER_NAME with the IPMI user name that has access to power the node on or off.
- IPMI_IP_ADDR with the IPMI IP address.
- NIC_MAC_ADDR with the network card MAC address handling the PXE boot.
- NODE_ROLE-INSTANCE_NUM with the node's role, along with a node number. This solution uses two roles: controller and osd-compute.
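The guide's template block is not reproduced on this page. As a hedged sketch, an Ironic host definition using the placeholder names from the list above might look like the following; the pm_type value is the typical IPMI driver for an OSP 10 deployment and is an assumption, not a value from this guide:

```json
{
  "nodes": [
    {
      "name": "NODE_ROLE-INSTANCE_NUM",
      "pm_type": "pxe_ipmitool",
      "pm_user": "IPMI_USER_NAME",
      "pm_password": "IPMI_USER_PASSWORD",
      "pm_addr": "IPMI_IP_ADDR",
      "mac": [ "NIC_MAC_ADDR" ]
    }
  ]
}
```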
Import the nodes into the Ironic database:
[stack@director ~]$ openstack baremetal import ~/new_node.json

Verify that the openstack baremetal import command populated the Ironic database with the new node:

[stack@director ~]$ openstack baremetal node list
Set the new node into maintenance mode:
ironic node-set-maintenance $UUID true

Replace $UUID with the UUID of the new node. See the output from step 2a to get the new node's UUID.

Example

[stack@director ~]$ ironic node-set-maintenance 7250678a-a575-4159-840a-e7214e697165 true
Inspect the new node’s hardware:
openstack baremetal introspection start $UUID

Replace $UUID with the UUID of the new node. See the output from step 2a to get the new node's UUID.

Example

[stack@director ~]$ openstack baremetal introspection start 7250678a-a575-4159-840a-e7214e697165

The introspection process can take some time to complete. Verify the status of the introspection process:

[stack@director ~]$ openstack baremetal introspection bulk status
Disable maintenance mode on the new node:
ironic node-set-maintenance $UUID false

Replace $UUID with the UUID of the new node. See the output from step 2a to get the new node's UUID.

Example

[stack@director ~]$ ironic node-set-maintenance 7250678a-a575-4159-840a-e7214e697165 false
Assign the full overcloud kernel and ramdisk image to the new node:
[stack@director ~]$ openstack baremetal configure boot

Open the ~/templates/layout.yaml file for editing.

- Under the parameter_defaults section, change the OsdComputeCount option from 3 to 4.
- Under the OsdComputeIPs section, add the new node's IP addresses for each isolated network.
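As a hedged sketch, the edited portion of ~/templates/layout.yaml might look like the following. Only OsdComputeCount and OsdComputeIPs are named in this guide; the network names and IP addresses below are placeholders:

```yaml
parameter_defaults:
  OsdComputeCount: 4        # increased from 3 for the new node
  OsdComputeIPs:
    internal_api:           # placeholder network name
      - 172.16.2.200
      - 172.16.2.201
      - 172.16.2.202
      - 172.16.2.203        # new node's IP appended for each isolated network
    storage:                # placeholder network name
      - 172.16.1.200
      - 172.16.1.201
      - 172.16.1.202
      - 172.16.1.203
```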
Apply the new overcloud configuration by rerunning the openstack overcloud deploy command with the same TripleO Heat templates from the initial overcloud deployment.
Verify the addition of the new node:

[stack@director ~]$ openstack server list

Note: If the node status is ACTIVE, then the new node was added successfully to the overcloud.
Chapter 4. Removing a Node From the Overcloud
The Red Hat OpenStack Platform director (RHOSP-d) does not support removing a Red Hat Ceph Storage (RHCS) node automatically, so the Ceph OSD and Nova compute services must be removed manually.
4.1. Prerequisites
- Verify there will be enough CPU and RAM to service the workloads.
- Migrate the compute workloads off of the node being removed.
- Verify that the storage cluster has enough reserve storage capacity to maintain a status of HEALTH_OK.
4.2. Removing the Ceph OSD Services

This procedure removes the node's Ceph OSD services from the storage cluster.
Prerequisite
- A healthy Ceph storage cluster.
Procedure
Do the following steps on one of the Controller/Monitor nodes, as the root user, unless otherwise stated.
Verify the health status of the Ceph storage cluster:
[root@controller ~]# ceph health

The health status must be HEALTH_OK before continuing on with this procedure.

Warning: If the ceph health command reports that the storage cluster is near full, then removing a Ceph OSD could result in reaching or exceeding the full ratio limit, which could cause data loss. If the storage cluster is near full, then contact Red Hat Support before proceeding.

Determine the number of Ceph OSDs for removal:
[root@controller ~]# ceph osd tree

To view the total number of OSDs up and in:

[root@controller ~]# ceph osd stat

Example Output

osdmap e173: 48 osds: 48 up, 48 in
flags sortbitwise

Monitor the Ceph storage cluster from a new terminal session:

[root@controller ~]# ceph -w

In this terminal session, you can watch as the OSD is removed from the storage cluster. Go back to the original terminal session for the next step.
Mark the OSD out:
ceph osd out $OSD_NUM

Replace $OSD_NUM with the number portion of the OSD name.

Example

[root@controller ~]# ceph osd out 0
marked out osd.0.

Set all OSDs on the node to out.

Note: If scripting this step to handle multiple OSDs sequentially, then add a sleep command of at least 10 seconds between each ceph osd out command.
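A minimal sketch of such a script, with the 10-second pause the note above advises. The OSD numbers are placeholders; substitute the OSD IDs that ceph osd tree reports for the node being removed:

```shell
# Hedged sketch: mark out every OSD on the node being removed,
# pausing at least 10 seconds between commands so the cluster can
# begin rebalancing gradually. OSD numbers below are placeholders.
for OSD_NUM in 12 13 14 15; do
    ceph osd out "${OSD_NUM}"
    sleep 10
done
```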
Wait for all the placement groups to become active+clean and for the storage cluster to return to a HEALTH_OK state. You can watch the placement group migration from the new terminal session opened in step 3. This rebalancing of data can take some time to complete.

Verify the health status of the Ceph storage cluster:

[root@controller ~]# ceph health

From the Compute/OSD node, and as the root user, disable and stop all OSD daemons:

[root@osdcompute ~]# systemctl disable ceph-osd.target
[root@osdcompute ~]# systemctl stop ceph-osd.target

Remove the OSD from the CRUSH map:
ceph osd crush remove osd.$OSD_NUM

Replace $OSD_NUM with the number portion of the OSD name.

Example

[root@controller ~]# ceph osd crush remove osd.0
removed item id 0 name 'osd.0' from crush map

Note: Removing an OSD from the CRUSH map causes CRUSH to recompute which OSDs get the placement groups, and rebalances the data accordingly.
Remove the OSD authentication key:
ceph auth del osd.$OSD_NUM

Replace $OSD_NUM with the number portion of the OSD name.

Example

[root@controller ~]# ceph auth del osd.0
updated
Remove the OSD:
ceph osd rm $OSD_NUM

Replace $OSD_NUM with the number portion of the OSD name.

Example

[root@controller ~]# ceph osd rm 0
removed osd.0
4.3. Removing the Nova Compute Services

This procedure removes the node's Nova compute services from the overcloud, and powers off the hardware.
Prerequisite
- Migrate any running instances to another compute node in the overcloud.
Procedure
Do the following steps on the Red Hat OpenStack Platform director (RHOSP-d) node, as the stack user.
Verify the status of the compute node:
[stack@director ~]$ nova service-list

Disable the compute service:

nova service-disable $HOST_NAME nova-compute

Replace $HOST_NAME with the compute node's host name.
Collect the Nova ID of the compute node:
[stack@director ~]$ openstack server list

Write down the Nova UUID, which is in the first column of the command output.

Collect the OpenStack Platform stack name:

[stack@director ~]$ heat stack-list

Write down the stack_name, which is in the second column of the command output.

Delete the compute node by UUID from the overcloud:
openstack overcloud node delete --stack OSP_NAME NOVA_UUID

Replace OSP_NAME with the stack_name from the previous step, and NOVA_UUID with the Nova UUID from the previous step.

Example

[stack@director ~]$ openstack overcloud node delete --stack overcloud 6b2a2e71-f9c8-4d5b-aaf8-dada97c90821
deleting nodes [u'6b2a2e71-f9c8-4d5b-aaf8-dada97c90821'] from stack overcloud
Started Mistral Workflow. Execution ID: 396f123d-df5b-4f37-b137-83d33969b52b
Verify that the compute node was removed from the overcloud:
[stack@director ~]$ openstack server list

If the compute node was successfully removed, then it will not be listed in the above command output.

[stack@director ~]$ nova service-list

The removed Nova compute node's status will be disabled and down. Write down the Nova compute service ID, which is the value in the first column of the nova service-list command output.

Verify that Ironic has powered off the node:

[stack@director ~]$ openstack baremetal node list

The compute node's power state and availability will be power off and available, respectively.

Remove the node's nova-compute service from the Nova scheduler:

nova service-delete COMPUTE_SERVICE_ID
Replace COMPUTE_SERVICE_ID with the Nova compute service ID from the previous step.

Example

[stack@director ~]$ nova service-delete 145
4.4. Additional Resources
- The Red Hat Ceph Storage Administration Guide.
Chapter 5. Using Ceph Block Device Mirroring
As a technician, you can mirror Ceph Block Devices to protect the data stored in the block devices.
5.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to a Ceph client’s command-line interface.
5.2. Ceph Block Device mirroring
Ceph Block Device mirroring is the asynchronous replication of Ceph block device images between two or more Ceph clusters.
Mirroring has these benefits:

- Ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones, and flattening.
- Serves primarily for recovery from a disaster.
- Can run in either an active-passive or active-active configuration; that is, using mandatory exclusive locks and the journaling feature, Ceph records all modifications to an image in the order in which they occur.
- Ensures that a crash-consistent mirror of the remote image is available locally.
Before an image can be mirrored to a peer cluster, you must enable journaling.
The CRUSH hierarchies supporting local and remote pools that mirror block device images should have the same capacity and performance characteristics, and should have adequate bandwidth to ensure mirroring without excess latency. For example, if you have X MiB/s average write throughput to images in the primary cluster, the network must support N * X throughput in the network connection to the secondary site, plus a safety factor of Y%, to mirror N images.
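The sizing rule above can be sketched numerically. The values below are illustrative assumptions only, not figures from this guide: N = 10 mirrored images, X = 5 MiB/s average write throughput per image, and a Y = 20% safety factor.

```shell
# Hedged sketch of the bandwidth rule: required = N * X plus Y% headroom.
# All three input values are illustrative assumptions.
N=10   # number of mirrored images
X=5    # average write throughput per image, in MiB/s
Y=20   # safety factor, in percent
REQUIRED=$(( N * X * (100 + Y) / 100 ))
echo "Required link bandwidth: ${REQUIRED} MiB/s"   # prints: Required link bandwidth: 60 MiB/s
```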
The rbd-mirror daemon
The rbd-mirror daemon is responsible for synchronizing images from one Ceph cluster to another. The rbd-mirror package provides the rbd-mirror daemon. Depending on the type of replication, rbd-mirror runs either on a single cluster or on all clusters that participate in mirroring:
- One-way Replication
When data is mirrored from a primary cluster to a secondary cluster that serves as a backup, rbd-mirror runs only on the backup cluster. RBD mirroring may have multiple secondary sites in an active-passive configuration.

- Two-way Replication

When data is mirrored from a primary cluster to a secondary cluster, and the secondary cluster can mirror back to the primary, both clusters must have rbd-mirror running. Currently, two-way replication, also known as an active-active configuration, is supported only between two sites.
In two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring.
Only run a single rbd-mirror daemon per Ceph Storage cluster.
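On the cluster where it runs, the daemon is typically managed through systemd. As a hedged sketch, the instance name after the @ is the Ceph client ID the daemon authenticates as; admin below is an assumption, so substitute the client whose keyring the daemon will use:

```shell
# Hedged sketch: enable and start the rbd-mirror daemon via systemd.
# 'admin' is an assumed Ceph client ID, not a value from this guide.
systemctl enable ceph-rbd-mirror@admin
systemctl start ceph-rbd-mirror@admin
```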
Modes for mirroring
Mirroring is configured on a per-pool basis within peer clusters. Red Hat Ceph Storage supports two modes, depending on what images in a pool are mirrored:
- Pool Mode
- Mirror all images in a pool with the journaling feature enabled.
- Image Mode
- Only a specific subset of images within a pool is mirrored and you must enable mirroring for each image separately.
Image states
In an active-passive configuration, the mirrored images are:
Primary
- These mirrored images can be modified.
Non-primary
- These mirrored images cannot be modified.
Images are automatically promoted to primary when mirroring is first enabled on an image. Image promotion can happen implicitly or explicitly based on the mirroring mode. Image promotion happens implicitly when mirroring is enabled in pool mode. Image promotion happens explicitly when mirroring is enabled in image mode. It is also possible to demote primary images and promote non-primary images.
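Demotion and promotion are performed with the rbd mirror image commands. As a hedged sketch using a placeholder pool data, image image1, and cluster names local and remote (for a planned failover, demote the image on the primary site before promoting its copy on the secondary):

```shell
# Hedged sketch: demote the primary image, then promote the
# non-primary copy on the peer cluster. All names are placeholders.
rbd mirror image demote data/image1 --cluster local
rbd mirror image promote data/image1 --cluster remote
```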
Asynchronous Red Hat Ceph Storage updates
When doing an asynchronous update to a storage cluster using Ceph Block Device mirroring, follow the installation instructions for the update. After the update completes successfully, restart the Ceph Block Device mirroring instances.
There is no required order for restarting the ceph-rbd-mirror instances. Red Hat recommends restarting the ceph-rbd-mirror instance pointing to the pool with primary images, followed by the ceph-rbd-mirror instance pointing to the mirrored pool.
Additional Resources
- See the Enabling Journaling section in the Red Hat Ceph Storage Block Device Guide for details.
- See the Recovering from a Disaster section in the Red Hat Ceph Storage Block Device Guide for details.
- See the Enabling Pool Mirroring section in the Red Hat Ceph Storage Block Device Guide for details.
- See the Enabling Mirroring on an Image section in the Red Hat Ceph Storage Block Device Guide for details.
5.3. Mirroring Between Ceph Storage Clusters with the Same Name

Creating Ceph Storage clusters with the same cluster name (by default, the storage cluster name is ceph) can cause challenges for Ceph Block Device mirroring. For example, some Ceph functions expect a storage cluster named ceph. When both clusters have the same name, you currently must perform additional steps to configure rbd-mirror:
Prerequisites
- Two running Red Hat Ceph Storage clusters located at different sites.
-
Access to the storage cluster or client node where the
rbd-mirrordaemon will be running.
Procedure
As root, on both storage clusters, specify the storage cluster name by adding the CLUSTER option to the appropriate file.

Red Hat Enterprise Linux

Edit the /etc/sysconfig/ceph file and add the CLUSTER option with the Ceph Storage cluster name as the value.

Ubuntu

Edit the /etc/default/ceph file and add the CLUSTER option with the Ceph Storage cluster name as the value.

Example

CLUSTER=master

As root, and only for the node running the rbd-mirror daemon, create a symbolic link to the ceph.conf file:

[root@monitor ~]# ln -s /etc/ceph/ceph.conf /etc/ceph/master.conf

Now, when referring to the storage cluster, use the symbolic link name with the --cluster flag.

Example

--cluster master
5.4. Enabling Ceph Block Device Journaling for Mirroring
There are two ways to enable the Ceph Block Device journaling feature:
- On image creation.
- Dynamically on already existing images.
Journaling depends on the exclusive-lock feature, which must also be enabled.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to a Ceph client command-line interface.
Procedure
Enable on Image Creation
As a normal user, execute the following command to enable journaling on image creation:

rbd create $IMAGE_NAME --size $MEGABYTES --pool $POOL_NAME --image-feature $FEATURE_NAME[,$FEATURE_NAME]

Example

[user@rbd-client ~]$ rbd create image-1 --size 1024 --pool pool-1 --image-feature exclusive-lock,journaling
Enable on an Existing Image
As a normal user, execute the following command to enable journaling on an already existing image:
rbd feature enable $POOL_NAME/$IMAGE_NAME $FEATURE_NAME

Example

[user@rbd-client ~]$ rbd feature enable pool-1/image-1 exclusive-lock
[user@rbd-client ~]$ rbd feature enable pool-1/image-1 journaling
Setting the Default
To enable journaling on all new images by default, add the following line to the Ceph configuration file (/etc/ceph/ceph.conf by default):

rbd default features = 125
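The value 125 is a bit mask: the sum of the rbd feature bit values that includes journaling. The individual bit values below match the upstream rbd feature flags (layering=1, exclusive-lock=4, object-map=8, fast-diff=16, deep-flatten=32, journaling=64); note that striping (2) is not included:

```shell
# 125 = layering + exclusive-lock + object-map + fast-diff
#       + deep-flatten + journaling (bit values per upstream rbd)
FEATURES=$(( 1 + 4 + 8 + 16 + 32 + 64 ))
echo "rbd default features = ${FEATURES}"   # prints: rbd default features = 125
```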
Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
5.5. Configuring Ceph Block Device Mirroring on a Pool
As a technician, you can enable or disable mirroring on a pool, add or remove a cluster peer, and view information on peers and pools.
5.5.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to a Ceph client command-line interface.
5.5.2. Enabling Mirroring on a Pool
When enabling mirroring on an object pool, you must specify which mirroring mode to use.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to a Ceph client command-line interface.
- An existing object pool.
Procedure
As a normal user, execute the following command to enable mirroring on a pool:
rbd mirror pool enable $POOL_NAME $MODE

Examples

To enable pool mode:

[user@rbd-client ~]$ rbd mirror pool enable data pool

To enable image mode:

[user@rbd-client ~]$ rbd mirror pool enable data image
Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
- See the section called “Modes for mirroring” for details.
5.5.3. Disabling Mirroring on a Pool
Before disabling mirroring on a pool, you must remove the cluster peer.
Disabling mirroring on a pool also disables mirroring on any images within the pool for which mirroring was enabled separately in image mode.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to a Ceph client command-line interface.
- The cluster peer has been removed.
- An existing object pool.
Procedure
As a normal user, execute the following command to disable mirroring on a pool:
rbd mirror pool disable $POOL_NAME

Example

[user@rbd-client ~]$ rbd mirror pool disable data
Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
- See Section 5.5.5, “Removing a Cluster Peer” for details.
5.5.4. Adding a cluster peer
In order for the rbd-mirror daemon to discover its peer cluster, you must register the peer to the pool.
Prerequisites
- Two running Red Hat Ceph Storage clusters located at different sites.
- Access to a Ceph client command-line interface.
- An existing object pool.
Procedure
As a normal user, execute the following command to add a cluster peer:
rbd --cluster $CLUSTER_NAME mirror pool peer add $POOL_NAME $CLIENT_NAME@$TARGET_CLUSTER_NAME

Example

Adding the remote cluster as a peer to the local cluster:

[user@rbd-client ~]$ rbd --cluster local mirror pool peer add data client.remote@remote
Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
5.5.5. Removing a Cluster Peer
To stop mirroring to a peer cluster, you must remove the peer from the pool.
Prerequisites
- Two running Red Hat Ceph Storage clusters located at different sites.
- Access to a Ceph client command-line interface.
- An existing cluster peer.
Procedure
Record the peer’s Universally Unique Identifier (UUID) for use in the next step. To view the peer’s UUID, execute the following command as a normal user:
rbd mirror pool info $POOL_NAME

As a normal user, execute the following command to remove a cluster peer:

rbd mirror pool peer remove $POOL_NAME $PEER_UUID

Example

[user@rbd-client ~]$ rbd mirror pool peer remove data 55672766-c02b-4729-8567-f13a66893445
Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
5.5.6. Viewing Information About the Cluster Peers
This procedure shows how to view basic information about the cluster peers.
Prerequisites
- Two running Red Hat Ceph Storage clusters located at different sites.
- Access to a Ceph client command-line interface.
- An existing cluster peer.
Procedure
As a normal user, execute the following command to view information about the cluster peers:
rbd mirror pool info $POOL_NAME

Example

[user@rbd-client ~]$ rbd mirror pool info data
Enabled: true
Peers:
  UUID                                 NAME        CLIENT
  786b42ea-97eb-4b16-95f4-867f02b67289 ceph-remote client.admin
Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
5.5.7. Viewing Mirroring Status for a Pool
This procedure shows how to view the Ceph Block Device mirroring status for a pool.
Prerequisites
- Access to a Ceph client command-line interface.
- An existing cluster peer.
- An existing object storage pool.
Procedure
As a normal user, execute the following command to view the mirroring status for a pool:
rbd mirror pool status $POOL_NAME

Example

[user@rbd-client ~]$ rbd mirror pool status data
health: OK
images: 1 total

Note: To output more details for every mirrored image in a pool, use the --verbose option.
Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
5.5.8. Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
5.6. Configuring Ceph Block Device Mirroring on an Image
As a technician, you can enable or disable mirroring on an image, promote or demote an image, resynchronize an image, and view the mirroring status for an image.
5.6.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to a Ceph client command-line interface.
5.7. Enabling Image Mirroring
This procedure enables Ceph Block Device mirroring on images.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to a Ceph client command-line interface.
- An existing image.
Procedure
As a normal user, execute the following command to enable mirroring on an image:
rbd mirror image enable $POOL_NAME/$IMAGE_NAME

Example

[user@rbd-client ~]$ rbd mirror image enable data/image2
Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
- See the section called “Modes for mirroring” for more details.
5.7.1. Additional Resources
- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
5.8. Configuring Two-Way Mirroring

Two-way mirroring is an effective active-active mirroring solution that is suitable for automatic failover.
Prerequisites
- Two running Red Hat Ceph Storage clusters located at different sites.
- Each storage cluster has the corresponding configuration files in the /etc/ceph/ directory.
- One Ceph client, with a connection to both storage clusters.
- Access to the Ceph client's command-line interface.
- An existing object storage pool and an image.
- The same object pool name exists on each storage cluster.
Procedure
Verify that all images within the object storage pool have exclusive-lock and journaling enabled:

rbd info $POOL_NAME/$IMAGE_NAME

Example

[user@rbd-client ~]$ rbd info data/image1

The rbd-mirror package is provided by the Red Hat Ceph Storage Tools repository. As root, on a Ceph Monitor node of the local and the remote storage clusters, install the rbd-mirror package:

Red Hat Enterprise Linux

[root@monitor-remote ~]# yum install rbd-mirror

Ubuntu

[user@monitor-remote ~]$ sudo apt-get install rbd-mirror

Note: The rbd-mirror daemon can run on any node in the storage cluster. It does not have to be a Ceph Monitor or OSD node. However, run only one rbd-mirror daemon per storage cluster.

As root, on both storage clusters, specify the storage cluster name by adding the CLUSTER option to the appropriate file.

Red Hat Enterprise Linux

Edit the /etc/sysconfig/ceph file and add the CLUSTER option with the Ceph Storage cluster name as the value.

Ubuntu

Edit the /etc/default/ceph file and add the CLUSTER option with the Ceph Storage cluster name as the value.

Example

CLUSTER=local

Note: See the procedure on handling Ceph Block Device mirroring between two Ceph Storage clusters with the same name.
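The RHEL and Ubuntu edits above differ only in the file path. The following is a minimal sketch of making that edit idempotent from a script; it operates on a temporary file so it is safe to run anywhere, whereas on a real node the target would be /etc/sysconfig/ceph (RHEL) or /etc/default/ceph (Ubuntu).

```shell
# Sketch: idempotently set the CLUSTER option in a sysconfig-style file.
# A temporary file stands in for /etc/sysconfig/ceph or /etc/default/ceph.
CONF="$(mktemp)"
CLUSTER_NAME="local"

if grep -q '^CLUSTER=' "$CONF"; then
    # Replace an existing CLUSTER line in place.
    sed -i "s/^CLUSTER=.*/CLUSTER=${CLUSTER_NAME}/" "$CONF"
else
    # Append the option if it is not present yet.
    echo "CLUSTER=${CLUSTER_NAME}" >> "$CONF"
fi

grep '^CLUSTER=' "$CONF"
# prints: CLUSTER=local
rm -f "$CONF"
```

Running the same snippet twice leaves a single CLUSTER line, which is why the grep/sed guard is worth the extra lines over a plain append.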
As a normal user, on both storage clusters, create users with permissions to access the object storage pool and output their keyrings to a file:

ceph auth get-or-create client.$STORAGE_CLUSTER_NAME mon 'profile rbd' osd 'profile rbd pool=$POOL_NAME' -o $PATH_TO_KEYRING_FILE --cluster $STORAGE_CLUSTER_NAME

On the Ceph Monitor node in the local storage cluster, create the client.local user and output the keyring to the local.client.local.keyring file:

Example

[user@monitor-local ~]$ ceph auth get-or-create client.local mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/local.client.local.keyring --cluster local

On the Ceph Monitor node in the remote storage cluster, create the client.remote user and output the keyring to the remote.client.remote.keyring file:

Example

[user@monitor-remote ~]$ ceph auth get-or-create client.remote mon 'profile rbd' osd 'profile rbd pool=data' -o /etc/ceph/remote.client.remote.keyring --cluster remote
As root, copy the Ceph configuration file and the newly created keyring file for each storage cluster between the storage clusters, and to any Ceph client nodes in both storage clusters:

scp $PATH_TO_STORAGE_CLUSTER_CONFIG_FILE_NAME $SSH_USER_NAME@$MON_NODE:/etc/ceph/
scp $PATH_TO_STORAGE_CLUSTER_KEYRING_FILE_NAME $SSH_USER_NAME@$CLIENT_NODE:/etc/ceph/

Copying Local to Remote Example

[root@monitor-local ~]# scp /etc/ceph/local.conf example@remote:/etc/ceph/
[root@monitor-local ~]# scp /etc/ceph/local.client.local.keyring example@remote:/etc/ceph/

Copying Remote to Local Example

[root@monitor-remote ~]# scp /etc/ceph/remote.conf example@local:/etc/ceph/
[root@monitor-remote ~]# scp /etc/ceph/remote.client.remote.keyring example@local:/etc/ceph/

Copying both Local and Remote to Clients Example

[root@monitor-local ~]# scp /etc/ceph/local.conf example@rbd-client:/etc/ceph/
[root@monitor-local ~]# scp /etc/ceph/local.client.local.keyring example@rbd-client:/etc/ceph/
[root@monitor-remote ~]# scp /etc/ceph/remote.conf example@rbd-client:/etc/ceph/
[root@monitor-remote ~]# scp /etc/ceph/remote.client.remote.keyring example@rbd-client:/etc/ceph/

As root, on the Ceph Monitor node of both storage clusters, enable and start the rbd-mirror daemon:

systemctl enable ceph-rbd-mirror.target
systemctl enable ceph-rbd-mirror@$CLIENT_ID
systemctl start ceph-rbd-mirror@$CLIENT_ID

The $CLIENT_ID is the Ceph Storage cluster user that the rbd-mirror daemon will use.

Example

[root@monitor-remote ~]# systemctl enable ceph-rbd-mirror.target
[root@monitor-remote ~]# systemctl enable ceph-rbd-mirror@remote
[root@monitor-remote ~]# systemctl start ceph-rbd-mirror@remote

Note: The $CLIENT_ID user must have the appropriate cephx authentication access to the storage cluster.
Configuring Two-Way Mirroring for Pool Mode
As a normal user, from any Ceph client node that has access to each storage cluster, enable pool mirroring of the object storage pool residing on both storage clusters:

rbd mirror pool enable $POOL_NAME $MIRROR_MODE --cluster $STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror pool enable data pool --cluster local
[user@rbd-client ~]$ rbd mirror pool enable data pool --cluster remote

Verify that mirroring has been successfully enabled:

rbd mirror pool status $POOL_NAME

Example

[user@rbd-client ~]$ rbd mirror pool status data
health: OK
images: 1 total
As a normal user, add each storage cluster as a peer of the other storage cluster:

rbd mirror pool peer add $POOL_NAME $CLIENT_NAME@$STORAGE_CLUSTER_NAME --cluster $PEER_STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror pool peer add data client.local@local --cluster remote
[user@rbd-client ~]$ rbd mirror pool peer add data client.remote@remote --cluster local

Verify that the storage cluster peer was successfully added:

rbd mirror pool info --cluster $STORAGE_CLUSTER_NAME
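The pool-mode commands above always come in mirrored pairs, one per cluster. The following dry-run sketch prints the full two-way command set instead of executing it; the pool and cluster names are taken from the examples above, and removing the echo would make it live.

```shell
# Dry-run sketch: print the two-way pool-mode mirroring commands instead of
# running them. POOL, SITE_A, and SITE_B follow the examples in this section;
# adjust them for your clusters and remove "echo" to execute for real.
POOL="data"
SITE_A="local"
SITE_B="remote"

# Enable pool-mode mirroring on both clusters.
for CLUSTER in "$SITE_A" "$SITE_B"; do
    echo "rbd mirror pool enable ${POOL} pool --cluster ${CLUSTER}"
done

# Each cluster registers the *other* cluster as its peer (two-way).
echo "rbd mirror pool peer add ${POOL} client.${SITE_A}@${SITE_A} --cluster ${SITE_B}"
echo "rbd mirror pool peer add ${POOL} client.${SITE_B}@${SITE_B} --cluster ${SITE_A}"
```

Note that each peer add runs with --cluster set to the opposite site: the peer relationship is registered on the cluster that will pull from that peer.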
Configuring Two-Way Mirroring for Image Mode
As a normal user, enable image mirroring of the object storage pool on both storage clusters:

rbd mirror pool enable $POOL_NAME $MIRROR_MODE --cluster $STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror pool enable data image --cluster local
[user@rbd-client ~]$ rbd mirror pool enable data image --cluster remote

Verify that mirroring has been successfully enabled:

rbd mirror pool status $POOL_NAME

Example

[user@rbd-client ~]$ rbd mirror pool status data
health: OK
images: 1 total
As a normal user, add each storage cluster as a peer of the other storage cluster:

rbd mirror pool peer add $POOL_NAME $CLIENT_NAME@$STORAGE_CLUSTER_NAME --cluster $PEER_STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror pool peer add data client.local@local --cluster remote
[user@rbd-client ~]$ rbd mirror pool peer add data client.remote@remote --cluster local

Verify that the storage cluster peer was successfully added:

rbd mirror pool info --cluster $STORAGE_CLUSTER_NAME
As a normal user, on the local storage cluster, explicitly enable mirroring for the images:

rbd mirror image enable $POOL_NAME/$IMAGE_NAME --cluster $STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image enable data/image1 --cluster local
Mirroring enabled

Verify that mirroring has been successfully enabled:

rbd mirror image status $POOL_NAME/$IMAGE_NAME --cluster $STORAGE_CLUSTER_NAME
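In image mode, each image needs its own rbd mirror image enable call. A dry-run sketch of looping over several images follows; the image names here are hypothetical, since on a live cluster the list would come from rbd ls for the pool.

```shell
# Dry-run sketch: print one "rbd mirror image enable" command per image.
# IMAGES is a hypothetical, hard-coded list for illustration; on a real
# cluster it would come from: rbd ls ${POOL} --cluster ${CLUSTER}
POOL="data"
CLUSTER="local"
IMAGES="image1 image2"

for IMAGE in $IMAGES; do
    echo "rbd mirror image enable ${POOL}/${IMAGE} --cluster ${CLUSTER}"
done
```

Remember that each image must already have the exclusive-lock and journaling features enabled, as verified at the start of this procedure.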
Additional Resources

- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.
- See the Pools chapter in the Storage Strategies guide for more details.
- See the User Management chapter in the Administration Guide for more details.
- See Section 5.5, “Configuring Ceph Block Device Mirroring on a Pool” for more details.
- See Section 5.6, “Configuring Ceph Block Device Mirroring on an Image” for more details.
- See Section 5.4, “Enabling Ceph Block Device Journaling for Mirroring” for more details.
5.9. Delaying Replication Between Storage Clusters
Whether you are using one- or two-way replication, you can delay replication between Ceph Block Device mirroring images. You can implement a replication delay strategy as a cushion of time before unwanted changes to the primary image are propagated to the replicated secondary image. The replication delay can be configured globally or on individual images and must be configured on the destination storage cluster.
Prerequisites
- Two running Red Hat Ceph Storage clusters located at different sites.
- Access to the storage cluster or client node where the rbd-mirror daemon will be running.
Procedure
Setting the Replication Delay Globally
As root, edit the Ceph configuration file on the node running the rbd-mirror daemon, and add the following line:

rbd_mirroring_replay_delay = $MINIMUM_DELAY_IN_SECONDS

Example

rbd_mirroring_replay_delay = 600
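The delay value is expressed in seconds, so the example above configures a 10-minute cushion. When generating the line from a script, a small sketch like the following keeps the intent readable:

```shell
# Sketch: rbd_mirroring_replay_delay takes seconds; deriving it from minutes
# makes the intended cushion (here, 10 minutes) explicit in automation.
DELAY_MINUTES=10
DELAY_SECONDS=$(( DELAY_MINUTES * 60 ))
echo "rbd_mirroring_replay_delay = ${DELAY_SECONDS}"
# prints: rbd_mirroring_replay_delay = 600
```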
Setting the Replication Delay on an Image
- As a normal user, on a Ceph client node, set the replication delay for a specific primary image by executing the following command:

rbd image-meta set $POOL_NAME/$IMAGE_NAME conf_rbd_mirroring_replay_delay $MINIMUM_DELAY_IN_SECONDS

Example

[user@rbd-client ~]$ rbd image-meta set data/image1 conf_rbd_mirroring_replay_delay 600
Additional Resources
- See Section 5.2, “Ceph Block Device mirroring” for more details.
5.10. Recovering From a Disaster
The following procedure shows how to fail over to the mirrored data on a secondary storage cluster after the primary storage cluster has terminated, whether in an orderly or non-orderly manner.
Prerequisites
- Two running Red Hat Ceph Storage clusters located at different sites.
- One Ceph client, with a connection to both storage clusters.
- Access to the Ceph client’s command-line interface.
Procedure
Failover After an Orderly Shutdown
- Stop all clients that use the primary image. This step depends on which clients are using the image.
As a normal user, on a Ceph client node, demote the primary image located on the local storage cluster:

rbd mirror image demote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image demote data/image1 --cluster=local

As a normal user, on a Ceph client node, promote the non-primary image located on the remote storage cluster:

rbd mirror image promote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image promote data/image1 --cluster=remote

- Resume access to the peer image. This step depends on which clients are using the image.
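The orderly failover above is a strict two-step handover: demote on the old primary site before promoting on the secondary. A dry-run sketch, using the pool, image, and cluster names from the examples:

```shell
# Dry-run sketch of the orderly failover: demote on the old primary first,
# then promote on the secondary. Remove "echo" to execute for real.
POOL="data"
IMAGE="image1"

echo "rbd mirror image demote ${POOL}/${IMAGE} --cluster=local"
echo "rbd mirror image promote ${POOL}/${IMAGE} --cluster=remote"
```

Keeping the demote first matters: it is what allows the promotion to proceed without the --force option needed after a non-orderly shutdown.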
Failover After a Non-Orderly Shutdown
- Verify that the primary storage cluster is down.
- Stop all clients that use the primary image. This step depends on which clients are using the image.
As a normal user, on a Ceph client node, promote the non-primary image located on the remote storage cluster. Use the --force option, because the demotion cannot be propagated to the local storage cluster:

rbd mirror image promote --force $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image promote --force data/image1 --cluster=remote

- Resume access to the peer image. This step depends on which clients are using the image.
Failing Back to the Primary Storage Cluster
- Verify the primary storage cluster is available.
If there was a non-orderly shutdown, as a normal user, on a Ceph client node, demote the primary image located on the local storage cluster:

rbd mirror image demote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image demote data/image1 --cluster=local

Resynchronize the image ONLY if there was a non-orderly shutdown. As a normal user, on a Ceph client node, resynchronize the image:

rbd mirror image resync $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image resync data/image1 --cluster=local

Verify that resynchronization is complete and the image is in the up+replaying state. As a normal user, on a Ceph client node, check the resynchronization status of the image:

rbd mirror image status $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image status data/image1 --cluster=local

As a normal user, on a Ceph client node, demote the secondary image located on the remote storage cluster:

rbd mirror image demote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image demote data/image1 --cluster=remote

As a normal user, on a Ceph client node, promote the formerly primary image located on the local storage cluster:

rbd mirror image promote $POOL_NAME/$IMAGE_NAME --cluster=$STORAGE_CLUSTER_NAME

Example

[user@rbd-client ~]$ rbd mirror image promote data/image1 --cluster=local
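The failback steps above can be sketched as one dry-run sequence. Names follow the examples in this section; the polling loop in the comment is an assumption about how one might wait for the up+replaying state rather than a documented procedure.

```shell
# Dry-run sketch of the failback sequence after a non-orderly shutdown:
# demote, resync, wait for up+replaying, then swap the primary back.
# Remove "echo" to execute for real.
POOL="data"
IMAGE="image1"

echo "rbd mirror image demote ${POOL}/${IMAGE} --cluster=local"
echo "rbd mirror image resync ${POOL}/${IMAGE} --cluster=local"
# On a live cluster, poll until the image reports up+replaying, e.g.:
#   while ! rbd mirror image status ${POOL}/${IMAGE} --cluster=local \
#         | grep -q 'up+replaying'; do sleep 30; done
echo "rbd mirror image demote ${POOL}/${IMAGE} --cluster=remote"
echo "rbd mirror image promote ${POOL}/${IMAGE} --cluster=local"
```

The ordering is the point of the sketch: the resynchronized local image must reach up+replaying before the remote side is demoted, or the final promotion would race an incomplete copy.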
Additional Resources
- See the Block Storage and Volumes chapter in the Storage Guide for the Red Hat OpenStack Platform.
5.11. Additional Resources

- See the Installing the Ceph Client Role section in the Red Hat Ceph Storage Installation Guide for more details.