Chapter 3. Configuring OpenStack to Use Ceph
3.1. Configuring Cinder
The cinder-volume nodes require the Ceph block device driver, the volume pool, the user, and the UUID of the secret to interact with Ceph block devices. To configure Cinder, perform the following steps:
			
1. Open the Cinder configuration file.

   # vim /etc/cinder/cinder.conf

2. In the [DEFAULT] section, enable Ceph as a backend for Cinder.

   enabled_backends = ceph

3. Ensure that the Glance API version is set to 2. If you are configuring multiple Cinder back ends in enabled_backends, the glance_api_version = 2 setting must be in the [DEFAULT] section and not the [ceph] section.

   glance_api_version = 2

4. Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section.

5. Specify the volume_driver setting and set it to use the Ceph block device driver. For example:

   volume_driver = cinder.volume.drivers.rbd.RBDDriver

6. Specify the cluster name and Ceph configuration file location. In typical deployments the Ceph cluster has a cluster name of ceph and a Ceph configuration file at /etc/ceph/ceph.conf. If the Ceph cluster name is not ceph, specify the cluster name and configuration file path appropriately. For example:

   rbd_cluster_name = us-west
   rbd_ceph_conf = /etc/ceph/us-west.conf

7. By default, OSP stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set it to the volumes pool. For example:

   rbd_pool = volumes

8. OSP does not have a default user name or a UUID of the secret for volumes. Specify rbd_user and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file. For example:

   rbd_user = cinder
   rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964

9. Specify the following settings:

   rbd_flatten_volume_from_snapshot = false
   rbd_max_clone_depth = 5
   rbd_store_chunk_size = 4
   rados_connect_timeout = -1
The resulting configuration should look something like this:
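The example below is assembled from the settings in the preceding steps; the cluster name, configuration file path, and secret UUID are the example values and will differ in your deployment.

[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = us-west
rbd_ceph_conf = /etc/ceph/us-west.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1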
					Consider removing the default [lvm] section and its settings.
				
3.2. Configuring Cinder Backup
The cinder-backup node requires its own daemon, openstack-cinder-backup. To configure Cinder backup, perform the following steps:
			
1. Open the Cinder configuration file.

   # vim /etc/cinder/cinder.conf

2. Go to the [ceph] section of the configuration file.

3. Specify the backup_driver setting and set it to the Ceph driver.

   backup_driver = cinder.backup.drivers.ceph

4. Specify the backup_ceph_conf setting and specify the path to the Ceph configuration file.

   backup_ceph_conf = /etc/ceph/ceph.conf

   Note: The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it may point to a different Ceph cluster.

5. Specify the Ceph pool for backups.

   backup_ceph_pool = backups

   Note: While it is possible to use the same pool for Cinder backups as used with Cinder, it is NOT recommended. Consider using a pool with a different CRUSH hierarchy.

6. Specify the backup_ceph_user setting and set the user to cinder-backup.

   backup_ceph_user = cinder-backup

7. Specify the following settings:

   backup_ceph_chunk_size = 134217728
   backup_ceph_stripe_unit = 0
   backup_ceph_stripe_count = 0
   restore_discard_excess_bytes = true
				With the Cinder settings included, the [ceph] section of the cinder.conf file should look something like this:
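The following is assembled from the Cinder and Cinder backup settings above; substitute your own cluster name, paths, and UUID.

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = us-west
rbd_ceph_conf = /etc/ceph/us-west.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_pool = backups
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true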
			
Check whether Cinder backup is enabled under /etc/openstack-dashboard/. The setting should be in a file called local_settings or local_settings.py. For example:
			
# cat /etc/openstack-dashboard/local_settings | grep enable_backup
				If enable_backup is set to False, set it to True. For example:
			
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}
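After changing local_settings, restart the web server so the dashboard picks up the new value. On a typical deployment the dashboard runs under httpd; adjust the service name if your web server differs:

# systemctl restart httpd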
3.3. Configuring Glance
To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. Uncomment the following settings if necessary and change their values accordingly. If you used different pool, user, or Ceph configuration file settings, apply the appropriate values.
			
# vim /etc/glance/glance-api.conf
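As a sketch, a typical [glance_store] section for RBD looks like the following; the images pool and glance user are assumptions based on a standard deployment, so substitute the pool and user you created earlier:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf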
To enable copy-on-write (CoW) cloning, set show_image_direct_url to True.
			
show_image_direct_url = True
Enabling CoW exposes the back end location via Glance’s API, so the endpoint should not be publicly accessible.
				Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement.
			
flavor = keystone
Red Hat recommends the following properties for images:
hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_qemu_guest_agent=yes
os_require_quiesce=yes
The virtio-scsi controller provides better performance and supports discard operations. For systems using SCSI/SAS drives, connect every Cinder block device to that controller. Also, enable the QEMU guest agent and send fs-freeze/thaw calls through the QEMU guest agent.
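For example, assuming the unified openstack client and an existing image, the recommended properties can be applied as follows, where IMAGE is a placeholder for the image name or ID:

# openstack image set \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    --property hw_qemu_guest_agent=yes \
    --property os_require_quiesce=yes \
    IMAGE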
			
3.4. Configuring Nova
				On every nova-compute node, edit the Ceph configuration file to configure the ephemeral backend for Nova and to boot all the virtual machines directly into Ceph.
			
1. Open the Ceph configuration file.

   # vim /etc/ceph/ceph.conf

2. Add the following settings to the [client] section of the Ceph configuration file; a typical block is sketched after this procedure.

3. Make directories for the admin socket and log file, and change their permissions to use the qemu user and libvirt group.

   # mkdir -p /var/run/ceph/guests/ /var/log/ceph/
   # chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/

   Note: The directories must be allowed by SELinux or AppArmor.
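The following is a sketch of a typical [client] block, consistent with the admin socket and log file directories created in the last step; treat the exact values as assumptions and tune them for your deployment:

[client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd concurrent management ops = 20
    admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log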
On every nova-compute node, edit the /etc/nova/nova.conf file under the [libvirt] section and configure the following settings:
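The following is a sketch of the [libvirt] settings, assuming a vms pool for ephemeral disks and the cinder Ceph user configured earlier; the UUID shown is the example value from uuid-secret.txt:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
disk_cachemodes = "network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
hw_disk_discard = unmap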
			
If the Ceph configuration file is not /etc/ceph/ceph.conf, provide the correct path. Replace the UUID in rbd_secret_uuid with the UUID in the uuid-secret.txt file.
			
3.5. Restarting OpenStack Services
To activate the Ceph block device drivers and load the block device pool names and Ceph user names into the configuration, restart the appropriate OpenStack services after modifying the corresponding configuration files.
# systemctl restart openstack-cinder-volume
# systemctl restart openstack-cinder-backup
# systemctl restart openstack-glance-api
# systemctl restart openstack-nova-compute
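As a quick sanity check, assuming the openstack client is configured, create a small test volume and confirm that a corresponding RBD image appears in the volumes pool:

# openstack volume create --size 1 test-volume
# rbd -p volumes ls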