Chapter 11. OSD BlueStore (Technology Preview)
OSD BlueStore is a new back end for the OSD daemons. Unlike the currently used FileStore back end, BlueStore stores objects directly on the block devices without any file system interface.
BlueStore is provided as a Technology Preview only in Red Hat Ceph Storage 2. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
See the support scope for Red Hat Technology Preview features for more details.
Also, note that it will not be possible to preserve data when updating BlueStore OSD nodes to future versions of Red Hat Ceph Storage, because the on-disk data format is undergoing rapid development. In this release, BlueStore is provided mainly to benchmark BlueStore OSDs and Red Hat does not recommend storing any important data on OSD nodes with the BlueStore back end.
BlueStore is generally available and ready for production with Red Hat Ceph Storage 3.2. In addition, BlueStore is the default back end for newly installed clusters running Red Hat Ceph Storage 3.2 and later versions. For details, see the BlueStore chapter in the Red Hat Ceph Storage 3 Administration Guide.
BlueStore stores the OSD metadata in the RocksDB key-value database that contains:
- object metadata
- write-ahead log (WAL)
- Ceph omap data
- allocator metadata
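The RocksDB database and its write-ahead log live on the OSD's block devices rather than in a regular file system. As a rough illustration only, assuming a BlueStore OSD whose data directory is /var/lib/ceph/osd/ceph-0 (a placeholder path), the data directory typically contains little more than symlinks pointing at the underlying devices; the exact layout can vary between releases:

# List the contents of a hypothetical BlueStore OSD data directory.
ls -l /var/lib/ceph/osd/ceph-0
# Typical entries (layout may vary by release):
#   block      -> the main data device, which also holds RocksDB by default
#   block.db   -> optional symlink to a separate device for the RocksDB data
#   block.wal  -> optional symlink to a separate device for the RocksDB WAL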
BlueStore includes the following features and enhancements:
- No large double-writes
- BlueStore first writes any new data to unallocated space on a block device, and then commits a RocksDB transaction that updates the object metadata to reference the new region of the disk. Only when the write operation is below a configurable size threshold does it fall back to a write-ahead journaling scheme, similar to what FileStore uses now.
- Multi-device support
- BlueStore can use multiple block devices for storing different data, for example: a Hard Disk Drive (HDD) for the data, a Solid-state Drive (SSD) for metadata, and Non-volatile Memory (NVM), Non-volatile random-access memory (NVRAM), or persistent memory for the RocksDB write-ahead log (WAL). A configuration sketch follows this list.
Note: The ceph-disk utility does not yet provision multiple devices. To use multiple devices, OSDs must be set up manually.
- Efficient block device usage
- Because BlueStore does not use any file system, it minimizes the need to clear the storage device cache.
- Flexible allocator
- The block allocation policy is pluggable, allowing BlueStore to implement different policies for different types of storage devices; the behavior differs between hard disks and SSDs.
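Because the ceph-disk utility does not yet provision multiple devices, an OSD that spreads its data, RocksDB metadata, and WAL across several devices has to be prepared manually. The following ceph.conf fragment is only a sketch of how such a split might be expressed before the OSD is created; the device paths are placeholders, and the option names and exact workflow can differ between releases, so verify them against the documentation for your version:

[osd.0]
# Main data device (for example, an HDD); also holds RocksDB unless overridden
bluestore block path = /dev/sdb
# RocksDB metadata on a faster device (for example, an SSD partition)
bluestore block db path = /dev/sdc1
# RocksDB write-ahead log on NVM, NVRAM, or another very fast device
bluestore block wal path = /dev/nvme0n1p1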
Adding a new Ceph OSD node with the BlueStore back end
To install a new Ceph OSD node with the BlueStore back end by using the Ansible automation application:
Add a new OSD node to the /etc/ansible/hosts file under the [osds] section, for example:

[osds]
<osd_host_name>

For details, see Before You Start….
Append the following settings to the group_vars/all file:

osd_objectstore: bluestore
ceph_conf_overrides:
  global:
    enable experimental unrecoverable data corrupting features: 'bluestore rocksdb'
Add the following setting to the group_vars/osds file:

bluestore: true
Run the ansible-playbook utility:

ansible-playbook site.yml
Verify the status of the Ceph cluster. Because the experimental BlueStore and RocksDB features are enabled, the output includes a warning message about them.
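One way to check the result is sketched below; it assumes the Ceph CLI can reach the cluster and that the new node hosts an OSD with ID 0 (a placeholder):

# Show the overall cluster status; with the experimental features enabled,
# the output also carries a warning about them.
ceph -s

# Query the metadata reported by OSD 0; the JSON output includes an
# "osd_objectstore" field, which should read "bluestore" for the new OSD.
ceph osd metadata 0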