Chapter 9. Benchmarking Performance
The purpose of this section is to give Ceph administrators a basic understanding of Ceph’s native benchmarking tools. These tools will provide some insight into how the Ceph storage cluster is performing. This is not the definitive guide to Ceph performance benchmarking, nor is it a guide on how to tune Ceph accordingly.
9.1. Performance Baseline
The OSD (including the journal) disks and the network throughput should each have a performance baseline to compare against. You can identify potential tuning opportunities by comparing the baseline performance data with the data from Ceph’s native tools. Red Hat Enterprise Linux has many built-in tools, along with a plethora of open source community tools, available to help accomplish these tasks. For more details about some of the available tools, see this Knowledgebase article.
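As a minimal sketch of how such a baseline might be taken, the following uses dd for raw sequential disk throughput and iperf3 for network throughput. The file path /var/tmp/ddtest and the hostname ceph-node2 are placeholders, not names from this guide; substitute paths and hosts from your own environment.

```shell
# Sequential write baseline to the file system backing an OSD.
# oflag=direct bypasses the page cache so the disk, not RAM, is measured.
dd if=/dev/zero of=/var/tmp/ddtest bs=4M count=256 oflag=direct
rm /var/tmp/ddtest

# Network throughput baseline between two cluster nodes.
iperf3 -s              # run on the peer node (ceph-node2)
iperf3 -c ceph-node2   # run on the local node
```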
9.2. Storage Cluster
Ceph includes the rados bench command to do performance benchmarking on a RADOS storage cluster. The command executes a write test and two types of read tests. The --no-cleanup option is important to use when testing both read and write performance. By default, the rados bench command deletes the objects it has written to the storage pool. Leaving these objects behind allows the two read tests to measure sequential and random read performance.
Before running these performance tests, drop all the file system caches by running the following:
# echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
Create a new storage pool:
# ceph osd pool create testbench 100 100
Execute a write test for 10 seconds to the newly created storage pool:
# rados bench -p testbench 10 write --no-cleanup
Example Output
Execute a sequential read test for 10 seconds to the storage pool:
# rados bench -p testbench 10 seq
Example Output
Execute a random read test for 10 seconds to the storage pool:
# rados bench -p testbench 10 rand
Example Output
To increase the number of concurrent reads and writes, use the -t option; the default is 16 threads. The -b parameter can adjust the size of the object being written; the default object size is 4 MB, and a safe maximum object size is 16 MB. Red Hat recommends running multiple copies of these benchmark tests against different pools. Doing this shows the changes in performance from multiple clients.
Add the --run-name <label> option to control the names of the objects that get written during the benchmark test. Multiple rados bench commands may be run simultaneously by changing the --run-name label for each running command instance. This prevents the potential I/O errors that can occur when multiple clients try to access the same object, and allows different clients to access different objects. The --run-name option is also useful when trying to simulate a real world workload. For example:
# rados bench -p testbench 10 write -t 4 --run-name client1
Example Output
Remove the data created by the rados bench command:
# rados -p testbench cleanup
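The options above can also be combined in a single run. As an illustration, assuming the testbench pool from the earlier steps, the following writes with 8 concurrent threads and 8 MB objects (8388608 bytes); the label client2 is an arbitrary example:

```shell
# Write for 10 seconds with 8 threads and 8 MB objects, keeping the
# objects (--no-cleanup) so sequential and random read tests can follow.
rados bench -p testbench 10 write -t 8 -b 8388608 --no-cleanup --run-name client2
```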
9.3. Block Device
Ceph includes the rbd bench-write command to test sequential writes to the block device, measuring throughput and latency. The default byte size is 4096, the default number of I/O threads is 16, and the default total number of bytes to write is 1 GB. These defaults can be modified by the --io-size, --io-threads and --io-total options respectively. For more information on the rbd command, see the Red Hat Ceph Storage 2 Block Device Guide.
Creating a Ceph Block Device
As root, load the rbd kernel module, if not already loaded:
# modprobe rbd
As root, create a 1 GB rbd image file in the testbench pool:
# rbd create image01 --size 1024 --pool testbench
Note
When creating a block device image these features are enabled by default: layering, object-map, deep-flatten, journaling, exclusive-lock, and fast-diff.
On Red Hat Enterprise Linux 7.2 and Ubuntu 16.04, users utilizing the kernel RBD client will not be able to map the block device image. You must first disable all these features, except layering.
Syntax
# rbd feature disable <image_name> <feature_name>
Example
# rbd feature disable image01 journaling deep-flatten exclusive-lock fast-diff object-map
Using the --image-feature layering option on the rbd create command will only enable layering on newly created block device images. This is a known issue; see the Red Hat Ceph Storage 2.0 Release Notes for more details.
All these features will work for users utilizing the user-space RBD client to access the block device images.
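For example, an image created with only the layering feature enabled can be mapped by the kernel RBD client without disabling features afterwards. The image name image02 below is a hypothetical example, not an image used elsewhere in this chapter:

```shell
# Create a 1 GB image in the testbench pool with only layering enabled.
rbd create image02 --size 1024 --pool testbench --image-feature layering
```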
As root, map the image file to a device file:
# rbd map image01 --pool testbench --name client.admin
As root, create an ext4 file system on the block device:
# mkfs -t ext4 -m0 /dev/rbd/testbench/image01
As root, create a new directory:
# mkdir /mnt/ceph-block-device
As root, mount the block device under /mnt/ceph-block-device/:
# mount /dev/rbd/testbench/image01 /mnt/ceph-block-device
Execute the write performance test against the block device:
# rbd bench-write image01 --pool=testbench
Example Output
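The defaults noted at the start of this section can be overridden on the command line. As a sketch, the following run uses 8 KB writes, 32 I/O threads, and a 2 GB total; these particular values are chosen purely for illustration:

```shell
# 8192-byte writes, 32 I/O threads, 2 GB total (2147483648 bytes).
rbd bench-write image01 --pool=testbench --io-size 8192 --io-threads 32 --io-total 2147483648
```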