Chapter 9. Benchmarking Performance
The purpose of this section is to give Ceph administrators a basic understanding of Ceph’s native benchmarking tools. These tools will provide some insight into how the Ceph storage cluster is performing. This is not the definitive guide to Ceph performance benchmarking, nor is it a guide on how to tune Ceph accordingly.
9.1. Performance Baseline
The OSD (including the journal) disks and the network throughput should each have a performance baseline to compare against. You can identify potential tuning opportunities by comparing the baseline performance data with the data from Ceph’s native tools. Red Hat Enterprise Linux has many built-in tools, along with a plethora of open source community tools, available to help accomplish these tasks. For more details about some of the available tools, please view Red Hat’s Knowledgebase article on the subject.
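As one possible approach (the tools, file paths, and hostnames below are illustrative, not a prescribed procedure), raw disk write throughput can be sampled with dd and node-to-node network throughput with iperf3.
Sample sequential write throughput to a scratch file on an OSD data disk, and remove the file afterwards:
dd if=/dev/zero of=/path/on/osd/disk/ddtest.img bs=4M count=256 oflag=direct conv=fdatasync
Measure network throughput by starting an iperf3 server on one node:
iperf3 -s
Then run the client from another node against it:
iperf3 -c <server-hostname>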
9.2. Storage Cluster
Ceph includes the rados bench command to do performance benchmarking on a RADOS storage cluster. The command executes a write test and two types of read tests. The --no-cleanup option is important to use when testing both read and write performance. By default, the rados bench command deletes the objects it has written to the storage pool. Leaving these objects behind allows the two read tests to measure sequential and random read performance.
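The general form of the command, as used throughout this section, is sketched below; the placeholders are illustrative, and the -b option applies to write tests only:
rados bench -p <pool_name> <seconds> write|seq|rand [-b <block_size>] [-t <concurrent_operations>] [--no-cleanup] [--run-name <label>]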
Create a new storage pool:
ceph osd pool create testbench 100 100
Before running these performance tests, drop all the file system caches by running the following:
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync
Execute a write test for 10 seconds to the newly created storage pool:
rados bench -p testbench 10 write --no-cleanup
Example output:
Execute a sequential read test for 10 seconds to the storage pool:
rados bench -p testbench 10 seq
Example output:
Execute a random read test for 10 seconds to the storage pool:
rados bench -p testbench 10 rand
Example output:
To increase the number of concurrent reads and writes, use the -t option, which defaults to 16 threads. The -b parameter adjusts the size of the object being written; the default object size is 4 MB. Red Hat recommends running multiple copies of these benchmark tests against different pools. Doing this shows the change in performance from multiple clients.
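For example, the following sketch runs a 10 second write test with 32 concurrent operations and an 8 MB object size (8388608 bytes); the values are chosen purely for illustration:
rados bench -p testbench 10 write -b 8388608 -t 32 --no-cleanup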
Add the --run-name <label> option to control the names of the objects written during the benchmark test. Multiple rados bench commands may be run simultaneously by changing the --run-name label for each running command instance. This prevents potential I/O errors that can occur when multiple clients try to access the same object, and allows different clients to access different objects. The --run-name option is also useful when trying to simulate a real-world workload. For example:
rados bench -p testbench 10 write -t 4 --run-name client1
Example output:
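To approximate several clients writing at the same time from a single host, multiple instances can be started in parallel with distinct --run-name labels. This is a sketch using shell job control; in practice each instance would normally run from a separate client node:
rados bench -p testbench 10 write -t 4 --run-name client1 &
rados bench -p testbench 10 write -t 4 --run-name client2 &
wait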
Remove the data created by the rados bench command:
rados -p testbench cleanup
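If the testbench pool itself is no longer needed, it can also be deleted. Depending on the Ceph version, the monitor setting mon_allow_pool_delete may need to be enabled before this command succeeds:
ceph osd pool delete testbench testbench --yes-i-really-really-mean-it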
9.3. Block Device
Ceph includes the rbd bench-write command to test sequential writes to the block device, measuring throughput and latency. The default byte size is 4096, the default number of I/O threads is 16, and the default total number of bytes to write is 1 GB. These defaults can be modified by the --io-size, --io-threads and --io-total options, respectively. For more information on the rbd command, see the Ceph Block Device Guide.
Creating a Ceph Block Device
Load the rbd kernel module, if it is not already loaded:
sudo modprobe rbd
Create a 1 GB rbd image file in the testbench pool:
sudo rbd create image01 --size 1024 --pool testbench
Map the image file to a device file:
sudo rbd map image01 --pool testbench --name client.admin
Create an ext4 file system on the block device:
sudo mkfs -t ext4 -m0 /dev/rbd/testbench/image01
Create a new directory:
sudo mkdir /mnt/ceph-block-device
Mount the block device under /mnt/ceph-block-device/:
sudo mount /dev/rbd/testbench/image01 /mnt/ceph-block-device
Execute the write performance test against the block device:
rbd bench-write image01 --pool=testbench
Example output:
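The defaults described above can be overridden by combining the options. The following sketch uses an 8192 byte I/O size, 32 threads, and a 2 GB (2147483648 byte) total; all values are chosen purely for illustration:
rbd bench-write image01 --pool=testbench --io-size 8192 --io-threads 32 --io-total 2147483648
When testing is complete, one way to tear down the test setup is to unmount the file system, unmap the block device, and remove the image:
sudo umount /mnt/ceph-block-device
sudo rbd unmap /dev/rbd/testbench/image01
sudo rbd rm image01 --pool testbench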