When measuring the impact of using DRBD on a system’s I/O throughput, the absolute throughput the system is capable of is of little relevance. What is much more interesting is the relative impact DRBD has on I/O performance. Thus it is always necessary to measure I/O throughput both with and without DRBD.
**Caution:** The tests described in this section are intrusive; they overwrite data and bring DRBD devices out of sync. It is thus vital that you perform them only on scratch volumes which can be discarded after testing has completed.
I/O throughput estimation works by writing significantly large chunks of data to a block device and measuring the amount of time the system takes to complete the write operation. This can be done easily using a fairly ubiquitous utility, `dd`, whose reasonably recent versions include built-in throughput estimation.
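To see this in isolation, a single direct write to a disposable block device already yields a throughput figure in `dd`'s summary output. A minimal sketch, with `$SCRATCH_DEVICE` standing in for whatever scratch device you have available (the variable name is illustrative only):

```bash
# dd prints bytes written, elapsed time and the resulting rate on stderr
dd if=/dev/zero of=$SCRATCH_DEVICE bs=512M count=1 oflag=direct
```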
A simple `dd`-based throughput benchmark, assuming you have a scratch resource named `test` which is currently connected and in the secondary role on both nodes, looks like this:
```
# TEST_RESOURCE=test
# TEST_DEVICE=$(drbdadm sh-dev $TEST_RESOURCE)
# TEST_LL_DEVICE=$(drbdadm sh-ll-dev $TEST_RESOURCE)
# drbdadm primary $TEST_RESOURCE
# for i in $(seq 5); do
    dd if=/dev/zero of=$TEST_DEVICE bs=512M count=1 oflag=direct
  done
# drbdadm down $TEST_RESOURCE
# for i in $(seq 5); do
    dd if=/dev/zero of=$TEST_LL_DEVICE bs=512M count=1 oflag=direct
  done
```
This test simply writes a 512M chunk of data to your DRBD device, and then to its backing device for comparison. Both tests are repeated 5 times each to allow for some statistical averaging, and `oflag=direct` ensures the writes bypass the page cache, so `dd` measures the device's actual write throughput rather than memory speed. The relevant results are the throughput measurements generated by `dd`.
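If you prefer to average the five runs automatically rather than by eye, a small wrapper around the loop above can collect the rates from `dd`'s summary line. The following is a minimal sketch, not part of DRBD or its tools; it assumes a GNU coreutils `dd` whose English-locale summary ends in "… s, <rate> MB/s", and the script name and arguments are purely illustrative:

```bash
#!/bin/bash
# measure-throughput.sh (illustrative): average dd write throughput over N runs
# WARNING: overwrites data on DEVICE
DEVICE=$1          # device to test, e.g. $TEST_DEVICE or $TEST_LL_DEVICE
RUNS=${2:-5}       # number of repetitions, defaults to 5

total=0
for i in $(seq "$RUNS"); do
    # dd prints its transfer statistics on stderr; the rate is the
    # second-to-last field of the "... copied ..." summary line
    rate=$(dd if=/dev/zero of="$DEVICE" bs=512M count=1 oflag=direct 2>&1 \
           | awk '/copied/ {print $(NF-1)}')
    echo "run $i: $rate MB/s"
    total=$(echo "$total + $rate" | bc)
done
echo "average: $(echo "scale=1; $total / $RUNS" | bc) MB/s"
```

You would then run it once against the DRBD device and once against the backing device, and compare the two averages.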
**Note:** For freshly enabled DRBD devices, it is normal to see significantly reduced performance on the first `dd` run. This is caused by DRBD's activity log still being "cold", and is no cause for concern.
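If you want your measurements to exclude this warm-up effect, one option (an illustrative convention, not something this guide mandates) is to perform a single untimed write before the measured passes:

```bash
# Illustrative warm-up pass: one untimed write so the measured runs start
# with a "warm" activity log; its output is discarded
dd if=/dev/zero of=$TEST_DEVICE bs=512M count=1 oflag=direct 2>/dev/null
```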