author    Peter Lieven <pl@kamp.de>    2014-05-18 00:58:19 +0200
committer Kevin Wolf <kwolf@redhat.com>    2014-05-19 13:42:27 +0200
commit    465bee1da82e43f18d10c43cc7566d0284ad13a9 (patch)
tree      c98c26268e8f10a77bfc39abb72f72f8c7ac9ee8 /include
parent    82a402e99f3f8c6177528ad6d561bf07ff6ee606 (diff)
block: optimize zero writes with bdrv_write_zeroes
This patch tries to optimize zero write requests by automatically using bdrv_write_zeroes if it is supported by the format. This significantly speeds up file system initialization and should speed up the zero write test used to measure backend storage performance.

I ran the following 2 tests on my internal SSD with a 50G QCOW2 container and on attached iSCSI storage.

a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX

QCOW2        [off]      [on]      [unmap]
-----
runtime:     14secs     1.1secs   1.1secs
filesize:    937M       18M       18M

iSCSI        [off]      [on]      [unmap]
----
runtime:     9.3s       0.9s      0.9s

b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct

QCOW2        [off]      [on]      [unmap]
-----
runtime:     246secs    18secs    18secs
filesize:    51G        192K      192K
throughput:  203M/s     2.3G/s    2.3G/s

iSCSI*       [off]      [on]      [unmap]
----
runtime:     8mins      45secs    33secs
throughput:  106M/s     1.2G/s    1.6G/s
allocated:   100%       100%      0%

* The storage was connected via a 1Gbit interface. It seems to handle writing zeroes internally via WRITESAME16 very quickly.

Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
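To make the idea concrete, the following is a standalone, illustrative sketch (not the actual QEMU write path): before submitting an ordinary write, the payload is scanned for non-zero bytes and, if none are found, the request is routed to a write-zeroes operation instead of allocating and writing data. The helpers buffer_is_all_zero(), data_write_backend() and zero_write_backend() are hypothetical stand-ins used only for this example.

/* Illustration of zero-write detection; not the actual QEMU code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for a "buffer is all zeroes" check. */
static bool buffer_is_all_zero(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    for (size_t i = 0; i < len; i++) {
        if (p[i] != 0) {
            return false;
        }
    }
    return true;
}

/* Hypothetical backends: a data write allocates space, while many formats
 * and storage backends can mark a range as zero without writing it out. */
static void data_write_backend(size_t len)
{
    printf("writing %zu bytes of data (allocates space)\n", len);
}

static void zero_write_backend(size_t len)
{
    printf("marking %zu bytes as zero (no allocation needed)\n", len);
}

/* Dispatch a write: detect zero payloads and take the cheap path. */
static void do_write(const void *buf, size_t len, bool detect_zeroes)
{
    if (detect_zeroes && buffer_is_all_zero(buf, len)) {
        zero_write_backend(len);
    } else {
        data_write_backend(len);
    }
}

int main(void)
{
    uint8_t zeroes[4096] = {0};
    uint8_t data[4096];
    memset(data, 0xab, sizeof(data));

    do_write(zeroes, sizeof(zeroes), true);   /* takes the zero path */
    do_write(data, sizeof(data), true);       /* takes the data path */
    return 0;
}

This is why the [on] and [unmap] columns above show both shorter runtimes and much smaller image files: zero payloads never reach the data-allocation path.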
Diffstat (limited to 'include')
-rw-r--r--  include/block/block_int.h  1
1 file changed, 1 insertion, 0 deletions
diff --git a/include/block/block_int.h b/include/block/block_int.h
index 9ffcb698d0..b8cc926bfe 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -364,6 +364,7 @@ struct BlockDriverState {
     BlockJob *job;
 
     QDict *options;
+    BlockdevDetectZeroesOptions detect_zeroes;
 };
 
 int get_tmp_filename(char *filename, int size);
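The new detect_zeroes field records which of the three modes ([off], [on], [unmap] in the tables above) was requested for the device. Below is a rough, self-contained sketch of how such a mode could gate the conversion of zero writes; the enum constants and request flags are illustrative placeholders, not the QAPI-generated names used by QEMU.

#include <stdio.h>

/* Illustrative modes; QEMU's real type is generated from the QAPI schema. */
typedef enum {
    DETECT_ZEROES_OFF,    /* write zero buffers as ordinary data          */
    DETECT_ZEROES_ON,     /* convert zero buffers into write-zeroes calls */
    DETECT_ZEROES_UNMAP,  /* as ON, and also allow unmapping the range    */
} DetectZeroesMode;

/* Hypothetical request flags for this sketch. */
#define REQ_ZERO_WRITE  (1 << 0)
#define REQ_MAY_UNMAP   (1 << 1)

/* Decide how a write whose payload is all zeroes should be issued. */
static int zero_write_flags(DetectZeroesMode mode)
{
    switch (mode) {
    case DETECT_ZEROES_ON:
        return REQ_ZERO_WRITE;
    case DETECT_ZEROES_UNMAP:
        return REQ_ZERO_WRITE | REQ_MAY_UNMAP;
    case DETECT_ZEROES_OFF:
    default:
        return 0;  /* no conversion: the zeroes are written as data */
    }
}

int main(void)
{
    printf("off:   0x%x\n", zero_write_flags(DETECT_ZEROES_OFF));
    printf("on:    0x%x\n", zero_write_flags(DETECT_ZEROES_ON));
    printf("unmap: 0x%x\n", zero_write_flags(DETECT_ZEROES_UNMAP));
    return 0;
}

The [unmap] column in the iSCSI table (allocated: 0%) corresponds to the third mode, where the backend is also allowed to deallocate the zeroed range rather than merely writing zeroes.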