| author    | Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> | 2019-08-06 18:26:11 +0300 |
|-----------|---------------------------------------------------------|---------------------------|
| committer | John Snow <jsnow@redhat.com>                            | 2019-10-17 17:02:32 -0400 |
| commit    | 48557b138383aaf69c2617ca9a88bfb394fc50ec (patch)        |                           |
| tree      | 7344a7f905f1fd6d10ae34efe8e43335032f519b /util          |                           |
| parent    | f22f553efffd083ff624be116726f843a39f1148 (diff)         |                           |
util/hbitmap: strict hbitmap_reset
hbitmap_reset has an unobvious property: it rounds the requested region up
to granularity boundaries. This can provoke bugs, as in the recently fixed
write-blocking mode of mirror: the user calls reset on an unaligned region,
not keeping in mind that the rounded-up region may cover unrelated dirty
bytes, whose "dirtiness" information is then silently lost.
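
A minimal sketch of that pre-patch information loss, written in the style of
a QEMU unit test (the hbitmap calls are the real API from
include/qemu/hbitmap.h; the function name and concrete values are
illustrative):

```c
#include "qemu/osdep.h"
#include "qemu/hbitmap.h"

static void show_unaligned_reset_loss(void)
{
    /* granularity 3: one bitmap bit covers 2^3 = 8 items */
    HBitmap *hb = hbitmap_alloc(64, 3);

    hbitmap_set(hb, 0, 8);    /* dirty items 0..7 (one full granule) */
    hbitmap_reset(hb, 0, 4);  /* ask to clear items 0..3 only        */

    /* Pre-patch behavior: the region is rounded up to the granule,
     * so the unrelated dirtiness of items 4..7 is cleared as well. */
    assert(!hbitmap_get(hb, 6));

    hbitmap_free(hb);
}
```

With this patch applied, the same hbitmap_reset call aborts on the new
assertion instead of silently dropping the dirty bits.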
Make hbitmap_reset strict: assert that the arguments are aligned, allowing
only one exception: @start + @count == hb->orig_size. This exception
accommodates callers of hbitmap_next_dirty_area, which cares about
hb->orig_size.
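
Under the strict contract, a caller that really wants to clear an unaligned
region must widen it explicitly, accepting that the boundary granules are
cleared in full. A hedged caller-side sketch (hbitmap_reset_widened is a
hypothetical helper, not part of the patch; it assumes the caller kept the
size and granularity it passed to hbitmap_alloc):

```c
#include "qemu/osdep.h"
#include "qemu/hbitmap.h"

static void hbitmap_reset_widened(HBitmap *hb, uint64_t orig_size,
                                  int granularity,
                                  uint64_t start, uint64_t count)
{
    uint64_t gran = 1ULL << granularity;
    uint64_t end = MIN(QEMU_ALIGN_UP(start + count, gran), orig_size);

    /* Align start down and end up; clamping end to orig_size keeps the
     * call valid via the @start + @count == orig_size exception. */
    start = QEMU_ALIGN_DOWN(start, gran);
    hbitmap_reset(hb, start, end - start);
}
```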
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20190806152611.280389-1-vsementsov@virtuozzo.com>
[Maintainer edit: Max's suggestions from on-list. --js]
[Maintainer edit: Eric's suggestion for aligned macro. --js]
Signed-off-by: John Snow <jsnow@redhat.com>
Diffstat (limited to 'util')
| -rw-r--r-- | util/hbitmap.c | 4 |

1 file changed, 4 insertions, 0 deletions
```diff
diff --git a/util/hbitmap.c b/util/hbitmap.c
index fd44c897ab..66db87c6ff 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
     /* Compute range in the last layer. */
     uint64_t first;
     uint64_t last = start + count - 1;
+    uint64_t gran = 1ULL << hb->granularity;
+
+    assert(QEMU_IS_ALIGNED(start, gran));
+    assert(QEMU_IS_ALIGNED(count, gran) || (start + count == hb->orig_size));
 
     trace_hbitmap_reset(hb, start, count,
                         start >> hb->granularity, last >> hb->granularity);
```
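
To illustrate why the orig_size exception is needed: when the bitmap size is
not a multiple of the granule, the final granule is partial, so a reset that
runs exactly to the end of the bitmap cannot have an aligned count. A small
sketch with illustrative values:

```c
static void reset_tail_of_unaligned_bitmap(void)
{
    HBitmap *hb = hbitmap_alloc(100, 3);  /* 100 items, granule of 8 */

    hbitmap_set(hb, 96, 4);    /* dirty the final, partial granule      */
    hbitmap_reset(hb, 96, 4);  /* count 4 is unaligned, but             */
                               /* 96 + 4 == orig_size: assertion passes */
    hbitmap_free(hb);
}
```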