author | Eric Blake <eblake@redhat.com> | 2017-07-07 07:44:57 -0500
committer | Kevin Wolf <kwolf@redhat.com> | 2017-07-10 13:18:07 +0200
commit | d6a644bbfef81bb6c7ab11656ad71e326f75ac77 (patch)
tree | 51e6a34d23229e6b01bf0c1fb81d6782854a07e8 /migration
parent | 6f8e35e2414433a56b4bd548b87b8ac2aedecb77 (diff)
block: Make bdrv_is_allocated() byte-based
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the signature of the function to use int64_t *pnum ensures
that the compiler enforces that all callers are updated. For now,
the io.c layer still assert()s that all callers are sector-aligned
on input and that *pnum is sector-aligned on return to the caller,
but that can be relaxed when a later patch implements byte-based
block status. Therefore, this code adds usages like
DIV_ROUND_UP(,BDRV_SECTOR_SIZE) to callers that still want aligned
values, where the call might reasonably give non-aligned results
in the future; on the other hand, no rounding is needed for callers
that should just continue to work with byte alignment.
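As a rough illustration of that scaling pattern (a standalone sketch, not QEMU code: query_allocated_bytes() is a made-up stand-in for the byte-based bdrv_is_allocated(), and SECTOR_BITS, SECTOR_SIZE and DIV_ROUND_UP are local copies of the usual definitions), a still-sector-based caller converts its request to bytes on the way in and rounds the byte answer back up to whole sectors on the way out:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_BITS 9
#define SECTOR_SIZE (1 << SECTOR_BITS)              /* 512 bytes */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Toy stand-in for the byte-based query: pretend the first 1000 bytes
 * of the image are allocated and everything after that is not. */
static bool query_allocated_bytes(int64_t offset, int64_t bytes,
                                  int64_t *pnum)
{
    if (offset < 1000) {
        int64_t n = 1000 - offset;
        *pnum = n < bytes ? n : bytes;
        return true;
    }
    *pnum = bytes;
    return false;
}

/* Sector-based wrapper in the style of the transitional callers:
 * scale sectors to bytes going in, round the byte answer up to
 * whole sectors coming back out. */
static bool is_allocated_sectors(int64_t sector, int nb_sectors,
                                 int *pnum_sectors)
{
    int64_t count;
    bool ret = query_allocated_bytes(sector << SECTOR_BITS,
                                     (int64_t)nb_sectors << SECTOR_BITS,
                                     &count);
    *pnum_sectors = DIV_ROUND_UP(count, SECTOR_SIZE);
    return ret;
}

int main(void)
{
    int n;
    bool allocated = is_allocated_sectors(0, 4, &n);
    /* 1000 allocated bytes round up to 2 whole sectors */
    printf("allocated=%d over %d sector(s)\n", allocated, n);
    return 0;
}

Rounding up rather than truncating is what keeps a partially allocated trailing sector counted as allocated in the sector-based view.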
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_is_allocated(). But
some code, particularly bdrv_commit(), gets a lot simpler because it
no longer has to mess with sectors; also, it is now possible to pass
NULL if the caller does not care how much of the image is allocated
beyond the initial offset. Leave comments where we can further
simplify once a later patch eliminates the need for sector-aligned
requests through bdrv_is_allocated().
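To sketch that simpler byte-based shape (probe_allocated() below is a toy stand-in with the contract assumed for the new bdrv_is_allocated(), including accepting NULL for pnum; the image layout is invented), a caller can walk an image in bytes and pass NULL when it only needs the answer at a single offset:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the byte-based query: report whether `offset` is
 * allocated and (if pnum is non-NULL) how many contiguous bytes share
 * that status.  Bytes [4096, 12288) are the only allocated range in
 * this made-up image. */
static bool probe_allocated(int64_t offset, int64_t bytes, int64_t *pnum)
{
    bool allocated = offset >= 4096 && offset < 12288;

    if (pnum) {
        int64_t edge = allocated ? 12288
                                 : (offset < 4096 ? 4096 : offset + bytes);
        int64_t n = edge - offset;
        *pnum = n < bytes ? n : bytes;
    }
    return allocated;
}

int main(void)
{
    int64_t length = 16384, offset, count;

    /* Byte-based walk: no sector arithmetic required. */
    for (offset = 0; offset < length; offset += count) {
        bool allocated = probe_allocated(offset, length - offset, &count);
        printf("[%6lld, +%5lld) %s\n", (long long)offset, (long long)count,
               allocated ? "allocated" : "unallocated");
    }

    /* Point query: pass NULL when the extent length is irrelevant. */
    printf("offset 8000 allocated? %d\n", probe_allocated(8000, 1, NULL));
    return 0;
}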
For ease of review, bdrv_is_allocated_above() will be tackled
separately.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Diffstat (limited to 'migration')
-rw-r--r-- | migration/block.c | 16 |
1 files changed, 10 insertions, 6 deletions
diff --git a/migration/block.c b/migration/block.c
index 7674ae1078..86c0b96cd1 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -34,7 +34,7 @@
 #define BLK_MIG_FLAG_PROGRESS           0x04
 #define BLK_MIG_FLAG_ZERO_BLOCK         0x08
 
-#define MAX_IS_ALLOCATED_SEARCH 65536
+#define MAX_IS_ALLOCATED_SEARCH (65536 * BDRV_SECTOR_SIZE)
 
 #define MAX_INFLIGHT_IO 512
 
@@ -267,16 +267,20 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
     BlockBackend *bb = bmds->blk;
     BlkMigBlock *blk;
     int nr_sectors;
+    int64_t count;
 
     if (bmds->shared_base) {
         qemu_mutex_lock_iothread();
         aio_context_acquire(blk_get_aio_context(bb));
-        /* Skip unallocated sectors; intentionally treats failure as
-         * an allocated sector */
+        /* Skip unallocated sectors; intentionally treats failure or
+         * partial sector as an allocated sector */
         while (cur_sector < total_sectors &&
-               !bdrv_is_allocated(blk_bs(bb), cur_sector,
-                                  MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
-            cur_sector += nr_sectors;
+               !bdrv_is_allocated(blk_bs(bb), cur_sector * BDRV_SECTOR_SIZE,
+                                  MAX_IS_ALLOCATED_SEARCH, &count)) {
+            if (count < BDRV_SECTOR_SIZE) {
+                break;
+            }
+            cur_sector += count >> BDRV_SECTOR_BITS;
         }
         aio_context_release(blk_get_aio_context(bb));
         qemu_mutex_unlock_iothread();
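For reference, a standalone sketch of the converted loop above (SECTOR_BITS, SECTOR_SIZE, SEARCH_BYTES and the is_allocated() stub are local stand-ins for BDRV_SECTOR_BITS, BDRV_SECTOR_SIZE, MAX_IS_ALLOCATED_SEARCH and bdrv_is_allocated(); the image layout is invented). It shows why the new count < BDRV_SECTOR_SIZE check is there: a reply shorter than one sector would shift down to zero sectors and the loop could never advance, so such a result is treated as allocated and the skip phase ends:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_BITS  9
#define SECTOR_SIZE  (1 << SECTOR_BITS)
#define SEARCH_BYTES (65536 * SECTOR_SIZE)

/* Made-up image: the first 2148 bytes are an unallocated hole (four
 * whole sectors plus 100 bytes); everything after that is allocated. */
static bool is_allocated(int64_t offset, int64_t bytes, int64_t *count)
{
    const int64_t hole_end = 2148;

    if (offset < hole_end) {
        int64_t n = hole_end - offset;
        *count = n < bytes ? n : bytes;
        return false;
    }
    *count = bytes;
    return true;
}

int main(void)
{
    int64_t cur_sector = 0, total_sectors = 8, count;

    /* Skip unallocated sectors.  A reply shorter than one sector is
     * treated as allocated: count >> SECTOR_BITS would be 0, so the
     * loop could otherwise never make progress. */
    while (cur_sector < total_sectors &&
           !is_allocated(cur_sector * SECTOR_SIZE, SEARCH_BYTES, &count)) {
        if (count < SECTOR_SIZE) {
            break;
        }
        cur_sector += count >> SECTOR_BITS;
    }
    printf("bulk copy starts at sector %lld\n", (long long)cur_sector);
    return 0;
}

In this made-up layout the hole covers four whole sectors plus a 100-byte remainder, so the sketch skips four sectors and then stops at the partial tail.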