path: root/block/backup.c
2019-01-15  Revert "hbitmap: Add @advance param to hbitmap_iter_next()"  (Vladimir Sementsov-Ogievskiy)
This reverts commit a33fbb4f8b64226becf502a123733776ce319b24. The functionality is unused. Note: in addition to the automatic revert, also drop the second parameter from the hbitmap_iter_next() call in hbitmap_next_dirty_area(). Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: John Snow <jsnow@redhat.com>
2019-01-15  dirty-bitmap: improve bdrv_dirty_bitmap_next_zero  (Vladimir Sementsov-Ogievskiy)
Add a bytes parameter to the function to limit the searched range. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
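As an illustration of what a byte-limited search can look like, here is a minimal, self-contained sketch; the ToyBitmap type, the byte-per-bit layout and toy_bitmap_next_zero() are invented for the example and are not QEMU's bdrv_dirty_bitmap_next_zero(). The extra bytes argument simply bounds the scan instead of letting it run to the end of the bitmap.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a dirty bitmap: one byte per "bit" for clarity. */
typedef struct {
    const uint8_t *bits;
    int64_t size;
} ToyBitmap;

/*
 * Return the offset of the first zero bit in [start, start + bytes),
 * or -1 if none is found within that range. The 'bytes' argument is what
 * limits the searched range instead of scanning to the end of the bitmap.
 */
static int64_t toy_bitmap_next_zero(const ToyBitmap *bm, int64_t start, int64_t bytes)
{
    int64_t end = start + bytes;
    if (end > bm->size) {
        end = bm->size;
    }
    for (int64_t i = start; i < end; i++) {
        if (bm->bits[i] == 0) {
            return i;
        }
    }
    return -1;
}

int main(void)
{
    const uint8_t bits[] = {1, 1, 0, 1, 0};
    ToyBitmap bm = { bits, sizeof(bits) };
    printf("%lld\n", (long long)toy_bitmap_next_zero(&bm, 0, 2));  /* -1: range too short */
    printf("%lld\n", (long long)toy_bitmap_next_zero(&bm, 0, 5));  /* 2 */
    return 0;
}
```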
2018-12-14  block/backup: drop unused synchronization interface  (Vladimir Sementsov-Ogievskiy)
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-08-31  block/backup: make function variables consistently named  (John Snow)
Rename opaque_job to job to be consistent with other job implementations. Rename 'job', the BackupBlockJob object, to 's' to also be consistent. Suggested-by: Eric Blake <eblake@redhat.com> Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180830015734.19765-8-jsnow@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-08-31  jobs: utilize job_exit shim  (John Snow)
Utilize the job_exit shim by not calling job_defer_to_main_loop, and where applicable, converting the deferred callback into the job_exit callback. This converts backup, stream, create, and the unit tests all at once. Most of these jobs do not see any changes to the order in which they clean up their resources, except the test-blockjob-txn test, which now puts down its bs before job_completed is called. This is safe for the same reason the reordering in the mirror job is safe, because job_completed no longer runs under two locks, making the unref safe even if it causes a flush. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180830015734.19765-7-jsnow@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-08-31  jobs: canonize Error object  (John Snow)
Jobs presently use both an Error object in the case of the create job, and char strings in the case of generic errors elsewhere. Unify the two paths as just j->err, and remove the extra argument from job_completed. The integer error code for job_completed is kept for now, to be removed shortly in a separate patch. Signed-off-by: John Snow <jsnow@redhat.com> Message-id: 20180830015734.19765-3-jsnow@redhat.com [mreitz: Dropped a superfluous g_strdup()] Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-08-31  jobs: change start callback to run callback  (John Snow)
Presently we codify the entry point for a job as the "start" callback, but a more apt name would be "run", to clarify the idea that when this function returns we consider the job to have "finished", except for any cleanup which occurs in separate callbacks later. As part of this clarification, change the signature to include an error object and a return code. The error pointer is not yet used, and the return code, while captured, will be overwritten by actions in the job_completed function. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20180830015734.19765-2-jsnow@redhat.com Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
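A minimal, self-contained sketch of the callback shape described above; ToyJob, ToyJobDriver, Error and toy_backup_run() are invented stand-ins rather than QEMU's types, shown only to make the "return code plus error object" signature concrete.

```c
#include <stdio.h>

/* Invented error type: real code would carry richer information. */
typedef struct {
    const char *msg;
} Error;

typedef struct ToyJob ToyJob;

typedef struct {
    /* The "run" entry point: returns a code and may fill *errp. */
    int (*run)(ToyJob *job, Error **errp);
} ToyJobDriver;

struct ToyJob {
    const ToyJobDriver *driver;
    int ret;
    Error *err;
};

static int toy_backup_run(ToyJob *job, Error **errp)
{
    (void)job;
    /* A failing job reports why it failed through the error pointer. */
    static Error err = { .msg = "source image disappeared" };
    *errp = &err;
    return -1;
}

int main(void)
{
    static const ToyJobDriver driver = { .run = toy_backup_run };
    ToyJob job = { .driver = &driver, .ret = 0, .err = NULL };

    /* When run() returns, the job is "finished" apart from later cleanup. */
    job.ret = job.driver->run(&job, &job.err);
    printf("ret=%d err=%s\n", job.ret, job.err ? job.err->msg : "(none)");
    return 0;
}
```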
2018-07-23  block: Fix typos in comments (found by codespell)  (Stefan Weil)
Signed-off-by: Stefan Weil <sw@weilnetz.de> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-07-10  block/backup: fix fleecing scheme: use serialized writes  (Vladimir Sementsov-Ogievskiy)
The fleecing scheme works as follows: we want a kind of temporary snapshot of an active drive A. We create a temporary image B with B->backing = A, then start backup(sync=none) from A to B. From this point, B reads as a point-in-time snapshot of A (A continues to be the active drive, accepting guest I/O). This scheme needs some additional synchronization between reads from B and backup COW operations; otherwise the following situation is theoretically possible (assume B is qcow2 and the client is an NBD client reading from B):
1) The client starts reading, takes the qcow2 mutex in qcow2_co_preadv, and gets as far as loading an L2 table (assume a cache miss).
2) A guest write triggers a backup COW, which triggers a qcow2 write, which tries to take the qcow2 mutex and waits.
3) The L2 table is loaded; we see that the cluster is UNALLOCATED, go to "case QCOW2_CLUSTER_UNALLOCATED", and unlock the mutex before bdrv_co_preadv(bs->backing, ...).
4) With the mutex unlocked, the backup COW continues, the guest write finally finishes, and the cluster in the active disk A is changed.
5) Only now do we perform bdrv_co_preadv(bs->backing, ...) and read the _newly updated_ data.
To avoid this, make backup writes serializing, so that they do not intersect with reads from B. Note: we expand the range of handled cases from (sync=none and B->backing = A) to just (A in the backing chain of B), to finally allow safe reading from B during backup in all cases where A is in the backing chain of B, i.e. where B formally looks like a point-in-time snapshot of A. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
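A toy illustration of the serialization idea; the Req type and the in-memory request list are hypothetical, and in QEMU the actual waiting is done with coroutine queues rather than a boolean check. The point shown is only the invariant: a backup COW write must not proceed while it overlaps an in-flight read from the fleecing target.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical in-flight request record, byte granularity. */
typedef struct {
    int64_t start;
    int64_t end;   /* exclusive */
} Req;

static bool ranges_overlap(const Req *a, const Req *b)
{
    return a->start < b->end && b->start < a->end;
}

/*
 * Before a serialized COW write proceeds, check whether it intersects any
 * in-flight read; if it does, the writer has to wait for that read to finish.
 */
static bool write_must_wait(const Req *write, const Req *inflight_reads, int n)
{
    for (int i = 0; i < n; i++) {
        if (ranges_overlap(write, &inflight_reads[i])) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    Req reads[] = { { 0, 65536 }, { 1 << 20, (1 << 20) + 65536 } };
    Req cow_write = { 4096, 8192 };
    printf("writer must wait: %d\n", write_must_wait(&cow_write, reads, 2));
    return 0;
}
```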
2018-07-10  block: split flags in copy_range  (Vladimir Sementsov-Ogievskiy)
Pass read flags and write flags separately. This is needed to handle the upcoming BDRV_REQ_NO_SERIALISING clearly in the following patches. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-07-02  backup: Use copy offloading  (Fam Zheng)
The implementation is similar to that of 'qemu-img convert'. At the beginning of the job, an offloaded copy is attempted. If it fails, further I/O will go through the existing bounce buffer code path. Then, as Kevin pointed out, both this and qemu-img convert can benefit from a local check: one request may fail because, for example, its offset is beyond EOF, while another may well be accepted by the protocol layer. This will be implemented separately. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Message-id: 20180703023758.14422-4-famz@redhat.com Signed-off-by: Jeff Cody <jcody@redhat.com>
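For intuition only, here is a self-contained, Linux-only analogy to this strategy; plain files and copy_file_range() stand in for QEMU's block-layer offload, and the chunk size and file handling are made up. The shape is the same: attempt the offloaded copy first and fall back to a bounce buffer when the offload is refused.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Try an offloaded copy via copy_file_range(); on failure (e.g. EXDEV or
 * ENOSYS) fall back to a bounce buffer with plain read()/write(). */
static int copy_chunk(int in_fd, int out_fd, size_t len)
{
    ssize_t n = copy_file_range(in_fd, NULL, out_fd, NULL, len, 0);
    if (n >= 0) {
        return 0;               /* offload worked (possibly a short copy) */
    }

    /* Offload refused: bounce-buffer path. */
    char *buf = malloc(len);
    if (!buf) {
        return -1;
    }
    ssize_t r = read(in_fd, buf, len);
    int ret = (r >= 0 && write(out_fd, buf, r) == r) ? 0 : -1;
    free(buf);
    return ret;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return 1;
    }
    int in_fd = open(argv[1], O_RDONLY);
    int out_fd = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in_fd < 0 || out_fd < 0) {
        perror("open");
        return 1;
    }
    int ret = copy_chunk(in_fd, out_fd, 64 * 1024);
    close(in_fd);
    close(out_fd);
    return ret ? 1 : 0;
}
```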
2018-06-18  hbitmap: Add @advance param to hbitmap_iter_next()  (Max Reitz)
This new parameter allows the caller to just query the next dirty position without moving the iterator. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Message-id: 20180613181823.13618-8-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2018-05-30  job: Add error message for failing jobs  (Kevin Wolf)
So far we relied on job->ret and strerror() to produce an error message for failed jobs. Not surprisingly, this tends to result in completely useless messages. This adds a Job.error field that can contain an error string for a failing job, and a parameter to job_completed() that sets the field. As a default, if NULL is passed, we continue to use strerror(job->ret). All existing callers are changed to pass NULL. They can be improved in separate patches. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com>
2018-05-23  job: Move progress fields to Job  (Kevin Wolf)
BlockJob has fields .offset and .len, which are actually misnomers today because they are no longer tied to block device sizes, but just progress counters. As such they make a lot of sense in generic Jobs. This patch moves the fields to Job and renames them to .progress_current and .progress_total to describe their function better. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Add job_yield()  (Kevin Wolf)
This moves block_job_yield() to the Job layer. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move completion and cancellation to Job  (Kevin Wolf)
This moves the top-level job completion and cancellation functions from BlockJob to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2018-05-23  job: Switch transactions to JobTxn  (Kevin Wolf)
This doesn't actually move any transaction code to Job yet, but it renames the type for transactions from BlockJobTxn to JobTxn and makes them contain Jobs rather than BlockJobs. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Add job_drain()  (Kevin Wolf)
block_job_drain() contains a blk_drain() call which cannot be moved to Job, so add a new JobDriver callback JobDriver.drain which has a common implementation for all BlockJobs. In addition to this we keep the existing BlockJobDriver.drain callback that is called by the common drain implementation for all block jobs. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move single job finalisation to Job  (Kevin Wolf)
This moves the finalisation of a single job from BlockJob to Job. Because part of this code depends on job transactions, and job transactions call this code, we introduce some temporary calls from Job functions to BlockJob ones. This will be fixed once transactions move to Job, too. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move pause/resume functions to Job  (Kevin Wolf)
While we already moved the state related to job pausing to Job, the functions to do so were still BlockJob-only. This commit moves them over to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Add job_sleep_ns()  (Kevin Wolf)
There is nothing block layer specific about block_job_sleep_ns(), so move the function to Job. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2018-05-23  job: Move coroutine and related code to Job  (Kevin Wolf)
This commit moves some core functions for dealing with the job coroutine from BlockJob to Job. This primarily includes entering the coroutine (both for the first time and when reentering) and yielding explicitly and at pause points. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Move defer_to_main_loop to Job  (Kevin Wolf)
Move the defer_to_main_loop functionality from BlockJob to Job. The code can be simplified because we can use job->aio_context in job_defer_to_main_loop_bh() now, instead of having to access the BlockDriverState. Taking the data->aio_context lock in addition was probably already unnecessary in the old code, because we didn't actually make use of anything protected by the old AioContext except getting the new AioContext, in case it changed between scheduling the BH and running it. But it is certainly unnecessary now that the BDS isn't accessed at all any more. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Move cancelled to Job  (Kevin Wolf)
We cannot yet move the whole logic around job cancelling to Job because it depends on quite a few other things that are still only in BlockJob, but we can move the cancelled field at least. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Add reference counting  (Kevin Wolf)
This moves reference counting from BlockJob to Job. In order to keep calling the BlockJob cleanup code when the job is deleted via job_unref(), introduce a new JobDriver.free callback. Every block job must use block_job_free() for this callback; this is asserted in block_job_create(). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
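A minimal sketch of the pattern described above, using invented ToyJob/ToyJobDriver types rather than QEMU's code: the last job_unref() invokes the driver-provided free callback before releasing the object.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct ToyJob ToyJob;

typedef struct {
    void (*free)(ToyJob *job);   /* driver-specific cleanup, run on last unref */
} ToyJobDriver;

struct ToyJob {
    const ToyJobDriver *driver;
    int refcnt;
};

static void job_ref(ToyJob *job)
{
    job->refcnt++;
}

static void job_unref(ToyJob *job)
{
    if (--job->refcnt == 0) {
        if (job->driver->free) {
            job->driver->free(job);   /* analogous to block_job_free() for block jobs */
        }
        free(job);
    }
}

static void toy_free(ToyJob *job)
{
    printf("driver cleanup for job %p\n", (void *)job);
}

int main(void)
{
    static const ToyJobDriver driver = { .free = toy_free };
    ToyJob *job = calloc(1, sizeof(*job));
    job->driver = &driver;
    job->refcnt = 1;

    job_ref(job);    /* e.g. a transaction holds a second reference */
    job_unref(job);
    job_unref(job);  /* last reference: driver->free() runs, then the job is freed */
    return 0;
}
```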
2018-05-23  job: Add JobDriver.job_type  (Kevin Wolf)
This moves the job_type field from BlockJobDriver to JobDriver. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Rename BlockJobType into JobType  (Kevin Wolf)
QAPI types aren't externally visible, so we can rename them without causing problems. Before we add a job type to Job, rename the enum so it can be used for more than just block jobs. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-23  job: Create Job, JobDriver and job_create()  (Kevin Wolf)
This is the first step towards creating an infrastructure for generic background jobs that aren't tied to a block device. For now, Job only stores its ID and JobDriver, the rest stays in BlockJob. The following patches will move over more parts of BlockJob to Job if they are meaningful outside the context of a block job. BlockJob.driver is now redundant, but this patch leaves it around to avoid unnecessary churn. The next patches will get rid of almost all of its uses anyway so that it can be removed later with much less churn. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-15  blockjob: Add block_job_driver()  (Kevin Wolf)
The backup block job directly accesses the driver field in BlockJob. Add a wrapper for getting it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-15  blockjob: Introduce block_job_ratelimit_get_delay()  (Kevin Wolf)
This gets rid of more direct accesses to BlockJob fields from the job drivers. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
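To illustrate the kind of helper this introduces, here is a simplified, self-contained rate limiter; the field names, the 100 ms slice and the quota arithmetic are assumptions for the example, not QEMU's RateLimit. It returns the delay the job should sleep after dispatching n bytes of work.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy rate limiter: a target speed in bytes/second and the work already
 * dispatched in the current time slice. */
typedef struct {
    uint64_t speed;          /* bytes per second; 0 means unlimited */
    uint64_t dispatched;     /* bytes handled in the current 100 ms slice */
} ToyRateLimit;

#define SLICE_NS 100000000ULL   /* 100 ms time slice */

static uint64_t toy_ratelimit_delay(ToyRateLimit *limit, uint64_t n)
{
    if (limit->speed == 0) {
        return 0;                            /* no throttling configured */
    }
    limit->dispatched += n;
    uint64_t quota = limit->speed / 10;      /* bytes allowed per 100 ms slice */
    if (limit->dispatched <= quota) {
        return 0;                            /* still within budget */
    }
    limit->dispatched = 0;                   /* start a fresh slice after sleeping */
    return SLICE_NS;
}

int main(void)
{
    ToyRateLimit rl = { .speed = 10 * 1024 * 1024 };   /* 10 MiB/s */
    uint64_t delay = toy_ratelimit_delay(&rl, 2 * 1024 * 1024);
    printf("delay: %llu ns\n", (unsigned long long)delay);
    return 0;
}
```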
2018-05-15  blockjob: Implement block_job_set_speed() centrally  (Kevin Wolf)
All block job drivers support .set_speed and all of them duplicate the same code to implement it. Move that code to blockjob.c and remove the now useless callback. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-15  blockjob: Move RateLimit to BlockJob  (Kevin Wolf)
Every block job has a RateLimit, and they all do the exact same thing with it, so it should be common infrastructure. Move the struct field for a start. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-05-15  blockjob: Wrappers for progress counter access  (Kevin Wolf)
Block job drivers are not expected to mess with the internals of the BlockJob object, so provide wrapper functions for one of the cases where they still do it: Updating the progress counter. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>
2018-03-19  blockjobs: ensure abort is called for cancelled jobs  (John Snow)
Presently, even if a job is canceled post-completion as a result of a failing peer in a transaction, it will still call .commit because nothing has updated or changed its return code. The reason why this does not cause problems currently is because backup's implementation of .commit checks for cancellation itself. I'd like to simplify this contract: (1) Abort is called if the job/transaction fails (2) Commit is called if the job/transaction succeeds To this end: A job's return code, if 0, will be forcibly set as -ECANCELED if that job has already concluded. Remove the now redundant check in the backup job implementation. We need to check for cancellation in both block_job_completed AND block_job_completed_single, because jobs may be cancelled between those two calls; for instance in transactions. This also necessitates an ABORTING -> ABORTING transition to be allowed. The check in block_job_completed could be removed, but there's no point in starting to attempt to succeed a transaction that we know in advance will fail. This does NOT affect mirror jobs that are "canceled" during their synchronous phase. The mirror job itself forcibly sets the canceled property to false prior to ceding control, so such cases will invoke the "commit" callback. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
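The contract can be summarized with a small, self-contained sketch (toy types, not QEMU's job code): a job that was cancelled but would otherwise have returned 0 is forced to fail, so the abort path, not commit, runs.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the contract: abort on failure, commit on success, and a
 * cancelled-but-otherwise-successful job is forced to count as a failure. */
typedef struct {
    bool cancelled;
    int ret;
    void (*commit)(void);
    void (*abort)(void);
} ToyJob;

static void toy_job_conclude(ToyJob *job)
{
    if (job->cancelled && job->ret == 0) {
        job->ret = -ECANCELED;    /* cancellation always counts as failure */
    }
    if (job->ret == 0) {
        job->commit();
    } else {
        job->abort();
    }
}

static void do_commit(void) { puts("commit"); }
static void do_abort(void)  { puts("abort"); }

int main(void)
{
    ToyJob job = { .cancelled = true, .ret = 0,
                   .commit = do_commit, .abort = do_abort };
    toy_job_conclude(&job);       /* prints "abort" */
    return 0;
}
```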
2018-03-19  blockjobs: model single jobs as transactions  (John Snow)
Model all independent jobs as single-job transactions. It's one less case we have to worry about when we add more states to the transition machine. This way, we can just treat all job lifetimes exactly the same. This helps tighten assertions of the STM graph and removes some conditionals that would have been needed in the coming commits adding a more explicit job lifetime management API. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-12-18  backup: use copy_bitmap in incremental backup  (Vladimir Sementsov-Ogievskiy)
We can use copy_bitmap instead of sync_bitmap. copy_bitmap is initialized from sync_bitmap and is more informative: we will not try to process data that is already in progress (being handled by a write notifier). Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 20171012135313.227864-6-vsementsov@virtuozzo.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-12-18  backup: simplify non-dirty bits progress processing  (Vladimir Sementsov-Ogievskiy)
Set fake progress for non-dirty clusters in the copy_bitmap initialization, too. This simplifies the code and allows further refactoring. The patch changes the user's view of backup progress, but formally nothing is changed: the progress hops are just moved to the beginning. It's really only a point of view: when do we actually skip clusters? We can say at the very beginning that we skip these clusters and not think about them later. Of course, if we go through the disk sequentially, it's logical to say that we skip clusters between the copied portions to the left and to the right of them. But even now copying progress is not sequential, because of write notifiers. Future patches will introduce a new backup architecture which will do the copying in several coroutines in parallel, so it will make no sense to publish fake progress piecemeal in parallel with other copying requests. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 20171012135313.227864-5-vsementsov@virtuozzo.com Signed-off-by: Jeff Cody <jcody@redhat.com>
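A toy illustration of the "fake progress" idea; the sizes, names and the byte-per-cluster bitmaps are invented for the example. Clusters that the sync bitmap marks clean will never be copied, so their share of the total can be published as progress at initialization time.

```c
#include <stdint.h>
#include <stdio.h>

#define CLUSTERS     8
#define CLUSTER_SIZE (64 * 1024)

int main(void)
{
    /* One byte per cluster: 1 = dirty (needs copying), 0 = clean (skipped). */
    const uint8_t sync_bitmap[CLUSTERS] = {0, 1, 0, 0, 1, 1, 0, 0};
    uint8_t copy_bitmap[CLUSTERS];
    int64_t progress = 0;

    for (int i = 0; i < CLUSTERS; i++) {
        copy_bitmap[i] = sync_bitmap[i];      /* init copy_bitmap from sync_bitmap */
        if (!copy_bitmap[i]) {
            progress += CLUSTER_SIZE;         /* clean cluster: report it as done now */
        }
    }

    printf("progress after init: %lld of %lld bytes\n",
           (long long)progress, (long long)(CLUSTERS * (int64_t)CLUSTER_SIZE));
    return 0;
}
```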
2017-12-18  backup: init copy_bitmap from sync_bitmap for incremental  (Vladimir Sementsov-Ogievskiy)
We should not copy non-dirty clusters in write notifiers. So, initialize copy_bitmap from sync_bitmap. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 20171012135313.227864-4-vsementsov@virtuozzo.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-12-18  backup: move from done_bitmap to copy_bitmap  (Vladimir Sementsov-Ogievskiy)
Use HBitmap copy_bitmap instead of done_bitmap. This is needed to improve incremental backup in following patches and to unify backup loop for full/incremental modes in future patches. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Message-id: 20171012135313.227864-3-vsementsov@virtuozzo.com Signed-off-by: Jeff Cody <jcody@redhat.com>
2017-11-29  blockjob: remove clock argument from block_job_sleep_ns  (Paolo Bonzini)
All callers are using QEMU_CLOCK_REALTIME, and it will not be possible to support more than one clock when block_job_sleep_ns switches to a single timer stored in the BlockJob struct. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Tested-By: Jeff Cody <jcody@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-06  dirty-bitmap: Change bdrv_dirty_iter_next() to report byte offset  (Eric Blake)
Thanks to recent cleanups, most callers were scaling a return value of sectors into bytes (the exception, in qcow2-bitmap, will be converted to byte-based iteration later). Update the interface to do the scaling internally instead. In qcow2-bitmap, the code was specifically checking for an error return of -1. To avoid a regression, we either have to make sure we continue to return -1 (rather than a scaled -512) on error, or we have to fix the caller to treat all negative values as error rather than just one magic value. It's easy enough to make both changes at the same time, even though either one in isolation would work. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-06  dirty-bitmap: Set iterator start by offset, not sector  (Eric Blake)
All callers to bdrv_dirty_iter_new() passed 0 for their initial starting point, so drop that parameter. Most callers to bdrv_set_dirty_iter() were scaling a byte offset to a sector number; the exception, qcow2-bitmap, will be converted later to use byte rather than sector iteration. Move the scaling to occur internally in the dirty bitmap code instead, so that callers now pass in bytes. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-09-04  qapi: Mechanically convert FOO_lookup[...] to FOO_str(...)  (Markus Armbruster)
Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <1503564371-26090-14-git-send-email-armbru@redhat.com> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
2017-07-13  Convert error_report() to warn_report()  (Alistair Francis)
Convert all uses of error_report("warning:"... to use warn_report() instead. This helps standardise on a single method of printing warnings to the user. All of the warnings were changed using these two commands: find ./* -type f -exec sed -i \ 's|error_report(".*warning[,:] |warn_report("|Ig' {} + Indentation fixed up manually afterwards. The test-qdev-global-props test case was manually updated to ensure that this patch passes make check (as the test cases are case sensitive). Signed-off-by: Alistair Francis <alistair.francis@xilinx.com> Suggested-by: Thomas Huth <thuth@redhat.com> Cc: Jeff Cody <jcody@redhat.com> Cc: Kevin Wolf <kwolf@redhat.com> Cc: Max Reitz <mreitz@redhat.com> Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Lieven <pl@kamp.de> Cc: Josh Durgin <jdurgin@redhat.com> Cc: "Richard W.M. Jones" <rjones@redhat.com> Cc: Markus Armbruster <armbru@redhat.com> Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com> Cc: Richard Henderson <rth@twiddle.net> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> Cc: Greg Kurz <groug@kaod.org> Cc: Rob Herring <robh@kernel.org> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Peter Chubb <peter.chubb@nicta.com.au> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Marcel Apfelbaum <marcel@redhat.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Igor Mammedov <imammedo@redhat.com> Cc: David Gibson <david@gibson.dropbear.id.au> Cc: Alexander Graf <agraf@suse.de> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Cornelia Huck <cohuck@redhat.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Acked-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Greg Kurz <groug@kaod.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed by: Peter Chubb <peter.chubb@data61.csiro.au> Acked-by: Max Reitz <mreitz@redhat.com> Acked-by: Marcel Apfelbaum <marcel@redhat.com> Message-Id: <e1cfa2cd47087c248dd24caca9c33d9af0c499b0.1499866456.git.alistair.francis@xilinx.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com>
2017-07-10  block: Make bdrv_is_allocated() byte-based  (Eric Blake)
We are gradually moving away from sector-based interfaces, towards byte-based. In the common case, allocation is unlikely to ever use values that are not naturally sector-aligned, but it is possible that byte-based values will let us be more precise about allocation at the end of an unaligned file that can do byte-based access. Changing the signature of the function to use int64_t *pnum ensures that the compiler enforces that all callers are updated. For now, the io.c layer still assert()s that all callers are sector-aligned on input and that *pnum is sector-aligned on return to the caller, but that can be relaxed when a later patch implements byte-based block status. Therefore, this code adds usages like DIV_ROUND_UP(,BDRV_SECTOR_SIZE) to callers that still want aligned values, where the call might reasonably give non-aligned results in the future; on the other hand, no rounding is needed for callers that should just continue to work with byte alignment. For the most part this patch is just the addition of scaling at the callers followed by inverse scaling at bdrv_is_allocated(). But some code, particularly bdrv_commit(), gets a lot simpler because it no longer has to mess with sectors; also, it is now possible to pass NULL if the caller does not care how much of the image is allocated beyond the initial offset. Leave comments where we can further simplify once a later patch eliminates the need for sector-aligned requests through bdrv_is_allocated(). For ease of review, bdrv_is_allocated_above() will be tackled separately. Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
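For the scaling mentioned above, a trivial self-contained example; only the DIV_ROUND_UP and BDRV_SECTOR_SIZE names mirror identifiers used in QEMU, the rest is illustrative. It rounds a byte count up to whole 512-byte sectors for callers that still want sector-aligned values.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define BDRV_SECTOR_SIZE 512
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    int64_t allocated_bytes = 1000;   /* e.g. the tail of an unaligned file */
    int64_t sectors = DIV_ROUND_UP(allocated_bytes, BDRV_SECTOR_SIZE);
    printf("%" PRId64 " bytes -> %" PRId64 " sector(s), rounded up\n",
           allocated_bytes, sectors);
    return 0;
}
```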
2017-07-10  backup: Switch backup_run() to byte-based  (Eric Blake)
We are gradually converting to byte-based interfaces, as they are easier to reason about than sector-based. Change the internal loop iteration of backups to track by bytes instead of sectors (although we are still guaranteed that we iterate by steps that are cluster-aligned). Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
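The shape of the byte-based loop can be sketched in a few self-contained lines; the cluster size and device length are made up, and the real job also consults the copy/dirty bitmap, write notifiers and rate limiting. The point is simply iterating in cluster-sized byte steps and clamping the final chunk.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define CLUSTER_SIZE (64 * 1024)

/* Walk the device in cluster-sized byte steps, clamping the last chunk. */
static void backup_loop_shape(int64_t len)
{
    for (int64_t offset = 0; offset < len; offset += CLUSTER_SIZE) {
        int64_t bytes = len - offset < CLUSTER_SIZE ? len - offset : CLUSTER_SIZE;
        /* The real job would copy (or COW) the range [offset, offset + bytes) here. */
        printf("copy [%" PRId64 ", %" PRId64 ")\n", offset, offset + bytes);
    }
}

int main(void)
{
    backup_loop_shape(3 * CLUSTER_SIZE + 4096);   /* unaligned tail is clamped */
    return 0;
}
```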
2017-07-10  backup: Switch backup_do_cow() to byte-based  (Eric Blake)
We are gradually converting to byte-based interfaces, as they are easier to reason about than sector-based. Convert another internal function (no semantic change). Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-07-10  backup: Switch block_backup.h to byte-based  (Eric Blake)
We are gradually converting to byte-based interfaces, as they are easier to reason about than sector-based. Continue by converting the public interface to backup jobs (no semantic change), including a change to CowRequest to track by bytes instead of cluster indices. Note that this does not change the difference between the public interface (starting point, and size of the subsequent range) and the internal interface (starting and end points). Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Xie Changlong <xiechanglong@cmss.chinamobile.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-07-10  backup: Switch BackupBlockJob to byte-based  (Eric Blake)
We are gradually converting to byte-based interfaces, as they are easier to reason about than sector-based. Continue by converting an internal structure (no semantic change), and all references to tracking progress. Drop a redundant local variable bytes_per_cluster. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-07-10  trace: Show blockjob actions via bytes, not sectors  (Eric Blake)
Upcoming patches are going to switch to byte-based interfaces instead of sector-based. Even worse, trace_backup_do_cow_enter() had a weird mix of cluster and sector indices. The trace interface is low enough that there are no stability guarantees, and therefore nothing wrong with changing our units, even in cases like trace_backup_do_cow_skip() where we are not changing the trace output. So make the tracing uniformly use bytes. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>