path: root/block/backup.c
2023-01-20  include/block: Untangle inclusion loops  (Markus Armbruster)
We have two inclusion loops:

  block/block.h -> block/block-global-state.h -> block/block-common.h -> block/blockjob.h -> block/block.h
  block/block.h -> block/block-io.h -> block/block-common.h -> block/blockjob.h -> block/block.h

I believe these go back to Emanuele's reorganization of the block API, merged a few months ago in commit d7e2fe4aac8. Fortunately, breaking them is merely a matter of deleting unnecessary includes from headers, and adding them back in places where they are now missing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221221133551.3967339-2-armbru@redhat.com>
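A minimal sketch of the kind of change involved, using hypothetical file names (the actual headers and includes touched are the ones in the patch, not these):

    /* block/block-common.h (sketch): this header does not actually need
     * blockjob.h, so deleting the include below is what breaks the cycle. */
    /* #include "block/blockjob.h" */    /* removed: unnecessary here */

    /* block/example-user.c (sketch): the include is added back in the C file
     * that really uses the blockjob definitions. */
    #include "block/blockjob.h"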
2022-10-27  backup: remove incorrect coroutine_fn annotation  (Alberto Faria)
The .set_speed callback is not called from a coroutine. Signed-off-by: Alberto Faria <afaria@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221013123711.620631-2-pbonzini@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2022-06-24  block: simplify handling of try to merge different sized bitmaps  (Vladimir Sementsov-Ogievskiy)
We have too much logic just to check that bitmaps are of the same size. Let's simply define that hbitmap_merge() and bdrv_dirty_bitmap_merge_internal() require their argument bitmaps to be of the same size; this simplifies things.

Let's look through the callers: For backup_init_bcs_bitmap() we already assert that merge can't fail. In bdrv_reclaim_dirty_bitmap_locked() we gracefully handle an error that can't happen: a successor always has the same size as its parent, so drop this logic. In bdrv_merge_dirty_bitmap() we already have an assertion and a separate check. Make the check explicit and improve the error message.

Signed-off-by: Vladimir Sementsov-Ogievskiy <v.sementsov-og@mail.ru>
Reviewed-by: Nikita Lapshin <nikita.lapshin@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220517111206.23585-4-v.sementsov-og@mail.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
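A hedged sketch of what such an explicit check in bdrv_merge_dirty_bitmap() might look like (the exact error wording and the surrounding control flow are assumptions):

    /* Sketch: refuse to merge bitmaps of different sizes with a clear error. */
    if (bdrv_dirty_bitmap_size(src) != bdrv_dirty_bitmap_size(dest)) {
        error_setg(errp, "Bitmaps are of different sizes: destination is %"
                   PRId64 " bytes, source is %" PRId64 " bytes",
                   bdrv_dirty_bitmap_size(dest), bdrv_dirty_bitmap_size(src));
        return;
    }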
2022-03-04  assertions for block_int global state API  (Emanuele Giuseppe Esposito)
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com> Message-Id: <20220303151616.325444-13-eesposit@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2021-10-07  jobs: Give Job.force_cancel more meaning  (Hanna Reitz)
We largely have two cancel modes for jobs: First, there is actual cancelling. The job is terminated as soon as possible, without trying to reach a consistent result. Second, we have mirror in the READY state. Technically, the job is not really cancelled, it is just a different completion mode. The job can still run for an indefinite amount of time while it tries to reach a consistent result.

We want to be able to clearly distinguish which cancel mode a job is in (when it has been cancelled). We can use Job.force_cancel for this, but right now it only reflects cancel requests from the user with force=true; clearly, jobs that do not even distinguish between force=false and force=true are effectively always force-cancelled.

So this patch has Job.force_cancel signify whether the job will terminate as soon as possible (force_cancel=true) or whether it will effectively remain running despite being "cancelled" (force_cancel=false).

To this end, we let jobs that provide JobDriver.cancel() tell the generic job code whether they will terminate as soon as possible or not, and for jobs that do not provide that method we assume they will.

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20211006151940.214590-7-hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
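A hedged sketch of the generic-job side of this, assuming the .cancel() callback reports whether the job will terminate as soon as possible (the exact signature and field names belong to the job layer and may differ slightly):

    /* Sketch: let the driver's .cancel() say whether this is effectively a
     * force-cancel; drivers without .cancel() are assumed to terminate ASAP. */
    if (job->driver->cancel) {
        force = job->driver->cancel(job, force);
    } else {
        force = true;
    }
    if (force) {
        job->force_cancel = true;
    }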
2021-10-05  block/backup: avoid integer overflow of `max-workers`  (Stefano Garzarella)
QAPI generates `struct BackupPerf` where the `max-workers` value is stored in an `int64_t` variable. But block_copy_async(), and the underlying code, uses an `int` parameter. In the end that variable is used to initialize `max_busy_tasks` in block/aio_task.c, causing the following assertion failure if a value greater than INT_MAX (2147483647) is used:

  ../block/aio_task.c:63: aio_task_pool_wait_one: Assertion `pool->busy_tasks > 0' failed.

Let's check that `max-workers` doesn't exceed INT_MAX and print an error in that case.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2009310
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20211005161157.282396-2-sgarzare@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
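A hedged sketch of the range check described here, as it might appear where the backup job validates its parameters (the variable names `perf` and `errp` are assumptions):

    /* Sketch: reject out-of-range max-workers before it is narrowed to int. */
    if (perf->max_workers < 1 || perf->max_workers > INT_MAX) {
        error_setg(errp, "max-workers must be between 1 and %d", INT_MAX);
        return NULL;
    }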
2021-09-01  block/copy-before-write: initialize block-copy bitmap  (Vladimir Sementsov-Ogievskiy)
We are going to publish the copy-before-write filter so that it can be used separately from backup. A future step will add bitmap support to the filter, but let's start with a fully set bitmap. We have to modify backup, as the bitmap is first initialized by the copy-before-write filter and then modified by backup. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20210824083856.17408-20-vsementsov@virtuozzo.com> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
2021-09-01  block/copy-before-write: bdrv_cbw_append(): drop unused compress arg  (Vladimir Sementsov-Ogievskiy)
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Hanna Reitz <hreitz@redhat.com> Message-Id: <20210824083856.17408-18-vsementsov@virtuozzo.com> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
2021-09-01  block/backup: move cluster size calculation to block-copy  (Vladimir Sementsov-Ogievskiy)
The main consumer of cluster-size is block-copy. Let's calculate it there instead of passing it through backup-top. We are going to publish the copy-before-write filter soon, so it will be created through options. But for now we don't want to add an explicit option for cluster-size, so let's continue to calculate it automatically. So, now is the time to get rid of the cluster_size argument of bdrv_cbw_append(). Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20210824083856.17408-10-vsementsov@virtuozzo.com> [hreitz: Add qemu/error-report.h include to block/block-copy.c] Signed-off-by: Hanna Reitz <hreitz@redhat.com>
2021-09-01  block/backup: set copy_range and compress after filter insertion  (Vladimir Sementsov-Ogievskiy)
We are going to publish the copy-before-write filter, so it will be initialized through options. Still, we don't want to publish the compress and copy-range options, as:

1. The modern way to enable compression is to use the compress filter.

2. For copy-range it's unclear how to make a proper interface:
   - it has an experimental prefix for the backup job anyway
   - the whole BackupPerf structure doesn't make sense for the filter
   So, let's just add a copy-range possibility to the filter later if needed.

Still, we are going to continue supporting compression and experimental copy-range in the backup job. So, set these options after filter creation.

Note that we could drop the "compress" argument of bdrv_cbw_append() now, as well as "perf". The only reason for not doing so is that, at the time of preparing this patch, the big series around it has already been reviewed, and I want to avoid extra rebase conflicts to simplify review of the following version.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20210824083856.17408-9-vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
2021-09-01  block-copy: move detecting fleecing scheme to block-copy  (Vladimir Sementsov-Ogievskiy)
We want to simplify initialization interface of copy-before-write filter as we are going to make it public. So, let's detect fleecing scheme exactly in block-copy code, to not pass this information through extra levels. Why not just set BDRV_REQ_SERIALISING unconditionally: because we are going to implement new more efficient fleecing scheme which will not rely on backing feature. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Hanna Reitz <hreitz@redhat.com> Message-Id: <20210824083856.17408-7-vsementsov@virtuozzo.com> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
2021-09-01  block: rename backup-top to copy-before-write  (Vladimir Sementsov-Ogievskiy)
We are going to convert backup_top into a full-featured public filter, which can be used separately from the backup job. Start by renaming it from "how it is used" to "what it does".

While updating comments in iotest 283, also drop and rephrase the parts about ".active", as this field is now dropped and the filter doesn't have an "inactive" mode.

Note that this change may be considered an incompatible interface change, as the backup-top filter format name was visible through query-block and query-named-block-nodes. Still, consider the following reasoning:

1. backup-top was never documented, so if someone depends on the format name (for a driver that can't be used other than being automatically inserted on backup job start), it's a kind of "undocumented feature use". So I think we are free to change it.

2. There is hope that there are no such users: it's much more natural to give a good node-name to the backup-top filter if you need to operate on it somehow, and not to touch the format name.

3. Another "incompatible" change in a further commit will be moving the copy-before-write filter from using a backing child to a file child. And this is even more reasonable than renaming: for now all public filters are file-child based.

So, it's a risky change, but the risk seems small and a good interface is worth it.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210824083856.17408-6-vsementsov@virtuozzo.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
2021-05-14  mirror: stop cancelling in-flight requests on non-force cancel in READY  (Vladimir Sementsov-Ogievskiy)
If mirror is READY, then a cancel operation does not discard the whole result of the operation; instead it is a documented way to get a point-in-time snapshot of the source disk. So, we should not cancel any requests if mirror is READY and force=false. Let's fix that case.

Note that the bug we had before this commit is not critical, as the only .bdrv_cancel_in_flight implementation is nbd_cancel_in_flight(), and it cancels only requests waiting for reconnection, so it should be a rare case.

Fixes: 521ff8b779b11c394dbdc43f02e158dd99df308a
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210421075858.40197-1-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
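A hedged sketch of the condition this implies in the mirror cancel path (the callback signature and the way the target node is looked up are assumptions):

    /* Sketch: only abort in-flight target requests when the cancel is forced
     * or the mirror has not yet reached READY; a non-force cancel of a READY
     * mirror is a graceful completion mode and must not lose writes. */
    if (force || !job_is_ready(job)) {
        bdrv_cancel_in_flight(blk_bs(s->target));
    }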
2021-03-08  backup: Remove nodes from job in .clean()  (Max Reitz)
The block job holds a reference to the backup-top node (because it is passed as the main job BDS to block_job_create()). Therefore, bdrv_backup_top_drop() cannot delete the backup-top node (replacing it by its child does not affect the job parent, because that has .stay_at_node set). That is a problem, because all of its I/O functions assume the BlockCopyState (s->bcs) to be valid and that it has a filtered child; but after bdrv_backup_top_drop(), neither of those things are true. It does not make sense to add new parents to backup-top after backup_clean(), so we should detach it from the job before bdrv_backup_top_drop(). Because there is no function to do that for a single node, just detach all of the job's nodes -- the job does not do anything past backup_clean() anyway. Signed-off-by: Max Reitz <mreitz@redhat.com> Message-Id: <20210219153348.41861-2-mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
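A hedged sketch of the resulting cleanup order (the backup_top field name is an assumption for illustration):

    static void backup_clean(Job *job)
    {
        BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);

        /* Detach the job from all of its nodes first, so that dropping
         * backup-top below can actually delete that node. */
        block_job_remove_all_bdrv(&s->common);
        bdrv_backup_top_drop(s->backup_top);
    }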
2021-02-12  block/backup: implement .cancel job handler  (Vladimir Sementsov-Ogievskiy)
Cancel in-flight I/O on the target so as not to waste time. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-Id: <20210205163720.887197-10-vsementsov@virtuozzo.com> Signed-off-by: Eric Blake <eblake@redhat.com>
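A hedged sketch of what such a handler can look like (the exact .cancel prototype at this point in history and the target_bs field name are assumptions):

    /* Sketch: on cancel, abort requests still in flight on the backup target
     * instead of waiting for them to complete. */
    static void backup_cancel(Job *job)
    {
        BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);

        bdrv_cancel_in_flight(s->target_bs);
    }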
2021-01-26  backup: move to block-copy  (Vladimir Sementsov-Ogievskiy)
This brings async request handling and block-status driven chunk sizes to backup out of the box, which improves backup performance. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20210116214705.822267-18-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2021-01-26  block/backup: drop extra gotos from backup_run()  (Vladimir Sementsov-Ogievskiy)
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20210116214705.822267-17-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2021-01-26  qapi: backup: add max-chunk and max-workers to x-perf struct  (Vladimir Sementsov-Ogievskiy)
Add new parameters to configure future backup features. The patch doesn't introduce aio backup requests (so we actually have only one worker) nor requests larger than one cluster. Still, formally we satisfy these maximums anyway, so add the parameters now to facilitate a further patch which will really change the backup job behavior. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20210116214705.822267-11-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2021-01-26  block/block-copy: add ratelimit to block-copy  (Vladimir Sementsov-Ogievskiy)
We are going to directly use one async block-copy operation for the backup job, so we need a rate limiter.

We want to maintain the current backup behavior: only background copying is limited, and copy-before-write operations only participate in the limit calculation. Therefore we need one rate limiter for the block-copy state and a boolean flag on the block-copy call state for the actual limitation.

Note that we can't just account each chunk in the limiter after a successful copy: that would not save us from starting a lot of async sub-requests which exceed the limit by too much. Instead, let's use the following scheme on sub-request creation:

1. If at the moment the limit is not exceeded, create the request and account it immediately.

2. If at the moment the limit is already exceeded, don't create the sub-request and handle the limit instead (by sleeping).

With this approach we'll never exceed the limit by more than one sub-request (which pretty much matches the current backup behavior). See the sketch after this message.

Note also that if there is an in-flight block-copy async call, block_copy_kick() should be used after set-speed to apply the new setting faster. For that, block_copy_kick() is published in this patch.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-7-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
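A hedged sketch of the per-sub-request decision described above (the field and variable names are illustrative, not the final block-copy fields):

    if (s->speed && !call_state->ignore_ratelimit) {
        /* 1. Probe with a zero-sized request: are we already over the limit? */
        uint64_t ns = ratelimit_calculate_delay(&s->rate_limit, 0);
        if (ns > 0) {
            /* 2. Over the limit: don't create the sub-request, just sleep. */
            qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, ns);
            continue;
        }
        /* Under the limit: account the new sub-request immediately. */
        ratelimit_calculate_delay(&s->rate_limit, task_bytes);
    }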
2021-01-26  qapi: backup: add perf.use-copy-range parameter  (Vladimir Sementsov-Ogievskiy)
Experiments show that copy_range does not always make things faster. So, to make experimentation simpler, let's add a parameter. Some more perf parameters will be added soon, so here is a new struct.

For now, add the new backup QMP parameter with an x- prefix for the following reasons:

- We are going to add more performance parameters, some related to the whole block-copy process, some only to background copying in backup (ignored for copy-before-write operations).

- On the other hand, we are going to use the block-copy interface in other block jobs, which will need performance options as well. And it should be the same structure, or at least somehow related.

So, there are too many unclear things about the interface right now, and we need the new options mostly for testing. Let's keep them experimental for a while.

In do_backup_common() the new x-perf parameter is handled in a way that makes further option additions simpler. We add use-copy-range with default=true, and we'll change the default in a further patch, after moving backup to use block-copy.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210116214705.822267-2-vsementsov@virtuozzo.com>
[mreitz: s/5\.2/6.0/]
Signed-off-by: Max Reitz <mreitz@redhat.com>
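A hedged sketch of how do_backup_common() might apply the default (the QAPI member names `has_x_perf` and `x_perf` are assumptions for illustration):

    /* Sketch: start from the built-in defaults, then let explicit user
     * values from the QMP arguments override them. */
    BackupPerf perf = { .use_copy_range = true };

    if (backup->has_x_perf) {
        perf = *backup->x_perf;
    }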
2020-09-07  backup: Deal with filters  (Max Reitz)
Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com>
2020-05-08  backup: Make sure that source and target size match  (Kevin Wolf)
Since the introduction of a backup filter node in commit 00e30f05d, the backup block job crashes when the target image is smaller than the source image because it will try to write after the end of the target node without having BLK_PERM_RESIZE. (Previously, the BlockBackend layer would have caught this and errored out gracefully.) We can fix this and even do better than the old behaviour: Check that source and target have the same image size at the start of the block job and unshare BLK_PERM_RESIZE. (This permission was already unshared before the same commit 00e30f05d, but the BlockBackend that was used to make the restriction was removed without a replacement.) This will immediately error out when starting the job instead of only when writing to a block that doesn't exist in the target. Longer target than source would technically work because we would never write to blocks that don't exist, but semantically these are invalid, too, because a backup is supposed to create a copy, not just an image that starts with a copy. Fixes: 00e30f05de1d19586345ec373970ef4c192c6270 Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1778593 Cc: qemu-stable@nongnu.org Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-Id: <20200430142755.315494-4-kwolf@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
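A hedged sketch of the start-of-job check described here (error wording and variable names are assumptions):

    int64_t len = bdrv_getlength(bs);
    int64_t target_len = bdrv_getlength(target);

    if (len < 0 || target_len < 0) {
        error_setg(errp, "Unable to get length of source or target");
        return NULL;
    }
    if (target_len != len) {
        error_setg(errp, "Source and target image have different sizes");
        return NULL;
    }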
2020-05-08  backup: Improve error for bdrv_getlength() failure  (Kevin Wolf)
bdrv_get_device_name() will be an empty string with modern management tools that don't use -drive. Use bdrv_get_device_or_node_name() instead so that the node name is used if the BlockBackend is anonymous. While at it, start with upper case to make the message consistent with the rest of the function. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Message-Id: <20200430142755.315494-3-kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-04-07  backup: don't acquire aio_context in backup_clean  (Stefan Reiter)
All code-paths leading to backup_clean (via job_clean) have the job's context already acquired. The job's context is guaranteed to be the same as the one used by backup_top via backup_job_create. Since the previous logic effectively acquired the lock twice, this broke cleanup of backups for disks using IO threads, since the BDRV_POLL_WHILE in bdrv_backup_top_drop -> bdrv_do_drained_begin would only release the lock once, thus deadlocking with the IO thread. This is a partial revert of 0abf2581717a19. Signed-off-by: Stefan Reiter <s.reiter@proxmox.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200407115651.69472-4-s.reiter@proxmox.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2020-03-11  block/block-copy: hide structure definitions  (Vladimir Sementsov-Ogievskiy)
Hide structure definitions and add explicit API instead, to keep an eye on the scope of the shared fields. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20200311103004.7649-10-vsementsov@virtuozzo.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2020-03-11  block/block-copy: fix progress calculation  (Vladimir Sementsov-Ogievskiy)
Assume we have two regions, A and B, and region B is in flight now, while region A is not yet touched, but it is unallocated and should be skipped. Correspondingly, as progress we have

  total = A + B
  current = 0

If we reset unallocated region A and call progress_reset_callback, it will calculate 0 bytes dirty in the bitmap and call job_progress_set_remaining, which will set

  total = current + 0 = 0 + 0 = 0

So, B bytes are actually removed from the total accounting. When the job finishes we'll have

  total = 0
  current = B

which doesn't sound good. This is because we didn't consider in-flight bytes; when calculating remaining, we should have set (in_flight + dirty_bytes) as remaining, not only dirty_bytes.

To fix it, let's refactor the progress calculation, moving it to block-copy itself instead of fixing the callback. And, of course, track the in-flight bytes count.

We still have to keep one callback, to maintain the backup job's bytes_read calculation, but it will go away soon, when we turn the whole backup process into one block_copy call.

Cc: qemu-stable@nongnu.org
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
Message-Id: <20200311103004.7649-3-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
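A hedged sketch of the corrected remaining-work calculation (field names such as copy_bitmap and in_flight_bytes are illustrative):

    /* Sketch: remaining work is the still-dirty bytes plus the bytes that
     * have been submitted but have not finished yet. */
    job_progress_set_remaining(job,
                               bdrv_get_dirty_count(s->copy_bitmap) +
                               s->in_flight_bytes);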
2020-01-27  block/backup-top: Don't acquire context while dropping top  (Sergio Lopez)
All paths that lead to bdrv_backup_top_drop(), except for the call from backup_clean(), imply that the BDS AioContext has already been acquired, so doing it there too can potentially lead to QEMU hanging on AIO_WAIT_WHILE(). An easy way to trigger this situation is by issuing a two actions transaction, with a proper and a bogus blockdev-backup, so the second one will trigger a rollback. This will trigger a hang with an stack trace like this one: #0 0x00007fb680c75016 in __GI_ppoll (fds=0x55e74580f7c0, nfds=1, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39 #1 0x000055e743386e09 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77 #2 0x000055e743386e09 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:336 #3 0x000055e743388dc4 in aio_poll (ctx=0x55e7458925d0, blocking=blocking@entry=true) at util/aio-posix.c:669 #4 0x000055e743305dea in bdrv_flush (bs=bs@entry=0x55e74593c0d0) at block/io.c:2878 #5 0x000055e7432be58e in bdrv_close (bs=0x55e74593c0d0) at block.c:4017 #6 0x000055e7432be58e in bdrv_delete (bs=<optimized out>) at block.c:4262 #7 0x000055e7432be58e in bdrv_unref (bs=bs@entry=0x55e74593c0d0) at block.c:5644 #8 0x000055e743316b9b in bdrv_backup_top_drop (bs=bs@entry=0x55e74593c0d0) at block/backup-top.c:273 #9 0x000055e74331461f in backup_job_create (job_id=0x0, bs=bs@entry=0x55e7458d5820, target=target@entry=0x55e74589f640, speed=0, sync_mode=MIRROR_SYNC_MODE_FULL, sync_bitmap=sync_bitmap@entry=0x0, bitmap_mode=BITMAP_SYNC_MODE_ON_SUCCESS, compress=false, filter_node_name=0x0, on_source_error=BLOCKDEV_ON_ERROR_REPORT, on_target_error=BLOCKDEV_ON_ERROR_REPORT, creation_flags=0, cb=0x0, opaque=0x0, txn=0x0, errp=0x7ffddfd1efb0) at block/backup.c:478 #10 0x000055e74315bc52 in do_backup_common (backup=backup@entry=0x55e746c066d0, bs=bs@entry=0x55e7458d5820, target_bs=target_bs@entry=0x55e74589f640, aio_context=aio_context@entry=0x55e7458a91e0, txn=txn@entry=0x0, errp=errp@entry=0x7ffddfd1efb0) at blockdev.c:3580 #11 0x000055e74315c37c in do_blockdev_backup (backup=backup@entry=0x55e746c066d0, txn=0x0, errp=errp@entry=0x7ffddfd1efb0) at /usr/src/debug/qemu-kvm-4.2.0-2.module+el8.2.0+5135+ed3b2489.x86_64/./qapi/qapi-types-block-core.h:1492 #12 0x000055e74315c449 in blockdev_backup_prepare (common=0x55e746a8de90, errp=0x7ffddfd1f018) at blockdev.c:1885 #13 0x000055e743160152 in qmp_transaction (dev_list=<optimized out>, has_props=<optimized out>, props=0x55e7467fe2c0, errp=errp@entry=0x7ffddfd1f088) at blockdev.c:2340 #14 0x000055e743287ff5 in qmp_marshal_transaction (args=<optimized out>, ret=<optimized out>, errp=0x7ffddfd1f0f8) at qapi/qapi-commands-transaction.c:44 #15 0x000055e74333de6c in do_qmp_dispatch (errp=0x7ffddfd1f0f0, allow_oob=<optimized out>, request=<optimized out>, cmds=0x55e743c28d60 <qmp_commands>) at qapi/qmp-dispatch.c:132 #16 0x000055e74333de6c in qmp_dispatch (cmds=0x55e743c28d60 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175 #17 0x000055e74325c061 in monitor_qmp_dispatch (mon=0x55e745908030, req=<optimized out>) at monitor/qmp.c:145 #18 0x000055e74325c6fa in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:234 #19 0x000055e743385866 in aio_bh_call (bh=0x55e745807ae0) at util/async.c:117 #20 0x000055e743385866 in aio_bh_poll (ctx=ctx@entry=0x55e7458067a0) at util/async.c:117 #21 0x000055e743388c54 in aio_dispatch 
(ctx=0x55e7458067a0) at util/aio-posix.c:459 #22 0x000055e743385742 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260 #23 0x00007fb68543e67d in g_main_dispatch (context=0x55e745893a40) at gmain.c:3176 #24 0x00007fb68543e67d in g_main_context_dispatch (context=context@entry=0x55e745893a40) at gmain.c:3829 #25 0x000055e743387d08 in glib_pollfds_poll () at util/main-loop.c:219 #26 0x000055e743387d08 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242 #27 0x000055e743387d08 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518 #28 0x000055e74316a3c1 in main_loop () at vl.c:1828 #29 0x000055e743016a72 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4504 Fix this by not acquiring the AioContext there, and ensuring all paths leading to it have it already acquired (backup_clean()). RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782111 Signed-off-by: Sergio Lopez <slp@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-10-25  block/backup: drop dead code from backup_job_create  (Vladimir Sementsov-Ogievskiy)
After commit 00e30f05de1d195, there are no more "goto error" points after job creation, so after "error:" @job is always NULL and we don't need to roll back the job creation. Reported-by: Coverity (CID 1406402) Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Acked-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-10-17  block/dirty-bitmap: add bs link  (Vladimir Sementsov-Ogievskiy)
Add bs field to BdrvDirtyBitmap structure. Drop BlockDriverState parameter from bitmap APIs where possible. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: John Snow <jsnow@redhat.com> Message-id: 20190916141911.5255-3-vsementsov@virtuozzo.com [Rebased on top of block-copy. --js] Signed-off-by: John Snow <jsnow@redhat.com>
2019-10-10  block/backup: use backup-top instead of write notifiers  (Vladimir Sementsov-Ogievskiy)
Drop write notifiers and use a filter node instead.

= Changes =

1. Add a filter-node-name argument to the backup QMP API. We have to do it in this commit, as iotest 257 needs to be fixed.

2. There are no more write notifiers here, so the is_write_notifier parameter is dropped from the block-copy paths.

3. To sync with in-flight requests at job finish, we now have drained removal of the filter, so we don't need the rw-lock.

4. Block-copy now uses BdrvChildren instead of BlockBackends.

5. As backup-top owns these children, we also move the block-copy state into backup-top's ownership.

= Iotest changes =

56: The op-blocker doesn't trigger now, as we set it on the source but then check it on the filter when trying to start the second backup. To keep the test we can instead catch another collision: both jobs will get the 'drive0' job-id, as the job-id parameter is unspecified. To prevent interleaving with file-posix locks (as they are dependent on config) let's use another target for the second backup. Also, it's obvious now that we'd like to drop this op-blocker altogether and add a test case checking that two backups from one node (to different destinations) actually work. But not in this series.

141: Output changed: before the patch, "Node is in use" comes from the bdrv_has_blk check inside qmp_blockdev_del. But we've dropped the block-copy blk objects, so there are no more blk objects on the source bs (the job blk is on the backup-top filter bs). The new message is from the op-blocker, which is the next check in qmp_blockdev_add.

257: The test wants to emulate guest writes during backup. They should go to the filter node, not to the original source node, of course. Therefore we need to specify the filter node name and use it.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20191001131409.14202-6-vsementsov@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/block-copy: split block_copy_set_callbacks function  (Vladimir Sementsov-Ogievskiy)
Split block_copy_set_callbacks out of block_copy_state_new. It's needed for a further commit: block-copy will use BdrvChildren of the backup-top filter, so it will be created from the backup-top filter creation function. But the callbacks will still belong to the backup job and will be set separately. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20191001131409.14202-4-vsementsov@virtuozzo.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/backup: move write_flags calculation inside backup_job_create  (Vladimir Sementsov-Ogievskiy)
This is logic-less refactoring, which simplifies further patch, as we'll need write_flags for backup-top filter creation and backup-top should be created before block job creation. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20191001131409.14202-3-vsementsov@virtuozzo.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/backup: move in-flight requests handling from backup to block-copy  (Vladimir Sementsov-Ogievskiy)
Move synchronization mechanism to block-copy, to be able to use one block-copy instance from backup job and backup-top filter in parallel. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20191001131409.14202-2-vsementsov@virtuozzo.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block: move block_copy from block/backup.c to separate file  (Vladimir Sementsov-Ogievskiy)
Split block_copy into a separate file, to be cleanly shared with the backup-top filter driver in further commits. It's a clean movement; the only change is dropping "static" from the interface functions. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190920142056.12778-8-vsementsov@virtuozzo.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/backup: fix block-comment style  (Vladimir Sementsov-Ogievskiy)
We need to fix the comment style around the block-copy functions before moving them to a separate file, to satisfy checkpatch. But do more: fix the style of all comments. Also, it seems a doubled first asterisk is not forbidden, but drop it too for consistency. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190920142056.12778-7-vsementsov@virtuozzo.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/backup: introduce BlockCopyState  (Vladimir Sementsov-Ogievskiy)
Split the copying code part from backup into "block-copy", including a separate state structure and function renaming. This is needed to share it with the backup-top filter driver in further commits.

Notes:

1. As BlockCopyState keeps its own BlockBackend objects, the remaining job->common.blk users only use it to get the bs by a blk_bs() call, so clear the job->common.blk permissions set in block_job_create and add job->source_bs to be used instead of blk_bs(job->common.blk), to keep it clearer which bs we use when we introduce the backup-top filter in a further commit.

2. Rename s/initializing_bitmap/skip_unallocated/ to sound a bit better as an interface to BlockCopyState.

3. The split is not very clean: there are some duplicated fields left, and the backup code uses some BlockCopyState fields directly. Let's postpone that for further improvements and keep this commit simpler for review.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190920142056.12778-6-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/backup: improve comment about image fleecing  (Vladimir Sementsov-Ogievskiy)
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190920142056.12778-5-vsementsov@virtuozzo.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/backup: split shareable copying part from backup_do_cow  (Vladimir Sementsov-Ogievskiy)
Split copying logic which will be shared with backup-top filter. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190920142056.12778-4-vsementsov@virtuozzo.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/backup: fix backup_cow_with_offload for last cluster  (Vladimir Sementsov-Ogievskiy)
We shouldn't try to copy bytes beyond EOF. Fix it. Fixes: 9ded4a0114968e Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Message-id: 20190920142056.12778-3-vsementsov@virtuozzo.com Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-10-10  block/backup: fix max_transfer handling for copy_range  (Vladimir Sementsov-Ogievskiy)
Of course, QEMU_ALIGN_UP is a typo; it should be QEMU_ALIGN_DOWN, as we are trying to find an aligned size which satisfies both source and target. Also, don't ignore a too-small max_transfer. In this case it seems safer to disable copy_range. Fixes: 9ded4a0114968e Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Message-id: 20190920142056.12778-2-vsementsov@virtuozzo.com Signed-off-by: Max Reitz <mreitz@redhat.com>
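A hedged sketch of the corrected calculation (the variable names source_blk, target_blk and use_copy_range are illustrative):

    /* Sketch: round DOWN to an alignment both source and target can handle,
     * and fall back to plain reads/writes if the result is smaller than one
     * cluster. */
    uint32_t max_transfer = MIN_NON_ZERO(blk_get_max_transfer(source_blk),
                                         blk_get_max_transfer(target_blk));
    int64_t copy_range_size = QEMU_ALIGN_DOWN(max_transfer, cluster_size);

    if (copy_range_size < cluster_size) {
        /* max_transfer is too small to copy even one cluster via copy_range */
        use_copy_range = false;
    }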
2019-09-10  job: drop job_drain  (Vladimir Sementsov-Ogievskiy)
In job_finish_sync, job_enter should be enough for a job to make some progress, and draining is the wrong tool for it. So use job_enter directly here and drop job_drain together with all the related stuff that is no longer used. Suggested-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Tested-by: John Snow <jsnow@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2019-08-27  block: define .*_part io handlers in BlockDriver  (Vladimir Sementsov-Ogievskiy)
Add handlers supporting a qiov_offset parameter:

  bdrv_co_preadv_part
  bdrv_co_pwritev_part
  bdrv_co_pwritev_compressed_part

This is used to reduce the need to define local_qiovs and hd_qiovs in all corners of the block layer code. The following patches will increase usage of this new API part by part.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190604161514.262241-5-vsementsov@virtuozzo.com
Message-Id: <20190604161514.262241-5-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2019-08-16  block/backup: refactor write_flags  (Vladimir Sementsov-Ogievskiy)
The write flags are constant, so let's store them in BackupBlockJob instead of recalculating them. This also makes two boolean fields unused, so drop them. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190730163251.755248-4-vsementsov@virtuozzo.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16  block/backup: deal with zero detection  (Vladimir Sementsov-Ogievskiy)
We have the detect_zeroes option, so at least for blockdev-backup the user should define it if zero detection is needed. For drive-backup, leave detection enabled by default but do it through the existing option instead of open-coding it. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190730163251.755248-2-vsementsov@virtuozzo.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16  block/backup: support bitmap sync modes for non-bitmap backups  (John Snow)
Accept bitmaps and sync policies for the other backup modes. This allows us to do things like create a bitmap synced to a full backup without a transaction, or start a resumable backup process.

Some combinations don't make sense, though:

- The NEVER policy combined with any non-BITMAP mode doesn't do anything, because the bitmap isn't used for input or output. It's harmless, but is almost certainly never what the user wanted.

- sync=NONE is more questionable. It can't use on-success because this job never completes with success anyway, and the resulting artifact of 'always' is suspect: because we start with a full bitmap and only copy out segments that get written to, the final output bitmap will always be ... a fully set bitmap. Maybe there are contexts in which bitmaps make sense for sync=none, but not without more severe changes to the current job, and omitting it here doesn't prevent us from adding it later.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190716000117.25219-11-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16  block/backup: teach TOP to never copy unallocated regions  (John Snow)
Presently, if sync=TOP is selected, we mark the entire bitmap as dirty. In the write notifier handler, we dutifully copy out such regions.

Fix this in three parts:

1. Mark the bitmap as being initialized before the first yield.
2. After the first yield but before the backup loop, interrogate the allocation status asynchronously and initialize the bitmap.
3. Teach the write notifier to interrogate allocation status if it is invoked during bitmap initialization.

As an effect of this patch, the job progress for TOP backups now behaves like this:

- Total progress starts at bdrv_length.
- As allocation status is interrogated, total progress decreases.
- As blocks are copied, current progress increases.

Taken together, the floor and ceiling move to meet each other.

Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190716000117.25219-10-jsnow@redhat.com
[Remove ret = -ECANCELED change. --js]
[Squash in conflict resolution based on Max's patch --js]
Message-id: c8b0ab36-79c8-0b4b-3193-4e12ed8c848b@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16  block/backup: add backup_is_cluster_allocated  (John Snow)
Modify bdrv_is_unallocated_range to utilize the pnum return from bdrv_is_allocated, and in the process change the semantics from "is unallocated" to "is allocated." The helper optionally returns the full number of clusters that share the same allocation status. This will be used to carefully toggle bits in the bitmap for sync=top initialization in the following commits. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190716000117.25219-9-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
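A hedged sketch of such a helper (the real prototype, and its more careful handling of cluster alignment, may differ):

    /* Sketch: report whether the cluster at @offset is allocated; *pnum is
     * set to how many contiguous clusters share that allocation status. */
    static int backup_is_cluster_allocated(BlockDriverState *bs, int64_t offset,
                                           int64_t bytes, int64_t cluster_size,
                                           int64_t *pnum)
    {
        int64_t count;
        int ret = bdrv_is_allocated(bs, offset, bytes, &count);

        if (ret < 0) {
            return ret;
        }
        *pnum = DIV_ROUND_UP(count, cluster_size);
        return ret;   /* 1 if the first cluster is allocated, 0 if not */
    }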
2019-08-16  block/backup: centralize copy_bitmap initialization  (John Snow)
Just a few housekeeping changes that keeps the following commit easier to read; perform the initial copy_bitmap initialization in one place. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190716000117.25219-8-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16  block/backup: improve sync=bitmap work estimates  (John Snow)
When making backups based on bitmaps, the work estimate can be more accurate. Update iotests to reflect the new strategy. TOP work estimates are broken, but do not get worse with this commit. That issue is addressed in the following commits instead. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190716000117.25219-7-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2019-08-16  block/backup: hoist bitmap check into QMP interface  (John Snow)
This is nicer to do in the unified QMP interface that we have now, because it lets us use the right terminology back at the user. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 20190716000117.25219-5-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>