author    Emanuele Giuseppe Esposito <eesposit@redhat.com>    2023-09-29 16:51:40 +0200
committer Kevin Wolf <kwolf@redhat.com>                       2023-10-12 16:31:33 +0200
commit    d05ab380db649d882396653f9830b67d84bffbe1 (patch)
tree      b52f78b422f88e78caba4fde57d8a425519cd295 /tests/unit
parent    2b3912f1350971fbc2c04d986a1d0c60ae757c78 (diff)
block: Mark drain related functions GRAPH_RDLOCK
Draining recursively traverses the graph, therefore we need to make sure that such accesses to the graph are also protected by the graph rdlock.

There are 3 different drain callers to consider:

1. drain in the main loop: no issue at all, rdlock is a nop.
2. drain in an iothread: rdlock only works in the main loop or in coroutines, so disallow it.
3. drain in a coroutine (regardless of AioContext): the drain mechanism takes care of scheduling a BH in bs->aio_context that then performs the actual draining. This is wrong, because as pointed out in (2), if bs->aio_context is an iothread then rdlock won't work. Therefore, change bdrv_co_yield_to_drain to schedule the BH in the main loop.

Caller (2) also implies that we need to modify test-bdrv-drain.c to disallow draining in the iothreads.

For some places, we know that they will hold the lock, but we don't have the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock() with a FIXME comment. These calls will be removed once everything is properly annotated.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-6-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
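[Editor's note: the following is an illustrative sketch, not part of this patch or of the diff below. It shows the interim assume_graph_lock() + FIXME pattern described in the commit message; the function name example_graph_traversal is hypothetical, while assume_graph_lock(), GRAPH_RDLOCK and the headers are taken from the QEMU tree as an assumption about where they live.]

    #include "qemu/osdep.h"
    #include "block/graph-lock.h"

    /*
     * Hypothetical call site that reads the block graph. Its callers are
     * known to hold the graph rdlock, but the function is not yet marked
     * GRAPH_RDLOCK, so the interim pattern from the commit message is used.
     */
    static void example_graph_traversal(BlockDriverState *bs)
    {
        /* FIXME: callers hold the graph lock, but this function is not yet
         * annotated GRAPH_RDLOCK; drop this call once it is. */
        assume_graph_lock();

        /* ... code that traverses the graph, e.g. iterates bs->children ... */
    }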
Diffstat (limited to 'tests/unit')
-rw-r--r--  tests/unit/test-bdrv-drain.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index bdd3757615..d734829778 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -1196,7 +1196,7 @@ static void coroutine_mixed_fn detach_by_parent_aio_cb(void *opaque, int ret)
}
}
-static void detach_by_driver_cb_drained_begin(BdrvChild *child)
+static void GRAPH_RDLOCK detach_by_driver_cb_drained_begin(BdrvChild *child)
{
struct detach_by_parent_data *data = &detach_by_parent_data;
@@ -1233,7 +1233,7 @@ static BdrvChildClass detach_by_driver_cb_class;
* state is messed up, but if it is only polled in the single
* BDRV_POLL_WHILE() at the end of the drain, this should work fine.
*/
-static void test_detach_indirect(bool by_parent_cb)
+static void TSA_NO_TSA test_detach_indirect(bool by_parent_cb)
{
BlockBackend *blk;
BlockDriverState *parent_a, *parent_b, *a, *b, *c;