| author | Stefan Hajnoczi <stefanha@redhat.com> | 2024-01-18 09:48:23 -0500 |
|---|---|---|
| committer | Michael Tokarev <mjt@tls.msk.ru> | 2024-01-26 19:31:33 +0300 |
| commit | f389309d2937e66960dd371014a1971678fb4ce7 (patch) | |
| tree | f0970cb0ba52b5189b44b2a8db0d0a577c43a546 /qapi | |
| parent | 823892d19f6b79dc44ac1a885a9ceed0bd0965e7 (diff) | |
monitor: only run coroutine commands in qemu_aio_context
monitor_qmp_dispatcher_co() runs in the iohandler AioContext that is not
polled during nested event loops. The coroutine currently reschedules
itself in the main loop's qemu_aio_context AioContext, which is polled
during nested event loops. One known problem is that QMP device-add
calls drain_call_rcu(), which temporarily drops the BQL, leading to all
sorts of havoc like other vCPU threads re-entering device emulation code
while another vCPU thread is waiting in device emulation code with
aio_poll().
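For context, the rescheduling pattern described above looks roughly like the sketch below. This is not code from this patch, only a minimal illustration assuming QEMU's internal coroutine/AioContext APIs (`aio_co_schedule()`, `qemu_coroutine_yield()`, `qemu_get_aio_context()`, `iohandler_get_aio_context()`); `example_dispatch_co()` is a hypothetical coroutine.

```c
#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "qemu/main-loop.h"
#include "block/aio.h"

/* Hypothetical coroutine showing the AioContext hop described above. */
static void coroutine_fn example_dispatch_co(void *opaque)
{
    /*
     * Hop from iohandler_ctx into qemu_aio_context.  Nested event loops
     * (AIO_WAIT_WHILE()) poll qemu_aio_context, so the coroutine can make
     * progress while such a loop is running.
     */
    aio_co_schedule(qemu_get_aio_context(), qemu_coroutine_self());
    qemu_coroutine_yield();

    /* ... handler work runs here ... */

    /* Hop back to iohandler_ctx, which nested event loops do not poll. */
    aio_co_schedule(iohandler_get_aio_context(), qemu_coroutine_self());
    qemu_coroutine_yield();
}
```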
Paolo Bonzini suggested running non-coroutine QMP handlers in the
iohandler AioContext. This avoids trouble with nested event loops. His
original idea was to move coroutine rescheduling to
monitor_qmp_dispatch(), but I resorted to moving it to qmp_dispatch()
because we don't know if the QMP handler needs to run in coroutine
context in monitor_qmp_dispatch(). monitor_qmp_dispatch() would have
been nicer since it's associated with the monitor implementation and not
as general as qmp_dispatch(), which is also used by qemu-ga.
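The non-coroutine path works the other way around: the dispatcher coroutine schedules a one-shot bottom half in the iohandler AioContext and yields until the handler has run. The following is a hedged sketch of that pattern; `RunHandlerData`, `run_handler_bh()`, and `dispatch_in_iohandler()` are hypothetical stand-ins for the patch's `do_qmp_dispatch_bh()` helper and its argument struct.

```c
#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "qemu/main-loop.h"
#include "block/aio.h"

/* Hypothetical stand-ins for the patch's bottom-half helper and its data. */
typedef struct {
    void (*fn)(void *arg);   /* non-coroutine handler to invoke */
    void *arg;
    Coroutine *co;           /* dispatcher coroutine to wake afterwards */
} RunHandlerData;

static void run_handler_bh(void *opaque)
{
    RunHandlerData *data = opaque;

    data->fn(data->arg);
    aio_co_wake(data->co);   /* resume the yielded dispatcher coroutine */
}

static void coroutine_fn dispatch_in_iohandler(RunHandlerData *data)
{
    data->co = qemu_coroutine_self();
    /*
     * Schedule the handler in iohandler_ctx rather than qemu_aio_context:
     * nested event loops only poll the latter, so they cannot start a new
     * monitor command while one is already in flight.
     */
    aio_bh_schedule_oneshot(iohandler_get_aio_context(),
                            run_handler_bh, data);
    qemu_coroutine_yield();
}
```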
A number of qemu-iotests need updated .out files because the order of
QMP events vs QMP responses has changed.
Fixes issue #1933.
Cc: qemu-stable@nongnu.org
Fixes: 7bed89958bfbf40df9ca681cefbdca63abdde39d ("device_core: use drain_call_rcu in in qmp_device_add")
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2215192
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2214985
Buglink: https://issues.redhat.com/browse/RHEL-17369
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240118144823.1497953-4-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit effd60c878176bcaf97fa7ce2b12d04bb8ead6f7)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Diffstat (limited to 'qapi')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | qapi/qmp-dispatch.c | 24 |

1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/qapi/qmp-dispatch.c b/qapi/qmp-dispatch.c
index 555528b6bb..176b549473 100644
--- a/qapi/qmp-dispatch.c
+++ b/qapi/qmp-dispatch.c
@@ -206,9 +206,31 @@ QDict *coroutine_mixed_fn qmp_dispatch(const QmpCommandList *cmds, QObject *requ
     assert(!(oob && qemu_in_coroutine()));
     assert(monitor_cur() == NULL);
     if (!!(cmd->options & QCO_COROUTINE) == qemu_in_coroutine()) {
+        if (qemu_in_coroutine()) {
+            /*
+             * Move the coroutine from iohandler_ctx to qemu_aio_context for
+             * executing the command handler so that it can make progress if it
+             * involves an AIO_WAIT_WHILE().
+             */
+            aio_co_schedule(qemu_get_aio_context(), qemu_coroutine_self());
+            qemu_coroutine_yield();
+        }
+
         monitor_set_cur(qemu_coroutine_self(), cur_mon);
         cmd->fn(args, &ret, &err);
         monitor_set_cur(qemu_coroutine_self(), NULL);
+
+        if (qemu_in_coroutine()) {
+            /*
+             * Yield and reschedule so the main loop stays responsive.
+             *
+             * Move back to iohandler_ctx so that nested event loops for
+             * qemu_aio_context don't start new monitor commands.
+             */
+            aio_co_schedule(iohandler_get_aio_context(),
+                            qemu_coroutine_self());
+            qemu_coroutine_yield();
+        }
     } else {
         /*
          * Actual context doesn't match the one the command needs.
@@ -232,7 +254,7 @@ QDict *coroutine_mixed_fn qmp_dispatch(const QmpCommandList *cmds, QObject *requ
             .errp = &err,
             .co = qemu_coroutine_self(),
         };
-        aio_bh_schedule_oneshot(qemu_get_aio_context(), do_qmp_dispatch_bh,
+        aio_bh_schedule_oneshot(iohandler_get_aio_context(), do_qmp_dispatch_bh,
                                 &data);
         qemu_coroutine_yield();
     }