author     Stefan Hajnoczi <stefanha@redhat.com>    2023-09-13 16:00:44 -0400
committer  Kevin Wolf <kwolf@redhat.com>            2023-10-31 15:42:14 +0100
commit     84d61e5f36a73ed24742b7c7cf7b811e456dd024 (patch)
tree       420c313770c60a0f9aaa2487b296c6883535b4b0 /block/io_uring.c
parent     433fcea40c31ff355f84da22a46977c2a1b596c3 (diff)
virtio: use defer_call() in virtio_irqfd_notify()
virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used Buffer
Notifications from an IOThread. This involves an eventfd write(2) syscall.
Calling this repeatedly when completing multiple I/O requests in a row is
wasteful.

Use the defer_call() API to batch together virtio_irqfd_notify() calls made
during thread pool (aio=threads), Linux AIO (aio=native), and io_uring
(aio=io_uring) completion processing.

Behavior is unchanged for emulated devices that do not use
defer_call_begin()/defer_call_end() since defer_call() immediately invokes
the callback when called outside a defer_call_begin()/defer_call_end()
region.

fio rw=randread bs=4k iodepth=64 numjobs=8 IOPS increases by ~9% with a
single IOThread and 8 vCPUs. iodepth=1 decreases by ~1% but this could be
noise. Detailed performance data and configuration specifics are available
here:
https://gitlab.com/stefanha/virt-playbooks/-/tree/blk_io_plug-irqfd

This duplicates the BH that virtio-blk uses for batching. The next commit
will remove it.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230913200045.1024233-4-stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
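[Editor's note] For readers unfamiliar with the defer_call() API referenced above, here is a minimal, self-contained C sketch of the semantics the commit message relies on: defer_call(fn, opaque) runs the callback immediately when no defer_call_begin()/defer_call_end() region is active, and otherwise queues it, coalescing duplicate fn/opaque pairs, until the outermost defer_call_end(). This is an illustration only, not QEMU's actual util/defer-call.c implementation; the internal names (defer_nesting, deferred, MAX_DEFERRED_CALLS) are invented for the sketch.

#include <stddef.h>

typedef struct {
    void (*fn)(void *);
    void *opaque;
} DeferredCall;

enum { MAX_DEFERRED_CALLS = 64 };          /* arbitrary bound for this sketch */

static __thread unsigned defer_nesting;    /* begin/end nesting depth */
static __thread DeferredCall deferred[MAX_DEFERRED_CALLS];
static __thread size_t num_deferred;

void defer_call_begin(void)
{
    defer_nesting++;
}

/* Run fn(opaque) now, or once at the end of the outermost region. */
void defer_call(void (*fn)(void *), void *opaque)
{
    if (defer_nesting == 0) {
        fn(opaque);                        /* no region active: behaves like a direct call */
        return;
    }
    for (size_t i = 0; i < num_deferred; i++) {
        if (deferred[i].fn == fn && deferred[i].opaque == opaque) {
            return;                        /* already queued: duplicate calls coalesce */
        }
    }
    if (num_deferred == MAX_DEFERRED_CALLS) {
        fn(opaque);                        /* sketch-only overflow fallback */
        return;
    }
    deferred[num_deferred++] = (DeferredCall){ .fn = fn, .opaque = opaque };
}

void defer_call_end(void)
{
    if (--defer_nesting > 0) {
        return;                            /* only the outermost end flushes */
    }
    for (size_t i = 0; i < num_deferred; i++) {
        deferred[i].fn(deferred[i].opaque);
    }
    num_deferred = 0;
}

The coalescing step is what turns many virtio_irqfd_notify() calls made while processing a batch of completions into a single eventfd write.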
Diffstat (limited to 'block/io_uring.c')
-rw-r--r--   block/io_uring.c   6
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/block/io_uring.c b/block/io_uring.c
index 3a1e1f45b3..7cdd00e9f1 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -125,6 +125,9 @@ static void luring_process_completions(LuringState *s)
 {
     struct io_uring_cqe *cqes;
     int total_bytes;
+
+    defer_call_begin();
+
     /*
      * Request completion callbacks can run the nested event loop.
      * Schedule ourselves so the nested event loop will "see" remaining
@@ -217,7 +220,10 @@ end:
             aio_co_wake(luringcb->co);
         }
     }
+
     qemu_bh_cancel(s->completion_bh);
+
+    defer_call_end();
 }
 
 static int ioq_submit(LuringState *s)
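[Editor's note] To complement the block/io_uring.c hunks above, the following hypothetical sketch shows the other half of the batching the commit message describes: the notification path hands its eventfd write to defer_call() instead of performing it directly, so that completion loops wrapped in defer_call_begin()/defer_call_end(), like luring_process_completions() above, coalesce repeated notifications into a single write(2). The structure and names here (VirtQueueSketch, guest_notifier_fd, virtio_irqfd_notify_sketch) are illustrative assumptions, not the actual hw/virtio/virtio.c change.

#include <stdint.h>
#include <unistd.h>

/* Prototype of QEMU's defer_call() API as described in the commit message. */
void defer_call(void (*fn)(void *), void *opaque);

typedef struct {
    int guest_notifier_fd;    /* irqfd eventfd used to notify the guest (illustrative) */
} VirtQueueSketch;

static void virtio_irqfd_notify_deferred(void *opaque)
{
    VirtQueueSketch *vq = opaque;
    uint64_t one = 1;

    /* One eventfd write(2) covers every request completed in the batch. */
    ssize_t ret = write(vq->guest_notifier_fd, &one, sizeof(one));
    (void)ret;
}

static void virtio_irqfd_notify_sketch(VirtQueueSketch *vq)
{
    /*
     * Inside a defer_call_begin()/defer_call_end() region this only queues
     * the callback; outside such a region it notifies immediately, which is
     * why emulated devices that never batch see no behavior change.
     */
    defer_call(virtio_irqfd_notify_deferred, vq);
}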