path: root/util/thread-pool.c
2023-08-02  thread-pool: signal "request_cond" while locked  (Anthony PERARD)
thread_pool_free() might have been called on the `pool`, which would be a reason for worker_thread() to quit. In that case, `pool->request_cond` has been destroyed. If worker_thread() didn't manage to signal `request_cond` before it was destroyed by thread_pool_free(), we get:

    util/qemu-thread-posix.c:198: qemu_cond_signal: Assertion `cond->initialized' failed.

One backtrace:

    __GI___assert_fail (assertion=0x55555614abcb "cond->initialized", file=0x55555614ab88 "util/qemu-thread-posix.c", line=198, function=0x55555614ad80 <__PRETTY_FUNCTION__.17104> "qemu_cond_signal") at assert.c:101
    qemu_cond_signal (cond=0x7fffb800db30) at util/qemu-thread-posix.c:198
    worker_thread (opaque=0x7fffb800dab0) at util/thread-pool.c:129
    qemu_thread_start (args=0x7fffb8000b20) at util/qemu-thread-posix.c:505
    start_thread (arg=<optimized out>) at pthread_create.c:486

Reported here: https://lore.kernel.org/all/ZJwoK50FcnTSfFZ8@MacBook-Air-de-Roger.local/T/#u

To avoid this issue, keep the lock held while signalling `request_cond`.

Fixes: 900fa208f506 ("thread-pool: replace semaphore with condition variable")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230714152720.5077-1-anthony.perard@citrix.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
(cherry picked from commit f4f71363fcdb1092ff64d2bba6f9af39570c2f2b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
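The race is easier to see outside QEMU's wrappers. Below is a minimal, self-contained pthread sketch of the rule this fix enforces; it is not the thread-pool.c code (the pool is reduced to a single flag, and names such as worker_exited are made up). The worker signals the condition variable while still holding the mutex, so a destroyer that takes the same mutex before destroying the condition variable can never see a signal land on a dead object.

    /* Minimal pthread sketch of "signal while locked"; not QEMU code. */
    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t request_cond = PTHREAD_COND_INITIALIZER;
    static bool worker_exited;

    static void *worker(void *opaque)
    {
        (void)opaque;
        pthread_mutex_lock(&lock);
        worker_exited = true;
        /* Signal before dropping the lock: pool_free() takes the same lock
         * before destroying request_cond, so the signal cannot race the
         * destruction. */
        pthread_cond_signal(&request_cond);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void pool_free(pthread_t worker_tid)
    {
        pthread_mutex_lock(&lock);
        while (!worker_exited) {
            pthread_cond_wait(&request_cond, &lock);
        }
        pthread_mutex_unlock(&lock);
        pthread_join(worker_tid, NULL);
        pthread_cond_destroy(&request_cond);  /* safe: no signal can still be pending */
    }

    int main(void)
    {
        pthread_t tid;

        pthread_create(&tid, NULL, worker, NULL);
        pool_free(tid);
        return 0;
    }

If the signal were issued after the unlock instead, pool_free() could wake, destroy request_cond, and then receive the late signal on destroyed state, which is exactly the assertion the commit message quotes from QEMU's debug wrappers.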
2022-05-25  thread-pool: remove stopping variable  (Paolo Bonzini)
Just setting the max threads to 0 is enough to stop all workers.

Message-Id: <20220514065012.1149539-4-pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
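As a rough illustration of why no separate flag is needed, here is a hedged pthread sketch (not the QEMU implementation; the structure is simplified and the names are only loosely modelled on this file). The worker's exit condition is simply "the pool has more threads than it is allowed", so shutdown degenerates to setting the limit to zero and waking everyone.

    /* Simplified sketch, not the QEMU implementation. */
    #include <pthread.h>

    struct pool {
        pthread_mutex_t lock;
        pthread_cond_t request_cond;
        int cur_threads;
        int max_threads;
    };

    static void *worker(void *opaque)
    {
        struct pool *p = opaque;

        pthread_mutex_lock(&p->lock);
        while (p->cur_threads <= p->max_threads) {
            /* ... dequeue and run requests, or wait for more work ... */
            pthread_cond_wait(&p->request_cond, &p->lock);
        }
        p->cur_threads--;
        pthread_cond_broadcast(&p->request_cond);  /* let pool_free() notice */
        pthread_mutex_unlock(&p->lock);
        return NULL;
    }

    static void pool_free(struct pool *p, pthread_t tid)
    {
        pthread_mutex_lock(&p->lock);
        p->max_threads = 0;                        /* no "stopping" flag needed */
        pthread_cond_broadcast(&p->request_cond);
        while (p->cur_threads > 0) {
            pthread_cond_wait(&p->request_cond, &p->lock);
        }
        pthread_mutex_unlock(&p->lock);
        pthread_join(tid, NULL);
    }

    int main(void)
    {
        struct pool p = {
            .lock = PTHREAD_MUTEX_INITIALIZER,
            .request_cond = PTHREAD_COND_INITIALIZER,
            .cur_threads = 1,
            .max_threads = 4,
        };
        pthread_t tid;

        pthread_create(&tid, NULL, worker, &p);
        pool_free(&p, tid);
        return 0;
    }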
2022-05-25  thread-pool: replace semaphore with condition variable  (Paolo Bonzini)
Since commit f9fc8932b1 ("thread-posix: remove the posix semaphore support", 2022-04-06) QemuSemaphore has its own mutex and condition variable; this adds unnecessary overhead on I/O with small block sizes.

Check the QTAILQ directly instead of adding the indirection of a semaphore's count. Using a semaphore has not been necessary since qemu_cond_timedwait was introduced; the new code has to be careful about spurious wakeups, but it is simpler: for example, thread_pool_cancel does not have to worry about synchronizing the semaphore count with the number of elements of pool->request_list.

Note that the return value of qemu_cond_timedwait (0 for timeout, 1 for signal or spurious wakeup) is different from that of qemu_sem_timedwait (-1 for timeout, 0 for success).

Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Message-Id: <20220514065012.1149539-3-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
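For readers unfamiliar with the idiom, here is a small standalone pthread sketch of the waiting scheme described above; it is not the QEMU code (QEMU uses its qemu_cond_timedwait wrapper, and the request list is reduced to a counter here). The worker waits directly on the "work is queued" predicate and re-checks it after every wakeup, which is what makes spurious wakeups harmless.

    /* Standalone sketch of "wait on the predicate, tolerate spurious wakeups";
     * not the QEMU code. */
    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t request_cond = PTHREAD_COND_INITIALIZER;
    static int queued_requests;  /* stand-in for !QTAILQ_EMPTY(&pool->request_list) */

    /* Returns true if a request was taken, false after a 10s idle timeout. */
    static bool wait_for_request(void)
    {
        struct timespec deadline;
        bool timed_out = false;

        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 10;

        pthread_mutex_lock(&lock);
        while (queued_requests == 0 && !timed_out) {
            /* A spurious wakeup just re-evaluates the predicate above. */
            timed_out = pthread_cond_timedwait(&request_cond, &lock,
                                               &deadline) == ETIMEDOUT;
        }
        if (queued_requests > 0) {
            queued_requests--;           /* "pop" the request */
            pthread_mutex_unlock(&lock);
            return true;
        }
        pthread_mutex_unlock(&lock);
        return false;
    }

    int main(void)
    {
        pthread_mutex_lock(&lock);
        queued_requests++;               /* pretend a request was submitted */
        pthread_cond_signal(&request_cond);
        pthread_mutex_unlock(&lock);

        return wait_for_request() ? 0 : 1;
    }

Note the convention difference the commit message warns about: the pthread call reports timeout through an error code, while qemu_cond_timedwait reports it through its return value, so callers converted from qemu_sem_timedwait need their checks adjusted.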
2022-05-25  thread-pool: optimize scheduling of completion bottom half  (Paolo Bonzini)
The completion bottom half was scheduled within the pool->lock critical section. That actually results in worse performance, because the worker thread can run its own small critical section and go to sleep before the bottom half starts running.

Note that this simple change does not produce an improvement without changing the thread pool QemuSemaphore to a condition variable.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Message-Id: <20220514065012.1149539-2-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
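A paraphrased sketch of the reordering follows. The helper name finish_request is hypothetical and the field names are only approximations of this file's ThreadPoolElement; see the real worker_thread() for the actual shape.

    /* Sketch only; not the exact hunk from this commit. */
    static void finish_request(ThreadPool *pool, ThreadPoolElement *req, int ret)
    {
        qemu_mutex_lock(&pool->lock);
        req->ret = ret;                  /* field names approximate */
        req->state = THREAD_DONE;
        qemu_mutex_unlock(&pool->lock);

        /* Before this commit, qemu_bh_schedule() ran inside the critical
         * section above; moving it after the unlock keeps the worker's
         * critical section as short as possible. */
        qemu_bh_schedule(pool->completion_bh);
    }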
2022-05-09  util/event-loop-base: Introduce options to set the thread pool size  (Nicolas Saenz Julienne)
The thread pool regulates itself: when idle, it kills threads until empty; when in demand, it creates new threads until full. This behaviour doesn't play well with latency-sensitive workloads where the price of creating a new thread is too high. For example, when paired with qemu's '-mlock', or when using safety features like SafeStack, creating a new thread has been measured to take multiple milliseconds.

In order to mitigate this, let's introduce a new 'EventLoopBase' property to set the thread pool size. The threads will be created during the pool's initialization or upon updating the property's value, remain available during its lifetime regardless of demand, and be destroyed upon freeing it. A properly characterized workload will then be able to configure the pool to avoid any latency spikes.

Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20220425075723.20019-4-nsaenzju@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
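For context, the knobs this series adds are exposed as object properties. To the best of my recollection they ended up named thread-pool-min and thread-pool-max on the iothread (and main-loop) objects rather than a single "size" property, so treat the invocation below as an illustrative assumption and check the QAPI schema (qapi/qom.json) for the exact spelling in your QEMU version.

    # Assumed property names (thread-pool-min / thread-pool-max); verify
    # against qapi/qom.json before relying on them.
    qemu-system-x86_64 -nodefaults -display none \
        -object iothread,id=iothread0,thread-pool-min=8,thread-pool-max=8

Setting min equal to max pins the pool at a fixed number of pre-created threads, which is the behaviour the commit message describes for latency-sensitive workloads.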
2020-05-04  lockable: replaced locks with lock guard macros where appropriate  (Daniel Brodsky)
- ran regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
- replaced result with QEMU_LOCK_GUARD if all unlocks at function end
- replaced result with WITH_QEMU_LOCK_GUARD if unlock not at end

Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-id: 20200404042108.389635-3-dnbrdsky@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
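The conversion that regexp hunts for looks roughly like the hedged example below (a hypothetical function and struct, not a hunk from this commit). QEMU_LOCK_GUARD and WITH_QEMU_LOCK_GUARD come from include/qemu/lockable.h and release the lock automatically when the scope is left.

    /* Hypothetical example, not taken from this commit. */
    #include "qemu/osdep.h"
    #include "qemu/thread.h"
    #include "qemu/lockable.h"

    typedef struct Counter {
        QemuMutex lock;
        int value;
        bool frozen;
    } Counter;

    /* Before: every early return needs its own unlock. */
    static int counter_bump_before(Counter *c)
    {
        qemu_mutex_lock(&c->lock);
        if (c->frozen) {
            qemu_mutex_unlock(&c->lock);
            return -1;
        }
        c->value++;
        qemu_mutex_unlock(&c->lock);
        return 0;
    }

    /* After: the guard unlocks on any exit from the function... */
    static int counter_bump_after(Counter *c)
    {
        QEMU_LOCK_GUARD(&c->lock);
        if (c->frozen) {
            return -1;
        }
        c->value++;
        return 0;
    }

    /* ...and WITH_QEMU_LOCK_GUARD() limits the critical section to a block
     * when code after the unlock must keep running. */
    static int counter_read(Counter *c)
    {
        int val;

        WITH_QEMU_LOCK_GUARD(&c->lock) {
            val = c->value;
        }
        return val;
    }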
2019-06-12  Include qemu-common.h exactly where needed  (Markus Armbruster)
No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-5-armbru@redhat.com>
[Rebased with conflicts resolved automatically, except for include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and net/tap-bsd.c fixed up]
2017-03-17  thread-pool: add missing qemu_bh_cancel in completion function  (Peter Lieven)
commit 3c80ca15 fixed a deadlock scenario with nested aio_poll invocations. However, the rescheduling of the completion BH introduced unnecessary spinning in the main-loop. On very fast file backends this can even lead to the "WARNING: I/O thread spun for 1000 iterations" message popping up.

Callgrind reports about 3-4% fewer instructions with this patch when running qemu-img bench on a ramdisk-based VMDK file.

Fixes: 3c80ca158c96ff902a30883a8933e755988948b1
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
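The resulting shape of the completion path, heavily paraphrased (pop_completed_request() is a made-up helper standing in for the real list walk, and field names are approximate; do not read this as the actual hunk): the BH re-arms itself before each callback so a nested aio_poll() inside the callback cannot get stuck, and cancels that re-arm afterwards because the loop re-scans for further completions itself, which is what stops the needless extra BH iterations.

    /* Paraphrased sketch; pop_completed_request() is hypothetical. */
    static void thread_pool_completion_sketch(ThreadPool *pool)
    {
        ThreadPoolElement *elem;

        while ((elem = pop_completed_request(pool)) != NULL) {
            /* Keep the deadlock fix from commit 3c80ca15: if the callback
             * runs a nested aio_poll(), a pending BH guarantees progress. */
            qemu_bh_schedule(pool->completion_bh);

            elem->common.cb(elem->common.opaque, elem->ret);

            /* New in this commit: the loop re-scans anyway, so drop the
             * pending BH instead of letting it spin the main loop one
             * extra time per completion. */
            qemu_bh_cancel(pool->completion_bh);
        }
    }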
2017-02-21  block: explicitly acquire aiocontext in aio callbacks that need it  (Paolo Bonzini)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-16-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2017-02-21  block: explicitly acquire aiocontext in bottom halves that need it  (Paolo Bonzini)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170213135235.12274-15-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
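Together with the previous commit, the pattern these two changes push into callers looks like the hedged fragment below: a hypothetical bottom half, not a specific hunk, showing the explicit acquire/release pair that AIO callbacks and bottom halves now do for themselves using the aio_context_acquire()/aio_context_release() API that existed at the time.

    /* Illustrative only; not a hunk from these commits. */
    static void completion_bh(void *opaque)
    {
        ThreadPool *pool = opaque;

        /* The BH no longer assumes its caller holds the AioContext. */
        aio_context_acquire(pool->ctx);
        /* ... invoke request callbacks that expect the context to be held ... */
        aio_context_release(pool->ctx);
    }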
2017-02-21  block: move AioContext, QEMUTimer, main-loop to libqemuutil  (Paolo Bonzini)
AioContext is fairly self-contained; the only dependency is QEMUTimer, but that in turn doesn't need anything else. So move them out of block-obj-y to avoid introducing a dependency from io/ to block-obj-y.

main-loop and its dependency iohandler also need to be moved, because later in this series io/ will call iohandler_get_aio_context.

[Changed copyright "the QEMU team" to "other QEMU contributors" as suggested by Daniel Berrange and agreed by Paolo. --Stefan]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170213135235.12274-2-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>