path: root/stubs/iothread-lock.c
2018-08-23  qsp: track BQL callers explicitly  (Emilio G. Cota)
The BQL is acquired via qemu_mutex_lock_iothread(), which makes the profiler
assign the associated wait time (i.e. most of the BQL wait time) entirely to
that function. This loses the original call-site information and makes BQL
contention hard to diagnose. Fix it by tracking the callers explicitly.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
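A minimal sketch of the caller-tracking idea, assuming a plain pthread mutex;
the bql_lock name, the printf stand-in for the profiler, and main() are
illustrative, not QEMU's actual qsp code:

    #include <stdio.h>
    #include <pthread.h>

    static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;

    /* The impl function receives the original call site, so a profiler
     * can charge wait time to the caller rather than to the wrapper. */
    static void bql_lock_impl(const char *file, int line)
    {
        /* A real profiler would record (file, line) together with the
         * measured wait time; printing stands in for that here. */
        printf("BQL requested at %s:%d\n", file, line);
        pthread_mutex_lock(&bql);
    }

    /* Callers keep using one short name; the macro captures where the
     * lock was actually requested via __FILE__/__LINE__. */
    #define bql_lock() bql_lock_impl(__FILE__, __LINE__)

    int main(void)
    {
        bql_lock();    /* wait time is attributed to this line */
        pthread_mutex_unlock(&bql);
        return 0;
    }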
2016-02-04  stubs: Clean up includes  (Peter Maydell)
Clean up includes so that osdep.h is included first and headers which it
implies are not included manually. This commit was created with
scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1454089805-5470-3-git-send-email-peter.maydell@linaro.org
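As an illustration of the convention enforced here, a cleaned-up QEMU .c file
starts like this; the second include is a placeholder for whatever headers the
file genuinely needs:

    /* qemu/osdep.h always comes first: it pulls in the basic system
     * headers (stdbool.h, stdint.h, ...), so files do not list those
     * system headers manually. */
    #include "qemu/osdep.h"
    #include "qemu/main-loop.h"   /* illustrative; varies per file */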
2015-07-01  main-loop: introduce qemu_mutex_iothread_locked  (Paolo Bonzini)
This function will be used to avoid recursive locking of the iothread lock
whenever address_space_rw/ld*/st* are called with the BQL held, which is
almost always the case.

Tracking whether the iothread is owned is very cheap (just use a TLS variable)
but requires some care because now the lock must always be taken with
qemu_mutex_lock_iothread(). Previously this wasn't the case.

Outside TCG mode this is not a problem. In TCG mode, we need to be careful and
avoid the "prod out of compiled code" step if already in a VCPU thread. This
is easily done with a check on current_cpu, i.e. qemu_in_vcpu_thread().

Hopefully, multithreaded TCG will get rid of the whole logic to kick VCPUs
whenever an I/O event occurs!

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-3-git-send-email-pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
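A minimal sketch of the TLS ownership flag described above, assuming GCC-style
__thread; the real QEMU implementation differs in detail, and the actual mutex
operations are elided here:

    #include <stdbool.h>

    /* One flag per thread: querying it is a single thread-local load,
     * with no atomics, which is why the tracking is very cheap. */
    static __thread bool iothread_locked;

    bool qemu_mutex_iothread_locked(void)
    {
        return iothread_locked;
    }

    /* The flag stays accurate only if every acquisition goes through
     * this wrapper; hence the new rule that the lock must always be
     * taken with qemu_mutex_lock_iothread(). */
    void qemu_mutex_lock_iothread(void)
    {
        /* qemu_mutex_lock(&qemu_global_mutex) would go here. */
        iothread_locked = true;
    }

    void qemu_mutex_unlock_iothread(void)
    {
        iothread_locked = false;
        /* qemu_mutex_unlock(&qemu_global_mutex) would go here. */
    }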
2013-01-12  stubs: fully replace qemu-tool.c and qemu-user.c  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
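For a tool or user-mode binary there is no iothread to lock, so a stub like
this file reduces to no-ops plus a constant answer for the ownership query. A
plausible sketch of what stubs/iothread-lock.c provides, with the exact
contents of the real file left as an assumption:

    #include "qemu/osdep.h"
    #include "qemu/main-loop.h"

    /* No iothread exists in tools; reporting false is this sketch's
     * assumption about the ownership query's answer. */
    bool qemu_mutex_iothread_locked(void)
    {
        return false;
    }

    void qemu_mutex_lock_iothread(void)
    {
    }

    void qemu_mutex_unlock_iothread(void)
    {
    }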