| author | Paolo Bonzini <pbonzini@redhat.com> | 2015-06-18 18:47:19 +0200 |
|---|---|---|
| committer | Paolo Bonzini <pbonzini@redhat.com> | 2015-07-01 15:45:50 +0200 |
| commit | afbe70535ff1a8a7a32910cc15ebecc0ba92e7da (patch) | |
| tree | 1e12a91857ba9ce038e7c1c05fd901add373b36b /cpus.c | |
| parent | 2e7f7a3c86f884a77296a137b7c730a4d580c5c9 (diff) | |
main-loop: introduce qemu_mutex_iothread_locked
This function will be used to avoid recursively acquiring the iothread lock
(the Big QEMU Lock, BQL) whenever address_space_rw/ld*/st* are called with
the BQL already held, which is almost always the case.
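The intended call pattern looks roughly like this. A minimal sketch, not
code from this commit, using a hypothetical helper name do_mmio_access (the
real callers are the address_space_* functions):

```c
#include "qemu/main-loop.h"   /* qemu_mutex_*_iothread() declarations */

/* Take the BQL only if this thread does not already hold it, and
 * release it only if it was taken here.  This is the pattern that
 * avoids recursive locking. */
static void do_mmio_access(void)
{
    bool release_lock = false;

    if (!qemu_mutex_iothread_locked()) {
        qemu_mutex_lock_iothread();
        release_lock = true;
    }

    /* ... perform the device access under the BQL ... */

    if (release_lock) {
        qemu_mutex_unlock_iothread();
    }
}
```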
Tracking whether the iothread lock is owned is very cheap (a TLS variable
suffices), but it requires some care: the lock must now always be taken
through qemu_mutex_lock_iothread(), which was not previously required.
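A sketch of the failure mode being guarded against (illustrative only;
qemu_global_mutex is file-local to cpus.c):

```c
/* Taking the underlying mutex directly bypasses the TLS flag, so the
 * new predicate gives a stale answer. */
static void broken_direct_lock(void)
{
    qemu_mutex_lock(&qemu_global_mutex);    /* flag not updated */
    assert(!qemu_mutex_iothread_locked());  /* passes, yet the BQL is held! */
    qemu_mutex_unlock(&qemu_global_mutex);
}
```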
Outside TCG mode this is not a problem. In TCG mode, we need to be careful
to skip the "prod the VCPU out of compiled code" step when the lock is
requested from within a VCPU thread: a VCPU thread asking for the lock is,
by definition, not stuck in generated code, so the kick is unnecessary.
This is easily done with a check on current_cpu, i.e. qemu_in_vcpu_thread().
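Sketched in isolation, the guard described above looks like this (the exact
code in QEMU differs; qemu_cpu_kick_thread is assumed here as the kick
primitive):

```c
/* Illustrative only: only prod a VCPU out of compiled code when the
 * lock is being requested from outside a VCPU thread. */
if (tcg_enabled() && !qemu_in_vcpu_thread()) {
    qemu_cpu_kick_thread(first_cpu);    /* prod the VCPU out of compiled code */
}
```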
Hopefully, multithreaded TCG will get rid of the whole logic to kick
VCPUs whenever an I/O event occurs!
Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-3-git-send-email-pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'cpus.c')

| -rw-r--r-- | cpus.c | 9 |
|---|---|---|

1 file changed, 9 insertions, 0 deletions
```diff
@@ -1146,6 +1146,13 @@ bool qemu_in_vcpu_thread(void)
     return current_cpu && qemu_cpu_is_self(current_cpu);
 }
 
+static __thread bool iothread_locked = false;
+
+bool qemu_mutex_iothread_locked(void)
+{
+    return iothread_locked;
+}
+
 void qemu_mutex_lock_iothread(void)
 {
     atomic_inc(&iothread_requesting_mutex);
@@ -1164,10 +1171,12 @@ void qemu_mutex_lock_iothread(void)
         atomic_dec(&iothread_requesting_mutex);
         qemu_cond_broadcast(&qemu_io_proceeded_cond);
     }
+    iothread_locked = true;
 }
 
 void qemu_mutex_unlock_iothread(void)
 {
+    iothread_locked = false;
     qemu_mutex_unlock(&qemu_global_mutex);
 }
```