author:    Roman Kagan <rkagan@virtuozzo.com>    2019-05-23 10:54:48 +0000
committer: Paolo Bonzini <pbonzini@redhat.com>   2019-08-20 17:26:22 +0200
commit:    e533f45d7dd85d6514de3f7a433f7dc4313e8f62
tree:      d9884ed20f33cec928d46d9e4381f86b36b1b84e /cpus-common.c
parent:    9e9b10c6491153b60ccfd021328f1f88e1669550
cpus-common: nuke finish_safe_work
finish_safe_work() was introduced in commit ab129972c8b41e15b0521895a46fd9c752b68a5e,
with the following motivation:
> Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with
> qemu_cpu_list_lock: together with a call to exclusive_idle (via
> cpu_exec_start/end) in cpu_list_add, this protects exclusive work
> against concurrent CPU addition and removal.
However, it seems to be redundant, because the cpu-exclusive
infrastructure provides sufficient protection against the newly added
CPU starting execution while the cpu-exclusive work is running, and the
aforementioned traversing of the cpu list is protected by
qemu_cpu_list_lock.
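To illustrate why the hot-added CPU is already safe, here is a deliberately simplified pthread model of the cpu-exclusive gate. Only the function names mirror cpus-common.c; the bodies are illustrative rather than the actual QEMU code, and the model assumes at most one thread requests exclusive work at a time:

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative model only: one global gate.  The real code in
 * cpus-common.c tracks per-CPU running state under qemu_cpu_list_lock. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int n_running;           /* CPUs between cpu_exec_start/end */
static bool exclusive_pending;  /* exclusive work requested/running */

/* Called by a vCPU before executing guest code.  A newly added CPU
 * blocks here while exclusive work is in flight, which is why the
 * extra finish_safe_work() round-trip in cpu_list_add() is redundant. */
void cpu_exec_start(void)
{
    pthread_mutex_lock(&lock);
    while (exclusive_pending) {
        pthread_cond_wait(&cond, &lock);
    }
    n_running++;
    pthread_mutex_unlock(&lock);
}

void cpu_exec_end(void)
{
    pthread_mutex_lock(&lock);
    n_running--;
    pthread_cond_broadcast(&cond);   /* may wake start_exclusive() */
    pthread_mutex_unlock(&lock);
}

/* Keep new vCPUs out, then wait until no vCPU is executing. */
void start_exclusive(void)
{
    pthread_mutex_lock(&lock);
    exclusive_pending = true;
    while (n_running > 0) {
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);
}

void end_exclusive(void)
{
    pthread_mutex_lock(&lock);
    exclusive_pending = false;
    pthread_cond_broadcast(&cond);   /* let blocked cpu_exec_start() in */
    pthread_mutex_unlock(&lock);
}
```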
Besides, this appears to be the only place where the cpu-exclusive
section is entered with the BQL taken, which has been found to trigger
an AB-BA deadlock as follows:
```
    vCPU thread                             main thread
    -----------                             -----------
async_safe_run_on_cpu(self,
                      async_synic_update)
...                                         [cpu hot-add]
process_queued_cpu_work()
  qemu_mutex_unlock_iothread()
                                            [grab BQL]
  start_exclusive()                         cpu_list_add()
  async_synic_update()                        finish_safe_work()
    qemu_mutex_lock_iothread()                  cpu_exec_start()
```
Here the vCPU thread has entered the exclusive section and blocks
acquiring the BQL, while the main thread holds the BQL and blocks in
cpu_exec_start() waiting for the exclusive section to finish, so
neither can make progress. So remove finish_safe_work(). This paves
the way to establishing a strict nesting rule of never entering the
exclusive section with the BQL taken.
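A minimal sketch of that nesting rule, assuming the caller starts out holding the BQL (run_work_exclusively() and do_exclusive_work() are hypothetical placeholders; the other calls are QEMU's existing APIs):

```c
/* QEMU APIs, declared in qemu/main-loop.h and exec/cpu-common.h. */
void qemu_mutex_lock_iothread(void);
void qemu_mutex_unlock_iothread(void);
void start_exclusive(void);
void end_exclusive(void);

/* Hypothetical placeholder for the actual exclusive work. */
void do_exclusive_work(void);

/* Hypothetical caller: drop the BQL before entering the exclusive
 * section, never the other way around. */
void run_work_exclusively(void)
{
    qemu_mutex_unlock_iothread();   /* drop the BQL first ... */
    start_exclusive();              /* ... then stop all other vCPUs */
    do_exclusive_work();
    end_exclusive();
    qemu_mutex_lock_iothread();     /* reacquire the BQL afterwards */
}
```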
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20190523105440.27045-2-rkagan@virtuozzo.com>
Diffstat (limited to 'cpus-common.c')
-rw-r--r--  cpus-common.c  8
1 file changed, 0 insertions(+), 8 deletions(-)
```diff
diff --git a/cpus-common.c b/cpus-common.c
index 3ca58c64e8..023cfebfa3 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -69,12 +69,6 @@ static int cpu_get_free_index(void)
     return cpu_index;
 }
 
-static void finish_safe_work(CPUState *cpu)
-{
-    cpu_exec_start(cpu);
-    cpu_exec_end(cpu);
-}
-
 void cpu_list_add(CPUState *cpu)
 {
     qemu_mutex_lock(&qemu_cpu_list_lock);
@@ -86,8 +80,6 @@ void cpu_list_add(CPUState *cpu)
     }
     QTAILQ_INSERT_TAIL_RCU(&cpus, cpu, node);
     qemu_mutex_unlock(&qemu_cpu_list_lock);
-
-    finish_safe_work(cpu);
 }
 
 void cpu_list_remove(CPUState *cpu)
```