This new header file provides heavy-weight "global" memory barriers that
enforce memory ordering on each running thread belonging to the current
process. For now, use a dummy implementation that issues memory barriers
on both sides (matching what QEMU has been doing so far).
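As a sketch, such a dummy implementation can simply map both new
barriers to QEMU's existing full barrier (the smp_mb_global() and
smp_mb_placeholder() names are the ones mentioned in the neighbouring
entry of this log; the exact macro bodies are an assumption):

    /* Dummy fallback: issue a real memory barrier on both sides of the
     * pair, matching what QEMU has been doing so far.  smp_mb() is the
     * existing full barrier from qemu/atomic.h. */
    #define smp_mb_global()        smp_mb()
    #define smp_mb_placeholder()   smp_mb()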
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Prepare for introducing smp_mb_placeholder() and smp_mb_global().
The new smp_mb() in synchronize_rcu() is not strictly necessary, since
the first atomic_mb_set for rcu_gp_ctr provides the required ordering.
However, synchronize_rcu is not performance critical, and it *will* be
necessary to introduce an smp_mb_global() before calling wait_for_readers().
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
There are some issues in glibc's allocation/free mechanism for small
chunks of memory: when QEMU frequently allocates and frees small
chunks, glibc does not serve them from its free lists but keeps
allocating from the OS, which makes the heap grow larger and larger.
This patch introduces a call to malloc_trim(), which releases free
heap memory back to the OS when the call_rcu thread finds no pending
RCU callbacks in its loop.
malloc_trim() can be enabled or disabled with --enable-malloc-trim /
--disable-malloc-trim on the QEMU configure command line; it is
enabled by default when building against glibc.
Below are test results from the smaps file.
(1)without patch
55f0783e1000-55f07992a000 rw-p 00000000 00:00 0 [heap]
Size: 21796 kB
Rss: 14260 kB
Pss: 14260 kB
(2)with patch
55cc5fadf000-55cc61008000 rw-p 00000000 00:00 0 [heap]
Size: 21668 kB
Rss: 6940 kB
Pss: 6940 kB
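As a rough sketch of where the trim hooks in (rcu_call_ready_event
follows util/rcu.c; the CONFIG_MALLOC_TRIM guard and the 4 MiB
argument are illustrative assumptions, not necessarily the committed
values):

    /* Just before the call_rcu thread blocks because no RCU callbacks
     * are pending, return free heap memory to the OS.  malloc_trim()
     * is declared in <malloc.h> on glibc. */
    if (n == 0) {                      /* no rcu call during this loop */
    #if defined(CONFIG_MALLOC_TRIM)
        malloc_trim(4 * 1024 * 1024);  /* keep at most 4 MiB of slack */
    #endif
        qemu_event_wait(&rcu_call_ready_event);
    }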
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <1513775806-19779-1-git-send-email-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
This reverts commit a59629fcc6f603e19b516dc08f75334e5c480bd0.
This is not needed anymore because the IOThread mutex is not
"magic" anymore (need not kick the CPU thread)and also because
fork callbacks are only enabled at the very beginning of
QEMU's execution.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Because of -daemonize, system mode QEMU sometimes needs to fork() and
keep RCU enabled in the child. However, there is a possible deadlock
with synchronize_rcu:
- the CPU thread is inside a RCU critical section and wants to take
the BQL in order to do MMIO
- the monitor thread, which owns the BQL, calls rcu_init_lock
which tries to take the rcu_sync_lock
- the call_rcu thread has taken rcu_sync_lock in synchronize_rcu, but
synchronize_rcu needs the CPU thread to end the critical section
before returning.
This cannot happen for user-mode emulation, because it does not have
a BQL.
To fix it, assume that system mode QEMU only forks in preparation for
exec (except when daemonizing) and disable pthread_atfork as soon as
the double fork has happened.
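One possible shape for such a fix, sketched under the assumption that
a flag simply turns the fork callbacks into no-ops (rcu_disable_atfork()
and the flag are hypothetical names; rcu_init_lock, rcu_sync_lock and
rcu_registry_lock are the names used elsewhere in this log):

    /* Once the callbacks are disabled, the prepare callback no longer
     * takes rcu_sync_lock, so the monitor thread cannot block behind
     * the call_rcu thread while holding the BQL. */
    static bool rcu_atfork_enabled = true;

    void rcu_disable_atfork(void)
    {
        /* Called right after the -daemonize double fork; any later
         * fork is assumed to be followed by exec(). */
        rcu_atfork_enabled = false;
    }

    static void rcu_init_lock(void)  /* pthread_atfork prepare callback */
    {
        if (!rcu_atfork_enabled) {
            return;
        }
        qemu_mutex_lock(&rcu_sync_lock);
        qemu_mutex_lock(&rcu_registry_lock);
    }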
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Thanks to the acquire semantics of qemu_event_reset and qemu_event_wait,
some memory barriers can be removed.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1454089805-5470-6-git-send-email-peter.maydell@linaro.org
|
This reverts commit 5243722376873a48e9852a58b91f4d4101ee66e4.
The patch forgot about rcu_sync_lock and was committed by mistake.
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
We were unlocking this lock after fork, which is wrong since
only the thread that holds a mutex is allowed to unlock it.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1440375847-17603-9-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
If rcu_(un)register_thread() is called while synchronize_rcu() is
running, it has to wait for synchronize_rcu() to finish. But while
synchronize_rcu() is only waiting for readers, the thread registry can
safely be modified.
The lock rcu_gp_lock was also used to ensure that synchronize_rcu() is
not executed by more than one thread at the same time. Add a new mutex,
rcu_sync_lock, for that purpose, and rename rcu_gp_lock to
rcu_registry_lock. Release rcu_registry_lock while synchronize_rcu()
waits for readers.
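The resulting structure of synchronize_rcu(), roughly (a sketch that
follows the description above; the grace-period counter update is
elided, and wait_for_readers() is the helper mentioned elsewhere in
this log):

    void synchronize_rcu(void)
    {
        /* Serialize concurrent synchronize_rcu() callers. */
        qemu_mutex_lock(&rcu_sync_lock);
        /* Protect the thread registry while we look at it. */
        qemu_mutex_lock(&rcu_registry_lock);

        if (!QLIST_EMPTY(&registry)) {
            /* wait_for_readers() drops and re-takes rcu_registry_lock
             * whenever it is merely waiting, so rcu_(un)register_thread()
             * can make progress in the meantime. */
            wait_for_readers();
        }

        qemu_mutex_unlock(&rcu_registry_lock);
        qemu_mutex_unlock(&rcu_sync_lock);
    }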
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Message-Id: <55B59652.4090503@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Otherwise, grace periods are detected too early!
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
If QEMU forks after the CPU threads have been created, qemu_mutex_lock_iothread
will not be able to do qemu_cpu_kick_thread. There is no solution other than
assuming that forks after the CPU threads have been created will end up in an
exec. Forks before the CPU threads have been created (such as -daemonize)
have to call rcu_after_fork manually.
Notably, the oxygen theme for GTK+ forks and shows a "No such process" error
without this patch.
This patch can be reverted once the iothread loses the "kick the TCG thread"
magic.
User-mode emulation does not use the iothread, so it can also call
rcu_after_fork.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
After forking, only the calling thread is duplicated in the child process.
The call_rcu thread has to be recreated in the child. Exploit the fact
that only one thread exists (same as when constructors run), and just redo
the entire initialization to ensure the threads are in the proper state.
The only additional things to do are emptying the list of threads
registered with RCU, and unlocking the lock that was taken in the prepare
callback (implementations are allowed to fail pthread_mutex_init()
if the mutex is still locked).
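A rough sketch of the child-side callback this describes (assuming the
registry list and lock names used elsewhere in this log, plus a helper
named rcu_init_complete() for the one-time initialization; the exact
names are assumptions):

    static void rcu_init_child(void)
    {
        /* Unlock the lock taken in the prepare callback; the thread
         * that locked it no longer exists in the child. */
        qemu_mutex_unlock(&rcu_registry_lock);
        /* Only the forking thread survives: forget every reader that
         * was registered in the parent. */
        QLIST_INIT(&registry);
        /* Redo the entire initialization, re-creating the call_rcu
         * thread in the process. */
        rcu_init_complete();
    }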
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
This needs to go away sooner or later, but one complication is the
complex VFIO data structures that are modified in instance_finalize.
Take a shortcut for now.
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Tested-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Always process them within a short time. Even though waiting a little
is useful, it is not okay to delay e.g. qemu_opts_del forever.
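A sketch of the kind of bounded batching this implies in the call_rcu
thread loop (rcu_call_count, RCU_CALL_MIN_SIZE and rcu_call_ready_event
follow util/rcu.c; the concrete constants are illustrative rather than
the committed values):

    /* Batch callbacks to amortize grace periods, but give up after a
     * bounded number of short sleeps so that a lone callback still
     * runs within a fraction of a second. */
    int tries = 0;
    int n = atomic_read(&rcu_call_count);

    while (n == 0 || (n < RCU_CALL_MIN_SIZE && ++tries <= 5)) {
        g_usleep(10000);                  /* 10 ms per attempt */
        if (n == 0) {
            qemu_event_reset(&rcu_call_ready_event);
            n = atomic_read(&rcu_call_count);
            if (n == 0) {
                qemu_event_wait(&rcu_call_ready_event);
            }
        }
        n = atomic_read(&rcu_call_count);
    }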
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Tested-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Asynchronous callbacks provided by call_rcu are particularly important
for QEMU, because the BQL makes it hard to use synchronize_rcu.
In addition, the current RCU implementation is not particularly friendly
to multiple concurrent synchronize_rcu callers, making call_rcu even
more important.
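A minimal usage sketch of the call_rcu API (the Foo type and the helper
names are hypothetical; atomic_rcu_read()/atomic_rcu_set() and the
call_rcu() macro follow QEMU's rcu.h and atomic.h conventions, and a
single updater, e.g. under the BQL, is assumed):

    typedef struct Foo {
        struct rcu_head rcu;   /* reclaimed later by the call_rcu thread */
        int value;
    } Foo;

    static void foo_free(Foo *foo)
    {
        g_free(foo);
    }

    static void foo_replace(Foo **slot, int value)
    {
        Foo *new_foo = g_new0(Foo, 1);
        Foo *old_foo = atomic_rcu_read(slot);

        new_foo->value = value;
        atomic_rcu_set(slot, new_foo);     /* readers see old or new */
        if (old_foo) {
            /* No synchronize_rcu() needed (and thus no risk of blocking
             * while holding the BQL): the old object is reclaimed
             * asynchronously once all readers are done with it. */
            call_rcu(old_foo, foo_free, rcu);
        }
    }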
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
This includes a (mangled) copy of the liburcu code. The main changes
are: 1) removing dependencies on many other header files in liburcu; 2)
removing for simplicity the tentative busy waiting in synchronize_rcu,
which has limited performance effects; 3) replacing futexes in
synchronize_rcu with QemuEvents for Win32 portability. The API is
the same as liburcu's, so in the future it should be possible, for
example, to require liburcu on POSIX systems and use our copy only on
Windows.
Among the various versions available I chose urcu-mb, which is the
least invasive implementation even though it does not have the
fastest rcu_read_{lock,unlock} implementation. The urcu flavor can
be changed later, after benchmarking.
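For reference, a simplified sketch of the urcu-mb reader side (nesting
and the writer-wakeup path are omitted; rcu_reader stands for the
per-thread reader state and rcu_gp_ctr for the global grace-period
counter):

    static inline void rcu_read_lock(void)
    {
        /* Publish which grace period this reader started in. */
        atomic_set(&rcu_reader.ctr, atomic_read(&rcu_gp_ctr));
        smp_mb();   /* full barrier on every lock: the "mb" in urcu-mb */
    }

    static inline void rcu_read_unlock(void)
    {
        smp_mb();   /* order the critical section before clearing ctr */
        atomic_set(&rcu_reader.ctr, 0);
    }

A writer bumps rcu_gp_ctr and then waits until no registered reader
still publishes the old value, at which point the grace period has
elapsed.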
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|