path: root/cpus.c
2016-10-31  *_run_on_cpu: introduce run_on_cpu_data type  (Paolo Bonzini)
This changes the *_run_on_cpu APIs (and helpers) to pass data in a run_on_cpu_data type instead of a plain void *. This is because we sometimes want to pass a target address (target_ulong) and this fails on 32 bit hosts emulating 64 bit guests. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
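As a rough sketch of the type being introduced (field and macro names follow QEMU's definition as best I recall, so treat them as illustrative rather than a quote of the commit), the data argument becomes a small union that can carry either host-sized values or a target address:

    typedef union {
        int            host_int;
        unsigned long  host_ulong;
        void          *host_ptr;
        vaddr          target_ptr;   /* wide enough for a 64-bit target address */
    } run_on_cpu_data;

    /* Convenience constructors so callers don't have to cast through void *. */
    #define RUN_ON_CPU_HOST_PTR(p)   ((run_on_cpu_data){.host_ptr = (p)})
    #define RUN_ON_CPU_TARGET_PTR(v) ((run_on_cpu_data){.target_ptr = (v)})
    #define RUN_ON_CPU_NULL          RUN_ON_CPU_HOST_PTR(NULL)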
2016-10-31  cpus: re-factor out handle_icount_deadline  (Alex Bennée)
In preparation for adding a MTTCG thread we re-factor out a bit of what will be common code to handle the QEMU_CLOCK_VIRTUAL expiration. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-Id: <20161027151030.20863-18-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
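A minimal sketch of what the factored-out helper can look like (it assumes the existing icount/clock API; not necessarily the committed code):

    static void handle_icount_deadline(void)
    {
        if (use_icount) {
            int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);

            /* A deadline of 0 means virtual-clock timers are due right now. */
            if (deadline == 0) {
                qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
            }
        }
    }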
2016-10-31  tcg: cpus rm tcg_exec_all()  (Alex Bennée)
In preparation for multi-threaded TCG we remove tcg_exec_all and move all the CPU cycling into the main thread function. When MTTCG is enabled we shall use a separate thread function which only handles one vCPU. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-Id: <20161027151030.20863-13-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31  tcg: move tcg_exec_all and helpers above thread fn  (Alex Bennée)
This is a purely mechanical change in preparation for the upcoming refactoring. Instead of using a forward declaration for tcg_exec_all, it and the associated helper functions are moved in front of the call from qemu_tcg_cpu_thread_fn. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-Id: <20161027151030.20863-12-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-10-31  cpus: make all_vcpus_paused() return bool  (Alex Bennée)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-Id: <20161027151030.20863-2-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
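The change is small; for reference, an illustrative sketch of the post-change shape of the helper (not quoted from the commit):

    static bool all_vcpus_paused(void)
    {
        CPUState *cpu;

        CPU_FOREACH(cpu) {
            if (!cpu->stopped) {
                return false;
            }
        }
        return true;
    }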
2016-10-26  tcg: Add EXCP_ATOMIC  (Richard Henderson)
When we cannot emulate an atomic operation within a parallel context, this exception allows us to stop the world and try again in a serial context. Reviewed-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
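An illustrative sketch of how a vCPU execution loop can react to the new exception code (the helper name cpu_exec_step_atomic is an assumption about the surrounding series, not a quote of this commit):

    static void tcg_run_one_cpu(CPUState *cpu)
    {
        int r = cpu_exec(cpu);          /* run translated code */

        if (r == EXCP_ATOMIC) {
            /* Could not emulate the atomic op in a parallel context:
             * stop the world and retry it serially. */
            cpu_exec_step_atomic(cpu);
        }
    }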
2016-09-29  qemu: use bdrv_flush_all for vm_stop et al  (John Snow)
Reimplement bdrv_flush_all for vm_stop. In contrast to blk_flush_all, bdrv_flush_all does not have device model restrictions. This allows us to flush and halt unconditionally without error. This allows us to do things like migrate when we have a device with an open tray that has a node which may need to be flushed, or when there are nodes that aren't currently attached to any device and need to be flushed. Specifically, this allows us to migrate when we have a CDROM with an open tray. Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Acked-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-27  replay: allow replay stopping and restarting  (Pavel Dovgalyuk)
This patch fixes a bug with stopping and restarting replay through the monitor. Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20160926080815.6992.71818.stgit@PASHA-ISP> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: move exclusive work infrastructure from linux-user  (Paolo Bonzini)
This will serve as the base for async_safe_run_on_cpu. Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with qemu_cpu_list_lock: together with a call to exclusive_idle (via cpu_exec_start/end) in cpu_list_add, this protects exclusive work against concurrent CPU addition and removal. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: move CPU work item management to common code  (Sergey Fedorov)
Make CPU work core functions common between system and user-mode emulation. User-mode does not use run_on_cpu, so do not implement it. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1470158864-17651-10-git-send-email-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus: Rename flush_queued_work()  (Sergey Fedorov)
To avoid possible confusion, rename flush_queued_work() to process_queued_cpu_work(). Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1470158864-17651-6-git-send-email-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus: Move common code out of {async_,}run_on_cpu()  (Sergey Fedorov)
Move the code common between run_on_cpu() and async_run_on_cpu() into a new function queue_work_on_cpu(). Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1470158864-17651-4-git-send-email-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
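A hedged sketch of the shared helper (list-handling details are from memory of the cpus.c code of that era and may not match the commit exactly):

    static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
    {
        qemu_mutex_lock(&cpu->work_mutex);
        /* Append the item to the vCPU's singly linked work queue. */
        if (cpu->queued_work_first == NULL) {
            cpu->queued_work_first = wi;
        } else {
            cpu->queued_work_last->next = wi;
        }
        cpu->queued_work_last = wi;
        wi->next = NULL;
        wi->done = false;
        qemu_mutex_unlock(&cpu->work_mutex);

        qemu_cpu_kick(cpu);   /* make sure the target vCPU notices the work */
    }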
2016-09-27  cpus: pass CPUState to run_on_cpu helpers  (Alex Bennée)
CPUState is a fairly common pointer to pass to these helpers. This means if you need other arguments for the async_run_on_cpu case you end up having to do a g_malloc to stuff additional data into the routine. For the current users this isn't a massive deal but for MTTCG this gets cumbersome when the only other parameter is often an address. This adds the typedef run_on_cpu_func for helper functions which has an explicit CPUState * passed as the first parameter. All the users of run_on_cpu and async_run_on_cpu have had their helpers updated to use CPUState where available. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> [Sergey Fedorov: - eliminate more CPUState in user data; - remove unnecessary user data passing; - fix target-s390x/kvm.c and target-s390x/misc_helper.c] Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc parts) Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> (s390 parts) Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1470158864-17651-3-git-send-email-alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
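A short sketch of the new helper signature (the helper below is hypothetical, shown only to illustrate the typedef):

    /* CPUState is now passed explicitly, so the opaque data pointer is free
     * for the real payload instead of a g_malloc'd wrapper struct. */
    typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);

    /* Hypothetical helper using the new signature */
    static void do_nothing_on_cpu(CPUState *cpu, void *data)
    {
        /* runs in the context of 'cpu'; 'data' carries caller-specific state */
    }

    /* queued with:  run_on_cpu(cpu, do_nothing_on_cpu, NULL);
     * or, asynchronously:  async_run_on_cpu(cpu, do_nothing_on_cpu, NULL); */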
2016-09-15  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell)
* minor patches here and there * MTTCG: lock-free TB lookup * SCSI: bugfixes for MPTSAS, MegaSAS, LSI53c, vmw_pvscsi * buffer_is_zero rewrite (except for one patch) * chardev: qemu_chr_fe_write checks * checkpatch improvement for markdown preformatted text * default-configs cleanups * atomics cleanups # gpg: Signature made Tue 13 Sep 2016 18:14:30 BST # gpg: using RSA key 0xBFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * remotes/bonzini/tags/for-upstream: (58 commits) cutils: Add generic prefetch cutils: Add SSE4 version cutils: Add test for buffer_is_zero cutils: Remove ppc buffer zero checking cutils: Remove aarch64 buffer zero checking cutils: Rearrange buffer_is_zero acceleration cutils: Export only buffer_is_zero cutils: Remove SPLAT macro cutils: Move buffer_is_zero and subroutines to a new file ppc: do not redefine CPUPPCState x86/lapic: Load LAPIC state at post_load optionrom: do not rely on compiler's bswap optimization checkpatch: Fix whitespace checks for documentation code blocks atomics: Use __atomic_*_n() variant primitives atomics: Remove redundant barrier()'s kvm-all: drop kvm_setup_guest_memory i8257: Make device "i8257" unavailable with -device Revert "megasas: remove useless check for cmd->frame" char: convert qemu_chr_fe_write to qemu_chr_fe_write_all hw: replace most use of qemu_chr_fe_write with qemu_chr_fe_write_all ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Conflicts: cpus.c tests/Makefile.include
2016-09-13  cpus: update comments  (Cao jin)
The value returned by cpu_get_clock() has the offset added to it, so it is the time elapsed in the virtual machine while the vm is active. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com> Message-Id: <1469790338-28990-4-git-send-email-caoj.fnst@cn.fujitsu.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-13  cpus: rename local variable to meaningful one  (Cao jin)
The function actually returns a monotonic time value in nanoseconds, so the name "ticks" is not suitable. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Crosthwaite <crosthwaite.peter@gmail.com> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com> Message-Id: <1469790338-28990-3-git-send-email-caoj.fnst@cn.fujitsu.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-13  timer/cpus: fix some typos and update some comments  (Cao jin)
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-06-11  seqlock: rename write_lock/unlock to write_begin/end  (Emilio G. Cota)
It is a more appropriate name, now that the mutex embedded in the seqlock is gone. Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1465412133-3029-4-git-send-email-cota@braap.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
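A writer-side usage sketch after the rename (the external mutex below is hypothetical; the seqlock itself only lets readers detect an in-progress update):

    static void adjust_ticks_offset(int64_t delta)
    {
        qemu_mutex_lock(&writer_lock);                      /* hypothetical external lock */
        seqlock_write_begin(&timers_state.vm_clock_seqlock);
        timers_state.cpu_ticks_offset += delta;             /* the protected data */
        seqlock_write_end(&timers_state.vm_clock_seqlock);
        qemu_mutex_unlock(&writer_lock);
    }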
2016-06-11  seqlock: remove optional mutex  (Emilio G. Cota)
This option is unused; besides, it bloats the struct when not needed. Let's just let writers define their own locks elsewhere. Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1465412133-3029-3-git-send-email-cota@braap.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-30  cpu: Add a sync version of cpu_remove()  (Bharata B Rao)
This sync API will be used by the CPU hotplug code to wait for the CPU to completely get removed before flagging the failure to the device_add command. Sync version of this call is needed to correctly recover from CPU realization failures when ->plug() handler fails. Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
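A sketch of the synchronous variant: request removal, then wait until the vCPU thread has actually torn itself down. Field and condition-variable names are from memory and may not match the committed code exactly:

    void cpu_remove_sync(CPUState *cpu)
    {
        cpu_remove(cpu);                 /* flag the unplug request and kick the vCPU */
        while (cpu->created) {
            /* woken by the vCPU thread once it has exited and cleaned up */
            qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
        }
    }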
2016-05-30  cpu: Reclaim vCPU objects  (Gu Zheng)
In order to deal properly with KVM vCPUs (which cannot be removed without any protection), we do not close the KVM vcpu fd; instead we record it in a list and mark it as stopped, so that we can reuse it for a subsequent cpu hot-add request if possible. This is also the approach the kvm developers suggested: https://www.mail-archive.com/kvm@vger.kernel.org/msg102839.html Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com> Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com> Signed-off-by: Zhu Guihua <zhugh.fnst@cn.fujitsu.com> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> [- Explicit CPU_REMOVE() from qemu_kvm/tcg_destroy_vcpu() isn't needed as it is done from cpu_exec_exit() - Use iothread mutex instead of global mutex during destroy - Don't cleanup vCPU object from vCPU thread context but leave it to the callers (device_add/device_del)] Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-05-23  cpus: call the core nmi injection function  (Bandan Das)
We can call the common function here directly since x86 specific actions will be taken care of by the arch specific nmi handler Signed-off-by: Bandan Das <bsd@redhat.com> Message-Id: <1463761717-26558-4-git-send-email-bsd@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-23  cpus.c: Use pthread_sigmask() rather than sigprocmask()  (Peter Maydell)
On Linux, sigprocmask() and pthread_sigmask() are in practice the same thing (they only set the signal mask for the calling thread), but the documentation states that the behaviour of sigprocmask() in a multithreaded process is undefined. Use pthread_sigmask() instead (which is what we do in almost all places in QEMU that alter the signal mask already). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <1463420039-29761-1-git-send-email-peter.maydell@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
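A standalone example of the pattern being switched to (the function name is illustrative; it only demonstrates the API substitution):

    #include <pthread.h>
    #include <signal.h>

    /* Block SIGIO for the calling thread only. pthread_sigmask() is the
     * interface whose behaviour is actually defined in multi-threaded
     * processes; sigprocmask() is not. */
    static void block_io_signals_example(void)
    {
        sigset_t set;

        sigemptyset(&set);
        sigaddset(&set, SIGIO);
        /* was: sigprocmask(SIG_BLOCK, &set, NULL); */
        pthread_sigmask(SIG_BLOCK, &set, NULL);
    }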
2016-05-19  cpu: move exec-all.h inclusion out of cpu.h  (Paolo Bonzini)
exec-all.h contains TCG-specific definitions. It is not needed outside TCG-specific files such as translate.c, exec.c or *helper.c. One generic function had snuck into include/exec/exec-all.h; move it to include/qom/cpu.h. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19  qemu-common: push cpu.h inclusion out of qemu-common.h  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-04-05  cpus: don't use atomic_read for vm_clock_warp_start  (Alex Bennée)
As vm_clock_warp_start is a 64 bit value this causes problems for the compiler trying to come up with a suitable atomic operation on 32 bit hosts. Because the variable is protected by vm_clock_seqlock, we check its value inside a seqlock critical section. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1459780549-12942-2-git-send-email-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
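A reader-side sketch of the seqlock pattern described above (names taken from the cpus.c of that era, from memory; treat as illustrative):

    /* Retry until a consistent snapshot is observed, instead of relying on
     * a 64-bit atomic_read that 32-bit hosts cannot provide. */
    static int64_t read_vm_clock_warp_start(void)
    {
        int64_t value;
        unsigned start;

        do {
            start = seqlock_read_begin(&timers_state.vm_clock_seqlock);
            value = vm_clock_warp_start;
        } while (seqlock_read_retry(&timers_state.vm_clock_seqlock, start));

        return value;
    }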
2016-03-22  Replaced get_ticks_per_sec() by NANOSECONDS_PER_SECOND  (Rutuja Shah)
This patch replaces get_ticks_per_sec() calls with the macro NANOSECONDS_PER_SECOND. Also, as there are no callers, get_ticks_per_sec() is then removed. This replacement improves the readability and understandability of code. For example, timer_mod(fdctrl->result_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + (get_ticks_per_sec() / 50)); NANOSECONDS_PER_SECOND makes it obvious that qemu_clock_get_ns matches the unit of the expression on the right side of the plus. Signed-off-by: Rutuja Shah <rutu.shah.26@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
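For readability, the example from the message laid out as a before/after sketch (the macro value is what I recall from include/qemu/timer.h):

    #define NANOSECONDS_PER_SECOND 1000000000LL

    /* before */
    timer_mod(fdctrl->result_timer,
              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + get_ticks_per_sec() / 50);
    /* after: the constant makes it obvious both sides of the '+' are in ns */
    timer_mod(fdctrl->result_timer,
              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + NANOSECONDS_PER_SECOND / 50);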
2016-03-17  block: Use blk_{commit,flush}_all() consistently  (Max Reitz)
Replace bdrv_commit_all() and bdrv_flush_all() with their BlockBackend equivalents. Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-03-15  icount: decouple warp calls  (Pavel Dovgalyuk)
The qemu_clock_warp function is called to update the virtual clock when the CPU is sleeping. This function includes a replay checkpoint to make execution deterministic in icount mode. The record/replay module flushes the async event queue at checkpoints. Some of the events (e.g., block device operations) include interaction with hardware. E.g., the APIC polled by block devices sets one of the IRQ flags. Which flag is set depends on the currently executing thread (CPU or iothread). Therefore in replay mode we have to process the checkpoints in the same thread as they were recorded. The qemu_clock_warp function (and its checkpoint) may be called from different threads. This patch decouples the two different execution cases of this function: the call made from the iothread when the CPU is sleeping, and the call from the cpu thread to update the virtual clock. The first task is performed by the qemu_start_warp_timer function, which sets the warp timer event to the moment of the nearest pending virtual timer. The second function (qemu_account_warp_timer) is called from the cpu thread before execution of the code; it advances the virtual clock by adding the length of the period while the CPU was sleeping. Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20160310115609.4812.44986.stgit@PASHA-ISP> [Update docs. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-15  icount: remove obsolete warp call  (Pavel Dovgalyuk)
The qemu_clock_warp call in the qemu_tcg_wait_io_event function is not needed anymore, because it is called in every iteration of main_loop_wait. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20160310115603.4812.67559.stgit@PASHA-ISP> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-07  icount: possible options for sleep are on or off  (Pranith Kumar)
icount sleep takes on or off as options. A few places mention sleep=no which is not accepted. This patch corrects them. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <1456499811-16819-1-git-send-email-bobby.prani@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-02-19  qapi: Don't box branches of flat unions  (Eric Blake)
There's no reason to do two malloc's for a flat union; let's just inline the branch struct directly into the C union branch of the flat union. Surprisingly, fewer clients were actually using explicit references to the branch types in comparison to the number of flat unions thus modified. This lets us reduce the hack in qapi-types:gen_variants() added in the previous patch; we no longer need to distinguish between alternates and flat unions. The change to unboxed structs means that u.data (added in commit cee2dedb) is now coincident with random fields of each branch of the flat union, whereas beforehand it was only coincident with pointers (since all branches of a flat union have to be objects). Note that this was already the case for simple unions - but there we got lucky. Remember, visit_start_union() blindly returns true for all visitors except for the dealloc visitor, where it returns the value !!obj->u.data, and that this result then controls whether to proceed with the visit to the variant. Pre-patch, this meant that flat unions were testing whether the boxed pointer was still NULL, and thereby skipping visit_end_implicit_struct() and avoiding a NULL dereference if the pointer had not been allocated. The same was true for simple unions where the current branch had pointer type, except there we bypassed visit_type_FOO(). But for simple unions where the current branch had scalar type, the contents of that scalar meant that the decision to call visit_type_FOO() was data-dependent - the reason we got lucky there is that visit_type_FOO() for all scalar types in the dealloc visitor is a no-op (only the pointer variants had anything to free), so it did not matter whether the dealloc visit was skipped. But with this patch, we would risk leaking memory if we could skip a call to visit_type_FOO_fields() based solely on a data-dependent decision. But notice: in the dealloc visitor, visit_type_FOO() already handles a NULL obj - it was only the visit_type_implicit_FOO() that was failing to check for NULL. And now that we have refactored things to have the branch be part of the parent struct, we no longer have a separate pointer that can be NULL in the first place. So we can just delete the call to visit_start_union() altogether, and blindly visit the branch type; there is no change in behavior except to the dealloc visitor, where we now unconditionally visit the branch, but where that visit is now always safe (for a flat union, we can no longer dereference NULL, and for a simple union, visit_type_FOO() was already safely handling NULL on pointer types). Unfortunately, simple unions are not as easy to switch to unboxed layout; because we are special-casing the hidden implicit type with a single 'data' member, we really DO need to keep calling another layer of visit_start_struct(), with a second malloc; although there are some cleanups planned for simple unions in later patches. visit_start_union() and gen_visit_implicit_struct() are now unused. Drop them. Note that after this patch, the only remaining use of visit_start_implicit_struct() is for alternate types; the next patch will do further cleanup based on that fact. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <1455778109-6278-14-git-send-email-eblake@redhat.com> [Dead code deletion squashed in, commit message updated accordingly] Signed-off-by: Markus Armbruster <armbru@redhat.com>
2016-02-08  qapi: Fix compilation failure on MIPS and SPARC  (Eric Blake)
Commit 86f4b687 broke compilation on MIPS and SPARC, which have a preprocessor pollution of '#define mips 1' and '#define sparc 1', respectively. Treat it the same way as we do for the pollution with 'unix', so that QMP remains backwards compatible and only the C code needs to use the alternative 'q_mips', 'q_sparc' spelling. CC: James Hogan <james.hogan@imgtec.com> CC: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Eric Blake <eblake@redhat.com> Tested-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Markus Armbruster <armbru@redhat.com>
2016-01-29  exec: Clean up includes  (Peter Maydell)
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1453832250-766-4-git-send-email-peter.maydell@linaro.org
2016-01-26  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell)
* chardev support for TLS and leak fix * NBD fix from Denis * condvar fix from Dave * kvm_stat and dump-guest-memory almost rewrite * mem-prealloc fix from Luiz * manpage style improvement # gpg: Signature made Tue 26 Jan 2016 14:58:18 GMT using RSA key ID 78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" * remotes/bonzini/tags/for-upstream: (49 commits) scripts/dump-guest-memory.py: Fix module docstring scripts/dump-guest-memory.py: Introduce multi-arch support scripts/dump-guest-memory.py: Cleanup functions scripts/dump-guest-memory.py: Improve python 3 compatibility scripts/dump-guest-memory.py: Make methods functions scripts/dump-guest-memory.py: Move constants to the top nbd: add missed aio_context_acquire in nbd_export_new memory: exit when hugepage allocation fails if mem-prealloc cpus: use broadcast on qemu_pause_cond scripts/kvm/kvm_stat: Add optparse description scripts/kvm/kvm_stat: Add interactive filtering scripts/kvm/kvm_stat: Fixup filtering scripts/kvm/kvm_stat: Fix rlimit for unprivileged users scripts/kvm/kvm_stat: Read event values as u64 scripts/kvm/kvm_stat: Cleanup and pre-init perf_event_attr scripts/kvm/kvm_stat: Fix output formatting scripts/kvm/kvm_stat: Make tui function a class scripts/kvm/kvm_stat: Remove unneeded X86_EXIT_REASONS scripts/kvm/kvm_stat: Group arch specific data scripts/kvm/kvm_stat: Cleanup of Event class ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-01-26  cpus: use broadcast on qemu_pause_cond  (Dr. David Alan Gilbert)
Jiri saw a hang on pause_all_vcpus called from postcopy_start, where the cpus are all apparently stopped ('stopped' flag set) but pause_all_vcpus is still stuck in a cond_wait on qemu_pause_cond. We suspect this happens if qmp_stop is called at about the same time as the postcopy code calls pause_all_vcpus; although both should hold the main lock, Paolo spotted that the cond_wait unlocks the global lock, so perhaps they both end up waiting at the same time. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reported-by: Jiri Denemark <jdenemar@redhat.com> Message-Id: <1453716498-27238-1-git-send-email-dgilbert@redhat.com> Cc: qemu-stable@nongnu.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
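A sketch of the fix under the scenario described above (the helper below is hypothetical; the real change is only the signal-to-broadcast switch):

    /* With two waiters (e.g. qmp_stop and postcopy both inside
     * pause_all_vcpus), qemu_cond_signal may wake only one of them,
     * so wake every waiter when a vCPU reports that it has paused. */
    static void report_vcpu_paused(CPUState *cpu)   /* hypothetical helper */
    {
        cpu->stopped = true;
        qemu_cond_broadcast(&qemu_pause_cond);      /* was: qemu_cond_signal() */
    }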
2016-01-21  qom/cpu: Add MemoryRegion property  (Peter Crosthwaite)
Add a MemoryRegion property, which if set is used to construct the CPU's initial (default) AddressSpace. Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com> [PMM: code is moved from qom/cpu.c to exec.c to avoid having to make qom/cpu.o be a non-common object file; code to use the MemoryRegion and to default it to system_memory added.] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-01-21  exec.c: Allow target CPUs to define multiple AddressSpaces  (Peter Maydell)
Allow multiple calls to cpu_address_space_init(); each call adds an entry to the cpu->ases array at the specified index. It is up to the target-specific CPU code to actually use these extra address spaces. Since this multiple AddressSpace support won't work with KVM, add an assertion to avoid confusing failures. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-01-21  exec.c: Don't set cpu->as until cpu_address_space_init  (Peter Maydell)
Rather than setting cpu->as unconditionally in cpu_exec_init (and then having target-i386 override this later), don't set it until the first call to cpu_address_space_init. This requires us to initialise the address space for both TCG and KVM (KVM doesn't need the AS listener but it does require cpu->as to be set). For target CPUs which don't set up any address spaces (currently everything except i386), add the default address_space_memory in qemu_init_vcpu(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2015-12-17  cpu: Convert CpuInfo into flat union  (Eric Blake)
The CpuInfo struct is used only by the 'query-cpus' output command, so we are free to modify it by adding fields (clients are already supposed to ignore unknown output fields), or by changing optional members to mandatory, while still keeping QMP wire compatibility with older versions of qemu. When qapi type CpuInfo was originally created for 0.14, we had no notion of a flat union, and instead just listed a bunch of optional fields with documentation about the mutually-exclusive choice of which instruction pointer field(s) would be provided for a given architecture. But now that we have flat unions and introspection, it is better to segregate off which fields will be provided according to the actual architecture. With this in place, we no longer need the fields to be optional, because the choice of the new 'arch' discriminator serves that role. This has an additional benefit: the old all-in-one struct was the only place in the code base that had a case-sensitive naming of members 'pc' vs. 'PC'. Separating these spellings into different branches of the flat union will allow us to add restrictions against future case-insensitive collisions, since that is generally a poor interface practice. Signed-off-by: Eric Blake <eblake@redhat.com> Message-Id: <1447836791-369-25-git-send-email-eblake@redhat.com> [Spelling of CPUInfo{SPARC,PPC,MIPS} fixed] Signed-off-by: Markus Armbruster <armbru@redhat.com>
2015-11-26  call bdrv_drain_all() even if the vm is stopped  (Wen Congyang)
There are still I/O operations when the vm is stopped; for example, stop the vm and then do block migration. In this case, we don't drain all I/O operations, and may hit the following problem: qemu-system-x86_64: migration/block.c:731: block_save_complete: Assertion `block_mig_state.submitted == 0' failed. Signed-off-by: Wen Congyang <wency@cn.fujitsu.com> Message-Id: <564EE92E.4070701@cn.fujitsu.com> Cc: qemu-stable@nongnu.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-06  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream-replay' into staging  (Peter Maydell)
So here it is, let's see what happens. # gpg: Signature made Fri 06 Nov 2015 09:30:34 GMT using RSA key ID 78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" * remotes/bonzini/tags/for-upstream-replay: replay: recording of the user input replay: command line options replay: replay blockers for devices replay: initialization and deinitialization replay: ptimer bottom halves: introduce bh call function replay: checkpoints icount: improve counting for record/replay replay: shutdown event replay: recording and replaying clock ticks replay: asynchronous events infrastructure replay: interrupts and exceptions cpu: replay instructions sequence cpu-exec: allow temporary disabling icount replay: introduce icount event replay: introduce mutex to protect the replay log replay: internal functions for replay log replay: global variables and function stubs Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-11-06  replay: checkpoints  (Pavel Dovgalyuk)
This patch introduces checkpoints that synchronize the cpu thread and the iothread. When a checkpoint is reached in the code, all asynchronous events from the queue are executed. Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20150917162444.8676.52916.stgit@PASHA-ISP.def.inno> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
2015-11-06  icount: improve counting for record/replay  (Pavel Dovgalyuk)
The icount_warp_rt function is called by qemu_clock_warp and as the callback of the icount_warp timer. This patch adds a call to qemu_clock_warp in the main_loop_wait function, because the icount warp may be missed in record/replay mode when the CPU is sleeping. This patch also stops calling this function from the timer, because it is not needed after the modifications to main_loop_wait. Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20150917162439.8676.38290.stgit@PASHA-ISP.def.inno> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
2015-11-06  replay: recording and replaying clock ticks  (Pavel Dovgalyuk)
Clock ticks are a source of non-deterministic data for the virtual machine. This patch implements saving the clock values when they are acquired (virtual and host clocks). When replaying the execution, the corresponding values are read from the log and transferred to the module that wants to read them. Such a design requires the clock polling to be synchronized. Sometimes that is not the case - e.g. when timeouts for timer lists are checked. In this case we use a cached value of the clock and pass it to the client code. Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20150917162427.8676.36558.stgit@PASHA-ISP.def.inno> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
2015-11-05  cpu: replay instructions sequence  (Pavel Dovgalyuk)
This patch adds calls to the replay functions in the icount setup block. In record mode the number of executed instructions is written to the log. In replay mode the number of instructions to execute is taken from the replay log. When the replayed instruction counter expires, the qemu_notify_event() function is called to wake up the iothread. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20150917162405.8676.31890.stgit@PASHA-ISP.def.inno> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-11-05  Revert "Introduce cpu_clean_all_dirty"  (Liang Li)
This reverts commit de9d61e83d43be9069e6646fa9d57a3f47779d28. Now that 'cpu_clean_all_dirty' is unused, we can revert the related code. Conflicts: include/sysemu/kvm.h Signed-off-by: Liang Li <liang.z.li@intel.com> Message-Id: <1446695464-27116-3-git-send-email-liang.z.li@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-10-08  s/cpu_get_real_ticks/cpu_get_host_ticks/  (Christopher Covington)
This should help clarify the purpose of the function that returns the host system's CPU cycle count. Signed-off-by: Christopher Covington <cov@codeaurora.org> Acked-by: Paolo Bonzini <pbonzini@redhat.com> ppc portion Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-09-30  cpu: Provide vcpu throttling interface  (Jason J. Herne)
Provide a method to throttle guest cpu execution. CPUState is augmented with timeout controls and throttle start/stop functions. To throttle the guest cpu the caller simply has to call the throttle set function and provide a percentage of throttle time. Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com> Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com>
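A sketch of the throttling interface described above (signatures as I recall them from include/qom/cpu.h; treat as illustrative rather than a quote of the commit):

    /* Throttle control for all vCPUs. */
    void cpu_throttle_set(int new_throttle_pct);   /* 1..99, starts throttling */
    void cpu_throttle_stop(void);
    bool cpu_throttle_active(void);
    int  cpu_throttle_get_percentage(void);

    /* Example: ask vCPUs to sleep roughly 30% of the time, e.g. so that a
     * migration can converge:
     *     cpu_throttle_set(30);
     */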
2015-09-09  cpus: remove tcg_halt_cond and tcg_cpu_thread globals  (KONRAD Frederic)
This hides the tcg_halt_cond and tcg_cpu_thread global variables inside qemu_tcg_init_vcpu. Multi-threaded TCG will need one QemuCond and one QemuThread per virtual cpu, so it's preferable to use cpu->halt_cond and cpu->thread. Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com> Message-Id: <1439220437-23957-9-git-send-email-fred.konrad@greensocs.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>