path: root/accel

2019-07-15  kvm: Support KVM_CLEAR_DIRTY_LOG  (Peter Xu)

First, detect the interface using KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 and mark it. If enabling the new feature fails, we fall back to the old sync. Provide the log_clear() hook for the memory listeners for both address spaces of KVM (normal system memory, and SMM) and deliver the clear message to the kernel.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190603065056.25211-11-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

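As an illustration of the capability probe and fallback described above, here is a minimal standalone sketch (not QEMU's actual code; it assumes a Linux host whose kernel headers define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 and that /dev/kvm is accessible):

    /* Probe whether KVM_CLEAR_DIRTY_LOG can be used, else fall back. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) {
            perror("open /dev/kvm");
            return 1;
        }
        /* KVM_CHECK_EXTENSION on the system fd reports capability support. */
        int ret = ioctl(kvm, KVM_CHECK_EXTENSION,
                        KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
        if (ret > 0) {
            printf("manual dirty log protect supported (flags 0x%x)\n", ret);
        } else {
            printf("not supported, fall back to plain KVM_GET_DIRTY_LOG sync\n");
        }
        return 0;
    }
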
2019-07-15  kvm: Introduce slots lock for memory listener  (Peter Xu)

Introduce KVMMemoryListener.slots_lock to protect the slots inside the KVM memory listener. Currently it is close to useless because every KVM code path is still protected by the BQL. But it will start to make sense in follow-up patches, where we might do remote dirty bitmap clears and also update the per-slot cached dirty bitmap without holding the BQL. So let's prepare for it.

We could also use a per-slot lock for the above, but that seems to be overkill. Let's just use this bigger one (which covers all the slots of a single address space); it is still much smaller than the BQL.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190603065056.25211-10-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

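A hedged sketch of the locking granularity (hypothetical types, pthreads instead of QEMU's own mutex wrappers): one lock per listener covers all slots of one address space, coarser than a per-slot lock but much narrower than a global lock such as the BQL.

    #include <pthread.h>
    #include <stddef.h>

    #define MAX_SLOTS 512

    typedef struct {
        unsigned long *dirty_bmap;      /* cached dirty bitmap for this slot */
        size_t bmap_words;
    } SlotSketch;

    typedef struct {
        pthread_mutex_t slots_lock;     /* protects every slot below */
        SlotSketch slots[MAX_SLOTS];
    } ListenerSketch;

    static void listener_init(ListenerSketch *l)
    {
        pthread_mutex_init(&l->slots_lock, NULL);
    }

    /* Any update of a cached per-slot bitmap takes the listener-wide lock. */
    static void slot_mark_dirty(ListenerSketch *l, int idx, size_t word,
                                unsigned long bits)
    {
        pthread_mutex_lock(&l->slots_lock);
        l->slots[idx].dirty_bmap[word] |= bits;
        pthread_mutex_unlock(&l->slots_lock);
    }
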
2019-07-15  kvm: Persistent per kvmslot dirty bitmap  (Peter Xu)

When synchronizing the dirty bitmap from kernel KVM we do it in a per-kvmslot fashion and we allocate the userspace bitmap for every ioctl. This patch instead makes the bitmap cache persistent, so we don't need to g_malloc0() it every time.

More importantly, the cached per-kvmslot dirty bitmap will be further used when we add support for KVM_CLEAR_DIRTY_LOG: the cached bitmap is used to guarantee we won't clear any unknown dirty bits, which otherwise could be a severe data-loss issue for the migration code.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190603065056.25211-9-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

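A minimal sketch of the allocate-once pattern (hypothetical structures and names; QEMU itself uses g_malloc0() and its own KVMSlot type):

    #include <stdint.h>
    #include <stdlib.h>

    #define PAGE_SIZE_SKETCH 4096ULL
    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    typedef struct {
        uint64_t size;               /* slot size in bytes */
        unsigned long *dirty_bmap;   /* allocated once, lives as long as the slot */
    } KVMSlotSketch;

    static void slot_register(KVMSlotSketch *slot, uint64_t size)
    {
        uint64_t pages = size / PAGE_SIZE_SKETCH;
        uint64_t words = (pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

        slot->size = size;
        /* Allocate the cached bitmap once, instead of allocating and freeing
         * a temporary bitmap on every dirty-log sync ioctl. */
        slot->dirty_bmap = calloc(words, sizeof(unsigned long));
    }

    static void slot_sync_dirty_log(KVMSlotSketch *slot)
    {
        /* The kernel fills the cached bitmap in place (KVM_GET_DIRTY_LOG);
         * the cache later tells us which bits are known-dirty and therefore
         * safe to clear. */
        (void)slot;
    }
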
2019-07-15  kvm: Update comments for sync_dirty_bitmap  (Peter Xu)

The existing comment is obviously obsolete; update it.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190603065056.25211-8-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

2019-07-14  tcg: Release mmap_lock on translation fault  (Richard Henderson)

Turn helper_retaddr into a multi-state flag that may now also indicate when we're performing a read on behalf of the translator. In this case, release the mmap_lock before the longjmp back to the main cpu loop, and thereby avoid a failing assert therein.

Fixes: https://bugs.launchpad.net/qemu/+bug/1832353
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-07-14  tcg: Introduce set/clear_helper_retaddr  (Richard Henderson)

At present we have a potential error in that helper_retaddr contains data for handle_cpu_signal, but we have not ensured that those stores will be scheduled properly before the operation that may fault. It might be that these races are not in practice observable, due to our use of -fno-strict-aliasing, but better safe than sorry. Adjust all of the setters of helper_retaddr.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

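A plain-C sketch of the helper pair (hedged: names and details simplified; the essential point is the compiler barrier that keeps the flag store ordered against the access that may fault):

    #include <stdint.h>

    /* Consulted by the SIGSEGV handler to decide how to unwind. */
    static volatile uintptr_t helper_retaddr;

    static inline void set_helper_retaddr(uintptr_t ra)
    {
        helper_retaddr = ra;
        /* Compiler barrier: the store must not be sunk below the
         * possibly-faulting memory access that follows. */
        __asm__ volatile("" ::: "memory");
    }

    static inline void clear_helper_retaddr(void)
    {
        /* Barrier first, so the faulting access cannot be hoisted
         * past the point where the flag is reset. */
        __asm__ volatile("" ::: "memory");
        helper_retaddr = 0;
    }
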
2019-07-05  general: Replace global smp variables with smp machine properties  (Like Xu)

In !CONFIG_USER_ONLY mode, the context can obtain the MachineState reference either through call chains or through the unrecommended qdev_get_machine(). Where that takes less effort, a local variable of the same name is introduced at declaration time; otherwise the global is replaced on the spot if it is only used once in the context. No semantic changes.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190518205428.90532-4-like.xu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

2019-06-21  target/i386: kvm: Add support for save and restore nested state  (Liran Alon)

Kernel commit 8fcc4b5923af ("kvm: nVMX: Introduce KVM_CAP_NESTED_STATE") introduced new IOCTLs to extract and restore vCPU state related to Intel VMX & AMD SVM. Utilize these IOCTLs to add support for migration of VMs which are running nested hypervisors.

Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Maran Wilson <maran.wilson@oracle.com>
Tested-by: Maran Wilson <maran.wilson@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Message-Id: <20190619162140.133674-9-liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2019-06-21  KVM: Introduce kvm_arch_destroy_vcpu()  (Liran Alon)

Similar to how kvm_init_vcpu() calls kvm_arch_init_vcpu() to perform arch-dependent initialisation, introduce kvm_arch_destroy_vcpu() to be called from kvm_destroy_vcpu() to perform arch-dependent destruction. This was added because some architectures (such as i386) currently do not free memory that they have allocated in kvm_arch_init_vcpu().

Suggested-by: Maran Wilson <maran.wilson@oracle.com>
Reviewed-by: Maran Wilson <maran.wilson@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Message-Id: <20190619162140.133674-3-liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2019-06-21  kvm-all: Add/update fprintf's for kvm_*_ioeventfd_del  (Yury Kotov)

Signed-off-by: Yury Kotov <yury-kotov@yandex-team.ru>
Message-Id: <20190607090830.18807-1-yury-kotov@yandex-team.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2019-06-13  Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190612' into staging  (Peter Maydell)

Fix vector arithmetic right shift helpers.

# gpg: Signature made Thu 13 Jun 2019 05:10:11 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20190612:
  tcg: Fix typos in helper_gvec_sar{8,32,64}v

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2019-06-12  tcg: Fix typos in helper_gvec_sar{8,32,64}v  (Richard Henderson)

The loop is written with scalars, not vectors. Use the correct type when incrementing.

Fixes: 5ee5c14cacd
Reported-by: Laurent Vivier <lvivier@redhat.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

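To illustrate the class of bug, a hypothetical sketch (not the actual helper): stepping a scalar, byte-wise loop by the size of a vector type instead of the scalar element would silently skip most of the lanes.

    #include <stdint.h>
    #include <stddef.h>

    typedef uint64_t vec64;   /* stand-in for a host vector type */

    /* Arithmetic shift right of 8-bit lanes over an operand of oprsz bytes. */
    static void sar8_scalar_loop(int8_t *d, const int8_t *a, int shift,
                                 size_t oprsz)
    {
        /* Correct: the loop operates on scalars, so advance by one element.
         * A typo such as "i += sizeof(vec64)" would only touch every 8th lane. */
        for (size_t i = 0; i < oprsz; i += sizeof(uint8_t)) {
            d[i] = a[i] >> shift;   /* signed shift is the arithmetic one here */
        }
    }
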
2019-06-12  cputlb: cast size_t to target_ulong before using for address masks  (Alex Bennée)

While size_t is large enough to address the biggest host object, that is not the case when generating masks for 64-bit guests on 32-bit hosts. Without the cast we end up truncating the address when we fall back to our unaligned helper.

Fixes: https://bugs.launchpad.net/qemu/+bug/1831545
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Andrew Randrianasulu <randrianasulu@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

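A small standalone demonstration of the truncation (hypothetical values; target_ulong stands for a 64-bit guest address and uint32_t plays the role of size_t on a 32-bit host):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t target_ulong;           /* 64-bit guest address type */

    int main(void)
    {
        uint32_t size = 8;                   /* the role of a 32-bit size_t */
        target_ulong addr = 0x123456789ULL;

        uint32_t mask32 = ~(uint32_t)(size - 1);             /* 0xfffffff8 */
        target_ulong bad  = addr & mask32;                   /* high bits lost */
        target_ulong good = addr & ~(target_ulong)(size - 1);

        printf("bad  = 0x%llx\n", (unsigned long long)bad);  /* 0x23456788  */
        printf("good = 0x%llx\n", (unsigned long long)good); /* 0x123456788 */
        return 0;
    }
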
2019-06-12  cputlb: use uint64_t for interim values for unaligned load  (Alex Bennée)

When running on 32-bit TCG backends a wide unaligned load ends up truncating data before returning it to the guest. We specifically have the return type as uint64_t to avoid any premature truncation, so we should use the same type for the interim values.

Fixes: https://bugs.launchpad.net/qemu/+bug/1830872
Fixes: eed5664238e
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Igor Mammedov <imammedo@redhat.com>

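A hedged sketch of the failure mode: if the interim value is only 32 bits wide, the upper half of a 64-bit load is silently dropped even though the function's return type is uint64_t.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t load_ok(const uint8_t *p)
    {
        uint64_t res = 0;                    /* interim value already 64-bit */
        for (int i = 7; i >= 0; i--) {
            res = (res << 8) | p[i];
        }
        return res;
    }

    static uint64_t load_broken(const uint8_t *p)
    {
        uint32_t res = 0;                    /* interim value too narrow */
        for (int i = 7; i >= 0; i--) {
            res = (res << 8) | p[i];         /* high bytes silently dropped */
        }
        return res;                          /* only the low 32 bits survive */
    }

    int main(void)
    {
        uint8_t buf[8] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88};
        /* prints 0x8877665544332211 */
        printf("ok     = 0x%016llx\n", (unsigned long long)load_ok(buf));
        /* prints 0x0000000044332211 */
        printf("broken = 0x%016llx\n", (unsigned long long)load_broken(buf));
        return 0;
    }
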
2019-06-12  Include qemu-common.h exactly where needed  (Markus Armbruster)

No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-5-armbru@redhat.com>
[Rebased with conflicts resolved automatically, except for
include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c
block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c
target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h
target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h
target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h
target/unicore32/cpu.h target/xtensa/cpu.h;
bsd-user/main.c and net/tap-bsd.c fixed up]

2019-06-11  qemu-common: Move tcg_enabled() etc. to sysemu/tcg.h  (Markus Armbruster)

Other accelerators have their own headers: sysemu/hax.h, sysemu/hvf.h, sysemu/kvm.h, sysemu/whpx.h. Only tcg_enabled() & friends sit in qemu-common.h. This necessitates inclusion of qemu-common.h into headers, which is against the rules spelled out in qemu-common.h's file comment.

Move tcg_enabled() & friends into their own header sysemu/tcg.h, and adjust #include directives.

Cc: Richard Henderson <rth@twiddle.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-2-armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[Rebased with conflicts resolved automatically, except for accel/tcg/tcg-all.c]

2019-06-10  cpu: Move icount_decr to CPUNegativeOffsetState  (Richard Henderson)

Amusingly, we had already ignored the comment to keep this value at the end of CPUState. This restores the minimum negative offset from TCG_AREG0 for code generation.

For the couple of uses within qom/cpu.c, without NEED_CPU_H, add a pointer from the CPUState object to the IcountDecr object within CPUNegativeOffsetState.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-06-10  cpu: Replace ENV_GET_CPU with env_cpu  (Richard Henderson)

Now that we have both ArchCPU and CPUArchState, we can define this generically instead of via macro in each target's cpu.h.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-06-10  tcg: Create struct CPUTLB  (Richard Henderson)

Move all softmmu tlb data into this structure. Arrange the members so that we are able to place mask+table together and at a smaller absolute offset from ENV.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-06-10  tcg: Fold CPUTLBWindow into CPUTLBDesc  (Richard Henderson)

Both structures are allocated once per mmu_idx. There is no reason for them to be separate.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-05-22  tcg: Add support for vector bitwise select  (Richard Henderson)

This operation performs d = (b & a) | (c & ~a), and is present on a majority of host vector units. Include gvec expanders.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

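A scalar sketch of the semantics (element-wise only; the real gvec expanders operate on whole vectors):

    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise select: for each bit, take b where a is 1 and c where a is 0. */
    static uint64_t bitsel(uint64_t a, uint64_t b, uint64_t c)
    {
        return (b & a) | (c & ~a);
    }

    int main(void)
    {
        /* With mask 0xff00..., the high byte comes from b, the rest from c:
         * prints 0x1122222222222222. */
        printf("0x%016llx\n",
               (unsigned long long)bitsel(0xff00000000000000ULL,
                                          0x1111111111111111ULL,
                                          0x2222222222222222ULL));
        return 0;
    }
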
2019-05-16  Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190510' into staging  (Peter Maydell)

Add CPUClass::tlb_fill. Improve tlb_vaddr_to_host for use by ARM SVE no-fault loads.

# gpg: Signature made Fri 10 May 2019 19:48:37 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20190510: (27 commits)
  tcg: Use tlb_fill probe from tlb_vaddr_to_host
  tcg: Remove CPUClass::handle_mmu_fault
  tcg: Use CPUClass::tlb_fill in cputlb.c
  target/xtensa: Convert to CPUClass::tlb_fill
  target/unicore32: Convert to CPUClass::tlb_fill
  target/tricore: Convert to CPUClass::tlb_fill
  target/tilegx: Convert to CPUClass::tlb_fill
  target/sparc: Convert to CPUClass::tlb_fill
  target/sh4: Convert to CPUClass::tlb_fill
  target/s390x: Convert to CPUClass::tlb_fill
  target/riscv: Convert to CPUClass::tlb_fill
  target/ppc: Convert to CPUClass::tlb_fill
  target/openrisc: Convert to CPUClass::tlb_fill
  target/nios2: Convert to CPUClass::tlb_fill
  target/moxie: Convert to CPUClass::tlb_fill
  target/mips: Convert to CPUClass::tlb_fill
  target/mips: Tidy control flow in mips_cpu_handle_mmu_fault
  target/mips: Pass a valid error to raise_mmu_exception for user-only
  target/microblaze: Convert to CPUClass::tlb_fill
  target/m68k: Convert to CPUClass::tlb_fill
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2019-05-13  tcg: Add support for vector absolute value  (Richard Henderson)

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-05-13  tcg: Add gvec expanders for variable shift  (Richard Henderson)

The gvec expanders perform a modulo on the shift count. If the target requires alternate behaviour, then it cannot use the generic gvec expanders anyway, and will have to have its own custom code.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

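An illustration of the modulo semantics on the shift count (a hedged scalar sketch with an 8-bit element width; names are hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /* Variable shift with the count taken modulo the element width, so an
     * out-of-range count never invokes undefined behaviour. */
    static uint8_t shl8_mod(uint8_t x, unsigned count)
    {
        return x << (count & 7);   /* count % 8 */
    }

    int main(void)
    {
        printf("%d\n", shl8_mod(0x01, 3));    /* 8 */
        printf("%d\n", shl8_mod(0x01, 11));   /* 11 & 7 == 3, so also 8 */
        return 0;
    }
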
2019-05-10  cputlb: Do unaligned store recursion to outermost function  (Richard Henderson)

This is less tricky than for loads, because we always fall back to single byte stores to implement unaligned stores.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>

2019-05-10  cputlb: Do unaligned load recursion to outermost function  (Richard Henderson)

If we attempt to recurse from load_helper back to load_helper, even via an intermediary, we do not get all of the constants expanded away as desired. But if we recurse back to the original helper (or a shim that has a consistent function signature), the operands are folded away as desired.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>

2019-05-10  cputlb: Drop attribute flatten  (Richard Henderson)

We are going to approach this problem via __attribute__((always_inline)) instead, but the full conversion will take several steps.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>

2019-05-10  cputlb: Move TLB_RECHECK handling into load/store_helper  (Richard Henderson)

Having this in io_readx/io_writex meant that we forgot to re-compute index after tlb_fill. It also means we can use the normal aligned memory load path. It also fixes a bug in that we had cached a use of index across a tlb_fill.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>

2019-05-10  accel/tcg: demacro cputlb  (Alex Bennée)

Instead of expanding a series of macros to generate the load/store helpers we move stuff into common functions and rely on the compiler to eliminate the dead code for each variant.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>

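A hedged sketch of the idea (hypothetical names, not QEMU's helpers): one common function takes the access size as a parameter, and thin wrappers pass compile-time constants so the compiler drops the unused branches.

    #include <stdint.h>
    #include <string.h>

    /* One common helper parameterised by size; each wrapper passes a
     * compile-time constant so the dead switch arms are optimised away. */
    static inline uint64_t load_common(const void *p, unsigned size)
    {
        uint64_t val = 0;
        switch (size) {
        case 1: { uint8_t  v; memcpy(&v, p, 1); val = v; break; }
        case 2: { uint16_t v; memcpy(&v, p, 2); val = v; break; }
        case 4: { uint32_t v; memcpy(&v, p, 4); val = v; break; }
        case 8: { uint64_t v; memcpy(&v, p, 8); val = v; break; }
        }
        return val;
    }

    /* Thin wrappers replace what used to be generated by macros. */
    uint8_t  ldub_p_sketch(const void *p) { return load_common(p, 1); }
    uint32_t ldl_p_sketch(const void *p)  { return load_common(p, 4); }
    uint64_t ldq_p_sketch(const void *p)  { return load_common(p, 8); }
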
2019-05-10  tcg: Use tlb_fill probe from tlb_vaddr_to_host  (Richard Henderson)

Most of the existing users would continue around a loop which would fault the tlb entry in via a normal load/store. But for AArch64 SVE we have an existing emulation bug wherein we would mark the first element of a no-fault vector load as faulted (within the FFR, not via exception) just because we did not have its address in the TLB.

Now we can properly only mark it as faulted if there really is no valid, readable translation, while still not raising an exception. (Note that beyond the first element of the vector, the hardware may report a fault for any reason whatsoever; with at least one element loaded, forward progress is guaranteed.)

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-05-10  tcg: Remove CPUClass::handle_mmu_fault  (Richard Henderson)

This hook is now completely replaced by tlb_fill.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-05-10  tcg: Use CPUClass::tlb_fill in cputlb.c  (Richard Henderson)

We can now use the CPUClass hook instead of a named function. Create a static tlb_fill function to avoid other changes within cputlb.c. This also isolates the asserts within. Remove the named tlb_fill function from all of the targets.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-05-10  tcg: Add CPUClass::tlb_fill  (Richard Henderson)

This hook will replace the (user-only mode specific) handle_mmu_fault hook, and the (system mode specific) tlb_fill function.

The handle_mmu_fault hook was written as if there was a valid way to recover from an mmu fault, and had 3 possible return states. In reality, the only valid action is to raise an exception, return to the main loop, and deliver the SIGSEGV to the guest. Note that all of the current implementations of handle_mmu_fault for guests which support linux-user do in fact only ever return 1, which is the signal to return to the main loop.

Using the hook for system mode requires that all targets be converted, so for now the hook is (optionally) used only from user-only mode.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

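A heavily simplified sketch of the hook shape (hypothetical types and parameter list; the real hook also takes the access type and MMU index, among others):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct CPUStateSketch CPUStateSketch;

    /* Per-class hook: returns true on success.  On failure it either raises
     * the guest exception (probe == false) or just reports "no mapping". */
    typedef struct {
        bool (*tlb_fill)(CPUStateSketch *cpu, uint64_t addr, int size,
                         bool is_write, bool probe, uintptr_t retaddr);
    } CPUClassSketch;

    struct CPUStateSketch {
        const CPUClassSketch *cc;
    };

    /* Generic code calls through the hook instead of a per-target function. */
    static bool cpu_tlb_fill_sketch(CPUStateSketch *cpu, uint64_t addr, int size,
                                    bool is_write, bool probe, uintptr_t retaddr)
    {
        return cpu->cc->tlb_fill(cpu, addr, size, is_write, probe, retaddr);
    }
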
2019-05-02  accel: Remove unused AccelClass::available field  (Eduardo Habkost)

The field is not used anymore, we can remove it.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190422210448.2488-4-ehabkost@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> [on mingw64]
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

2019-05-02  qtest: Don't compile qtest accel on non-POSIX systems  (Eduardo Habkost)

qtest_available() will always return 0 on non-POSIX systems. It's simpler to just not compile the accelerator code on those systems instead of relying on the AccelClass::available function.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190422210448.2488-3-ehabkost@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> [on mingw64]
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

2019-05-02  qtest: Move accel code to accel/qtest.c  (Eduardo Habkost)

QTest has two parts: the server (-qtest) and the accelerator (-machine accel=qtest). The accelerator depends on CONFIG_POSIX due to its usage of sigwait(), but the server doesn't. Move the accel code to accel/qtest.c. Later we will disable compilation of accel/qtest.c on non-POSIX systems.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190422210448.2488-2-ehabkost@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[thuth: added fixup for MAINTAINERS file]
Signed-off-by: Thomas Huth <thuth@redhat.com>

2019-04-25  cputlb: Fix io_readx() to respect the access_type  (Shahab Vahedi)

This change adapts io_readx() to its input access_type. Currently io_readx() treats any memory access as a read, although it has an input argument "MMUAccessType access_type". This results in:

1) Calling tlb_fill() only with MMU_DATA_LOAD
2) Considering only entry->addr_read as the tlb_addr

Buglink: https://bugs.launchpad.net/qemu/+bug/1825359
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Shahab Vahedi <shahab.vahedi@gmail.com>
Message-Id: <20190420072236.12347-1-shahab.vahedi@gmail.com>
[rth: Remove assert; fix expression formatting.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-04-24  tcg: Restart after TB code generation overflow  (Richard Henderson)

If a TB generates too much code, try again with fewer insns.

Fixes: https://bugs.launchpad.net/bugs/1824853
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-04-24  tcg: Hoist max_insns computation to tb_gen_code  (Richard Henderson)

In order to handle TBs that translate to too much code, we need to place the control of the length of the translation in the hands of the code gen master loop.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2019-04-18  qom/cpu: Simplify how CPUClass::cpu_dump_state() prints  (Markus Armbruster)

The CPUClass method cpu_dump_state() takes an fprintf()-like callback and a FILE * to pass to it. Most callers pass fprintf() and stderr. log_cpu_state() passes fprintf() and qemu_log_file. hmp_info_registers() passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf().

The callback gets passed around a lot, which is tiresome. The type-punning around monitor_fprintf() is ugly.

Drop the callback, and call qemu_fprintf() instead. This also gets rid of the type-punning, since qemu_fprintf() takes NULL instead of the current monitor cast to FILE *.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-15-armbru@redhat.com>

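A sketch of the simplification pattern (hypothetical names; qemu_fprintf_sketch only imitates the idea of one printf-like helper choosing the destination, here stdout instead of the monitor):

    #include <stdarg.h>
    #include <stdio.h>

    /* Before: every dumper takes a callback plus an opaque FILE *. */
    typedef int (*fprintf_function)(FILE *f, const char *fmt, ...);

    static void dump_state_old(fprintf_function cb, FILE *f)
    {
        cb(f, "pc=0x%x\n", 0x1000);
    }

    /* After: one printf-like helper decides where the text goes, so the
     * callback/FILE pair no longer has to be threaded through every caller. */
    static int qemu_fprintf_sketch(FILE *f, const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        int ret = vfprintf(f ? f : stdout, fmt, ap);  /* monitor in real QEMU */
        va_end(ap);
        return ret;
    }

    static void dump_state_new(void)
    {
        qemu_fprintf_sketch(NULL, "pc=0x%x\n", 0x1000);
    }

    int main(void)
    {
        dump_state_old(fprintf, stderr);
        dump_state_new();
        return 0;
    }
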
2019-04-18  tcg: Simplify how dump_exec_info() prints  (Markus Armbruster)

dump_exec_info() takes an fprintf()-like callback and a FILE * to pass to it. Its only caller hmp_info_jit() passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf(). The type-punning is ugly.

Drop the callback, and call qemu_printf() instead.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-5-armbru@redhat.com>

2019-04-18  tcg: Simplify how dump_opcount_info() prints  (Markus Armbruster)

dump_opcount_info() takes an fprintf()-like callback and a FILE * to pass to it. Its only caller hmp_info_opcount() passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf(). The type-punning is ugly.

Drop the callback, and call qemu_printf() instead.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-4-armbru@redhat.com>

2019-04-02  accel: Unbreak accelerator fallback  (Markus Armbruster)

When the user specifies a list of accelerators, we pick the first one that initializes successfully. Recent commit 1a3ec8c1564 broke that. Reproducer:

    $ qemu-system-x86_64 --machine accel=xen:tcg
    xencall: error: Could not obtain handle on privileged command interface: No such file or directory
    xen be core: xen be core: can't open xen interface
    can't open xen interface
    qemu-system-x86_64: failed to initialize Xen: Operation not permitted
    qemu-system-x86_64: /home/armbru/work/qemu/qom/object.c:436: object_set_accelerator_compat_props: Assertion `!object_compat_props[0]' failed.

Root cause: we register accelerator compat properties even when the accelerator fails. The failed assertion is object_set_accelerator_compat_props() telling us off. Fix by calling it only for the accelerator that succeeded.

Fixes: 1a3ec8c1564f51628cce10d435a2e22559ea29fd
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20190401090827.20793-6-armbru@redhat.com>

2019-03-22  trace-events: Consistently point to docs/devel/tracing.txt  (Markus Armbruster)

Almost all trace-events point to docs/devel/tracing.txt in a comment right at the beginning. Touch up the ones that don't.

[Updated with Markus' new commit description wording. --Stefan]

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190314180929.27722-2-armbru@redhat.com
Message-Id: <20190314180929.27722-2-armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

2019-03-11  qdev: Fix latent bug with compat_props and onboard devices  (Markus Armbruster)

Compatibility properties started life as a qdev property thing: we supported them only for qdev properties, and implemented them with the machinery backing command line option -global.

Recent commit fa0cb34d221 put them to use (tacitly) with memory backend objects (subtypes of TYPE_MEMORY_BACKEND). To make that possible, we first moved the work of applying them from the -global machinery into TYPE_DEVICE's .instance_post_init() method device_post_init(), in commits ea9ce8934c5 and b66bbee39f6, then made it available to TYPE_MEMORY_BACKEND's .instance_post_init() method host_memory_backend_post_init() as object_apply_compat_props(), in commit 1c3994f6d2a.

Note the code smell: we now have a function name starting with object_ in hw/core/qdev.c. It has to be there rather than in qom/, because it calls qdev_get_machine() to find the current accelerator's and machine's compat_props.

Turns out calling qdev_get_machine() there is problematic. If we qdev_create() from a machine's .instance_init() method, we call device_post_init() and thus qdev_get_machine() before main() can create "/machine" in QOM. qdev_get_machine() tries to get it with container_get(), which "helpfully" creates it as a "container" object, and returns that. object_apply_compat_props() tries to paper over the problem by doing nothing when the value of qdev_get_machine() isn't a TYPE_MACHINE. But the damage is done already: when main() later attempts to create the real "/machine", it fails with "attempt to add duplicate property 'machine' to object (type 'container')", and aborts.

Since no machine .instance_init() calls qdev_create() so far, the bug is latent. But since I want to do that, I get to fix the bug first.

Observe that object_apply_compat_props() doesn't actually need the MachineState, only the compat_props member of its MachineClass and AccelClass. This permits a simple fix: register MachineClass and AccelClass compat_props with the object_apply_compat_props() machinery right after these classes get selected.

This is actually similar to how things worked before commits ea9ce8934c5 and b66bbee39f6, except we now register much earlier. The old code registered them only after the machine's .instance_init() ran, which would've broken compatibility properties for any devices created there.

Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20190308131445.17502-2-armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

2019-03-11  accel: Allow building QEMU without TCG or KVM support  (Anthony PERARD)

Instead of refusing to build QEMU without a default accelerator, simply report an error when the user hasn't passed -accel or -machine accel= and neither TCG nor KVM is built in. ./configure already checks that at least one accelerator is available.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2019-03-05  hw/boards: Add a MachineState parameter to kvm_type callback  (Eric Auger)

On ARM, the kvm_type will be resolved by querying the KVMState. Let's add the MachineState handle to the callback so that we can retrieve the KVMState handle. In kvm_init, when the callback is called, the kvm_state variable is not yet set.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20190304101339.25970-5-eric.auger@redhat.com
[ppc parts]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2019-02-14  kvm: Add kvm_set_ioeventfd* traces  (Dr. David Alan Gilbert)

Add a couple of traces around the kvm_set_ioeventfd* calls.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20190212134758.10514-4-dgilbert@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

2019-02-11  cputlb: update TLB entry/index after tlb_fill  (Emilio G. Cota)

We are failing to take into account that tlb_fill() can cause a TLB resize, which renders prior TLB entry pointers/indices stale. Fix it by re-doing the TLB entry lookups immediately after tlb_fill.

Fixes: 86e1eff8bc ("tcg: introduce dynamic TLB sizing", 2019-01-28)
Reported-by: Max Filippov <jcmvbkbc@gmail.com>
Tested-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20190209162745.12668-3-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

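The hazard is the classic one of holding a pointer into a table that a later call may reallocate; a hedged standalone sketch (hypothetical types, error handling omitted):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { uint64_t addr; uint64_t data; } EntrySketch;

    typedef struct {
        EntrySketch *table;
        size_t n_entries;
    } TlbSketch;

    /* May grow the table (like a dynamic TLB resize), invalidating any
     * previously computed pointers and indices into it. */
    static void fill_sketch(TlbSketch *t)
    {
        t->n_entries *= 2;
        t->table = realloc(t->table, t->n_entries * sizeof(*t->table));
    }

    static uint64_t lookup_sketch(TlbSketch *t, uint64_t addr)
    {
        size_t idx = addr % t->n_entries;      /* index depends on the size */
        EntrySketch *e = &t->table[idx];       /* cached pointer */

        if (e->addr != addr) {
            fill_sketch(t);
            /* Re-do the lookup: both the index (the size changed) and the
             * pointer (the table may have moved) are stale after the fill. */
            idx = addr % t->n_entries;
            e = &t->table[idx];
        }
        return e->data;
    }
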
2019-02-07  Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190206' into staging  (Peter Maydell)

Queued accel/tcg patches

# gpg: Signature made Wed 06 Feb 2019 03:42:52 GMT
# gpg: using RSA key 64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20190206:
  accel/tcg: Consider cluster index in tb_lookup__cpu_state()
  tcg: add early clobber modifier in atomic16_cmpxchg on aarch64

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>