path: root/tcg
Age  Commit message  Author
2016-09-20  tcg/i386: Extend TARGET_PAGE_MASK to the proper type  (Richard Henderson)
TARGET_PAGE_MASK, as defined, has type "int". We need to extend that to the proper target width before ORing in an "unsigned". Signed-off-by: Richard Henderson <rth@twiddle.net>
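As a stand-alone illustration of the pitfall (all names below are this sketch's, not QEMU's actual code): an "int" page mask must be widened before it is combined with unsigned low bits, or the high half of the mask is silently dropped.

    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_BITS 12
    /* Type is "int", so the mask is negative and only 32 bits wide. */
    #define TARGET_PAGE_MASK (~((1 << TARGET_PAGE_BITS) - 1))

    int main(void)
    {
        unsigned low_bits = 0x1ff;          /* flag/alignment bits to OR in */
        uint64_t addr = 0x123456789000ULL;

        /* Wrong: int | unsigned is evaluated at 32 bits and then
         * zero-extended, losing the upper half of the mask. */
        uint64_t bad  = addr & (TARGET_PAGE_MASK | low_bits);

        /* Right: extend the mask to the target width first. */
        uint64_t good = addr & ((uint64_t)TARGET_PAGE_MASK | low_bits);

        printf("bad=%#llx good=%#llx\n",
               (unsigned long long)bad, (unsigned long long)good);
        return 0;
    }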
2016-09-16  tcg: Optimize fence instructions  (Pranith Kumar)
This commit optimizes fence instructions. Two optimizations are currently implemented: (1) removing unnecessary duplicate fence instructions, and (2) merging weaker fences into a stronger fence. [rth: Merge tcg_optimize_mb back into tcg_optimize, so that we only loop over the opcode stream once. Merge "unrelated" weaker barriers into one stronger barrier.] Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160823134825.32578-1-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
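A stand-alone sketch of the merging idea: adjacent barriers collapse into one whose flags are the union of both, which orders at least as much as the pair. The structure and names below are this sketch's, not tcg_optimize()'s.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool     is_mb;   /* is this op a memory barrier? */
        uint32_t flags;   /* barrier type bits (load/load, store/store, ...) */
    } Op;

    static int merge_fences(Op *ops, int n)
    {
        int out = 0;
        for (int i = 0; i < n; i++) {
            if (ops[i].is_mb && out > 0 && ops[out - 1].is_mb) {
                /* Two barriers in a row: fold the flags and drop the duplicate. */
                ops[out - 1].flags |= ops[i].flags;
            } else {
                ops[out++] = ops[i];
            }
        }
        return out;   /* new number of ops */
    }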
2016-09-16  tcg/tci: Add support for fence  (Pranith Kumar)
Cc: Stefan Weil <sw@weilnetz.de> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-11-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg/sparc: Add support for fence  (Pranith Kumar)
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-10-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg/s390: Add support for fence  (Pranith Kumar)
Cc: Alexander Graf <agraf@suse.de> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-9-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg/ppc: Add support for fence  (Pranith Kumar)
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-8-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg/mips: Add support for fence  (Pranith Kumar)
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-7-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg/ia64: Add support for fence  (Pranith Kumar)
Cc: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-6-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg/arm: Add support for fence  (Pranith Kumar)
Cc: Andrzej Zaborowski <balrogg@gmail.com> Cc: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-5-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg/aarch64: Add support for fence  (Pranith Kumar)
Cc: Claudio Fontana <claudio.fontana@gmail.com> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-4-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg/i386: Add support for fence  (Pranith Kumar)
Generate a 'lock orl $0,0(%esp)' instruction for ordering instead of mfence, which has similar ordering semantics. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-3-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
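Two stand-alone C idioms showing the ordering alternatives discussed above; this is a sketch of the idea, not the backend's tcg_out_mb() code generation, and the locked RMW targets a stack variable rather than hard-coding %esp:

    static inline void fence_mfence(void)
    {
        __asm__ __volatile__("mfence" ::: "memory");
    }

    static inline void fence_lock_or(void)
    {
        unsigned dummy = 0;
        /* A locked read-modify-write acts as a full barrier and is often
         * cheaper than mfence. */
        __asm__ __volatile__("lock; orl $0, %0" : "+m"(dummy) : : "memory");
    }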
2016-09-16  Introduce TCGOpcode for memory barrier  (Pranith Kumar)
This commit introduces a TCGOpcode for the memory barrier instruction. The opcode takes an argument specifying the type of memory barrier to be generated. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-2-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
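A hedged front-end usage fragment; the flag names follow the TCG_MO_*/TCG_BAR_* style introduced by this series, but treat the exact spellings and barrier choices as assumptions of this sketch:

    /* Order earlier stores against later stores. */
    tcg_gen_mb(TCG_MO_ST_ST | TCG_BAR_SC);

    /* Full (all loads and stores) sequentially consistent barrier. */
    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);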
2016-09-16  tcg: Support arbitrary size + alignment  (Richard Henderson)
Previously we allowed fully unaligned operations, but not operations that are aligned but with less alignment than the operation size. In addition, arm32, ia64, mips, and sparc had been omitted from the previous overalignment patch, which would have led to that alignment being enforced. Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-15  Remove unused function declarations  (Ladi Prosek)
Unused function declarations were found using a simple gcc plugin and manually verified by grepping the sources. Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-09-15  tcg: Remove duplicate header includes  (Thomas Huth)
host-utils.h and timer.h are included twice in tcg.c. One time should be enough. Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-09-15  Remove remainders of HPPA backend  (Thomas Huth)
The HPPA backend has been removed by the following commit: 802b5081233a6b643a8b135a5facaf14bafaa77d tcg-hppa: Remove tcg backend But some small pieces of the HPPA backend still survived until today. Since we also do not have support for a HPPA target in QEMU, we can nowadays safely remove the remaining HPPA parts (like the disassembler code, or the detection of HPPA in the configure script). Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-08-05  tcg: Lower indirect registers in a separate pass  (Richard Henderson)
Rather than rely on recursion during the middle of register allocation, lower indirect registers to loads and stores off the indirect base into plain temps. For an x86_64 host, with sufficient registers, this results in identical code, modulo the actual register assignments. For an i686 host, with insufficient registers, this means that temps can be (temporarily) spilled to the stack in order to satisfy an allocation. This is in contrast to the previous possibility of being unable to spill at all, because performing the spill would itself require allocating a register for the indirect base. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-08-05  tcg: Require liveness analysis  (Richard Henderson)
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-08-05  tcg: Include liveness info in the dumps  (Richard Henderson)
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-08-05  tcg: Compress dead_temps and mem_temps into a single array  (Richard Henderson)
We only need two bits per temporary. Fold the two bytes into one, and reduce the memory and cachelines required during compilation. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
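A stand-alone sketch of the packing (flag and function names are illustrative, not TCG's actual fields):

    #include <stdint.h>

    #define TS_DEAD 1   /* temp currently holds a dead value */
    #define TS_MEM  2   /* temp's canonical value lives in memory */

    /* One byte per temp carrying both flags, instead of two parallel arrays. */
    static inline void temp_set_state(uint8_t *state, int idx, uint8_t flags)
    {
        state[idx] = flags;        /* 0, TS_DEAD, TS_MEM, or TS_DEAD | TS_MEM */
    }

    static inline int temp_is_dead(const uint8_t *state, int idx)
    {
        return (state[idx] & TS_DEAD) != 0;
    }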
2016-08-05  tcg: Fold life data into TCGOp  (Richard Henderson)
Reduce the size of other bitfields to make room. This reduces the cache footprint of compilation. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-08-05  tcg: Reorg TCGOp chaining  (Richard Henderson)
Instead of using -1 as end of chain, use 0, and link through the 0 entry as a fully circular double-linked list. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
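A stand-alone sketch of the chaining scheme: index 0 acts as a sentinel and the list is fully circular, so walking, inserting, and unlinking need no -1 end-of-chain marker and no NULL checks (names are this sketch's):

    #include <stdint.h>

    typedef struct {
        uint16_t prev;   /* index of previous op; 0 is the sentinel entry */
        uint16_t next;   /* index of next op */
    } OpLink;

    static void op_init_list(OpLink *ops)
    {
        ops[0].prev = ops[0].next = 0;        /* empty circular list */
    }

    static void op_insert_before(OpLink *ops, uint16_t pos, uint16_t new_idx)
    {
        uint16_t prev = ops[pos].prev;
        ops[new_idx].prev = prev;
        ops[new_idx].next = pos;
        ops[prev].next = new_idx;
        ops[pos].prev = new_idx;
    }

    static void op_remove(OpLink *ops, uint16_t idx)
    {
        ops[ops[idx].prev].next = ops[idx].next;
        ops[ops[idx].next].prev = ops[idx].prev;
    }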
2016-08-05  tcg: Compress liveness data to 16 bits  (Richard Henderson)
This reduces both memory usage and per-insn cacheline usage during code generation. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-17  compiler: never omit assertions if using a static analysis tool  (Paolo Bonzini)
Assertions help both Coverity and the clang static analyzer avoid false positives, but on the other hand both are confused when the condition is compiled as (void)(x != FOO). Always expand assertion macros when using Coverity or clang, through a new QEMU_STATIC_ANALYSIS preprocessor symbol. This fixes a couple false positives in TCG. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
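A sketch of the mechanism; QEMU's detection of the analyzers is along these lines, but treat the exact macro test and the my_assert name as assumptions of this sketch:

    #include <assert.h>

    /* Coverity and the clang static analyzer define these symbols. */
    #if defined(__COVERITY__) || defined(__clang_analyzer__)
    #define QEMU_STATIC_ANALYSIS 1
    #endif

    #ifdef QEMU_STATIC_ANALYSIS
    /* Always expand the check so the analyzer can reason about it. */
    #define my_assert(x) assert(x)
    #elif defined(NDEBUG)
    /* Release build: the check may be compiled out. */
    #define my_assert(x) ((void)0)
    #else
    #define my_assert(x) assert(x)
    #endif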
2016-07-12  Clean up decorations and whitespace around header guards  (Markus Armbruster)
Cleaned up with scripts/clean-header-guards.pl. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2016-07-12  tcg: Clean up tcg-target.h header guards  (Markus Armbruster)
These use guard symbols like TCG_TARGET_$target. scripts/clean-header-guards.pl doesn't like them because they don't match their file name (they should, to make guard collisions less likely). Clean them up: use guard symbol $target_TCG_TARGET_H for tcg/$target/tcg-target.h. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
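For example, the guard in tcg/i386/tcg-target.h ends up in roughly this form:

    #ifndef I386_TCG_TARGET_H
    #define I386_TCG_TARGET_H

    /* ... target definitions ... */

    #endif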
2016-07-05  tcg: Improve the alignment check infrastructure  (Sergey Sorokin)
Some architectures (e.g. ARMv8) require an address to be aligned to a size greater than the size of the memory access itself. The current zero-cost alignment check implementation in QEMU is sufficient for such checks, but we need a way to specify the alignment size. Signed-off-by: Sergey Sorokin <afarallax@yandex.ru> Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru> Signed-off-by: Richard Henderson <rth@twiddle.net> [rth: Assert in tcg_canonicalize_memop. Leave get_alignment_bits available for, though unused by, user-mode. Retain logging difference based on ALIGNED_ONLY.]
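A stand-alone sketch of carrying an alignment requirement next to the access size in one flags word; the bit layout and names below are assumptions of this sketch, not TCG's actual TCGMemOp encoding:

    #include <stdint.h>

    enum {
        SK_SIZE_MASK   = 0x3,                    /* log2(access size) */
        SK_ALIGN_SHIFT = 4,
        SK_ALIGN_MASK  = 0x7 << SK_ALIGN_SHIFT,  /* log2(alignment) + 1, or 0 */
    };

    /* Return log2 of the required alignment; 0 in the alignment field means
     * "natural alignment", i.e. align to the access size itself. */
    static inline unsigned get_align_bits_sketch(uint32_t memop)
    {
        unsigned a = (memop & SK_ALIGN_MASK) >> SK_ALIGN_SHIFT;
        return a ? a - 1 : (memop & SK_SIZE_MASK);
    }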
2016-07-05  tcg: Optimize spills of constants  (Richard Henderson)
While we can store constants via constraints on INDEX_op_st_i32 et al, we weren't able to spill constants to backing store. Add a new backend interface, tcg_out_sti, which may store the constant (and is allowed to fail). Rearrange the temp_* helpers so that we only attempt to directly store a constant when the temp is becoming dead/free. Signed-off-by: Richard Henderson <rth@twiddle.net>
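A stand-alone sketch of the "allowed to fail" contract behind such a hook: on an x86-64-style backend a store-immediate only accepts a sign-extended 32-bit constant, so the backend must be able to decline and let the caller materialize the value in a register first. Names below are this sketch's, not the real tcg_out_sti.

    #include <stdbool.h>
    #include <stdint.h>

    static bool try_store_constant_directly(uint64_t val, bool is_64bit)
    {
        if (is_64bit && val != (uint64_t)(int32_t)val) {
            /* Does not fit a sign-extended 32-bit immediate: report failure
             * so the register allocator loads it into a scratch register. */
            return false;
        }
        /* ...emit the store-immediate instruction here... */
        return true;
    }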
2016-07-05  tcg: Fix name for high-half register  (Richard Henderson)
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-06-20  trace: [all] Add "guest_mem_before" event  (Lluís Vilanova)
The event is described in "trace-events". Note that the "MO_AMASK" flag is not traced, since it does not seem to affect the visible semantics of instructions. [s/inline inline/inline/ to fix clang build. --Stefan] Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 146549350711.18437.726780393247474362.stgit@fimbulvetr.bsc.es Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-06-20  exec: [tcg] Track which vCPU is performing translation and execution  (Lluís Vilanova)
Information is tracked inside the TCGContext structure, and later used by tracing events with the 'tcg' and 'vcpu' properties. The 'cpu' field is used to check tracing of translation-time events ("*_trans"). The 'tcg_env' field is used to pass it to execution-time events ("*_exec"). Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-id: 146549350162.18437.3033661139638458143.stgit@fimbulvetr.bsc.es Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-05-19  cpu: move exec-all.h inclusion out of cpu.h  (Paolo Bonzini)
exec-all.h contains TCG-specific definitions. It is not needed outside TCG-specific files such as translate.c, exec.c or *helper.c. One generic function had snuck into include/exec/exec-all.h; move it to include/qom/cpu.h. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19  exec: extract exec/tb-context.h  (Paolo Bonzini)
TCG backends do not need most of exec-all.h; extract what they actually need to a separate file or move it directly to tcg.h. The next patch will stop including exec-all.h from everywhere. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19  qemu-common: push cpu.h inclusion out of qemu-common.h  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-18  Fix some typos found by codespell  (Stefan Weil)
Signed-off-by: Stefan Weil <sw@weilnetz.de> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-05-12  tcg: Clean up from 'next_tb'  (Sergey Fedorov)
The value returned from tcg_qemu_tb_exec() is the value passed to the corresponding tcg_gen_exit_tb() at translation time of the last TB attempted to execute. It is a little confusing to store it in a variable named 'next_tb'. In fact, it is a combination of a 4-byte-aligned pointer and additional information in its two least significant bits. Break it down right away into two variables named 'last_tb' and 'tb_exit', which are a pointer to the last TB attempted to execute and the TB exit reason, respectively. This simplifies the code and improves its readability. Correct a misleading documentation comment for tcg_qemu_tb_exec() and fix logging in cpu_tb_exec(). Also rename a misleading 'next_tb' in another couple of places. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
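A small helper sketch of the split described above, assuming the low two bits carry the exit reason (TB_EXIT_MASK == 3 is an assumption of this sketch):

    #include <stdint.h>

    #define TB_EXIT_MASK 3   /* low two bits: exit reason (assumption) */

    typedef struct TranslationBlock TranslationBlock;

    /* Split the value returned by tcg_qemu_tb_exec() into the last TB
     * attempted and the exit reason. */
    static inline TranslationBlock *split_tb_exec_ret(uintptr_t ret, int *tb_exit)
    {
        *tb_exit = ret & TB_EXIT_MASK;
        return (TranslationBlock *)(ret & ~(uintptr_t)TB_EXIT_MASK);
    }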
2016-05-12  tcg: Allow goto_tb to any target PC in user mode  (Sergey Fedorov)
In user mode, there is only a static address translation, TBs are always invalidated properly, and direct jumps are reset when the mapping changes. Thus the destination address is always valid for direct jumps and there's no need to restrict it to the pages the TB resides in. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Cc: Riku Voipio <riku.voipio@iki.fi> Cc: Blue Swirl <blauwirbel@gmail.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg: Clean up direct block chaining safety checks  (Sergey Fedorov)
We don't take care of direct jumps when the address mapping changes. Thus we must be sure to generate direct jumps only where they remain valid even if the address mapping changes. Luckily, we only allow a TB to execute if it was generated from pages which match the current mapping. Document the tcg_gen_goto_tb() declaration and note the reason for the destination PC limitations. Some targets with variable-length instructions allow a TB to straddle a page boundary. However, we make sure that both of the TB's pages match the current address mapping when looking up TBs. So it is safe to do direct jumps into both pages. Correct the checks for some of those targets. Given that, we can safely patch a TB which spans two pages. Remove the unnecessary check in cpu_exec() and allow such TBs to be patched. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg: Clean up direct block chaining data fields  (Sergey Fedorov)
Briefly describe in a comment how direct block chaining is done. It should help in understanding the following data fields. Rename some fields in the TranslationBlock and TCGContext structures to better reflect their purpose (dropping the excessive 'tb_' prefix in TranslationBlock but keeping it in TCGContext):
    tb_next_offset => jmp_reset_offset
    tb_jmp_offset  => jmp_insn_offset
    tb_next        => jmp_target_addr
    jmp_next       => jmp_list_next
    jmp_first      => jmp_list_first
Avoid using a magic constant as an invalid offset which is used to indicate that there's no n-th jump generated. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg/mips: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in MIPS is atomic by using atomic_read()/atomic_set() for code patching. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Message-Id: <1461341333-19646-11-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net> [rth: Merged the deposit32 followup.] [rth: Merged the following followup.] Message-Id: <1462210518-26522-1-git-send-email-sergey.fedorov@linaro.org>
2016-05-12  tcg/sparc: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in SPARC is atomic by using atomic_read()/atomic_set() for code patching. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1461341333-19646-10-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg/aarch64: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in AArch64 is atomic by using atomic_read()/atomic_set() for code patching. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Message-Id: <1461341333-19646-9-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg/arm: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in ARM is atomic by using atomic_read()/atomic_set() for code patching. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Message-Id: <1461341333-19646-8-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg/s390: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in s390 is atomic by: * naturally aligning a location of direct jump address; * using atomic_read()/atomic_set() for code patching. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Message-Id: <1461341333-19646-7-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg/i386: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in i386 is atomic by: * naturally aligning a location of direct jump address; * using atomic_read()/atomic_set() for code patching. tcg_out_nopn() implementation: Suggested-by: Richard Henderson <rth@twiddle.net>. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Message-Id: <1461341333-19646-6-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
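A sketch of the patching idiom on i386/x86-64: the rel32 branch displacement is kept naturally aligned, so one relaxed atomic store replaces it without tearing. The __atomic_store_n builtin below stands in for QEMU's atomic_set() helper; treat that substitution and the helper-free shape as assumptions of this sketch.

    #include <stdint.h>

    static inline void patch_direct_jump(uintptr_t jmp_addr, uintptr_t dest)
    {
        /* rel32 is relative to the end of the 4-byte displacement field. */
        int32_t disp = dest - (jmp_addr + 4);
        /* Single aligned 32-bit store: a concurrent executor sees either the
         * old or the new destination, never a torn value. */
        __atomic_store_n((int32_t *)jmp_addr, disp, __ATOMIC_RELAXED);
    }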
2016-05-12  tcg/ppc: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in PPC is atomic by: * limiting translation buffer size in 32-bit mode to be addressable by Branch I-form instruction; * using atomic_read()/atomic_set() for code patching. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1461341333-19646-5-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tci: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in TCI is atomic by: * naturally aligning a location of direct jump address; * using atomic_read()/atomic_set() to load/store the address. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Message-Id: <1461341333-19646-4-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg: Add tcg_set_insn_param  (Edgar E. Iglesias)
Add tcg_set_insn_param as a mechanism to modify an insn parameter after emitting the insn. This is useful for icount and also for embedding fault information for a specific insn. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 1461931684-1867-2-git-send-email-edgar.iglesias@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
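A hedged usage fragment: remember the index of an emitted op and patch one of its arguments once the information is known. The use of tcg_op_buf_count(), the placeholder argument, and the parameter index are assumptions of this sketch, not a quote of any particular front end.

    /* Record where the next op will land, emit it with a placeholder,
     * then fill in the real value later. */
    int insn_idx = tcg_op_buf_count();
    tcg_gen_insn_start(pc, 0 /* placeholder */);

    /* ... translate the instruction, computing 'syndrome' along the way ... */

    tcg_set_insn_param(insn_idx, 1, syndrome);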
2016-04-21  tcg: check for CONFIG_DEBUG_TCG instead of NDEBUG  (Aurelien Jarno)
Check for CONFIG_DEBUG_TCG instead of NDEBUG, drop now useless code. Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Message-id: 1461228530-14852-2-git-send-email-aurelien@aurel32.net Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-04-21  tcg: use tcg_debug_assert instead of assert (fix performance regression)  (Aurelien Jarno)
The TCG code is quite performance sensitive, but at the same time can also be quite tricky. That is why it has asserts that can be enabled with the --enable-debug-tcg configure option. This used to work the following way:
    | #include "config.h"
    |
    | ...
    |
    | #if !defined(CONFIG_DEBUG_TCG) && !defined(NDEBUG)
    | /* define it to suppress various consistency checks (faster) */
    | #define NDEBUG
    | #endif
    |
    | ...
    |
    | #include <assert.h>
Since commit 757e725b (tcg: Clean up includes) "config.h" has been replaced by "qemu/osdep.h", which itself includes <assert.h>. As a consequence the assertions are always enabled, even when using --disable-debug-tcg, causing a performance regression, especially on targets with many registers. For instance, on qemu-system-ppc the speed difference is about 15%. tcg_debug_assert is controlled directly by CONFIG_DEBUG_TCG and is already used in some places. This patch replaces all calls to assert with calls to tcg_debug_assert. Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Message-id: 1461228530-14852-1-git-send-email-aurelien@aurel32.net Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
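A sketch of how such a debug-only assertion macro is typically defined; close in spirit to tcg_debug_assert, but treat the exact QEMU definition as an assumption of this sketch:

    #include <assert.h>

    #ifdef CONFIG_DEBUG_TCG
    # define debug_assert_sketch(X) do { assert(X); } while (0)
    #else
    /* Tell the compiler the condition holds without emitting a check. */
    # define debug_assert_sketch(X) \
        do { if (!(X)) { __builtin_unreachable(); } } while (0)
    #endif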