path: root/tcg/aarch64
Age  Commit message  Author
2016-09-16  tcg/aarch64: Add support for fence  (Pranith Kumar)
Cc: Claudio Fontana <claudio.fontana@gmail.com> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-4-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-09-16  tcg: Support arbitrary size + alignment  (Richard Henderson)
Previously we allowed fully unaligned operations, but not operations that are aligned but with less alignment than the operation size. In addition, arm32, ia64, mips, and sparc had been omitted from the previous overalignment patch, which would have led to that alignment being enforced. Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-07-12  tcg: Clean up tcg-target.h header guards  (Markus Armbruster)
These use guard symbols like TCG_TARGET_$target. scripts/clean-header-guards.pl doesn't like them because they don't match their file name (they should, to make guard collisions less likely). Clean them up: use guard symbol $target_TCG_TARGET_H for tcg/$target/tcg-target.h. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
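For the aarch64 backend this naming rule gives a guard of the following shape (a minimal illustration of the convention, not the file's actual contents):

    /* tcg/aarch64/tcg-target.h */
    #ifndef AARCH64_TCG_TARGET_H
    #define AARCH64_TCG_TARGET_H

    /* ... target definitions ... */

    #endif /* AARCH64_TCG_TARGET_H */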
2016-07-05  tcg: Improve the alignment check infrastructure  (Sergey Sorokin)
Some architectures (e.g. ARMv8) can require an address to be aligned to a size larger than the size of the memory access itself. The current low-cost alignment check implementation in QEMU is enough to support such a check, but we need a way to specify the required alignment size. Signed-off-by: Sergey Sorokin <afarallax@yandex.ru> Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru> Signed-off-by: Richard Henderson <rth@twiddle.net> [rth: Assert in tcg_canonicalize_memop. Leave get_alignment_bits available for, though unused by, user-mode. Retain logging difference based on ALIGNED_ONLY.]
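One hedged way to picture the change: instead of a single aligned/unaligned flag, the memop carries a small log2 alignment size that the slow path can extract and check. The field layout and helper names below are illustrative, not copied from the patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: alignment requirement stored as log2(bytes) in a few memop bits. */
    #define ALIGN_SHIFT  4
    #define ALIGN_MASK   7

    static inline unsigned get_alignment_bits_sketch(unsigned memop)
    {
        return (memop >> ALIGN_SHIFT) & ALIGN_MASK;
    }

    /* The softmmu slow path can then fault when the address is not aligned
     * to the requested (1 << a)-byte boundary. */
    static inline bool is_misaligned(uint64_t addr, unsigned a)
    {
        return a != 0 && (addr & ((1ull << a) - 1)) != 0;
    }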
2016-07-05  tcg: Optimize spills of constants  (Richard Henderson)
While we can store constants via constraints on INDEX_op_st_i32 et al, we weren't able to spill constants to backing store. Add a new backend interface, tcg_out_sti, which may store the constant (and is allowed to fail). Rearrange the temp_* helpers so that we only attempt to directly store a constant when the temp is becoming dead/free. Signed-off-by: Richard Henderson <rth@twiddle.net>
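A minimal sketch of such a hook, written in the style of the TCG backend internals and assuming a boolean return value that reports whether the constant could be stored directly; the body and register choice are illustrative, not the aarch64 implementation:

    /* Try to store constant 'val' directly to [base + ofs]; return false to
     * make the caller materialize the constant in a register first. */
    static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val,
                            TCGReg base, intptr_t ofs)
    {
        if (val == 0) {
            /* e.g. AArch64 can store from the zero register */
            tcg_out_st(s, type, TCG_REG_XZR, base, ofs);
            return true;
        }
        return false;
    }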
2016-05-12  tcg: Clean up direct block chaining data fields  (Sergey Fedorov)
Briefly describe in a comment how direct block chaining is done. It should help in understanding of the following data fields. Rename some fields in TranslationBlock and TCGContext structures to better reflect their purpose (dropping excessive 'tb_' prefix in TranslationBlock but keeping it in TCGContext): tb_next_offset => jmp_reset_offset tb_jmp_offset => jmp_insn_offset tb_next => jmp_target_addr jmp_next => jmp_list_next jmp_first => jmp_list_first Avoid using a magic constant as an invalid offset which is used to indicate that there's no n-th jump generated. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg/aarch64: Make direct jump patching thread-safe  (Sergey Fedorov)
Ensure direct jump patching in AArch64 is atomic by using atomic_read()/atomic_set() for code patching. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Message-Id: <1461341333-19646-9-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
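A hedged sketch of the idea: the 32-bit branch instruction at the jump site is rewritten with one aligned atomic store rather than a plain write, so a concurrently executing thread never observes a torn instruction. atomic_set is the primitive named by the patch; the helper name below and the cache-flush call are illustrative of how QEMU finishes patching:

    /* Sketch: atomically retarget an existing AArch64 B instruction. */
    static void patch_direct_jump(uintptr_t jmp_addr, uintptr_t target)
    {
        ptrdiff_t offset = (ptrdiff_t)(target - jmp_addr) >> 2;  /* word offset */
        uint32_t insn = 0x14000000 | (offset & 0x03ffffff);      /* B <target> */

        atomic_set((uint32_t *)jmp_addr, insn);  /* single aligned 32-bit store */
        flush_icache_range(jmp_addr, jmp_addr + 4);
    }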
2016-04-21  tcg: check for CONFIG_DEBUG_TCG instead of NDEBUG  (Aurelien Jarno)
Check for CONFIG_DEBUG_TCG instead of NDEBUG, drop now useless code. Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Message-id: 1461228530-14852-2-git-send-email-aurelien@aurel32.net Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-04-21  tcg: use tcg_debug_assert instead of assert (fix performance regression)  (Aurelien Jarno)
The TCG code is quite performance sensitive, but at the same time can also be quite tricky. That is why it has asserts that can be enabled with the --enable-debug-tcg configure option. This used to work the following way: | #include "config.h" | | ... | | #if !defined(CONFIG_DEBUG_TCG) && !defined(NDEBUG) | /* define it to suppress various consistency checks (faster) */ | #define NDEBUG | #endif | | ... | | #include <assert.h> Since commit 757e725b (tcg: Clean up includes) "config.h" has been replaced by "qemu/osdep.h", which itself includes <assert.h>. As a consequence the assertions are always enabled, even when using --disable-debug-tcg, causing a performance regression, especially on targets with many registers. For instance on qemu-system-ppc the speed difference is about 15%. tcg_debug_assert is controlled directly by CONFIG_DEBUG_TCG and is already used in some places. This patch replaces all calls to assert with calls to tcg_debug_assert. Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Message-id: 1461228530-14852-1-git-send-email-aurelien@aurel32.net Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
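The macro the patch switches to has roughly the following shape (a sketch of the usual CONFIG_DEBUG_TCG-conditional definition, not quoted from tcg.h):

    #include <assert.h>

    #ifdef CONFIG_DEBUG_TCG
    # define tcg_debug_assert(X) do { assert(X); } while (0)
    #else
    /* Checks compile away when debugging is disabled; the condition can
     * still guide the optimizer. */
    # define tcg_debug_assert(X) \
        do { if (!(X)) { __builtin_unreachable(); } } while (0)
    #endif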
2016-02-23  tcg: Remove unnecessary osdep.h includes from tcg-target.inc.c  (Peter Maydell)
Commit 757e725b58c57d added a number of #include "qemu/osdep.h" lines to the tcg-target.c files (as they were named at the time). These are unnecessary because these files are not standalone C files, and the tcg/tcg.c file which includes them will have already included osdep.h on their behalf. Remove the unneeded include directives. Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <1456238983-10160-4-git-send-email-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-02-23  tcg: Rename tcg-target.c to tcg-target.inc.c  (Peter Maydell)
Rename the per-architecture tcg-target.c files to tcg-target.inc.c. This makes it clearer that they are not intended to be standalone C files, but are instead #included into another source file. Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <1456238983-10160-2-git-send-email-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-01-29  tcg: Clean up includes  (Peter Maydell)
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1453832250-766-16-git-send-email-peter.maydell@linaro.org
2015-09-02  tcg/aarch64: Fix tcg_out_qemu_{ld, st} for guest_base == 0  (Richard Henderson)
In ffc6372851d8631a9f9fa56ec613b3244dc635b9, we moved the guest base from the address index register to the address base register. However, an encoding of 31 in the base slot is SP, not XZR, so we need to be more intelligent about which register gets placed in which slot. Cc: qemu-stable@nongnu.org (v2.4.0) Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reported-by: Andreas Färber <afaerber@suse.de> Signed-off-by: Richard Henderson <rth@twiddle.net>
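A hedged illustration of the slot choice (variable names are illustrative): since register number 31 in the base field selects SP while 31 in the index field selects XZR, the zero register can only safely stand in for a missing guest base in the index slot:

    if (guest_base != 0) {
        base  = TCG_REG_GUEST_BASE;  /* guest base in the base slot */
        index = addr_reg;            /* guest address as the index  */
    } else {
        base  = addr_reg;            /* guest address becomes the base */
        index = TCG_REG_XZR;         /* reg 31 here reads as zero, not SP */
    }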
2015-08-24  linux-user: remove useless macros GUEST_BASE and RESERVED_VA  (Laurent Vivier)
As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base and the macros GUEST_BASE and RESERVED_VA become useless: replace them by their values. Reviewed-by: Alexander Graf <agraf@suse.de> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24  linux-user: remove --enable-guest-base/--disable-guest-base  (Laurent Vivier)
All tcg host architectures now support the guest base, and as there is no real performance loss it can always be enabled. In any case, guest base use can effectively be disabled by setting the guest base to 0. CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY); it should be replaced by CONFIG_USER_ONLY, but as some other parts are already using !CONFIG_SOFTMMU I have chosen to use !CONFIG_SOFTMMU instead. Reviewed-by: Alexander Graf <agraf@suse.de> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24  tcg/aarch64: Use softmmu fast path for unaligned accesses  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24  tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32  (Richard Henderson)
Rather than allow arbitrary shift+trunc, only concern ourselves with low and high parts. This is all that was being used anyway. Signed-off-by: Richard Henderson <rth@twiddle.net>
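In C terms the two resulting ops behave like this (a semantic sketch, not generated code):

    #include <stdint.h>

    /* extrl_i64_i32: extract the low 32 bits of a 64-bit value */
    static uint32_t extrl_i64_i32(uint64_t x) { return (uint32_t)x; }

    /* extrh_i64_i32: extract the high 32 bits of a 64-bit value */
    static uint32_t extrh_i64_i32(uint64_t x) { return (uint32_t)(x >> 32); }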
2015-08-24  tcg: implement real ext_i32_i64 and extu_i32_i64 ops  (Aurelien Jarno)
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a 32-bit value is always converted to a 64-bit value and not propagated through the register allocator or the optimizer. Cc: Andrzej Zaborowski <balrogg@gmail.com> Cc: Alexander Graf <agraf@suse.de> Cc: Blue Swirl <blauwirbel@gmail.com> Cc: Stefan Weil <sw@weilnetz.de> Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
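Their semantics, again as a C sketch:

    #include <stdint.h>

    /* ext_i32_i64: sign-extend a 32-bit value to 64 bits */
    static int64_t ext_i32_i64(int32_t x) { return (int64_t)x; }

    /* extu_i32_i64: zero-extend a 32-bit value to 64 bits */
    static uint64_t extu_i32_i64(uint32_t x) { return (uint64_t)x; }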
2015-08-24  tcg: rename trunc_shr_i32 into trunc_shr_i64_i32  (Aurelien Jarno)
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32, and the name in the README doesn't match the name offered to the frontends. Always use the long name to make it clear it is a size changing op. Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23  tcg/aarch64: use 32-bit offset for 32-bit softmmu emulation  (Richard Henderson)
Similar to the corresponding fix for user-mode, except that this instance occurs on the softmmu path. Again, the tlb addend must be the base register, while the guest address is the index. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23  tcg/aarch64: use 32-bit offset for 32-bit user-mode emulation  (Paolo Bonzini)
Thanks to the previous patch, it is now easy for tcg_out_qemu_ld and tcg_out_qemu_st to use a 32-bit zero extended offset. However, the guest base register x28 must be the base and addr_reg must be the index. Reported-by: Leon Alrae <leon.alrae@imgtec.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <1436974021-28978-3-git-send-email-pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-07-23  tcg/aarch64: add ext argument to tcg_out_insn_3310  (Paolo Bonzini)
The new argument lets you pick uxtw or uxtx mode for the offset register. For now, all callers pass TCG_TYPE_I64 so that uxtx is generated. The bits for uxtx are removed from I3312_TO_I3310. Reported-by: Leon Alrae <leon.alrae@imgtec.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <1436974021-28978-2-git-send-email-pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
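A hedged sketch of how the extension option can be derived from the index type; the numeric values are the architectural encodings of UXTW and LSL/UXTX in the register-offset option field, while the helper itself is illustrative:

    /* Load/store register-offset 'option' field: 2 = UXTW (32-bit index,
     * zero-extended), 3 = LSL/UXTX (64-bit index used as-is). */
    static unsigned ldst_ext_option(TCGType index_type)
    {
        return index_type == TCG_TYPE_I64 ? 3 : 2;
    }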
2015-06-09  tcg: Mask TCGMemOp appropriately for indexing  (Richard Henderson)
The addition of MO_AMASK means that places that used inverted masks need to be changed to use positive masks, and places that failed to mask the intended bits need updating. Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com> Tested-by: Yongbok Kim <yongbok.kim@imgtec.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-06-03  tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS  (Paolo Bonzini)
This will be used to size the TLB when more than 8 MMU modes are used by the target. Limitations come from the limited size of the immediate fields (which sometimes, as in the case of AArch64, extend to instructions that shift the immediate). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
2015-05-14  tcg: Push merged memop+mmu_idx parameter to softmmu routines  (Richard Henderson)
The extra information is not yet used but it is now available. This requires minor changes through all of the tcg backends. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-05-14  tcg: Merge memop and mmu_idx parameters to qemu_ld/st  (Richard Henderson)
At the tcg opcode level, not at the tcg-op.h generator level. This requires minor changes through all of the tcg backends, but none of the cpu translators. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
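A hedged sketch of packing the two values into one opcode operand; the field widths and helper names below are illustrative, not quoted from the tree:

    #include <stdint.h>

    /* Sketch: a memop and an mmu_idx packed into one 32-bit operand. */
    typedef uint32_t TCGMemOpIdx;

    static inline TCGMemOpIdx make_memop_idx(unsigned op, unsigned mmu_idx)
    {
        return (op << 4) | mmu_idx;      /* assumes mmu_idx fits in 4 bits */
    }

    static inline unsigned get_memop(TCGMemOpIdx oi)  { return oi >> 4; }
    static inline unsigned get_mmuidx(TCGMemOpIdx oi) { return oi & 0xf; }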
2015-03-13  tcg: Change generator-side labels to a pointer  (Richard Henderson)
This is less about improved type checking than enabling a subsequent change to the representation of labels. Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com> Cc: Andrzej Zaborowski <balrogg@gmail.com> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Blue Swirl <blauwirbel@gmail.com> Cc: Stefan Weil <sw@weilnetz.de> Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-09-29  tcg-aarch64: Use 32-bit loads for qemu_ld_i32  (Richard Henderson)
The "old" qemu_ld opcode did not specify the size of the result, and so we had to assume full register width. With the new opcodes, we can narrow the result. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-06-04  tcg: Remove TCG_TARGET_HAS_new_ldst  (Richard Henderson)
Since all backends have been converted, remove the compatibility code. Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-28  tcg-aarch64: Make debug_frame const  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12  tcg: Remove unreachable code in tcg_out_op and op_defs  (Richard Henderson)
The INDEX_op_call case has just been obsoleted; the mov and movi cases have not been reachable for years. Attempt to document this both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT. Because of the TCG_OPF_NOT_PRESENT change, this must be done for all targets in a single commit. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12  tcg-aarch64: Define TCG_TARGET_INSN_UNIT_SIZE  (Richard Henderson)
And use tcg pointer differencing functions as appropriate. Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-28  tcg: Add INDEX_op_trunc_shr_i32  (Richard Henderson)
Let the backend do something special for truncation. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18  tcg: Use HOST_WORDS_BIGENDIAN  (Richard Henderson)
Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18  tcg-aarch64: Remove w constraint  (Richard Henderson)
Now redundant with the type parameter to tcg_target_const_match. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18  tcg: Add TCGType parameter to tcg_target_const_match  (Richard Henderson)
Most 64-bit targets need to be able to ignore the high bits of a TCG_TYPE_I32 value. Suggested-by: Stuart Brady <sdb@zubnet.me.uk> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-18  tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool  (Stefan Weil)
Static code analyzers complain about signed bitfields with only a single bit. is_ld is used as a boolean value, so make it bool. ppc64 already used bool for the 2nd argument is_ld of the local function add_qemu_ldst_label. Modify all other TCG targets to follow this example. Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Richard Henderson <rth@twiddle.net>
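The underlying C pitfall, as a generic illustration (not the patched structure):

    #include <stdbool.h>

    struct ldst_label_sketch {
        signed int is_ld_bad : 1;  /* 1-bit signed field: stores only 0 and -1 */
        bool       is_ld     : 1;  /* 1-bit boolean field: stores 0 and 1      */
    };
    /* After 'is_ld_bad = 1;', the field reads back as -1, so a comparison
     * like 'is_ld_bad == 1' is never true; using bool avoids the trap. */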
2014-04-16  tcg-aarch64: Use tcg_out_mov in preference to tcg_out_movr  (Richard Henderson)
It's the more canonical interface. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Prefer unsigned offsets before signed offsets for ldst  (Richard Henderson)
The assembler seems to prefer them, perhaps we should too. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Introduce tcg_out_insn_3312, _3310, _3313  (Richard Henderson)
Replace aarch64_ldst_op_data with AArch64LdstType, as it wasn't encoded for the proper shift for the field and was confusing. Merge aarch64_ldst_op_data, AArch64LdstType, and a few stray opcode bits into a single I3312_* argument, eliminating some magic numbers from the helper functions. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Merge aarch64_ldst_get_data/type into tcg_out_op  (Richard Henderson)
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Introduce tcg_out_insn_3507  (Richard Henderson)
Cleaning up the implementation of REV and REV16 at the same time. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Support stores of zero  (Richard Henderson)
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Implement TCG_TARGET_HAS_new_ldst  (Richard Henderson)
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Pass qemu_ld/st arguments directly  (Richard Henderson)
Instead of passing them the "args" array. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Use TCGMemOp in qemu_ld/st  (Richard Henderson)
Making the bswap conditional on the memop instead of a compile-time test. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Use ADR to pass the return address to the ld/st helpers  (Richard Henderson)
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Use tcg_out_call for qemu_ld/st  (Richard Henderson)
In some cases, a direct branch will be in range. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Avoid add with zero in tlb load  (Richard Henderson)
Some guest env are small enough to reach the tlb with only a 12-bit addition. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-04-16  tcg-aarch64: Implement tcg_register_jit  (Richard Henderson)
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>