path: root/target-arm/translate.c
2016-12-20  Move target-* CPU file into a target/ folder  (Thomas Huth)
We've currently got 18 architectures in QEMU, and thus 18 target-xxx folders in the root folder of the QEMU source tree. More architectures (e.g. RISC-V, AVR) are likely to be included soon, too, so the main folder of the QEMU sources slowly gets quite overcrowded with the target-xxx folders. To disburden the main folder a little bit, let's move the target-xxx folders into a dedicated target/ folder, so that target-xxx/ simply becomes target/xxx/ instead. Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part] Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part] Acked-by: Michael Walle <michael@walle.cc> [lm32 part] Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part] Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part] Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part] Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part] Acked-by: Richard Henderson <rth@twiddle.net> [alpha part] Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part] Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part] Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris&microblaze part] Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part] Signed-off-by: Thomas Huth <thuth@redhat.com>
2016-11-01  log: Add locking to large logging blocks  (Richard Henderson)
Reuse the existing locking provided by stdio to keep in_asm, cpu, op, op_opt, op_ind, and out_asm as contiguous blocks. While it isn't possible to interleave e.g. in_asm or op_opt logs because of the TB lock protecting all code generation, it is possible to interleave cpu logs, or to interleave a cpu dump with an out_asm dump. For mingw32, we appear to have no viable solution for this. The locking functions are not properly exported from the system runtime library. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-10-26  target-arm: remove EXCP_STREX + cpu_exclusive_{test, info}  (Emilio G. Cota)
The exception is not emitted anymore; remove it and the associated TCG variables. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <rth@twiddle.net> Message-Id: <1467054136-10430-31-git-send-email-cota@braap.org>
2016-10-26  target-arm: emulate SWP with atomic_xchg helper  (Emilio G. Cota)
Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1467054136-10430-25-git-send-email-cota@braap.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
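A rough reading of "emulate SWP with atomic_xchg", as a standalone C11 sketch with hypothetical names rather than the actual QEMU helper: the swap's load-then-store pair becomes a single atomic exchange.

    /* Hypothetical sketch of the SWP idea; not the QEMU helper. */
    #include <stdatomic.h>
    #include <stdint.h>

    static uint32_t emu_swp(_Atomic uint32_t *mem, uint32_t reg_val)
    {
        /* SWP returns the old memory value (written to the destination
         * register) while the source register value is stored to memory. */
        return atomic_exchange(mem, reg_val);
    }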
2016-10-26  target-arm: emulate LL/SC using cmpxchg helpers  (Emilio G. Cota)
Emulating LL/SC with cmpxchg is not correct, since it can suffer from the ABA problem. Portable parallel code, however, is written assuming only cmpxchg--and not LL/SC--is available. This means that in practice emulating LL/SC with cmpxchg is a viable alternative. The appended patch emulates LL/SC pairs in ARM with cmpxchg helpers. This works in both user and system mode. In user mode, it avoids pausing all other CPUs to perform the LL/SC pair. The subsequent performance and scalability improvement is significant, as the plots (omitted here) show: they plot the throughput of atomic_add-bench compiled for ARM and executed on a 64-core x86 machine. Hi-res plots: http://imgur.com/a/aNQpB [ASCII plots omitted: atomic_add-bench, 1000000 ops/thread, value ranges [0,1], [0,2], [0,128] and [0,1024]; throughput of cmpxchg vs. master against number of threads (0-60).] [rth: Enforce alignment for ldrexd.] Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1467054136-10430-23-git-send-email-cota@braap.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
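The mechanism (and the ABA window the message mentions) can be pictured with a minimal standalone sketch using C11 atomics; the names are hypothetical and this is not the QEMU implementation, which operates on guest memory through cmpxchg helpers.

    /* Hypothetical sketch: "LDREX" remembers the address and loaded value;
     * "STREX" succeeds only if the value at that address still compares
     * equal.  The ABA problem lives here: an A->B->A change is not seen. */
    #include <stdatomic.h>
    #include <stdint.h>

    typedef struct {
        _Atomic uint32_t *excl_addr;  /* address marked exclusive */
        uint32_t excl_val;            /* value observed by the "LDREX" */
    } excl_state;

    static uint32_t emu_ldrex(excl_state *s, _Atomic uint32_t *addr)
    {
        s->excl_addr = addr;
        s->excl_val = atomic_load(addr);
        return s->excl_val;
    }

    /* Returns 0 on success and 1 on failure, mirroring STREX's status. */
    static int emu_strex(excl_state *s, _Atomic uint32_t *addr, uint32_t newval)
    {
        uint32_t expected = s->excl_val;
        if (addr != s->excl_addr) {
            return 1;
        }
        return atomic_compare_exchange_strong(addr, &expected, newval) ? 0 : 1;
    }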
2016-10-26  target-arm: Rearrange aa32 load and store functions  (Richard Henderson)
Stop specializing on TARGET_LONG_BITS == 32; unconditionally allocate a temp and expand with tcg_gen_extu_i32_tl. Split out gen_aa32_addr, gen_aa32_frob64, gen_aa32_ld_i32 and gen_aa32_st_i32 as separate interfaces. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-10-24  target-arm: Implement new HLT trap for semihosting  (Peter Maydell)
Version 2.0 of the semihosting specification introduces new trap instructions for AArch32: HLT 0xF000 for A32 and HLT 0x3C for T32. Implement these (in the same way we implement the existing HLT semihosting trap for A64). The old traps via SVC and BKPT are unaffected. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1476792973-18508-1-git-send-email-peter.maydell@linaro.org
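A hedged sketch of the decode-time check this describes, with hypothetical helper names; only the immediates quoted in the message are taken as given.

    /* Illustrative only: semihosting HLT immediates named above,
     * i.e. HLT 0xF000 in A32 and HLT 0x3C in T32. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool a32_hlt_is_semihosting(uint32_t imm)
    {
        return imm == 0xf000;
    }

    static bool t32_hlt_is_semihosting(uint32_t imm)
    {
        return imm == 0x3c;
    }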
2016-10-17  Fix masking of PC lower bits when doing exception returns  (Peter Maydell)
In commit 9b6a3ea7a699594 store_reg() was changed to mask both bits 0 and 1 of the new PC value when in ARM mode. Unfortunately this broke the exception return code paths when doing a return from ARM mode to Thumb mode: in some of these we write a new CPSR including new Thumb mode bit via gen_helper_cpsr_write_eret(), and then use store_reg() to write the new PC. In this case if the new CPSR specified Thumb mode then masking bit 1 of the PC is incorrect (these code paths correspond to the v8 ARM ARM pseudocode function AArch32.ExceptionReturn(), which always aligns the new PC appropriately for the new instruction set state). Instead of using store_reg() in exception-return code paths, call a new store_pc_exc_ret() which stores the raw new PC value to env->regs[15], and then mask it appropriately in the subsequent helper_cpsr_write_eret() where the new env->thumb state is available. This fixes a bug introduced by 9b6a3ea7a699594 which caused crashes/hangs or otherwise bad behaviour for Linux when userspace was using Thumb. Reported-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1476113163-24578-1-git-send-email-peter.maydell@linaro.org
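A minimal sketch of the ordering described here, as standalone C with hypothetical names rather than the store_pc_exc_ret()/helper_cpsr_write_eret() pair itself: the raw PC is stored first, and the masking happens only once the new CPSR, and hence the new Thumb bit, is known.

    /* Hypothetical sketch of the split: store the raw PC, then mask it
     * after the exception-return CPSR write has set the new Thumb state. */
    #include <stdbool.h>
    #include <stdint.h>

    static void store_pc_exc_ret_sketch(uint32_t *regs15, uint32_t new_pc)
    {
        *regs15 = new_pc;                  /* raw value, not masked yet */
    }

    static void mask_pc_after_eret_sketch(uint32_t *regs15, bool new_thumb)
    {
        *regs15 &= new_thumb ? ~1u : ~3u;  /* align for the new ISA state */
    }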
2016-10-04  target-arm: Correctly handle 'sub pc, pc, 1' for ARMv6  (Peter Maydell)
In the ARM v6 architecture, 'sub pc, pc, 1' is not an interworking branch, so the computed new value is written to r15 as a normal value. The architecture says that in this case, bits [1:0] of the value written must be ignored if we are in ARM mode (or bit [0] ignored if in Thumb mode); this is a change from the ARMv4/v5 specification that behaviour is UNPREDICTABLE. Use the correct mask on the PC value when doing a non-interworking store to PC. A popular library used on RaspberryPi uses this instruction as part of a trick to determine whether it is running on ARMv6 or ARMv7, and we were mishandling the sequence. Fixes bug: https://bugs.launchpad.net/bugs/1625295 Reported-by: <stu.axon@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1474380941-4730-1-git-send-email-peter.maydell@linaro.org
2016-09-16  target-arm: Generate fences in ARMv7 frontend  (Pranith Kumar)
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-12-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-06-20  exec: [tcg] Track which vCPU is performing translation and execution  (Lluís Vilanova)
Information is tracked inside the TCGContext structure, and later used by tracing events with the 'tcg' and 'vcpu' properties. The 'cpu' field is used to check tracing of translation-time events ("*_trans"). The 'tcg_env' field is used to pass it to execution-time events ("*_exec"). Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-id: 146549350162.18437.3033661139638458143.stgit@fimbulvetr.bsc.es Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-06-14  target-arm: Don't permit ARMv8-only Neon insns on ARMv7  (Peter Maydell)
The Neon instructions VCVTA, VCVTM, VCVTN, VCVTP, VRINTA, VRINTM, VRINTN, VRINTP, VRINTX, and VRINTZ were only introduced with ARMv8, so they need a guard to make them UNDEF if the CPU only supports ARMv7. (We got this right for all the other new-in-v8 insns, but forgot it for these Neon 2-reg-misc ops.) Reported-by: Christophe Lyon <christophe.lyon@linaro.org> Tested-by: Christophe Lyon <christophe.lyon@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1465492511-9333-1-git-send-email-peter.maydell@linaro.org
2016-06-06  target-arm: A64: Create Instruction Syndromes for Data Aborts  (Edgar E. Iglesias)
Add support for generating the ISS (Instruction Specific Syndrome) for Data Abort exceptions taken from AArch64. These syndromes are used by hypervisors for example to trap and emulate memory accesses. We save the decoded data out-of-band with the TBs at translation time. When exceptions hit, the extra data attached to the TB is used to recreate the state needed to encode instruction syndromes. This avoids the need to emit moves with every load/store. Based on a suggestion from Peter Maydell. Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 1462464601-10888-2-git-send-email-edgar.iglesias@gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-05-19  cpu: move exec-all.h inclusion out of cpu.h  (Paolo Bonzini)
exec-all.h contains TCG-specific definitions. It is not needed outside TCG-specific files such as translate.c, exec.c or *helper.c. One generic function had snuck into include/exec/exec-all.h; move it to include/qom/cpu.h. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-12  tcg: Allow goto_tb to any target PC in user mode  (Sergey Fedorov)
In user mode there is only static address translation, TBs are always invalidated properly, and direct jumps are reset when the mapping changes. Thus the destination address is always valid for direct jumps and there is no need to restrict it to the pages the TB resides in. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Cc: Riku Voipio <riku.voipio@iki.fi> Cc: Blue Swirl <blauwirbel@gmail.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-05-12  tcg: Clean up direct block chaining safety checks  (Sergey Fedorov)
We don't take care of direct jumps when the address mapping changes, so we must only generate direct jumps that remain valid even if the address mapping changes. Luckily, we only allow a TB to execute if it was generated from pages which match the current mapping. Document the tcg_gen_goto_tb() declaration and note the reason for the destination PC limitations. Some targets with variable-length instructions allow a TB to straddle a page boundary. However, we make sure that both of the TB's pages match the current address mapping when looking up TBs, so it is safe to do direct jumps into both pages. Correct the checks for some of those targets. Given that, we can safely patch a TB which spans two pages. Remove the unnecessary check in cpu_exec() and allow such TBs to be patched. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-03-22  target-arm: dfilter support for in_asm  (Alex Bennée)
Each individual architecture needs to use the qemu_log_in_addr_range() feature for enabling in_asm output as it is part of the frontend. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-Id: <1458052224-9316-9-git-send-email-alex.bennee@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-16  target-arm: Implement MRS (banked) and MSR (banked) instructions  (Peter Maydell)
Starting with the ARMv7 Virtualization Extensions, the A32 and T32 instruction sets provide instructions "MSR (banked)" and "MRS (banked)" which can be used to access registers for a mode other than the current one: * R<m>_<mode> * ELR_hyp * SPSR_<mode> Implement the missing instructions. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 1456762734-23939-1-git-send-email-peter.maydell@linaro.org
2016-03-04  target-arm: Only trap SRS from S-EL1 if specified mode is MON  (Ralf-Philipp Weinmann)
Commit cbc0326b6fb9 caused SRS instructions executed from Secure EL1 to trap to EL3 even if the specified mode was not monitor mode. According to the ARMv8 Architecture reference manual [F6.1.203], ALL of the following conditions need to be met for SRS to trap to EL3: * It is executed at Secure PL1. * The specified mode is monitor mode. * EL3 is using AArch64. Correct the condition governing the trap to EL3 to check the specified mode. Signed-off-by: Ralf-Philipp Weinmann <ralf+devel@comsecuris.com> Message-id: 20160222224251.GA11654@beta.comsecuris.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: tweaked comment text to read 'specified mode'; edited commit message] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
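The corrected trap condition can be written out as a small standalone predicate; the monitor-mode encoding used below (M[4:0] = 0x16) is standard ARM, but the function and names are illustrative, not the QEMU code.

    /* All three conditions from the commit message must hold for SRS to
     * trap to EL3.  Illustrative only. */
    #include <stdbool.h>

    #define MODE_MON 0x16  /* M[4:0] encoding of Monitor mode */

    static bool srs_traps_to_el3(bool at_secure_pl1, unsigned specified_mode,
                                 bool el3_is_aarch64)
    {
        return at_secure_pl1 && specified_mode == MODE_MON && el3_is_aarch64;
    }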
2016-03-04  target-arm: implement BE32 mode in system emulation  (Paolo Bonzini)
System emulation only has a little-endian target; BE32 mode is implemented by adjusting the low bits of the address for every byte and halfword load and store. 64-bit accesses flip the low and high words. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [PC changes: * rebased against master (Jan 2016) ] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
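One plausible reading of "adjusting the low bits of the address", as a hedged standalone sketch; the commit does not spell out the exact transformation, so treat the XOR scheme below as an assumption that merely matches the description.

    /* Hedged sketch: on a little-endian data path, BE32 semantics for
     * byte and halfword accesses can be obtained by flipping low address
     * bits within the aligned 32-bit word; 64-bit accesses instead swap
     * the two 32-bit halves of the value.  Illustrative only. */
    #include <stdint.h>

    static uint32_t be32_adjust_addr(uint32_t addr, unsigned size_bytes)
    {
        switch (size_bytes) {
        case 1:  return addr ^ 3;  /* byte within the 32-bit word */
        case 2:  return addr ^ 2;  /* halfword within the 32-bit word */
        default: return addr;      /* word accesses are unchanged */
        }
    }

    static uint64_t be32_flip_words(uint64_t val)
    {
        return (val >> 32) | (val << 32);  /* flip low and high words */
    }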
2016-03-04  target-arm: implement setend  (Paolo Bonzini)
Since this is not a high-performance path, just use a helper to flip the E bit and force a lookup in the hash table since the flags have changed. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
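A minimal sketch of what such a helper amounts to, assuming a CPSR-style flags word with E at bit 9; the real helper also forces a new TB lookup because the translation flags change, which the sketch does not show.

    /* Illustrative only: the commit describes the helper as flipping the
     * E bit; presumably it is only emitted when the endianness actually
     * changes.  The translator then ends the TB so the changed flags are
     * honoured by the next TB. */
    #include <stdint.h>

    #define CPSR_E (1u << 9)

    static void setend_flip_sketch(uint32_t *cpsr)
    {
        *cpsr ^= CPSR_E;
    }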
2016-03-04  target-arm: introduce tbflag for endianness  (Peter Crosthwaite)
Introduce a tbflag for endianness, set based upon the CPU's current endianness. This in turn propagates through to the disas endianness flag. Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-03-04  target-arm: introduce disas flag for endianness  (Paolo Bonzini)
Introduce a disas flag for setting the CPU data endianness. This allows control of the endianness from the CPU state rather than hard-coding it to TARGET_WORDS_BIGENDIAN. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [ PC changes: * Split off as new patch from original: "target-arm: introduce tbflag for CPSR.E" * Wrote commit message from scratch ] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-03-04  target-arm: pass DisasContext to gen_aa32_ld*/st*  (Paolo Bonzini)
We'll need the DisasContext in the next patch to retrieve the desired endianness, so pass it as a whole to gen_aa32_ld*/st*. Unfortunately we cannot let those functions call get_mem_index, because of user-mode load/store instructions. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [ PC changes: * Fix long lines ] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-03-04  target-arm: implement SCTLR.B, drop bswap_code  (Paolo Bonzini)
bswap_code is a CPU property of sorts ("is the iside endianness the opposite way round to TARGET_WORDS_BIGENDIAN?") but it is not the actual CPU state involved here which is SCTLR.B (set for BE32 binaries, clear for BE8). Replace bswap_code with SCTLR.B, and pass that to arm_ld*_code. The next patches will make data fetches honor both SCTLR.B and CPSR.E appropriately. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [PC changes: * rebased on master (Jan 2016) * s/TARGET_USER_ONLY/CONFIG_USER_ONLY * Use bswap_code() for disas_set_info() instead of raw sctlr_b ] Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-03-01  tcg: Add type for vCPU pointers  (Lluís Vilanova)
Adds the 'TCGv_env' type for pointers to 'CPUArchState' objects. The tracing infrastructure later needs to differentiate between regular pointers and pointers to vCPUs. Also changes all targets to use the new 'TCGv_env' type instead of the generic 'TCGv_ptr'. As of now, the change is merely cosmetic ('TCGv_env' translates into 'TCGv_ptr'), but that could change in the future to enforce the difference. Note that a 'TCGv_env' type (for 'CPUState') is not added, since all helpers currently receive the architecture-specific pointer ('CPUArchState'). Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Acked-by: Richard Henderson <rth@twiddle.net> Message-id: 145641859552.30295.7821536833590725201.stgit@localhost Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-02-26  target-arm: Give CPSR setting on 32-bit exception return its own helper  (Peter Maydell)
The rules for setting the CPSR on a 32-bit exception return are subtly different from those for setting the CPSR via an instruction like MSR or CPS. (In particular, in Hyp mode changing the mode bits is not valid via MSR or CPS.) Split the exception-return case into its own helper for setting CPSR, so we can eventually handle them differently in the helper function. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1455556977-3644-2-git-send-email-peter.maydell@linaro.org
2016-02-18  target-arm: UNDEF in the UNPREDICTABLE SRS-from-System case  (Peter Maydell)
Make get_r13_banked() raise an exception at runtime for the corner case of SRS from System mode, so that we can UNDEF it; this brings us into line with the ARM ARM's set of permitted CONSTRAINED UNPREDICTABLE choices. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-02-18  target-arm: Clean up trap/undef handling of SRS  (Peter Maydell)
The SRS instruction is:
 * UNDEFINED in Hyp mode
 * UNPREDICTABLE in User or System mode
 * UNPREDICTABLE if the specified mode isn't accessible
 * trapped to EL3 if EL3 is AArch64 and we are at Secure EL1
Clean up the code to handle all these cases cleanly, including picking UNDEF as our choice of UNPREDICTABLE behaviour rather than blindly trusting the mode field passed in the instruction. As part of this, move the check for IS_USER into gen_srs() itself rather than having it done by the caller. The exception is that we don't UNDEF for calls from System mode, which need a runtime check; this is dealt with in the following commits. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-02-11  target-arm: Fix IL bit reported for Thumb VFP and Neon traps  (Peter Maydell)
All Thumb Neon and VFP instructions are 32 bits, so the IL bit in the syndrome register should be set. Pass false to the syn_* function's is_16bit argument rather than s->thumb so we report the correct IL bit. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1454683067-16001-4-git-send-email-peter.maydell@linaro.org
2016-02-11  target-arm: Fix IL bit reported for Thumb coprocessor traps  (Peter Maydell)
All Thumb coprocessor instructions are 32 bits, so the IL bit in the syndrome register should be set. Pass false to the syn_* function's is_16bit argument rather than s->thumb so we report the correct IL bit. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1454683067-16001-3-git-send-email-peter.maydell@linaro.org
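Both IL-bit fixes above amount to the same thing: the syndrome must say "32-bit instruction" even though the CPU is in Thumb state. A hedged sketch of the idea, using the ESR-style IL field at bit 25 and a hypothetical helper rather than the actual syn_* functions:

    /* Illustrative syndrome construction: IL (bit 25) is set for 32-bit
     * instructions.  Thumb VFP/Neon and coprocessor insns are always
     * 32 bits, so is_16bit must be false there regardless of s->thumb. */
    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t syn_with_il(uint32_t syndrome, bool is_16bit)
    {
        const uint32_t IL_BIT = 1u << 25;
        return is_16bit ? (syndrome & ~IL_BIT) : (syndrome | IL_BIT);
    }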
2016-02-11  target-arm: Add isread parameter to CPAccessFns  (Peter Maydell)
System registers might have access requirements which need to be described via a CPAccessFn and which differ for reads and writes. For this to be possible we need to pass the access function a parameter to tell it whether the access being checked is a read or a write. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1454506721-11843-6-git-send-email-peter.maydell@linaro.org
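A hedged sketch of the shape of this change, with deliberately simplified types; the real callback operates on the CPU state and register-info structures rather than void pointers.

    /* Illustrative only: an access-check callback that can distinguish
     * reads from writes via the new isread parameter. */
    #include <stdbool.h>

    typedef enum { CP_ACCESS_OK_SKETCH, CP_ACCESS_TRAP_SKETCH } CPAccessResultSketch;

    typedef CPAccessResultSketch (*CPAccessFnSketch)(void *env, void *reg_info,
                                                     bool isread);

    static CPAccessResultSketch example_access_fn(void *env, void *reg_info,
                                                  bool isread)
    {
        (void)env;
        (void)reg_info;
        /* e.g. a register that may be read but not written at this level */
        return isread ? CP_ACCESS_OK_SKETCH : CP_ACCESS_TRAP_SKETCH;
    }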
2016-02-09  tcg: Change tcg_global_mem_new_* to take a TCGv_ptr  (Richard Henderson)
Thus, use cpu_env as the parameter, not TCG_AREG0 directly. Update all uses in the translators. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-02-09  tcg: Remove lingering references to gen_opc_buf  (Richard Henderson)
Three in comments and one in code in the stub tcg_liveness_analysis. Signed-off-by: Richard Henderson <rth@twiddle.net>
2016-02-03  log: do not unnecessarily include qom/cpu.h  (Paolo Bonzini)
Split the bits that require it to exec/log.h. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-id: 1452174932-28657-8-git-send-email-den@openvz.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-01-18  target-arm: Clean up includes  (Peter Maydell)
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Daniel P. Berrange <berrange@redhat.com> Message-id: 1449505425-32022-3-git-send-email-peter.maydell@linaro.org
2015-12-17  target-arm: Fix and improve AA32 singlestep translation completion code  (Sergey Fedorov)
The AArch32 translation completion code for the singlestep enabled/active case was far more confusing and repetitive than it needed to be. That is probably how a bug was introduced into it at some point: an SWI/HVC/SMC exception would be generated on the condition-failed instruction code path, whereas it shouldn't be. This patch rewrites the code in a way similar to the non-singlestep case. In the condition-passed/unconditional instruction code path we need to:
 - Write the condexec bits back to the CPU state
 - Advance the singlestep state machine and generate a corresponding exception in the SWI/HVC/SMC case
 - Write the PC back to the CPU state if it hasn't already been written, and generate an appropriate singlestep exception otherwise
In the condition-failed instruction code path we need to:
 - Set a TCG label to jump to if the condition check fails
 - Write the condexec bits back to the CPU state
 - Write the PC back to the CPU state, since it hasn't been written in this case
 - Generate an appropriate singlestep exception
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1448474560-22475-1-git-send-email-serge.fdrv@gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-12-17  target-arm: raise exception on misaligned LDREX operands  (Andrew Baumann)
QEMU does not generally perform alignment checks. However, the ARM ARM requires implementation of alignment exceptions for a number of cases including LDREX, and Windows-on-ARM relies on this. This change adds plumbing to enable alignment checks on loads using MO_ALIGN, a do_unaligned_access hook to raise the exception (data abort), and uses the new aligned loads in LDREX (for all but single-byte loads). Signed-off-by: Andrew Baumann <Andrew.Baumann@microsoft.com> Message-id: 1449167808-5656-1-git-send-email-Andrew.Baumann@microsoft.com [PMM: set WnR bits in syndrome and FSR as appropriate] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
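The alignment rule being enforced can be stated as a small predicate; this is a sketch of the requirement (natural alignment for everything larger than a byte), not of QEMU's MO_ALIGN machinery.

    /* Natural-alignment check for LDREX variants: accesses wider than a
     * byte must be aligned to their size, otherwise a data abort
     * (alignment fault) is raised.  Illustrative only. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool ldrex_alignment_ok(uint32_t addr, unsigned size_bytes)
    {
        return size_bytes <= 1 || (addr & (size_bytes - 1)) == 0;
    }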
2015-11-19  target-arm: Update condexec before arch BP check in AA32 translation  (Sergey Fedorov)
An architectural breakpoint check could raise an exception, so the condexec bits should be updated before calling gen_helper_check_breakpoints(). Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1447767527-21268-3-git-send-email-serge.fdrv@gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-11-19  target-arm: Update condexec before CP access check in AA32 translation  (Sergey Fedorov)
Coprocessor access instructions are allowed inside an IT block. gen_helper_access_check_cp_reg() can raise an exception, so the condexec bits should be updated before it is called. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1447767527-21268-2-git-send-email-serge.fdrv@gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-11-12  target-arm: Update PC before calling gen_helper_check_breakpoints()  (Sergey Fedorov)
PC should be updated in the CPU state before calling check_breakpoints() helper. Otherwise, the helper would not see the correct PC in the CPU state if it is not at the start of a TB. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1447176222-16401-1-git-send-email-serge.fdrv@gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-11-10  target-arm: Clean up DISAS_UPDATE usage in AArch32 translation code  (Sergey Fedorov)
The AArch32 translation code does not distinguish between DISAS_UPDATE and DISAS_JUMP. Thus, we cannot use either of them without first updating the PC in the CPU state. Furthermore, it is too complicated to update the PC in the CPU state before the PC gets updated in the disas context. So it is hardly possible to correctly end a TB early if it is not likely to be executed before calling disas_*_insn(), e.g. just after calling the breakpoint check helper. Modify DISAS_UPDATE and DISAS_JUMP usage in AArch32 translation and apply to them the same semantics as AArch64 translation does:
 - DISAS_UPDATE: update PC in CPU state when finishing translation
 - DISAS_JUMP: preserve current PC value in CPU state when finishing translation
This patch fixes a bug in AArch32 breakpoint handling: when the check_breakpoints helper does not generate an exception, ending the TB early with DISAS_UPDATE could not update the PC in the CPU state, and execution would hang. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Message-id: 1447097859-586-1-git-send-email-serge.fdrv@gmail.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-11-03  target-arm: Report S/NS status in the CPU debug logs  (Peter Maydell)
If this CPU supports EL3, enhance the printing of the current CPU mode in debug logging to distinguish S from NS modes as appropriate. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1445883178-576-3-git-send-email-peter.maydell@linaro.org
2015-10-28  target-*: Advance pc after recognizing a breakpoint  (Richard Henderson)
Some targets already had this within their logic, but make sure it's present for all targets. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-27  target-arm/translate.c: Handle non-executable page-straddling Thumb insns  (Peter Maydell)
When the memory we're trying to translate code from is not executable we have to turn this into a guest fault. In order to report the correct PC for this fault, and to make sure it is not reported until after any other possible faults for instructions earlier in execution, we must terminate TBs at the end of a page, in case the next instruction is in a non-executable page. This is simple for T16, A32 and A64 instructions, which are always aligned to their size. However T32 instructions may be 32-bits but only 16-aligned, so they can straddle a page boundary. Correct the condition that checks whether the next instruction will touch the following page, to ensure that if we're 2 bytes before the boundary and this insn is T32 then we end the TB. Reported-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
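The corrected end-of-TB condition can be pictured like this, as a sketch under the assumptions stated in the message (32-bit T32 instructions at 16-bit alignment, with a 4 KiB page size assumed for illustration):

    /* Illustrative only: a 32-bit T32 instruction starting 2 bytes before
     * a page boundary straddles into the next page, so the TB must end
     * before it; T16, A32 and A64 insns are size-aligned and cannot. */
    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_SIZE 4096u

    static bool t32_insn_straddles_page(uint32_t pc, bool insn_is_32bit_thumb)
    {
        uint32_t bytes_left = TARGET_PAGE_SIZE - (pc & (TARGET_PAGE_SIZE - 1));
        return insn_is_32bit_thumb && bytes_left == 2;
    }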
2015-10-16  target-arm: Fix CPU breakpoint handling  (Sergey Fedorov)
A QEMU breakpoint match is not necessarily an architectural breakpoint match. If an exception is generated unconditionally during translation, it is hardly possible to ignore it in the debug exception handler. Generate a call to a helper to check CPU breakpoints and raise an exception only if a breakpoint matches architecturally. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-10-16  target-arm: Break the TB after ISB to execute self-modified code correctly  (Sergey Sorokin)
If a store instruction writes to code later in the same TB, execution of the TB must be stopped so that the new code is executed correctly. As described in the ARMv8 manual D3.4.6, self-modifying code must perform an IC invalidation to be valid, followed by an ISB. So it is enough to end the TB after an ISB instruction during code translation. This TB break is also necessary to take any pending interrupts immediately after an ISB (as required by ARMv8 ARM D1.14.4). Signed-off-by: Sergey Sorokin <afarallax@yandex.ru> [PMM: tweaked commit message and comments slightly] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-10-07  tcg: Remove gen_intermediate_code_pc  (Richard Henderson)
It is no longer used, so tidy up everything reached by it. This includes the gen_opc_* arrays, the search_pc parameter and the inline gen_intermediate_code_internal functions. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07  tcg: Pass data argument to restore_state_to_opc  (Richard Henderson)
The gen_opc_* arrays are already redundant with the data stored in the insn_start arguments. Transition restore_state_to_opc to use data from the latter. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-10-07  tcg: Add TCG_MAX_INSNS  (Richard Henderson)
Adjust all translators to respect it. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>