path: root/target/arm/op_helper.c
2019-07-30  target/arm: Deliver BKPT/BRK exceptions to correct exception level  (Peter Maydell)
Most Arm architectural debug exceptions (eg watchpoints) are ignored if the configured "debug exception level" is below the current exception level (so for example EL1 can't arrange to get debug exceptions for EL2 execution). Exceptions generated by the BRK or BKPT instructions are a special case -- they must always cause an exception, so if we're executing above the debug exception level then we must take them to the current exception level. This fixes a bug where executing BRK at EL2 could result in an exception being taken at EL1 (which is strictly forbidden by the architecture). Fixes: https://bugs.launchpad.net/qemu/+bug/1838277 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190730132522.27086-1-peter.maydell@linaro.org
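The rule in words: BRK/BKPT exceptions are never suppressed, so the delivery EL is whichever is higher of the configured debug target EL and the current EL. A minimal sketch of that selection, reusing the existing arm_debug_target_el() and arm_current_el() helpers (illustrative only, not the literal patch):

    /* Sketch: pick the EL at which a BRK/BKPT exception is delivered.
     * Unlike watchpoints etc, these are never ignored: if we are already
     * running above the debug target EL, take the exception at the
     * current EL instead.
     */
    static int brk_bkpt_target_el(CPUARMState *env)
    {
        int debug_el = arm_debug_target_el(env);
        int cur_el = arm_current_el(env);

        return debug_el > cur_el ? debug_el : cur_el;
    }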
2019-07-04  target/arm: Move debug routines to debug_helper.c  (Philippe Mathieu-Daudé)
These routines are TCG specific. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190701194942.10092-2-philmd@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Move TLB related routines to tlb_helper.c  (Philippe Mathieu-Daudé)
These routines are TCG specific. The arm_deliver_fault() function is only used within the new helper. Make it static. Suggested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190701132516.26392-13-philmd@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Move the DC ZVA helper into op_helper  (Samuel Ortiz)
Those helpers are a software implementation of the ARM v8 memory zeroing op code. They should be moved to the op helper file, which is going to eventually be built only when TCG is enabled. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Robert Bradford <robert.bradford@intel.com> Signed-off-by: Samuel Ortiz <sameo@linux.intel.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190701132516.26392-10-philmd@redhat.com [PMD: Rebased] Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-01  target/arm: Fix multiline comment syntax  (Philippe Mathieu-Daudé)
Since commit 8c06fbdf36b checkpatch.pl enforces a new multiline comment syntax. Since we'll move this code around, fix its style first. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190701132516.26392-8-philmd@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
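For reference, the multiline comment style that checkpatch.pl enforces (opening and closing markers on their own lines) looks like this:

    /*
     * QEMU multiline comments start with a lone opening marker,
     * continue with aligned asterisks, and end with a lone closing
     * marker; single-line comments are unaffected.
     */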
2019-06-10  target/arm: Use env_cpu, env_archcpu  (Richard Henderson)
Cleanup in the boilerplate that each target must define. Replace arm_env_get_cpu with env_archcpu. The combination CPU(arm_env_get_cpu) should have used ENV_GET_CPU to begin with; use env_cpu now. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
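The shape of the cleanup, as a sketch (the accessor names are the real ones named above; the wrapping function is illustrative):

    /* Illustrative only: how call sites change with the new accessors. */
    static void accessor_example(CPUARMState *env)
    {
        /*
         * Old: ARMCPU *cpu = arm_env_get_cpu(env);
         *      CPUState *cs = CPU(cpu);
         * New: derive both directly from the env pointer.
         */
        ARMCPU *cpu = env_archcpu(env);
        CPUState *cs = env_cpu(env);

        (void)cpu;
        (void)cs;
    }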
2019-05-10  target/arm: Convert to CPUClass::tlb_fill  (Richard Henderson)
Cc: qemu-arm@nongnu.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-03-05  target/arm: Add set/clear_pstate_bits, share gen_ss_advance  (Richard Henderson)
We do not need an out-of-line helper for manipulating bits in pstate. While changing things, share the implementation of gen_ss_advance. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190301200501.16533-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-03-05  target/arm: Split helper_msr_i_pstate into 3  (Richard Henderson)
The EL0+UMA check is unique to DAIF. While SPSel had avoided the check by nature of already checking EL >= 1, the other post v8.0 extensions to MSR (imm) allow EL0 and do not require UMA. Avoid the unconditional write to pc and use raise_exception_ra to unwind. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190301200501.16533-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Move helper_exception_return to helper-a64.c  (Richard Henderson)
This function is only used by AArch64. Code movement only. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190108223129.5570-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-01-21  target/arm: Introduce raise_exception_ra  (Richard Henderson)
This path uses cpu_loop_exit_restore to unwind current processor state. Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190108223129.5570-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-12-13  target/arm: Use arm_hcr_el2_eff more places  (Richard Henderson)
Since arm_hcr_el2_eff includes a check against arm_is_secure_below_el3, we can often remove a nearby check against secure state. In some cases, sort the call to arm_hcr_el2_eff to the end of a short-circuit logical sequence. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181210150501.7990-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-11-19  target/arm: fix smc incorrectly trapping to EL3 when secure is off  (Luc Michel)
This commit fixes a case where the CPU would try to go to EL3 when executing an smc instruction, even though ARM_FEATURE_EL3 is false. This case is raised when the PSCI conduit is set to smc, but the smc instruction does not lead to a valid PSCI call. QEMU crashes with an assertion failure later on because of incoherent mmu_idx. This commit refactors the pre_smc helper by enumerating all the possible ways of handling an smc instruction, and covering the previously missing case leading to the crash. The following minimal test would crash before this commit:

    .global _start
    .text
    _start:
        ldr x0, =0xdeadbeef  ; invalid PSCI call
        smc #0

run with the following command line:

    aarch64-linux-gnu-gcc -nostdinc -nostdlib -Wl,-Ttext=40000000 \
        -o test test.s
    qemu-system-aarch64 -M virt,virtualization=on,secure=off \
        -cpu cortex-a57 -kernel test

Signed-off-by: Luc Michel <luc.michel@greensocs.com> Message-id: 20181117160213.18995-1-luc.michel@greensocs.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-11-13  target/arm: Hyp mode R14 is shared with User and System  (Peter Maydell)
Hyp mode is an exception to the general rule that each AArch32 mode has its own r13, r14 and SPSR -- it has a banked r13 and SPSR but shares its r14 with User and System mode. We were incorrectly implementing it as banked, which meant that on entry to Hyp mode r14 was 0 rather than the USR/SYS r14. We provide a new function r14_bank_number() which is like the existing bank_number() but provides the index into env->banked_r14[]; bank_number() provides the index to use for env->banked_r13[] and env->banked_cpsr[]. All the points in the code that were using bank_number() to index into env->banked_r14[] are updated for consistency:
 * switch_mode() -- this is the only place where we fix an actual bug
 * aarch64_sync_32_to_64() and aarch64_sync_64_to_32(): no behavioural change as we already special-cased Hyp R14
 * kvm32.c: no behavioural change since the guest can't ever be in Hyp mode, but conceptually the right thing to do
 * msr_banked()/mrs_banked(): we can never get to the case that accesses banked_r14[] with tgtmode == ARM_CPU_MODE_HYP, so no behavioural change
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20181109173553.22341-2-peter.maydell@linaro.org
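A plausible shape for the new helper described above (BANK_USRSYS is the existing shared User/System bank index; treat the body as a sketch rather than the exact patch):

    /* Sketch: like bank_number(), but for indexing env->banked_r14[],
     * where Hyp mode shares its r14 with User/System rather than
     * having a banked copy of its own.
     */
    static inline int r14_bank_number(int mode)
    {
        return (mode == ARM_CPU_MODE_HYP) ? BANK_USRSYS : bank_number(mode);
    }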
2018-10-24  target/arm: New utility function to extract EC from syndrome  (Peter Maydell)
Create and use a utility function to extract the EC field from a syndrome, rather than open-coding the shift. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
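The utility is essentially a one-line accessor; a sketch, assuming the existing ARM_EL_EC_SHIFT constant (the EC field sits in the top bits of the syndrome):

    /* Sketch: extract the exception class (EC) field from a syndrome
     * value instead of open-coding the shift at every call site.
     */
    static inline uint32_t syn_get_ec(uint32_t syn)
    {
        return syn >> ARM_EL_EC_SHIFT;
    }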
2018-10-16  target/arm: Fix aarch64_sve_change_el wrt EL0  (Richard Henderson)
At present we assert: arm_el_is_aa64: Assertion `el >= 1 && el <= 3' failed. The comment in arm_el_is_aa64 explains why asking about EL0 without extra information is impossible. Add an extra argument to provide it from the surrounding context. Fixes: 0ab5953b00b3 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181008212205.17752-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-10-08  target/arm: Add v8M stack limit checks on NS function calls  (Peter Maydell)
Check the v8M stack limits when pushing the frame for a non-secure function call via BLXNS. In order to be able to generate the exception we need to promote raise_exception() from being local to op_helper.c so we can call it from helper.c. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-8-peter.maydell@linaro.org
2018-10-08  target/arm: Add v8M stack checks on ADD/SUB/MOV of SP  (Peter Maydell)
Add code to insert calls to a helper function to do the stack limit checking when we handle these forms of instruction that write to SP:
 * ADD (SP plus immediate)
 * ADD (SP plus register)
 * SUB (SP minus immediate)
 * SUB (SP minus register)
 * MOV (register)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181002163556.10279-5-peter.maydell@linaro.org
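At run time the helper raises a v8M STKOF UsageFault when the newly written SP is below the active stack limit. A minimal sketch of that check; v8m_stack_limit() and raise_stkof_usagefault() are hypothetical stand-ins for the M-profile state lookup and exception machinery:

    /* Sketch: run-time side of the v8M stack limit check.  The translator
     * emits a call to something like this whenever an instruction writes
     * a new value to SP.
     */
    static void v8m_stack_check(CPUARMState *env, uint32_t new_sp)
    {
        uint32_t limit = v8m_stack_limit(env);   /* hypothetical accessor */

        if (new_sp < limit) {
            raise_stkof_usagefault(env);         /* hypothetical raiser */
        }
    }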
2018-10-08  target/arm: Handle SVE vector length changes in system mode  (Richard Henderson)
SVE vector length can change when changing EL, or when writing to one of the ZCR_ELn registers. For correctness, our implementation requires that predicate bits that are inaccessible are never set. Which means noticing length changes and zeroing the appropriate register bits. Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20181005175350.30752-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-20  target/arm: Permit accesses to ELR_Hyp from Hyp mode via MSR/MRS (banked)  (Peter Maydell)
The MSR (banked) and MRS (banked) instructions allow accesses to ELR_Hyp from either Monitor or Hyp mode. Our translate time check was overly strict and only permitted access from Monitor mode. The runtime check we do in msr_mrs_banked_exc_checks() had the correct code in it, but never got there because of the earlier "currmode == tgtmode" check. Special case ELR_Hyp. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Luc Michel <luc.michel@greensocs.com> Message-id: 20180814124254.5229-9-peter.maydell@linaro.org
2018-08-14  target/arm: Honour HCR_EL2.TGE when raising synchronous exceptions  (Peter Maydell)
When we raise a synchronous exception, if HCR_EL2.TGE is set then exceptions targeting NS EL1 must be redirected to EL2. Implement this in raise_exception() -- all synchronous exceptions go through this function. (Asynchronous exceptions go via arm_cpu_exec_interrupt(), which already honours HCR_EL2.TGE when it determines the target EL in arm_phys_excp_target_el().) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
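The redirection is a small test applied once the target EL has been computed; a sketch under the assumption that HCR_TGE and the usual secure-state helpers are available (illustrative, not the literal hunk in raise_exception()):

    /* Sketch: HCR_EL2.TGE redirects synchronous exceptions that would
     * otherwise be taken to Non-secure EL1 so they go to EL2 instead.
     */
    static int redirect_target_el_for_tge(CPUARMState *env, int target_el)
    {
        if (target_el == 1 && !arm_is_secure(env) &&
            (env->cp15.hcr_el2 & HCR_TGE)) {
            return 2;
        }
        return target_el;
    }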
2018-04-26  target/arm: Add pre-EL change hooks  (Aaron Lindsay)
Because the design of the PMU requires that the counter values be converted between their delta and guest-visible forms for mode filtering, an additional hook which occurs before the EL is changed is necessary. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Message-id: 1523997485-1905-8-git-send-email-alindsay@codeaurora.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-04-11  icount: fix cpu_restore_state_from_tb for non-tb-exit cases  (Pavel Dovgalyuk)
In icount mode, instructions that access io memory spaces in the middle of the translation block invoke TB recompilation. After recompilation, such instructions become last in the TB and are allowed to access io memory spaces. When the code includes an instruction like the i386 'xchg eax, 0xffffd080' which accesses APIC, QEMU goes into an infinite loop of the recompilation. This instruction includes two memory accesses - one read and one write. After the first access, APIC calls cpu_report_tpr_access, which restores the CPU state to get the current eip. But cpu_restore_state_from_tb resets the cpu->can_do_io flag which makes the second memory access invalid. Therefore the second memory access causes a recompilation of the block. Then these operations repeat again and again. This patch moves resetting the cpu->can_do_io flag from cpu_restore_state_from_tb to the cpu_loop_exit* functions. It also adds a parameter for cpu_restore_state which controls restoring icount. There is no need to restore icount when we only query CPU state without breaking the TB. Restoring it in such cases leads to the incorrect flow of the virtual time. In most cases the new parameter is true (icount should be recalculated). But there are two cases in i386 and openrisc when the CPU state is only queried without the need to break the TB. This patch fixes both of these cases. Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru> Message-Id: <20180409091320.12504.35329.stgit@pasha-VirtualBox> [rth: Make can_do_io setting unconditional; move from cpu_exec; make cpu_loop_exit_{noexc,restore} call cpu_loop_exit.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-03-23  target/arm: Always set FAR to a known unknown value for debug exceptions  (Peter Maydell)
For debug exceptions due to breakpoints or the BKPT instruction which are taken to AArch32, the Fault Address Register is architecturally UNKNOWN. We were using that as license to simply not set env->exception.vaddress, but this isn't correct, because it will expose to the guest whatever old value was in that field when arm_cpu_do_interrupt_aarch32() writes it to the guest IFSR. That old value might be a FAR for a previous guest EL2 or secure exception, in which case we shouldn't show it to an EL1 or non-secure exception handler. It might also be a non-deterministic value, which is bad for record-and-replay. Clear env->exception.vaddress before taking breakpoint debug exceptions, to avoid this minor information leak. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180320134114.30418-5-peter.maydell@linaro.org
2018-03-23  target/arm: Set FSR for BKPT, BRK when raising exception  (Peter Maydell)
Now that we have a helper function specifically for the BRK and BKPT instructions, we can set the exception.fsr there rather than in arm_cpu_do_interrupt_aarch32(). This allows us to use our new arm_debug_exception_fsr() helper. In particular this fixes a bug where we were hardcoding the short-form IFSR value, which is wrong if the target exception level has LPAE enabled. Fixes: https://bugs.launchpad.net/qemu/+bug/1756927 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180320134114.30418-4-peter.maydell@linaro.org
2018-03-23  target/arm: Factor out code to calculate FSR for debug exceptions  (Peter Maydell)
When a debug exception is taken to AArch32, it appears as a Prefetch Abort, and the Instruction Fault Status Register (IFSR) must be set. The IFSR has two possible formats, depending on whether LPAE is in use. Factor out the code in arm_debug_excp_handler() which picks an FSR value into its own utility function, update it to use arm_fi_to_lfsc() and arm_fi_to_sfsc() rather than hard-coded constants, and use the correct condition to select long or short format. In particular this fixes a bug where we could select the short format because we're at EL0 and the EL1 translation regime is not using LPAE, but then route the debug exception to EL2 because of MDCR_EL2.TDE and hand EL2 the wrong format FSR. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180320134114.30418-3-peter.maydell@linaro.org
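A sketch of the factored-out selection, assuming the arm_fi_to_lfsc()/arm_fi_to_sfsc() encoders named above and a hypothetical predicate for whether the target translation regime uses LPAE (the real function derives that from the target EL and TTBCR.EAE):

    /* Sketch: compute the FSR reported for a debug exception taken to
     * AArch32, using long-descriptor (LPAE) format only when the target
     * translation regime actually uses it.
     */
    static uint32_t debug_exception_fsr(CPUARMState *env)
    {
        ARMMMUFaultInfo fi = { .type = ARMFault_Debug };

        if (target_regime_uses_lpae(env)) {   /* hypothetical predicate */
            return arm_fi_to_lfsc(&fi);
        }
        return arm_fi_to_sfsc(&fi);
    }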
2018-03-23  target/arm: Honour MDCR_EL2.TDE when routing exceptions due to BKPT/BRK  (Peter Maydell)
The MDCR_EL2.TDE bit allows the exception level targeted by debug exceptions to be set to EL2 for code executing at EL0. We handle this in the arm_debug_target_el() function, but this is only used for hardware breakpoint and watchpoint exceptions, not for the exception generated when the guest executes an AArch32 BKPT or AArch64 BRK instruction. We don't have enough information for a translate-time equivalent of arm_debug_target_el(), so instead make BKPT and BRK call a special purpose helper which can do the routing, rather than the generic exception_with_syndrome helper. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180320134114.30418-2-peter.maydell@linaro.org
2018-01-26  Merge remote-tracking branch 'remotes/vivier/tags/m68k-for-2.12-pull-request' into staging  (Peter Maydell)
# gpg: Signature made Thu 25 Jan 2018 15:15:03 GMT
# gpg: using RSA key 0xF30C38BD3F2FBE3C
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>"
# gpg: aka "Laurent Vivier <laurent@vivier.eu>"
# gpg: aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>"
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F 5173 F30C 38BD 3F2F BE3C
* remotes/vivier/tags/m68k-for-2.12-pull-request:
  target/m68k: add HMP command "info tlb"
  target/m68k: add pflush/ptest
  target/m68k: add moves
  target/m68k: add index parameter to gen_load()/gen_store() and Co.
  target/m68k: add Transparent Translation
  target/m68k: add MC68040 MMU
  accel/tcg: add size parameter in tlb_fill()
  target/m68k: fix TCG variable double free
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25  accel/tcg: add size parameter in tlb_fill()  (Laurent Vivier)
The MC68040 MMU provides the size of the access that triggers the page fault. This size is set in the Special Status Word which is written in the stack frame of the access fault exception. So we need the size in m68k_cpu_unassigned_access() and m68k_cpu_handle_mmu_fault(). To be able to do that, this patch modifies the prototype of handle_mmu_fault handler, tlb_fill() and probe_write(). do_unassigned_access() already includes a size parameter. This patch also updates handle_mmu_fault handlers and tlb_fill() of all targets (only parameter, no code change). Signed-off-by: Laurent Vivier <laurent@vivier.eu> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20180118193846.24953-2-laurent@vivier.eu>
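The shape of the prototype change, showing where the new size argument slots in (signatures sketched from the description above; the exact parameter order in the tree may differ):

    /* Before:
     *   void tlb_fill(CPUState *cs, target_ulong addr,
     *                 MMUAccessType access_type, int mmu_idx,
     *                 uintptr_t retaddr);
     * After: the access size is threaded through so m68k can record it
     * in the Special Status Word of the access fault frame.
     */
    void tlb_fill(CPUState *cs, target_ulong addr, int size,
                  MMUAccessType access_type, int mmu_idx,
                  uintptr_t retaddr);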
2018-01-25  target/arm: Use pointers in neon tbl helper  (Richard Henderson)
Rather than passing a regno to the helper, pass pointers to the vector register directly. This eliminates the need to pass in the environment pointer and reduces the number of places that directly access env->vfp.regs[]. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180119045438.28582-5-richard.henderson@linaro.org Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-16  target/arm: Handle page table walk load failures correctly  (Peter Maydell)
Instead of ignoring the response from address_space_ld*() (indicating an attempt to read a page table descriptor from an invalid physical address), use it to report the failure correctly. Since this is another couple of locations where we need to decide the value of the ARMMMUFaultInfo ea bit based on a MemTxResult, we factor out that operation into a helper function. Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
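The factored-out decision is tiny; a sketch with a hypothetical helper name, assuming decode errors are the one MemTxResult that should not be reported as an external abort:

    /* Sketch: derive the ARMMMUFaultInfo 'ea' (external abort type) bit
     * from the MemTxResult of a page table descriptor load.
     */
    static bool fault_ea_from_memtx(MemTxResult result)
    {
        return result != MEMTX_DECODE_ERROR;
    }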
2017-12-27  target/*helper: don't check retaddr before calling cpu_restore_state  (Alex Bennée)
cpu_restore_state officially supports being passed an address it can't resolve the state for. As a result the checks in the helpers are superfluous and can be removed. This makes the code consistent with other users of cpu_restore_state. Of course this does nothing to address what to do if cpu_restore_state can't resolve the state, but so far it seems this is handled elsewhere. The change was made with the included coccinelle script. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> [rth: Fixed up comment indentation. Added second hunk to script to combine cpu_restore_state and cpu_loop_exit.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-12-13  target/arm: Remove fsr argument from get_phys_addr() and arm_tlb_fill()  (Peter Maydell)
All of the callers of get_phys_addr() and arm_tlb_fill() now ignore the FSR values they return, so we can just remove the argument entirely. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Tested-by: Stefano Stabellini <sstabellini@kernel.org> Message-id: 1512503192-2239-12-git-send-email-peter.maydell@linaro.org
2017-12-13  target/arm: Use ARMMMUFaultInfo in deliver_fault()  (Peter Maydell)
Now that ARMMMUFaultInfo is guaranteed to have enough information to construct a fault status code, we can pass it in to the deliver_fault() function and let it generate the correct type of FSR for the destination, rather than relying on the value provided by get_phys_addr(). I don't think there are any cases the old code was getting wrong, but this is more obviously correct. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Tested-by: Stefano Stabellini <sstabellini@kernel.org> Message-id: 1512503192-2239-10-git-send-email-peter.maydell@linaro.org
2017-10-31  fix WFI/WFE length in syndrome register  (Stefano Stabellini)
WFI/WFE are often, but not always, 4 bytes long. When they are, we need to set the IL bit (ARM_EL_IL) in the syndrome register. Pass the instruction length to HELPER(wfi), use it to decrement pc appropriately and to pass an is_16bit flag to syn_wfx, which sets the IL bit if needed. Set dc->insn in both arm_tr_translate_insn and thumb_tr_translate_insn. Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> Message-id: alpine.DEB.2.10.1710241055160.574@sstabellini-ThinkPad-X260 [PMM: move setting of dc->insn for Thumb so it is correct for 32 bit insns] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-24  target/arm: check CF_PARALLEL instead of parallel_cpus  (Emilio G. Cota)
Thereby decoupling the resulting translated code from the current state of the system. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-06  arm: Fix SMC reporting to EL2 when QEMU provides PSCI  (Jan Kiszka)
This properly forwards SMC events to EL2 when PSCI is provided by QEMU itself and, thus, ARM_FEATURE_EL3 is off. Found and tested with the Jailhouse hypervisor. Solution based on suggestions by Peter Maydell. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Message-id: 4f243068-aaea-776f-d18f-f9e05e7be9cd@siemens.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-09-14  target/arm: Clear exclusive monitor on v7M reset, exception entry/exit  (Peter Maydell)
For M profile we must clear the exclusive monitor on reset, exception entry and exception exit. We weren't doing any of these things; fix this bug. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1505137930-13255-3-git-send-email-peter.maydell@linaro.org
2017-09-07  target/arm: Implement new do_transaction_failed hook  (Peter Maydell)
Implement the new do_transaction_failed hook for ARM, which should cause the CPU to take a prefetch abort or data abort. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 1504626814-23124-4-git-send-email-peter.maydell@linaro.org
2017-09-04  target/arm: Allow deliver_fault() caller to specify EA bit  (Peter Maydell)
For external aborts, we will want to be able to specify the EA (external abort type) bit in the syndrome field. Allow callers of deliver_fault() to do that by adding a field to ARMMMUFaultInfo which we use when constructing the syndrome values. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-09-04  target/arm: Factor out fault delivery code  (Peter Maydell)
We currently have some similar code in tlb_fill() and in arm_cpu_do_unaligned_access() for delivering a data abort or prefetch abort. We're also going to want to do the same thing to handle external aborts. Factor out the common code into a new function deliver_fault(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-09-04  target/arm: Don't trap WFI/WFE for M profile  (Peter Maydell)
M profile cores can never trap on WFI or WFE instructions. Check for M profile in check_wfx_trap() to ensure this. The existing code will do the right thing for v7M cores because the hcr_el2 and scr_el3 registers will be all-zeroes and so we won't attempt to trap, but when we start setting ARM_FEATURE_V8 for v8M cores the v8A handling of SCTLR.nTWE and .nTWI will not give the right results. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1501692241-23310-3-git-send-email-peter.maydell@linaro.org
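The fix amounts to an early return before any of the nTWE/nTWI or HCR/SCR checks; a sketch (a return value of 0 meaning "no trap", matching how the surrounding code is described -- illustrative only):

    /* Sketch: M profile can never trap WFI/WFE, so bail out before
     * consulting SCTLR.nTWE/nTWI or the HCR/SCR trap bits, which are
     * meaningless for M-profile cores.
     */
    static int check_wfx_trap_sketch(CPUARMState *env)
    {
        if (arm_feature(env, ARM_FEATURE_M)) {
            return 0;   /* no trap: execute the hint normally */
        }
        /* ... existing A-profile nTWE/nTWI and HCR/SCR checks follow ... */
        return 0;
    }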
2017-06-02  arm: Add support for M profile CPUs having different MMU index semantics  (Peter Maydell)
The M profile CPU's MPU has an awkward corner case which we would like to implement with a different MMU index. We can avoid having to bump the number of MMU modes ARM uses, because some of our existing MMU indexes are only used by non-M-profile CPUs, so we can borrow one. To avoid that getting too confusing, clean up the code to try to keep the two meanings of the index separate. Instead of ARMMMUIdx enum values being identical to core QEMU MMU index values, they are now the core index values with some high bits set. Any particular CPU always uses the same high bits (so eventually A profile cores and M profile cores will use different bits). New functions arm_to_core_mmu_idx() and core_to_arm_mmu_idx() convert between the two. In general core index values are stored in 'int' types, and ARM values are stored in ARMMMUIdx types. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1493122030-32191-3-git-send-email-peter.maydell@linaro.org
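The two conversion functions described above are simple mask/or operations; a sketch under the assumption that the ARM-specific marker lives in a single high bit and the low bits carry the core index (the exact bit layout is an internal choice, so treat the constants as illustrative):

    /* Sketch: an ARMMMUIdx is the core QEMU MMU index plus a high
     * "this is an ARM-specific index" bit; converting between the two
     * is just masking that bit off or putting it back.
     */
    #define ARM_MMU_IDX_FLAG          0x40  /* illustrative high bit */
    #define ARM_MMU_IDX_COREIDX_MASK  0x07

    static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
    {
        return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
    }

    static inline ARMMMUIdx core_to_arm_mmu_idx(int mmu_idx)
    {
        return mmu_idx | ARM_MMU_IDX_FLAG;
    }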
2017-06-02  arm: Use the mmu_idx we're passed in arm_cpu_do_unaligned_access()  (Peter Maydell)
When identifying the DFSR format for an alignment fault, use the mmu index that we are passed, rather than calling cpu_mmu_index() to get the mmu index for the current CPU state. This doesn't actually make any difference since the only cases where the current MMU index differs from the index used for the load are the "unprivileged load/store" instructions, and in that case the mmu index may differ but the translation regime is the same (apart from the "use from Hyp mode" case which is UNPREDICTABLE). However it's the more logical thing to do. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 1493122030-32191-2-git-send-email-peter.maydell@linaro.org
2017-04-20  target/arm: Add assertion about FSC format for syndrome registers  (Peter Maydell)
In tlb_fill() we construct a syndrome register value from a fault status register value which is filled in by arm_tlb_fill(). arm_tlb_fill() returns FSR values which might be in the format used with short-format page descriptors, or the format used with long-format (LPAE) descriptors. The syndrome register always uses LPAE-format FSR status codes. It isn't actually possible to end up delivering a syndrome register value to the guest for a fault which is reported with a short-format FSR (that kind of stage 1 fault will only happen for an AArch32 translation regime which doesn't have a syndrome register, and can never be redirected to an AArch64 or Hyp exception level). Add an assertion which checks this, and adjust the code so that we construct a syndrome with an invalid status code, rather than allowing set bits in the FSR input to randomly corrupt other fields in the syndrome. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 1491486152-24304-1-git-send-email-peter.maydell@linaro.org
2017-02-24  target-arm: don't generate WFE/YIELD calls for MTTCG  (Alex Bennée)
The WFE and YIELD instructions are really only hints and in TCG's case they were useful to move the scheduling on from one vCPU to the next. In the parallel context (MTTCG) this just causes an unnecessary cpu_exit and contention of the BQL. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24  tcg: drop global lock during TCG code execution  (Jan Kiszka)
This finally allows TCG to benefit from the iothread introduction: Drop the global mutex while running pure TCG CPU code. Reacquire the lock when entering MMIO or PIO emulation, or when leaving the TCG loop. We have to revert a few optimizations for the current TCG threading model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not kicking it in qemu_cpu_kick. We also need to disable RAM block reordering until we have a more efficient locking mechanism at hand. Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here. These numbers demonstrate where we gain something:

    20338 jan  20  0  331m  75m  6904 R  99  0.9  0:50.95 qemu-system-arm
    20337 jan  20  0  331m  75m  6904 S  20  0.9  0:26.50 qemu-system-arm

The guest CPU was fully loaded, but the iothread could still run mostly independent on a second core. Without the patch we don't get beyond

    32206 jan  20  0  330m  73m  7036 R  82  0.9  1:06.00 qemu-system-arm
    32204 jan  20  0  330m  73m  7036 S  21  0.9  0:17.03 qemu-system-arm

We don't benefit significantly, though, when the guest is not fully loading a host CPU. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com> [FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex] Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com> [EGC: fixed iothread lock for cpu-exec IRQ handling] Signed-off-by: Emilio G. Cota <cota@braap.org> [AJB: -smp single-threaded fix, clean commit msg, BQL fixes] Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com> [PM: target-arm changes] Acked-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-07  arm: Correctly handle watchpoints for BE32 CPUs  (Julian Brown)
In BE32 mode, sub-word size watchpoints can fail to trigger because the address of the access is adjusted in the opcode helpers before being compared with the watchpoint registers. This patch reverses the address adjustment before performing the comparison with the help of a new CPUClass hook. This version of the patch augments and tidies up comments a little. Signed-off-by: Julian Brown <julian@codesourcery.com> Message-id: caaf64ffc72f6ae183015337b7afdbd4b8989cb6.1484929304.git.julian@codesourcery.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-12-27  target-arm: Log AArch64 exception returns  (Peter Maydell)
We already log exception entry; add logging of the AArch64 exception return path as well. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2016-12-20  Move target-* CPU file into a target/ folder  (Thomas Huth)
We've currently got 18 architectures in QEMU, and thus 18 target-xxx folders in the root folder of the QEMU source tree. More architectures (e.g. RISC-V, AVR) are likely to be included soon, too, so the main folder of the QEMU sources slowly gets quite overcrowded with the target-xxx folders. To disburden the main folder a little bit, let's move the target-xxx folders into a dedicated target/ folder, so that target-xxx/ simply becomes target/xxx/ instead. Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part] Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part] Acked-by: Michael Walle <michael@walle.cc> [lm32 part] Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part] Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part] Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part] Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part] Acked-by: Richard Henderson <rth@twiddle.net> [alpha part] Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part] Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part] Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris&microblaze part] Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part] Signed-off-by: Thomas Huth <thuth@redhat.com>