path: root/target/arm
Age  Commit message  Author
2019-09-03  target/arm: Don't abort on M-profile exception return in linux-user mode  (Peter Maydell)
An attempt to do an exception-return (branch to one of the magic addresses) in linux-user mode for M-profile should behave like a normal branch, because linux-user mode is always going to be in 'handler' mode. This used to work, but we broke it when we added support for the M-profile security extension in commit d02a8698d7ae2bfed. In that commit we allowed even handler-mode calls to magic return values to be checked for and dealt with by causing an EXCP_EXCEPTION_EXIT exception to be taken, because this is needed for the FNC_RETURN return-from-non-secure-function-call handling. For system mode we added a check in do_v7m_exception_exit() to make any spurious calls from Handler mode behave correctly, but forgot that linux-user mode would also be affected. How an attempted return-from-non-secure-function-call in linux-user mode should be handled is not clear -- on real hardware it would result in return to secure code (not to the Linux kernel) which could then handle the error in any way it chose. For QEMU we take the simple approach of treating this erroneous return the same way it would be handled on a CPU without the security extensions -- treat it as a normal branch. The upshot of all this is that for linux-user mode we should never do any of the bx_excret magic, so the code change is simple. This ought to be a weird corner case that only affects broken guest code (because Linux user processes should never be attempting to do exception returns or NS function returns), except that the code that assigns addresses in RAM for the process and stack in our linux-user code does not attempt to avoid this magic address range, so legitimate code attempting to return to a trampoline routine on the stack can fall into this case. This change fixes those programs, but we should also look at restricting the range of memory we use for M-profile linux-user guests to the area that would be real RAM in hardware. Cc: qemu-stable@nongnu.org Reported-by: Christophe Lyon <christophe.lyon@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20190822131534.16602-1-peter.maydell@linaro.org Fixes: https://bugs.launchpad.net/qemu/+bug/1840922 Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
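For illustration, a minimal sketch of the shape of such a fix, not the exact patch; gen_bx()/gen_bx_excret() are the translate.c helpers of that era, and the feature guard shown is simplified:

    static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
    {
        gen_bx(s, var);                  /* always emit the ordinary interworking branch */
    #ifndef CONFIG_USER_ONLY
        /* Only system emulation watches for the magic EXC_RETURN/FNC_RETURN
         * addresses; linux-user mode falls through with just the branch above. */
        if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) {
            s->base.is_jmp = DISAS_BX_EXCRET;   /* defer to do_v7m_exception_exit() */
        }
    #endif
    }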
2019-09-03  target/arm: Free TCG temps in trans_VMOV_64_sp()  (Peter Maydell)
The function neon_store_reg32() doesn't free the TCG temp that it is passed, so the caller must do that. We got this right in most places but forgot to free the TCG temps in trans_VMOV_64_sp(). Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190827121931.26836-1-peter.maydell@linaro.org
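For illustration, the calling pattern being fixed (the value computation and the register index vd are placeholders, not the exact hunk):

    TCGv_i32 tmp = tcg_temp_new_i32();
    tcg_gen_extrl_i64_i32(tmp, t64);   /* e.g. low half of a 64-bit source */
    neon_store_reg32(tmp, vd);         /* stores a copy; does not consume tmp */
    tcg_temp_free_i32(tmp);            /* the free that trans_VMOV_64_sp() was missing */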
2019-09-03  target/arm: Fix SMMLS argument order  (Richard Henderson)
The previous simplification got the order of operands to the subtraction wrong. Since the 64-bit product is the subtrahend, we must use a 64-bit subtract to properly compute the borrow from the low-part of the product. Fixes: 5f8cd06ebcf5 ("target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR") Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190829013258.16102-1-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
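A stand-alone arithmetic illustration of the lost borrow (values are arbitrary; the SMMLS reference result is the high word of ((ra << 32) - rn*rm)):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t ra = 5, rn = 3, rm = 3;                        /* product = 9 */
        int64_t prod = (int64_t)rn * rm;
        int32_t correct = (((int64_t)ra << 32) - prod) >> 32;  /* 4: borrow from low half */
        int32_t wrong   = ra - (int32_t)(prod >> 32);          /* 5: 32-bit subtract drops the borrow */
        printf("correct=%d wrong=%d\n", correct, wrong);
        return 0;
    }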
2019-09-03  target/arm: Take exceptions on ATS instructions when needed  (Peter Maydell)
The translation table walk for an ATS instruction can result in various faults. In general these are just reported back via the PAR_EL1 fault status fields, but in some cases the architecture requires that the fault is turned into an exception:
 * synchronous stage 2 faults of any kind during AT S1E0* and AT S1E1* instructions executed from NS EL1 fault to EL2 or EL3
 * synchronous external aborts are taken as Data Abort exceptions
(This is documented in the v8A Arm ARM DDI0487A.e D5.2.11 and G5.13.4.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20190816125802.25877-3-peter.maydell@linaro.org
2019-09-03  target/arm: Allow ARMCPRegInfo read/write functions to throw exceptions  (Peter Maydell)
Currently the only part of an ARMCPRegInfo which is allowed to cause a CPU exception is the access function, which returns a value indicating that some flavour of UNDEF should be generated. For the ATS system instructions, we would like to conditionally generate exceptions as part of the writefn, because some faults during the page table walk (like external aborts) should cause an exception to be raised rather than returning a value. There are several ways we could do this:
 * plumb the GETPC() value from the top level set_cp_reg/get_cp_reg helper functions through into the readfn and writefn hooks
 * add extra readfn_with_ra/writefn_with_ra hooks that take the GETPC() value
 * require the ATS instructions to provide a dummy accessfn, which serves no purpose except to cause the code generation to emit TCG ops to sync the CPU state
 * add an ARM_CP_ flag to mark the ARMCPRegInfo as possibly throwing an exception in its read/write hooks, and make the codegen sync the CPU state before calling the hooks if the flag is set
This patch opts for the last of these, as it is fairly simple to implement and doesn't require invasive changes like updating the readfn/writefn hook function prototype signature.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20190816125802.25877-2-peter.maydell@linaro.org
2019-09-03  target/arm: Factor out unallocated_encoding for aarch32  (Richard Henderson)
Make this a static function private to translate.c. Thus we can use the same idiom between aarch64 and aarch32 without actually sharing function implementations. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190826151536.6771-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03  Revert "target/arm: Use unallocated_encoding for aarch32"  (Richard Henderson)
This reverts commit 3cb36637157088892e9e33ddb1034bffd1251d3b. Despite the fact that the text for the call to gen_exception_insn is identical for aarch64 and aarch32, the implementation inside gen_exception_insn is totally different. This fixes exceptions raised from aarch64. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com> Message-id: 20190826151536.6771-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-22  Merge remote-tracking branch 'remotes/armbru/tags/pull-monitor-2019-08-21' into staging  (Peter Maydell)
Monitor patches for 2019-08-21

# gpg: Signature made Wed 21 Aug 2019 16:35:07 BST
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-monitor-2019-08-21:
  monitor/qmp: Update comment for commit 4eaca8de268
  qdev: Collect HMP handlers command handlers in qdev-monitor.c
  qapi: Move query-target from misc.json to machine.json
  hw/core: Move cpu.c, cpu.h from qom/ to hw/core/

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-21  hw/core: Move cpu.c, cpu.h from qom/ to hw/core/  (Markus Armbruster)
Suggested-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190709152053.16670-2-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> [Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h in comments replaced]
2019-08-20  icount: remove unnecessary gen_io_end calls  (Pavel Dovgalyuk)
The prior patch resets the can_do_io flag at TB entry, so there is no need to reset it again at the end of the block. This patch removes the redundant gen_io_end calls. Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru> Message-Id: <156404429499.18669.13404064982854123855.stgit@pasha-Precision-3630-Tower> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
2019-08-16  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190816' into staging  (Peter Maydell)
target-arm queue:
 * target/arm: generate a custom MIDR for -cpu max
 * hw/misc/zynq_slcr: refactor to use standard register definition
 * Set ENET_BD_BDU in I.MX FEC controller
 * target/arm: Fix routing of singlestep exceptions
 * refactor a32/t32 decoder handling of PC
 * minor optimisations/cleanups of some a32/t32 codegen
 * target/arm/cpu64: Ensure kvm really supports aarch64=off
 * target/arm/cpu: Ensure we can use the pmu with kvm
 * target/arm: Minor cleanups preparatory to KVM SVE support

# gpg: Signature made Fri 16 Aug 2019 14:15:55 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20190816: (29 commits)
  target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word
  target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR
  target/arm: Use tcg_gen_rotri_i32 for gen_swap_half
  target/arm: Use ror32 instead of open-coding the operation
  target/arm: Remove redundant shift tests
  target/arm: Use tcg_gen_deposit_i32 for PKHBT, PKHTB
  target/arm: Use tcg_gen_extract_i32 for shifter_out_im
  target/arm/kvm64: Move the get/put of fpsimd registers out
  target/arm/kvm64: Fix error returns
  target/arm/cpu: Use div-round-up to determine predicate register array size
  target/arm/helper: zcr: Add build bug next to value range assumption
  target/arm/cpu: Ensure we can use the pmu with kvm
  target/arm/cpu64: Ensure kvm really supports aarch64=off
  target/arm: Remove helper_double_saturate
  target/arm: Use unallocated_encoding for aarch32
  target/arm: Remove offset argument to gen_exception_bkpt_insn
  target/arm: Replace offset with pc in gen_exception_internal_insn
  target/arm: Replace offset with pc in gen_exception_insn
  target/arm: Replace s->pc with s->base.pc_next
  target/arm: Remove redundant s->pc & ~1
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word  (Richard Henderson)
Separate shift + extract low will result in one extra insn for hosts like RISC-V, MIPS, and Sparc. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190808202616.13782-8-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR  (Richard Henderson)
All of the inputs to these instructions are 32-bits. Rather than extend each input to 64-bits and then extract the high 32-bits of the output, use tcg_gen_muls2_i32 and other 32-bit generator functions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190808202616.13782-7-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
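A sketch of the 32-bit-only formulation for the non-rounding SMMLA case (operand temps rn/rm/ra are assumed already loaded; this is not the exact patch):

    TCGv_i32 lo = tcg_temp_new_i32();
    TCGv_i32 hi = tcg_temp_new_i32();
    tcg_gen_muls2_i32(lo, hi, rn, rm);   /* hi:lo = signed 64-bit product */
    tcg_gen_add_i32(hi, hi, ra);         /* SMMLA accumulates into the high half */
    /* the rounding variants also fold the carry out of (lo + 0x80000000) into hi */
    tcg_temp_free_i32(lo);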
2019-08-16  target/arm: Use tcg_gen_rotri_i32 for gen_swap_half  (Richard Henderson)
Rotate is the more compact and obvious way to swap 16-bit elements of a 32-bit word. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190808202616.13782-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
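A sketch of gen_swap_half() after the change; the rotate is essentially the whole idea:

    static void gen_swap_half(TCGv_i32 var)
    {
        tcg_gen_rotri_i32(var, var, 16);   /* rotating by 16 exchanges the 16-bit halves */
    }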
2019-08-16  target/arm: Use ror32 instead of open-coding the operation  (Richard Henderson)
The helper function is more documentary, and also already handles the case of rotate by zero. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190808202616.13782-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
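For reference, ror32() lives in include/qemu/bitops.h; a hypothetical use mirroring A32 modified-immediate decoding (illustrative only, not the site touched by the patch):

    #include "qemu/bitops.h"

    /* Rotate an 8-bit immediate right by twice the 4-bit rotate field.
     * Unlike a naive "(x >> n) | (x << (32 - n))", ror32() is well-defined
     * when the rotate amount is zero. */
    static uint32_t decode_mod_imm(uint32_t imm8, unsigned rot4)
    {
        return ror32(imm8, rot4 * 2);
    }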
2019-08-16  target/arm: Remove redundant shift tests  (Richard Henderson)
The immediate shift generator functions already test for, and eliminate, the case of a shift by zero. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190808202616.13782-4-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Use tcg_gen_deposit_i32 for PKHBT, PKHTB  (Richard Henderson)
Use deposit as the composite operation to merge the bits from the two inputs. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190808202616.13782-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
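A sketch of the deposit form for PKHBT (tn holds Rn, tm holds the already-shifted Rm; the temp names are placeholders):

    /* tcg_gen_deposit_i32(ret, arg1, arg2, ofs, len): copy arg1 and overwrite
     * bits [ofs, ofs+len) with the low bits of arg2. PKHBT takes its low half
     * from Rn and its high half from the shifted Rm, so one deposit replaces
     * the previous and/and/or sequence; PKHTB just swaps arg1 and arg2. */
    tcg_gen_deposit_i32(dest, tm, tn, 0, 16);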
2019-08-16  target/arm: Use tcg_gen_extract_i32 for shifter_out_im  (Richard Henderson)
Extract is a compact combination of shift + and. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190808202616.13782-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
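A sketch of the resulting helper (cpu_CF is the translator's carry-flag temporary):

    /* shifter_out_im() sets CF to the last bit shifted out, i.e.
     * CF = (var >> shift) & 1; one extract replaces the shift+and pair. */
    static void shifter_out_im(TCGv_i32 var, int shift)
    {
        tcg_gen_extract_i32(cpu_CF, var, shift, 1);
    }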
2019-08-16  target/arm/kvm64: Move the get/put of fpsimd registers out  (Andrew Jones)
Move the getting/putting of the fpsimd registers out of kvm_arch_get/put_registers() into their own helper functions to prepare for alternatively getting/putting SVE registers. No functional change. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm/kvm64: Fix error returns  (Andrew Jones)
A couple return -EINVAL's forgot their '-'s. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
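The bug pattern, for illustration (a hypothetical fragment, not the exact hunks):

    ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
    if (ret) {
        return -EINVAL;   /* was "return EINVAL;" -- callers expect a negative errno */
    }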
2019-08-16  target/arm/cpu: Use div-round-up to determine predicate register array size  (Andrew Jones)
Unless we're guaranteed to always increase ARM_MAX_VQ by a multiple of four, then we should use DIV_ROUND_UP to ensure we get an appropriate array size. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
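For reference, DIV_ROUND_UP is QEMU's osdep.h helper; a sketch of the sizing it enables (the field layout shown is an assumption for illustration):

    /* DIV_ROUND_UP(n, d) == ((n) + (d) - 1) / (d): rounds the quotient up, so
     * the uint64_t array below is large enough for any ARM_MAX_VQ, not only
     * multiples of four. Each SVE predicate register needs 2 bytes per vector
     * quadword (one predicate bit per vector byte). */
    uint64_t preg[DIV_ROUND_UP(2 * ARM_MAX_VQ, 8)];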
2019-08-16  target/arm/helper: zcr: Add build bug next to value range assumption  (Andrew Jones)
The current implementation of ZCR_ELx matches the architecture, only implementing the lower four bits, with the rest RAZ/WI. This puts a strict limit on ARM_MAX_VQ of 16. Make sure we don't let ARM_MAX_VQ grow without a corresponding update here. Suggested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
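The guard is a one-liner of this shape (the exact expression is an assumption):

    /* ZCR_ELx.LEN is only 4 bits here, so vector lengths above 16 quadwords
     * cannot be expressed; fail the build if ARM_MAX_VQ ever outgrows that. */
    QEMU_BUILD_BUG_ON(ARM_MAX_VQ > 16);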
2019-08-16  target/arm/cpu: Ensure we can use the pmu with kvm  (Andrew Jones)
We first convert the pmu property from a static property to one with its own accessors. Then we use the set accessor to check if the PMU is supported when using KVM. Indeed a 32-bit KVM host does not support the PMU, so this check will catch an attempt to use it at property-set time. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm/cpu64: Ensure kvm really supports aarch64=off  (Andrew Jones)
If -cpu <cpu>,aarch64=off is used then KVM must also be used, and it and the host must support running the vcpu in 32-bit mode. Also, if -cpu <cpu>,aarch64=on is used, then it doesn't matter if kvm is enabled or not. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Remove helper_double_saturate  (Richard Henderson)
Replace x = double_saturate(y) with x = add_saturate(y, y). There is no need for a separate more specialized helper. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Use unallocated_encoding for aarch32  (Richard Henderson)
Promote this function from aarch64 to fully general use. Use it to unify the code sequences for generating illegal opcode exceptions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Remove offset argument to gen_exception_bkpt_insn  (Richard Henderson)
Unlike the other more generic gen_exception{,_internal}_insn interfaces, breakpoints always refer to the current instruction. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Replace offset with pc in gen_exception_internal_insn  (Richard Henderson)
The offset is variable depending on the instruction set. Passing in the actual value is clearer in intent. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Replace offset with pc in gen_exception_insn  (Richard Henderson)
The offset is variable depending on the instruction set, whereas we have stored values for the current pc and the next pc. Passing in the actual value is clearer in intent. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Replace s->pc with s->base.pc_next  (Richard Henderson)
We must update s->base.pc_next when we return from the translate_insn hook to the main translator loop. By incrementing s->base.pc_next immediately after reading the insn word, "pc_next" contains the address of the next instruction throughout translation. All remaining uses of s->pc are referencing the address of the next insn, so this is now a simple global replacement. Remove the "s->pc" field. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Remove redundant s->pc & ~1  (Richard Henderson)
The thumb bit has already been removed from s->pc, and is always even. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Introduce add_reg_for_lit  (Richard Henderson)
Provide a common routine for the places that require ALIGN(PC, 4) as the base address as opposed to plain PC. The two are always the same for A32, but the difference is meaningful for thumb mode. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Introduce read_pc  (Richard Henderson)
We currently have 3 different ways of computing the architectural value of "PC" as seen in the ARM ARM. The value of s->pc has been incremented past the current insn, but that is all. Thus for a32, PC = s->pc + 4; for t32, PC = s->pc; for t16, PC = s->pc + 2. These differing computations make it impossible at present to unify the various code paths. With the newly introduced s->pc_curr, we can compute the correct value for all cases, using the formula given in the ARM ARM. This changes the behaviour for load_reg() and load_reg_var() when called with reg==15 from a 32-bit Thumb instruction: previously they would have returned the incorrect value of pc_curr + 6, and now they will return the architecturally correct value of PC, which is pc_curr + 4. This will not affect well-behaved guest software, because all of the places we call these functions from T32 code are instructions where using r15 is UNPREDICTABLE. Using the architectural PC value here is more consistent with the T16 and A32 behaviour. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-4-richard.henderson@linaro.org [PMM: added commit message note about UNPREDICTABLE T32 cases] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
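A sketch of the resulting helper, matching the arithmetic spelled out above:

    /* Architectural PC: the address of the current instruction plus 8 for
     * A32 or plus 4 for Thumb, independent of how long this insn was. */
    static inline uint32_t read_pc(DisasContext *s)
    {
        return s->pc_curr + (s->thumb ? 4 : 8);
    }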
2019-08-16  target/arm: Introduce pc_curr  (Richard Henderson)
Add a new field to retain the address of the instruction currently being translated. The 32-bit uses are all within subroutines used by a32 and t32. This will become less obvious when t16 support is merged with a32+t32, and having a clear definition will help. Convert aarch64 as well for consistency. Note that there is one instance of a pre-assert fprintf that used the wrong value for the address of the current instruction. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Pass in pc to thumb_insn_is_16bit  (Richard Henderson)
This function is used in two different contexts, and it will be clearer if the function is given the address to which it applies. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190807045335.1361-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  target/arm: Fix routing of singlestep exceptions  (Peter Maydell)
When generating an architectural single-step exception we were routing it to the "default exception level", which is to say the same exception level we execute at except that EL0 exceptions go to EL1. This is incorrect because the debug exception level can be configured by the guest for situations such as single stepping of EL0 and EL1 code by EL2. We have to track the target debug exception level in the TB flags, because it is dependent on CPU state like HCR_EL2.TGE and MDCR_EL2.TDE. (That we were previously calling the arm_debug_target_el() function to determine dc->ss_same_el is itself a bug, though one that would only have manifested as incorrect syndrome information.) Since we are out of TB flag bits unless we want to expand into the cs_base field, we share some bits with the M-profile only HANDLER and STACKCHECK bits, since only A-profile has this singlestep. Fixes: https://bugs.launchpad.net/qemu/+bug/1838913 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190805130952.4415-3-peter.maydell@linaro.org
2019-08-16  target/arm: Factor out 'generate singlestep exception' function  (Peter Maydell)
Factor out code to 'generate a singlestep exception', which is currently repeated in four places. To do this we need to also pull the identical copies of the gen_exception() function out of translate-a64.c and translate.c into translate.h. (There is a bug in the code: we're taking the exception to the wrong target EL. This will be simpler to fix if there's only one place to do it.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20190805130952.4415-2-peter.maydell@linaro.org
2019-08-16  target/arm: generate a custom MIDR for -cpu max  (Alex Bennée)
While most features are now detected by probing the ID_* registers, kernels can (and do) use MIDR_EL1 for working out if they have to apply errata. This can trip up warnings in the kernel as it tries to work out if it should apply workarounds to features that don't actually exist in the reported CPU type. Avoid this problem by synthesising our own MIDR value. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20190726113950.7499-1-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16  sysemu: Split sysemu/runstate.h off sysemu/sysemu.h  (Markus Armbruster)
sysemu/sysemu.h is a rather unfocused dumping ground for stuff related to the system-emulator. Evidence:
 * It's included widely: in my "build everything" tree, changing sysemu/sysemu.h still triggers a recompile of some 1100 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h, down from 5400 due to the previous two commits).
 * It pulls in more than a dozen additional headers.
Split stuff related to run state management into its own header sysemu/runstate.h. Touching sysemu/sysemu.h now recompiles some 850 objects. qemu/uuid.h also drops from 1100 to 850, and qapi/qapi-types-run-state.h from 4400 to 4200. Touching new sysemu/runstate.h recompiles some 500 objects. Since I'm touching MAINTAINERS to add sysemu/runstate.h anyway, also add qemu/main-loop.h.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-30-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[Unbreak OS-X build]
2019-08-16  Clean up inclusion of sysemu/sysemu.h  (Markus Armbruster)
In my "build everything" tree, changing sysemu/sysemu.h triggers a recompile of some 5400 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). Almost a third of its inclusions are actually superfluous. Delete them. Downgrade two more to qapi/qapi-types-run-state.h, and move one from char/serial.h to char/serial.c. hw/semihosting/config.c, monitor/monitor.c, qdev-monitor.c, and stubs/semihost.c define variables declared in sysemu/sysemu.h without including it. The compiler is cool with that, but include it anyway. This doesn't reduce actual use much, as it's still included into widely included headers. The next commit will tackle that. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-27-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2019-08-16  Include hw/boards.h a bit less  (Markus Armbruster)
hw/boards.h pulls in almost 60 headers. The less we include it into headers, the better. As a first step, drop superfluous inclusions, and downgrade some more to what's actually needed. Gets rid of just one inclusion into a header. Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-23-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
2019-08-16  Include qemu/main-loop.h less  (Markus Armbruster)
In my "build everything" tree, changing qemu/main-loop.h triggers a recompile of some 5600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). It includes block/aio.h, which in turn includes qemu/event_notifier.h, qemu/notify.h, qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h, qemu/thread.h, qemu/timer.h, and a few more. Include qemu/main-loop.h only where it's needed. Touching it now recompiles only some 1700 objects. For block/aio.h and qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the others, they shrink only slightly. Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190812052359.30071-21-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16  Include hw/hw.h exactly where needed  (Markus Armbruster)
In my "build everything" tree, changing hw/hw.h triggers a recompile of some 2600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). The previous commits have left only the declaration of hw_error() in hw/hw.h. This permits dropping most of its inclusions. Touching it now recompiles less than 200 objects. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-19-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16  migration: Move the VMStateDescription typedef to typedefs.h  (Markus Armbruster)
We declare incomplete struct VMStateDescription in a couple of places so we don't have to include migration/vmstate.h for the typedef. That's fine with me. However, the next commit will drop migration/vmstate.h from a massive number of compiles. Move the typedef to qemu/typedefs.h now, so I don't have to insert struct in front of VMStateDescription all over the place then. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-15-armbru@redhat.com>
2019-08-16  Include hw/irq.h a lot less  (Markus Armbruster)
In my "build everything" tree, changing hw/irq.h triggers a recompile of some 5400 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). hw/hw.h supposedly includes it for convenience. Several other headers include it just to get qemu_irq and/or qemu_irq_handler. Move the qemu_irq and qemu_irq_handler typedefs from hw/irq.h to qemu/typedefs.h, and then include hw/irq.h only where it's still needed. Touching it now recompiles only some 500 objects. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-13-armbru@redhat.com>
2019-08-02  target/arm: Avoid bogus NSACR traps on M-profile without Security Extension  (Peter Maydell)
In Arm v8.0 M-profile CPUs without the Security Extension and also in v7M CPUs, there is no NSACR register. However, the code we have to handle the FPU does not always check whether the ARM_FEATURE_M_SECURITY bit is set before testing whether env->v7m.nsacr permits access to the FPU. This means that for a CPU with an FPU but without the Security Extension we would always take a bogus fault when trying to stack the FPU registers on an exception entry. We could fix this by adding extra feature bit checks for all uses, but it is simpler to just make the internal value of nsacr 0xcff ("all non-secure accesses allowed"), since this is not guest visible when the Security Extension is not present. This allows us to continue to follow the Arm ARM pseudocode which takes a similar approach. (In particular, in the v8.1 Arm ARM the register is documented as reading as 0xcff in this configuration.) Fixes: https://bugs.launchpad.net/qemu/+bug/1838475 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Damien Hedde <damien.hedde@greensocs.com> Message-id: 20190801105742.20036-1-peter.maydell@linaro.org
2019-07-30  target/arm: Deliver BKPT/BRK exceptions to correct exception level  (Peter Maydell)
Most Arm architectural debug exceptions (eg watchpoints) are ignored if the configured "debug exception level" is below the current exception level (so for example EL1 can't arrange to get debug exceptions for EL2 execution). Exceptions generated by the BRK or BKPT instructions are a special case -- they must always cause an exception, so if we're executing above the debug exception level then we must take them to the current exception level. This fixes a bug where executing BRK at EL2 could result in an exception being taken at EL1 (which is strictly forbidden by the architecture). Fixes: https://bugs.launchpad.net/qemu/+bug/1838277 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190730132522.27086-1-peter.maydell@linaro.org
2019-07-22  target/arm: Limit ID register assertions to TCG  (Peter Maydell)
In arm_cpu_realizefn() we make several assertions about the values of guest ID registers:
 * if the CPU provides AArch32 v7VE or better it must advertise the ARM_DIV feature
 * if the CPU provides AArch32 A-profile v6 or better it must advertise the Jazelle feature
These are essentially consistency checks that our ID register specifications in cpu.c didn't accidentally miss out a feature, because increasingly the TCG emulation gates features on the values in ID registers rather than using old-style checks of ARM_FEATURE_FOO bits.
Unfortunately, these asserts can cause problems if we're running KVM, because in that case we don't control the values of the ID registers -- we read them from the host kernel. In particular, if the host kernel is older than 4.15 then it doesn't expose the ID registers via the KVM_GET_ONE_REG ioctl, and we set up dummy values for some registers and leave the rest at zero. (See the comment in target/arm/kvm64.c kvm_arm_get_host_cpu_features().) This set of dummy values is not sufficient to pass our assertions, and so on those kernels running an AArch32 guest on AArch64 will assert.
We could provide a more sophisticated set of dummy ID registers in this case, but that still leaves the possibility of a host CPU which reports bogus ID register values that would cause us to assert. It's more robust to only do these ID register checks if we're using TCG, as that is the only case where this is truly a QEMU code bug.
Reported-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190718125928.20147-1-peter.maydell@linaro.org
Fixes: https://bugs.launchpad.net/qemu/+bug/1830864
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-22  target/arm: Add missing break statement for Hypervisor Trap Exception  (Philippe Mathieu-Daudé)
Reported by GCC9 when building with -Wimplicit-fallthrough=2:
  target/arm/helper.c: In function ‘arm_cpu_do_interrupt_aarch32_hyp’:
  target/arm/helper.c:7958:14: error: this statement may fall through [-Werror=implicit-fallthrough=]
   7958 |         addr = 0x14;
        |         ~~~~~^~~~~~
  target/arm/helper.c:7959:5: note: here
   7959 |     default:
        |     ^~~~~~~
  cc1: all warnings being treated as errors
Fixes: b9bc21ff9f9
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reported-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190719111451.12406-1-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
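The shape of the fix, reconstructed from the diagnostic above (the surrounding switch context is illustrative, not the exact hunk):

    case EXCP_HYP_TRAP:
        addr = 0x14;
        break;              /* the missing statement */
    default:
        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);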
2019-07-15  target/arm: NS BusFault on vector table fetch escalates to NS HardFault  (Peter Maydell)
In the M-profile architecture, when we do a vector table fetch and it fails, we need to report a HardFault. Whether this is a Secure HF or a NonSecure HF depends on several things. If AIRCR.BFHFNMINS is 0 then HF is always Secure, because there is no NonSecure HardFault. Otherwise, the answer depends on whether the 'underlying exception' (MemManage, BusFault, SecureFault) targets Secure or NonSecure. (In the pseudocode, this is handled in the Vector() function: the final exc.isSecure is calculated by looking at the exc.isSecure from the exception returned from the memory access, not the isSecure input argument.)
We weren't doing this correctly, because we were looking at the target security domain of the exception we were trying to load the vector table entry for. This produces errors of two kinds:
 * a load from the NS vector table which hits the "NS access to S memory" SecureFault should end up as a Secure HardFault, but we were raising an NS HardFault
 * a load from the S vector table which causes a BusFault should raise an NS HardFault if BFHFNMINS == 1 (because in that case all BusFaults are NonSecure), but we were raising a Secure HardFault
Correct the logic.
We also fix a comment error where we claimed that we might be escalating MemManage to HardFault, and forgot about SecureFault. (Vector loads can never hit MPU access faults, because they're always aligned and always use the default address map.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190705094823.28905-1-peter.maydell@linaro.org