path: root/target
2019-10-04  target/ppc: remove unnecessary if() around calls to set_dfp{64,128}() in DFP macros  (Mark Cave-Ayland)

Now that the parameters to both set_dfp64() and set_dfp128() are exactly the same, there is no need for an explicit if() statement to determine which function should be called based upon size. Instead we can simply use the preprocessor to generate the call to set_dfp##size() directly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-8-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
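The change amounts to letting the macro's size parameter select the callee name via token pasting. A minimal sketch of the pattern (the macro name and arguments here are illustrative, not the actual QEMU macro):

    /* The size parameter is pasted into the callee name, so no if() is
     * needed to pick between the 64-bit and 128-bit variants. */
    #define SET_DFP(size, t, vt)  set_dfp##size((t), (vt))
    /* SET_DFP(64, t, &dfp.vt)  expands to  set_dfp64(t, &dfp.vt)  */
    /* SET_DFP(128, t, &dfp.vt) expands to  set_dfp128(t, &dfp.vt) */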
2019-10-04  target/ppc: use existing VsrD() macro to eliminate HI_IDX and LO_IDX from dfp_helper.c  (Mark Cave-Ayland)

Switch over all accesses to the decimal numbers held in struct PPC_DFP from using HI_IDX and LO_IDX to using the VsrD() macro instead. Not only does this allow the compiler to ensure that the various dfp_* functions are being passed a ppc_vsr_t rather than an arbitrary uint64_t pointer, but also allows the host endian-specific HI_IDX and LO_IDX to be completely removed from dfp_helper.c.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
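For reference, VsrD() is the existing accessor that hides host endianness when indexing the 64-bit halves of a ppc_vsr_t. A sketch of its shape (the exact definition lives in target/ppc/cpu.h and may differ in detail):

    /* Element 0 is always the architecturally "high" doubleword,
     * regardless of how the host lays the union out in memory. */
    #if defined(HOST_WORDS_BIGENDIAN)
    #define VsrD(i) u64[i]
    #else
    #define VsrD(i) u64[1 - (i)]
    #endif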
2019-10-04  target/ppc: change struct PPC_DFP decimal storage from uint64[2] to ppc_vsr_t  (Mark Cave-Ayland)

There are several places in dfp_helper.c that access the decimal number representations in struct PPC_DFP via HI_IDX and LO_IDX defines which are set at the top of dfp_helper.c according to the host endian. However we can instead switch to using ppc_vsr_t for decimal numbers and then make subsequent use of the existing VsrD() macros to access the correct element regardless of host endian.

Note that 64-bit decimals are stored in the LSB of ppc_vsr_t (equivalent to VsrD(1)).

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-6-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04  target/ppc: introduce dfp_finalize_decimal{64,128}() helper functions  (Mark Cave-Ayland)

Most of the DFP helper functions call decimal{64,128}FromNumber() just before returning in order to convert the decNumber stored in dfp.t64 back to a Decimal{64,128} to write back to the FP registers. Introduce new dfp_finalize_decimal{64,128}() helper functions which both enable the parameter list to be reduced considerably, and also help minimise the changes required in the next patch.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
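A plausible shape for the 64-bit variant, assuming struct field names that follow the surrounding commits (vt for the result vector, t for the working decNumber, context for the decContext); the real helper may differ in detail:

    static void dfp_finalize_decimal64(struct PPC_DFP *dfp)
    {
        /* Convert the working decNumber back to a Decimal64 and place it
         * in the low doubleword of the result vector register. */
        decimal64FromNumber((decimal64 *)&dfp->vt.VsrD(1), &dfp->t,
                            &dfp->context);
    }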
2019-10-04  target/ppc: update {get,set}_dfp{64,128}() helper functions to read/write DFP numbers correctly  (Mark Cave-Ayland)

Since commit ef96e3ae96 "target/ppc: move FP and VMX registers into aligned vsr register array" FP registers are no longer stored consecutively in memory and so the current method of combining FP register pairs into DFP numbers is incorrect.

Firstly update the definition of the dh_*_fprp defines in helper.h to reflect that FP registers are now stored as part of an array of ppc_vsr_t elements rather than plain uint64_t elements, and then introduce a new ppc_fprp_t type which conceptually represents a DFP even-odd register pair to be consumed by the DFP helper functions.

Finally update the new DFP {get,set}_dfp{64,128}() helper functions to convert between DFP numbers and DFP even-odd register pairs correctly, making use of the existing VsrD() macro to access the correct elements regardless of host endian.

Fixes: ef96e3ae96 "target/ppc: move FP and VMX registers into aligned vsr register array"
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
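As a sketch of what "even-odd register pair" means in practice (field layout assumed from the ppc_fprp_t/ppc_vsr_t description above, not necessarily the exact code):

    /* Gather a 128-bit DFP value from an even-odd FP register pair into a
     * single ppc_vsr_t, using VsrD() so host endianness does not matter. */
    static void get_dfp128(ppc_vsr_t *dst, ppc_fprp_t *dfp)
    {
        dst->VsrD(0) = dfp[0].VsrD(0);   /* even register: MSB half */
        dst->VsrD(1) = dfp[1].VsrD(0);   /* odd register:  LSB half */
    }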
2019-10-04  target/ppc: introduce set_dfp{64,128}() helper functions  (Mark Cave-Ayland)

The existing functions (now incorrectly) assume that the MSB and LSB of DFP numbers are stored as consecutive 64-bit words in memory. Instead of accessing the DFP numbers directly, introduce set_dfp{64,128}() helper functions to ease the switch to the correct representation.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2019-10-04  target/ppc: introduce get_dfp{64,128}() helper functions  (Mark Cave-Ayland)

The existing functions (now incorrectly) assume that the MSB and LSB of DFP numbers are stored as consecutive 64-bit words in memory. Instead of accessing the DFP numbers directly, introduce get_dfp{64,128}() helper functions to ease the switch to the correct representation.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190926185801.11176-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-04  ppc/kvm: Skip writing DPDES back when in run time state  (Alexey Kardashevskiy)

On POWER8 systems the Directed Privileged Door-bell Exception State register (DPDES) stores doorbell pending status, one bit per thread of a core, set by the "msgsndp" instruction. The register is shared among threads of the same core and KVM on POWER9 emulates it in a similar way (POWER9 does not have DPDES).

DPDES is shared but QEMU assumes all SPRs are per thread, so the only safe way to write DPDES back to the VCPU before running a guest is to do so while all threads are pulled out of the guest, so DPDES cannot change. There is only one situation when this condition is met: incoming migration, when all threads are stopped. Otherwise any QEMU HMP/QMP command causing kvm_arch_put_registers() (for example printing registers or dumping memory) can clobber DPDES in a race with other vcpu threads.

This changes DPDES handling so it is not written to KVM at runtime.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20190923084110.34643-1-aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
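The fix boils down to making the DPDES write conditional on the register-sync level, so it only happens when all threads are known to be stopped. A sketch of that condition (helper and constant names as used elsewhere in QEMU's ppc KVM code; the exact call site is assumed):

    if (level > KVM_PUT_RUNTIME_STATE) {
        /* Only sync DPDES for reset/full state (e.g. incoming migration),
         * when no vCPU thread can race with us and clobber the shared SPR. */
        kvm_put_one_spr(cs, KVM_REG_PPC_DPDES, SPR_DPDES);
    }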
2019-10-04  ppc: Use FPSCR defines instead of constants  (Paul A. Clarke)

There are FPSCR-related defines in target/ppc/cpu.h which can be used in place of constants and explicit shifts which arguably improve the code a bit in places.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1568817169-1721-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
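As an illustration of the kind of change meant here (a hypothetical before/after; the FPSCR_VE bit-position and FP_VE mask names come from target/ppc/cpu.h):

    /* before: explicit shift of a raw bit number */
    env->fpscr &= ~(1ULL << FPSCR_VE);

    /* after: named mask from cpu.h */
    env->fpscr &= ~FP_VE;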
2019-10-04  ppc: Add support for 'mffsce' instruction  (Paul A. Clarke)

ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR) instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl. This patch adds support for the 'mffsce' instruction.

'mffsce' is identical to 'mffs', except that it also clears the exception enable bits in the FPSCR.

On CPUs without support for 'mffsce' (below ISA 3.0), the instruction will execute identically to 'mffs'.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1568817082-1384-1-git-send-email-pc@us.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
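Behaviourally, and ignoring how the instruction is actually wired up in the translator, mffsce can be pictured as the following sketch (mask names from target/ppc/cpu.h; the helper framing is hypothetical):

    /* Return the old FPSCR like mffs does ... */
    uint64_t old_fpscr = env->fpscr;   /* value that lands in the target FP reg */
    /* ... then clear the five exception enable bits. */
    env->fpscr &= ~(FP_VE | FP_OE | FP_UE | FP_ZE | FP_XE);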
2019-10-04  ppc: Add support for 'mffscrn', 'mffscrni' instructions  (Paul A. Clarke)

ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR) instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl. This patch adds support for the 'mffscrn' and 'mffscrni' instructions.

'mffscrn' and 'mffscrni' are similar to 'mffsl', except they do not return the status bits (FI, FR, FPRF) and they also set the rounding mode in the FPSCR.

On CPUs without support for 'mffscrn'/'mffscrni' (below ISA 3.0), the instructions will execute identically to 'mffs'.

Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
Message-Id: <1568817081-1345-1-git-send-email-pc@us.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-10-01  target/mips: msa: Move helpers for <AND|NOR|OR|XOR>.V  (Aleksandar Markovic)

Cosmetic reorganization.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-21-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Simplify and move helper for MOVE.V  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-20-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for MOD_<S|U>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-19-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for DIV_<S|U>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-18-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for CLT_<S|U>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-17-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for CLE_<S|U>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-16-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for CEQ.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-15-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for AVER_<S|U>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-14-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for AVE_<S|U>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-13-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for B<CLR|NEG|SEL>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-12-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Unroll loops and demacro <BMNZ|BMZ|BSEL>.V  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-11-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for BINS<L|R>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-10-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for PCNT.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-9-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: msa: Split helpers for <NLOC|NLZC>.<B|H|W|D>  (Aleksandar Markovic)

Achieves clearer code and slightly better performance.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1569415572-19635-8-git-send-email-aleksandar.markovic@rt-rk.com>
2019-10-01  target/mips: Clean up translate.c  (Aleksandar Markovic)

Mostly fix errors and warnings reported by 'checkpatch.pl -f'.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1569331602-2586-7-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: Clean up mips-defs.h  (Aleksandar Markovic)

Mostly fix errors and warnings reported by 'checkpatch.pl -f'.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1569331602-2586-5-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: Clean up kvm_mips.h  (Aleksandar Markovic)

Mostly fix errors and warnings reported by 'checkpatch.pl -f'.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1569331602-2586-4-git-send-email-aleksandar.markovic@rt-rk.com>

2019-10-01  target/mips: Clean up internal.h  (Aleksandar Markovic)

Mostly fix errors and warnings reported by 'checkpatch.pl -f'.

Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1569331602-2586-3-git-send-email-aleksandar.markovic@rt-rk.com>
2019-09-30  s390/kvm: split kvm mem slots at 4TB  (Christian Borntraeger)

Instead of splitting at an unaligned address, we can simply split at 4TB.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>

2019-09-30  s390: do not call memory_region_allocate_system_memory() multiple times  (Igor Mammedov)

s390 was trying to solve the limited KVM memslot size issue by abusing memory_region_allocate_system_memory(), which breaks the API contract where the function might be called only once.

Besides being an invalid use of the API, the approach also introduced a migration issue, since RAM chunks for each KVM_SLOT_MAX_BYTES are transferred in the migration stream as separate RAMBlocks.

After discussion [1], it was agreed to break migration from older QEMU for guests with RAM >8Tb (as it was relatively new (since 2.12) and considered to be not actually used downstream). Migration should keep working for guests with less than 8TB and for more than 8TB with QEMU 4.2 and newer binary. In case the user tries to migrate a more than 8TB guest between incompatible QEMU versions, migration should fail gracefully due to a non-existing RAMBlock ID or a RAMBlock size mismatch.

Taking into account the above and that now KVM code is able to split a too big MemorySection into several memslots, partially revert commit (bb223055b s390-ccw-virtio: allow for systems larger that 7.999TB) and use kvm_set_max_memslot_size() to set the KVMSlot size to KVM_SLOT_MAX_BYTES.

1) [PATCH RFC v2 4/4] s390: do not call memory_region_allocate_system_memory() multiple times

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20190924144751.24149-5-imammedo@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
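In code terms, the partial revert means the machine goes back to a single memory_region_allocate_system_memory() call and instead caps the KVM memslot size, roughly like this (the constant value follows the 4TB-split commit above; the exact location in the s390x setup code is an assumption):

    #include "qemu/units.h"

    #define KVM_SLOT_MAX_BYTES (4UL * TiB)   /* split point chosen above */

    /* during s390x KVM accelerator/machine setup */
    kvm_set_max_memslot_size(KVM_SLOT_MAX_BYTES);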
2019-09-30  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190927' into staging  (Peter Maydell)

target-arm queue:
 * Fix the CBAR register implementation for Cortex-A53, Cortex-A57, Cortex-A72
 * Fix direct booting of Linux kernels on emulated CPUs which have an AArch32 EL3 (incorrect NSACR settings meant they could not access the FPU)
 * semihosting cleanup: do more work at translate time and less work at runtime

# gpg: Signature made Fri 27 Sep 2019 15:32:43 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20190927:
  hw/arm/boot: Use the IEC binary prefix definitions
  hw/arm/boot.c: Set NSACR.{CP11,CP10} for NS kernel boots
  tests/tcg: add linux-user semihosting smoke test for ARM
  target/arm: remove run-time semihosting checks for linux-user
  target/arm: remove run time semihosting checks
  target/arm: handle A-profile semihosting at translate time
  target/arm: handle M-profile semihosting at translate time
  tests/tcg: clean-up some comments after the de-tangling
  target/arm: fix CBAR register for AArch64 CPUs

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	tests/tcg/arm/Makefile.target
2019-09-27  target/arm: remove run time semihosting checks  (Alex Bennée)

Now that we do all our checking and use a common EXCP_SEMIHOST for semihosting operations, we can make the helper code a lot simpler.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190913151845.12582-5-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2019-09-27  target/arm: handle A-profile semihosting at translate time  (Alex Bennée)

As for the other semihosting calls, we can resolve this at translate time.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190913151845.12582-4-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2019-09-27  target/arm: handle M-profile semihosting at translate time  (Alex Bennée)

We do this for other semihosting calls, so we might as well do it for M-profile as well.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190913151845.12582-3-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2019-09-27  target/arm: fix CBAR register for AArch64 CPUs  (Luc Michel)

For AArch64 CPUs with a CBAR register, we have two views for it:
 - in AArch64 state, the CBAR_EL1 register (S3_1_C15_C3_0) returns the full 64-bit CBAR value
 - in AArch32 state, the CBAR register (cp15, opc1=1, CRn=15, CRm=3, opc2=0) returns a 32-bit view such that:
     CBAR = CBAR_EL1[31:18] 0..0 CBAR_EL1[43:32]

This commit fixes the current implementation where:
 - CBAR_EL1 was returning the 32-bit view instead of the full 64-bit value,
 - CBAR was returning a truncated 32-bit version of the full 64-bit one, instead of the 32-bit view,
 - CBAR was declared as cp15, opc1=4, CRn=15, CRm=0, opc2=0, which is the CBAR register found in the ARMv7 Cortex-Ax CPUs, but not in ARMv8 CPUs.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20190912110103.1417887-1-luc.michel@greensocs.com
[PMM: Added a comment about the two different kinds of CBAR]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
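A sketch of how the 32-bit AArch32 view can be derived from the 64-bit CBAR_EL1 value described above (variable names are illustrative, not the actual QEMU code):

    /* CBAR = CBAR_EL1[31:18] : zeros : CBAR_EL1[43:32] */
    uint32_t cbar32 = (uint32_t)(((cbar_el1 >> 18) & 0x3fff) << 18)  /* -> bits [31:18] */
                    | (uint32_t)((cbar_el1 >> 32) & 0xfff);          /* -> bits [11:0]  */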
2019-09-26  target/i386: Fix broken build with WHPX enabled  (Philippe Mathieu-Daudé)

The WHPX build is broken since commit 12e9493df92 which removed the "hw/boards.h" where MachineState is declared:

  $ ./configure \
      --enable-hax --enable-whpx
  $ make x86_64-softmmu/all
  [...]
    CC      x86_64-softmmu/target/i386/whpx-all.o
  target/i386/whpx-all.c: In function 'whpx_accel_init':
  target/i386/whpx-all.c:1378:25: error: dereferencing pointer to incomplete type 'MachineState' {aka 'struct MachineState'}
       whpx->mem_quota = ms->ram_size;
                           ^~
  make[1]: *** [rules.mak:69: target/i386/whpx-all.o] Error 1
    CC      x86_64-softmmu/trace/generated-helpers.o
  make[1]: Target 'all' not remade because of errors.
  make: *** [Makefile:471: x86_64-softmmu/all] Error 2

Restore this header, partially reverting commit 12e9493df92.

Fixes: 12e9493df92
Reported-by: Ilias Maratos <i.maratos@gmail.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190920113329.16787-2-philmd@redhat.com>
2019-09-26  target/alpha: Tidy helper_fp_exc_raise_s  (Richard Henderson)

Remove a redundant masking of ignore. Once that's gone it is obvious that the system-mode inner test is redundant with the outer test. Move the fpcr_exc_enable masking up and tidy. No functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-8-richard.henderson@linaro.org>

2019-09-26  target/alpha: Mask IOV exception with INV for user-only  (Richard Henderson)

The kernel masks the integer overflow exception with the software invalid exception mask. Include IOV in the set of exception bits masked by fpcr_exc_enable. Fixes the new float_convs test.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-7-richard.henderson@linaro.org>

2019-09-26  target/alpha: Write to fpcr_flush_to_zero once  (Richard Henderson)

Tidy the computation of the value; no functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-6-richard.henderson@linaro.org>

2019-09-26  target/alpha: Handle SWCR_MAP_DMZ earlier  (Richard Henderson)

Since we're converting the swcr to fpcr format for exceptions, it's trivial to add FPCR_DNZ to the set of fpcr bits overridden. No functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-5-richard.henderson@linaro.org>

2019-09-26  target/alpha: Fix SWCR_TRAP_ENABLE_MASK  (Richard Henderson)

The CONFIG_USER_ONLY adjustment blindly mashed the swcr exception enable bits into the fpcr exception disable bits. However, fpcr_exc_enable has already converted the exception disable bits into the exception status bits in order to make it easier to mask status bits at runtime. Instead, merge the swcr enable bits with the fpcr before we convert to status bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-4-richard.henderson@linaro.org>

2019-09-26  target/alpha: Fix SWCR_MAP_UMZ  (Richard Henderson)

We were setting the wrong bit. The fp_status.flush_to_zero setting is overwritten by either the constant 1 or the value of fpcr_flush_to_zero depending on bits within an fp insn.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20190921043256.4575-3-richard.henderson@linaro.org>

2019-09-26  target/alpha: Use array for FPCR_DYN conversion  (Richard Henderson)

This is a bit more straightforward than using a switch statement. No functional change.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190921043256.4575-2-richard.henderson@linaro.org>
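The table-versus-switch idea can be sketched as follows (a hedged illustration, not the committed code; the DYN encoding follows the Alpha architecture manual and the float_round_* values are QEMU softfloat rounding modes):

    /* Map the 2-bit FPCR.DYN dynamic rounding field straight to softfloat
     * rounding modes instead of switching on it. */
    static const uint8_t dyn_round_map[4] = {
        float_round_to_zero,       /* 00: chopped            */
        float_round_down,          /* 01: minus infinity     */
        float_round_nearest_even,  /* 10: normal (nearest)   */
        float_round_up,            /* 11: plus infinity      */
    };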
2019-09-23  Merge remote-tracking branch 'remotes/davidhildenbrand/tags/s390x-tcg-2019-09-23' into staging  (Peter Maydell)

Fix a bunch of BUGs in the mem-helpers (including the MVC instruction), especially, to make them behave correctly on faults.

# gpg: Signature made Mon 23 Sep 2019 09:01:21 BST
# gpg: using RSA key 1BD9CAAD735C4C3A460DFCCA4DDE10F700FF835A
# gpg: issuer "david@redhat.com"
# gpg: Good signature from "David Hildenbrand <david@redhat.com>" [unknown]
# gpg: aka "David Hildenbrand <davidhildenbrand@gmail.com>" [full]
# Primary key fingerprint: 1BD9 CAAD 735C 4C3A 460D FCCA 4DDE 10F7 00FF 835A

* remotes/davidhildenbrand/tags/s390x-tcg-2019-09-23: (30 commits)
  tests/tcg: target/s390x: Test MVC
  tests/tcg: target/s390x: Test MVO
  s390x/tcg: MVO: Fault-safe handling
  s390x/tcg: MVST: Fault-safe handling
  s390x/tcg: MVZ: Fault-safe handling
  s390x/tcg: MVN: Fault-safe handling
  s390x/tcg: MVCIN: Fault-safe handling
  s390x/tcg: NC: Fault-safe handling
  s390x/tcg: XC: Fault-safe handling
  s390x/tcg: OC: Fault-safe handling
  s390x/tcg: MVCLU: Fault-safe handling
  s390x/tcg: MVC: Fault-safe handling on destructive overlaps
  s390x/tcg: MVCS/MVCP: Use access_memmove()
  s390x/tcg: Fault-safe memmove
  s390x/tcg: Fault-safe memset
  s390x/tcg: Always use MMU_USER_IDX for CONFIG_USER_ONLY
  s390x/tcg: MVST: Fix storing back the addresses to registers
  s390x/tcg: MVST: Check for specification exceptions
  s390x/tcg: MVCS/MVCP: Properly wrap the length
  s390x/tcg: MVCOS: Lengths are 32 bit in 24/31-bit mode
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-23  s390x/tcg: MVO: Fault-safe handling  (David Hildenbrand)

Each operand can have a maximum length of 16. Make sure to prepare all reads/writes before writing.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
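The general pattern behind these fault-safe conversions is to resolve and probe every byte that will be touched before the first store, so that any access exception is raised before guest memory is modified. A sketch of that shape (the helper name, S390Access type and signature are assumptions based on the series description, not verbatim QEMU code):

    /* Prepare both operands up front: any fault fires here, before any write. */
    S390Access srca  = access_prepare(env, src,  len, MMU_DATA_LOAD,  mmu_idx, ra);
    S390Access desta = access_prepare(env, dest, len, MMU_DATA_STORE, mmu_idx, ra);
    /* ... only now read through srca and write through desta ... */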
2019-09-23  s390x/tcg: MVST: Fault-safe handling  (David Hildenbrand)

Access at most single pages and document why. Using the access helpers might over-indicate watchpoints within the same page; I guess we can live with that.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>

2019-09-23  s390x/tcg: MVZ: Fault-safe handling  (David Hildenbrand)

We can process a maximum of 256 bytes, crossing two pages.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>

2019-09-23  s390x/tcg: MVN: Fault-safe handling  (David Hildenbrand)

We can process a maximum of 256 bytes, crossing two pages.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>

2019-09-23  s390x/tcg: MVCIN: Fault-safe handling  (David Hildenbrand)

We can process a maximum of 256 bytes, crossing two pages. Calculate the accessed range upfront: src is accessed right-to-left.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>