path: root/target/ppc/translate
Age  Commit message  Author
2019-07-02  target/ppc: introduce GEN_VSX_HELPER_R2 macro to fpu_helper.c  (Mark Cave-Ayland)
Rather than perform the VSR register decoding within the helper itself, introduce a new GEN_VSX_HELPER_R2 macro which performs the decode based upon rD and rB at translation time. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190616123751.781-12-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
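A minimal sketch of the shape such a generator macro takes, assuming a gen_vsr_ptr() helper that builds a TCGv_ptr to a VSR inside cpu_env; the exception plumbing and exact names are illustrative rather than the literal patch contents:

    #define GEN_VSX_HELPER_R2(name, op1, op2, inval, type)            \
    static void gen_##name(DisasContext *ctx)                         \
    {                                                                 \
        TCGv_ptr xt, xb;                                              \
        if (unlikely(!ctx->vsx_enabled)) {                            \
            gen_exception(ctx, POWERPC_EXCP_VSXU);                    \
            return;                                                   \
        }                                                             \
        /* decode rD and rB here, at translation time... */           \
        xt = gen_vsr_ptr(rD(ctx->opcode) + 32);                       \
        xb = gen_vsr_ptr(rB(ctx->opcode) + 32);                       \
        /* ...so the helper only receives pointers to the registers */\
        gen_helper_##name(cpu_env, xt, xb);                           \
        tcg_temp_free_ptr(xt);                                        \
        tcg_temp_free_ptr(xb);                                        \
    }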
2019-07-02  target/ppc: introduce GEN_VSX_HELPER_R3 macro to fpu_helper.c  (Mark Cave-Ayland)
Rather than perform the VSR register decoding within the helper itself, introduce a new GEN_VSX_HELPER_R3 macro which performs the decode based upon rD, rA and rB at translation time. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190616123751.781-11-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02  target/ppc: introduce GEN_VSX_HELPER_X1 macro to fpu_helper.c  (Mark Cave-Ayland)
Rather than perform the VSR register decoding within the helper itself, introduce a new GEN_VSX_HELPER_X1 macro which performs the decode based upon xB at translation time. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190616123751.781-10-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02  target/ppc: introduce GEN_VSX_HELPER_X2_AB macro to fpu_helper.c  (Mark Cave-Ayland)
Rather than perform the VSR register decoding within the helper itself, introduce a new GEN_VSX_HELPER_X2_AB macro which performs the decode based upon xA and xB at translation time. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190616123751.781-9-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02  target/ppc: introduce GEN_VSX_HELPER_X2 macro to fpu_helper.c  (Mark Cave-Ayland)
Rather than perform the VSR register decoding within the helper itself, introduce a new GEN_VSX_HELPER_X2 macro which performs the decode based upon xT and xB at translation time. With the previous change to the xscvqpdp generator and helper functions the opcode parameter is no longer required in the common case and can be removed. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190616123751.781-8-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02  target/ppc: introduce separate generator and helper for xscvqpdp  (Mark Cave-Ayland)
Rather than perform the VSR register decoding within the helper itself, introduce a new generator and helper function which perform the decode based upon xT and xB at translation time. The xscvqpdp helper is the only 2 parameter xT/xB implementation that requires the opcode to be passed as an additional parameter, so handling this separately allows us to optimise the conversion in the next commit. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190616123751.781-7-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02  target/ppc: introduce GEN_VSX_HELPER_X3 macro to fpu_helper.c  (Mark Cave-Ayland)
Rather than perform the VSR register decoding within the helper itself, introduce a new GEN_VSX_HELPER_X3 macro which performs the decode based upon xT, xA and xB at translation time. With the previous changes to the VSX_CMP generator and helper macros the opcode parameter is no longer required in the common case and can be removed. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190616123751.781-6-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02  target/ppc: introduce separate VSX_CMP macro for xvcmp* instructions  (Mark Cave-Ayland)
Rather than perform the VSR register decoding within the helper itself, introduce a new VSX_CMP macro which performs the decode based upon xT, xA and xB at translation time. Subsequent commits will make the same changes for other instructions however the xvcmp* instructions are different in that they return a set of flags to be optionally written back to the crf[6] register. Move this logic from the helper function to the generator function, along with the float_status update. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190616123751.781-5-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-06-12  target/ppc: Use tcg_gen_gvec_bitsel  (Richard Henderson)
Replace the target-specific implementation of XXSEL. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190603164927.8336-1-richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
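With the generic expander the whole of XXSEL reduces to one gvec call; a rough sketch, assuming a vsr_full_offset() helper (see the offset commits further down this log) and the usual xT/xA/xB/xC field extractors:

    static void gen_xxsel(DisasContext *ctx)
    {
        int rt = xT(ctx->opcode), ra = xA(ctx->opcode);
        int rb = xB(ctx->opcode), rc = xC(ctx->opcode);

        if (unlikely(!ctx->vsx_enabled)) {
            gen_exception(ctx, POWERPC_EXCP_VSXU);
            return;
        }
        /* bitsel: result = (b & mask) | (c & ~mask), with the mask taken from xC */
        tcg_gen_gvec_bitsel(MO_64, vsr_full_offset(rt), vsr_full_offset(rc),
                            vsr_full_offset(rb), vsr_full_offset(ra), 16, 16);
    }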
2019-06-12  target/ppc: Fix lxvw4x, lxvh8x and lxvb16x  (Anton Blanchard)
During the conversion these instructions were incorrectly treated as stores. We need to use set_cpu_vsr* and not get_cpu_vsr*. Fixes: 8b3b2d75c7c0 ("introduce get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}() helpers for VSR register access") Signed-off-by: Anton Blanchard <anton@ozlabs.org> Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Tested-by: Greg Kurz <groug@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Message-Id: <20190524065345.25591-1-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29  target/ppc: Use vector variable shifts for VSL, VSR, VSRA  (Richard Henderson)
The gvec expanders take care of masking the shift amount against the element width. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190518191430.21686-2-richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
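As an illustration only (register offsets via the avr_full_offset() helper introduced further down this log), a per-element variable left shift can be emitted directly against the register file; the expander masks each shift count to the element width, so no explicit AND is needed in the front end:

    /* vslw-style: shift each 32-bit element of vA left by the
     * corresponding element of vB (count masked to 0..31 by gvec) */
    tcg_gen_gvec_shlv(MO_32, avr_full_offset(rd), avr_full_offset(ra),
                      avr_full_offset(rb), 16, 16);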
2019-05-29  target/ppc: Fix xvabs[sd]p, xvnabs[sd]p, xvneg[sd]p, xvcpsgn[sd]p  (Anton Blanchard)
We were using set_cpu_vsr*() when we should have used get_cpu_vsr*(). Fixes: 8b3b2d75c7c0 ("introduce get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}() helpers for VSR register access") Signed-off-by: Anton Blanchard <anton@ozlabs.org> Message-Id: <20190509104912.6b754dff@kryten> Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29  target/ppc: Optimise VSX_LOAD_SCALAR_DS and VSX_VECTOR_LOAD_STORE  (Anton Blanchard)
A few small optimisations: In VSX_LOAD_SCALAR_DS() we don't need to read the VSR via get_cpu_vsrh(). Split VSX_VECTOR_LOAD_STORE() into two functions: loads only need to write the VSRs (set_cpu_vsr*()) and stores only need to read the VSRs (get_cpu_vsr*()). Thanks to Mark Cave-Ayland for the suggestions. Signed-off-by: Anton Blanchard <anton@ozlabs.org> Message-Id: <20190509103545.4a7fa71a@kryten> Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29  target/ppc: Fix xxspltib  (Anton Blanchard)
xxspltib raises a VMX or a VSX exception depending on the register set it is operating on. We had a check, but it was backwards. Fixes: f113283525a4 ("target-ppc: add xxspltib instruction") Signed-off-by: Anton Blanchard <anton@ozlabs.org> Message-Id: <20190509061713.69490488@kryten> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29  target/ppc: Fix xxbrq, xxbrw  (Anton Blanchard)
Fix a typo in xxbrq and xxbrw where we put both results into the lower doubleword. Fixes: 8b3b2d75c7c0 ("introduce get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}() helpers for VSR register access") Signed-off-by: Anton Blanchard <anton@ozlabs.org> Message-Id: <20190507004811.29968-3-anton@ozlabs.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29  target/ppc: Fix xvxsigdp  (Anton Blanchard)
Fix a typo in xvxsigdp where we put both results into the lower doubleword. Fixes: dd977e4f45cb ("target/ppc: Optimize x[sv]xsigdp using deposit_i64()") Signed-off-by: Anton Blanchard <anton@ozlabs.org> Message-Id: <20190507004811.29968-1-anton@ozlabs.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-13  target/ppc: Use tcg_gen_abs_i32  (Philippe Mathieu-Daudé)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20190423102145.14812-2-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-13  tcg: Specify optional vector requirements with a list  (Richard Henderson)
Replace the single opcode in .opc with a null-terminated array in .opt_opc. We still require that all opcodes be used with the same .vece. Validate the contents of this list with CONFIG_DEBUG_TCG. All tcg_gen_*_vec functions will check any list active during .fniv expansion. Swap the active list in and out as we expand other opcodes, or take control away from the front-end function. Convert all existing vector aware front ends. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
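A hedged sketch of the new pattern: the expander description lists every vector opcode its .fniv body may emit, terminated by 0, so the middle-end can check host support before expanding. The surrounding operation and the gen_ssadd_vec / gen_helper_vec_ssadd names are illustrative, not taken from this commit:

    static const TCGOpcode vecop_list[] = { INDEX_op_ssadd_vec, 0 };

    static const GVecGen3 g = {
        .fniv = gen_ssadd_vec,          /* vector expansion */
        .fno = gen_helper_vec_ssadd,    /* out-of-line fallback */
        .opt_opc = vecop_list,          /* opcodes .fniv may emit */
        .vece = MO_8,                   /* one .vece for the whole list */
    };

    tcg_gen_gvec_3(dofs, aofs, bofs, 16, 16, &g);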
2019-04-26  target/ppc: Style fixes for translate/spe-impl.inc.c  (David Gibson)
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org>
2019-04-26  target/ppc: Style fixes for translate/vmx-impl.inc.c  (David Gibson)
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org>
2019-04-26  target/ppc: Style fixes for translate/vsx-impl.inc.c  (David Gibson)
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org>
2019-04-26  target/ppc: Style fixes for translate/fp-impl.inc.c  (David Gibson)
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org>
2019-03-29  target/ppc: Fix QEMU crash with stxsdx  (Greg Kurz)
I've been hitting several QEMU crashes while running a fedora29 ppc64le guest under TCG. Each time, this would occur several minutes after the guest reached login:

    Fedora 29 (Twenty Nine)
    Kernel 4.20.6-200.fc29.ppc64le on an ppc64le (hvc0)

    Web console: https://localhost:9090/

    localhost login:
    tcg/tcg.c:3211: tcg fatal error

This happens because a bug crept into the gen_stxsdx() helper when it was converted to use VSR register accessors by commit 8b3b2d75c7c04 "target/ppc: introduce get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}() helpers for VSR register access". The code creates a temporary, passes it directly to gen_qemu_st64_i64() and then to set_cpu_vsrh()... which suggests this was mistakenly coded as a load instead of a store. Reverse the logic: read the VSR into the temporary first and then store it to memory. Fixes: 8b3b2d75c7c0481544e277dad226223245e058eb Signed-off-by: Greg Kurz <groug@kaod.org> Message-Id: <155371035249.2038502.12364252604337688538.stgit@bahia.lan> Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
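Roughly the corrected shape of the store path, as a sketch only; the real code is generated through a VSX store-scalar macro, and access-type bookkeeping is omitted here:

    static void gen_stxsdx(DisasContext *ctx)
    {
        TCGv EA;
        TCGv_i64 t0;

        if (unlikely(!ctx->vsx_enabled)) {
            gen_exception(ctx, POWERPC_EXCP_VSXU);
            return;
        }
        t0 = tcg_temp_new_i64();
        EA = tcg_temp_new();
        gen_addr_reg_index(ctx, EA);
        get_cpu_vsrh(t0, xS(ctx->opcode));   /* read the VSR first... */
        gen_qemu_st64_i64(ctx, t0, EA);      /* ...then store it to memory */
        tcg_temp_free(EA);
        tcg_temp_free_i64(t0);
    }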
2019-03-12  target/ppc: Optimize x[sv]xsigdp using deposit_i64()  (Philippe Mathieu-Daudé)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20190309214255.9952-3-f4bug@amsat.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12  target/ppc: Optimize xviexpdp() using deposit_i64()  (Philippe Mathieu-Daudé)
The t0 tcg_temp register is now unused, remove it. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20190309214255.9952-2-f4bug@amsat.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
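Both deposit conversions above rely on the same primitive: tcg_gen_deposit_i64(ret, arg1, arg2, ofs, len) copies the low len bits of arg2 into bits ofs..ofs+len-1 of arg1 and leaves the rest of arg1 intact. A minimal before/after sketch with illustrative operand names, not the literal patch:

    /* before: mask, shift and or to insert an 11-bit exponent field
     * (assumes exp holds only an 11-bit value) */
    tcg_gen_andi_i64(t0, frac, 0x800FFFFFFFFFFFFFULL);
    tcg_gen_shli_i64(t1, exp, 52);
    tcg_gen_or_i64(dst, t0, t1);

    /* after: a single deposit expresses the same bit-field insert */
    tcg_gen_deposit_i64(dst, frac, exp, 52, 11);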
2019-03-12  target/ppc: introduce vsr64_offset() to simplify get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}()  (Mark Cave-Ayland)
Now that all VSX registers are stored in host endian order, there is no need to go via different accessors depending upon the register number. Instead we introduce vsr64_offset() and use it directly from within get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}(). This also allows us to rewrite avr64_offset() and fpr_offset() in terms of the new vsr64_offset() function to more clearly express the relationship between the VSX, FPR and VMX registers, and also remove vsrl_offset() which is no longer required. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Message-Id: <20190307180520.13868-8-mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
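A sketch of the relationship the commit describes, assuming a VsrD() macro that maps a logical doubleword index to the host-endian element inside ppc_vsr_t; exact details may differ from the patch:

    static inline int vsr64_offset(int i, bool high)
    {
        return offsetof(CPUPPCState, vsr[i].VsrD(high ? 0 : 1));
    }

    /* FPRs are the high halves of VSRs 0-31... */
    static inline int fpr_offset(int i)
    {
        return vsr64_offset(i, true);
    }

    /* ...and the VMX registers occupy VSRs 32-63 */
    static inline int avr64_offset(int i, bool high)
    {
        return vsr64_offset(i + 32, high);
    }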
2019-03-12  target/ppc: improve avr64_offset() and use it to simplify get_avr64()/set_avr64()  (Mark Cave-Ayland)
By using the VsrD macro in avr64_offset() the same offset calculation can be used regardless of the host endian. This allows get_avr64() and set_avr64() to be simplified accordingly. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Message-Id: <20190307180520.13868-6-mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-03-12  target/ppc: introduce avr_full_offset() function  (Mark Cave-Ayland)
All TCG vector operations require pointers to the base address of the vector rather than separate access to the top and bottom 64-bits. Convert the VMX TCG instructions to use a new avr_full_offset() function instead of avr64_offset() which can then itself be written as a simple wrapper onto vsr_full_offset(). This same function can also reused in cpu_avr_ptr() to avoid having more than one copy of the offset calculation logic. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Message-Id: <20190307180520.13868-5-mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
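The wrapper itself can be as small as this sketch, assuming vsr_full_offset() returns the byte offset of a whole 128-bit VSR within cpu_env:

    static inline int vsr_full_offset(int i)
    {
        return offsetof(CPUPPCState, vsr[i].u64[0]);
    }

    /* VMX register n lives in VSR n + 32 */
    static inline int avr_full_offset(int i)
    {
        return vsr_full_offset(i + 32);
    }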
2019-03-12  target/ppc: introduce single vsrl_offset() function  (Mark Cave-Ayland)
Instead of having multiple copies of the offset calculation logic, move it to a single vsrl_offset() function. This commit also renames the existing get_vsr()/set_vsr() functions to get_vsrl()/set_vsrl() which better describes their purpose. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190307180520.13868-3-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vmin* and vmax* to vector operations  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-18-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vadd*s and vsub*s to vector operations  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-17-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: Add helper_mfvscr  (Richard Henderson)
This is required before changing the representation of the register. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-13-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: Pass integer to helper_mtvscr  (Richard Henderson)
We can re-use this helper elsewhere if we're not passing in an entire vector register. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-10-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert xxsel to vector operations  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-9-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert xxspltw to vector operations  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-8-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert xxspltib to vector operations  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-7-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert VSX logical operations to vector operations  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-6-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vsplt[bhw] to use vector operations  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-5-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vspltis[bhw] to use vector operations  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190215100058.20015-4-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vaddu[b,h,w,d] and vsubu[b,h,w,d] over to use vector operations  (Mark Cave-Ayland)
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-3-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert VMX logical instructions to use vector operations  (Mark Cave-Ayland)
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-2-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09  target/ppc: move FP and VMX registers into aligned vsr register array  (Mark Cave-Ayland)
The VSX register array is a block of 64 128-bit registers where the first 32 registers consist of the existing 64-bit FP registers extended to 128-bit using new VSR registers, and the last 32 registers are the VMX 128-bit registers as shown below:

              64-bit               64-bit
    +--------------------+--------------------+
    |        FP0         |                    |  VSR0
    +--------------------+--------------------+
    |        FP1         |                    |  VSR1
    +--------------------+--------------------+
    |        ...         |        ...         |  ...
    +--------------------+--------------------+
    |        FP30        |                    |  VSR30
    +--------------------+--------------------+
    |        FP31        |                    |  VSR31
    +--------------------+--------------------+
    |                  VMX0                   |  VSR32
    +-----------------------------------------+
    |                  VMX1                   |  VSR33
    +-----------------------------------------+
    |                  ...                    |  ...
    +-----------------------------------------+
    |                  VMX30                  |  VSR62
    +-----------------------------------------+
    |                  VMX31                  |  VSR63
    +-----------------------------------------+

In order to allow for future conversion of VSX instructions to use TCG vector operations, recreate the same layout using an aligned version of the existing vsr register array. Since the old fpr and avr register arrays are removed, the existing callers must also be updated to use the correct offset in the vsr register array. This also includes switching the relevant VMState fields over to using subarrays to make sure that migration is preserved. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
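In CPUPPCState this layout boils down to a single aligned array of 128-bit union elements, roughly as sketched below (field list abridged; the 16-byte alignment lets gvec operations use host vector loads and stores):

    typedef union {
        uint8_t  u8[16];
        uint32_t u32[4];
        uint64_t u64[2];
        /* ...float32/float64/Int128 views elided... */
    } ppc_vsr_t;

    /* FP0-FP31 occupy the high halves of vsr[0..31]; VMX0-VMX31 are vsr[32..63] */
    ppc_vsr_t vsr[64] QEMU_ALIGNED(16);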
2019-01-09  target/ppc: switch FPR, VMX and VSX helpers to access data directly from cpu_env  (Mark Cave-Ayland)
Instead of accessing the FPR, VMX and VSX registers through static arrays of TCGv_i64 globals, remove them and change the helpers to load/store data directly within cpu_env. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09  target/ppc: introduce get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}() helpers for VSR register access  (Mark Cave-Ayland)
These helpers allow us to move VSR register values to/from the specified TCGv_i64 argument. To prevent VSX helpers accessing the cpu_vsr array directly, add extra TCG temporaries as required. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
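Combined with the vsr64_offset() cleanup that appears earlier in this log, these accessors reduce to plain loads and stores against cpu_env; a sketch of the high-doubleword pair (details illustrative):

    static inline void get_cpu_vsrh(TCGv_i64 dst, int n)
    {
        tcg_gen_ld_i64(dst, cpu_env, vsr64_offset(n, true));
    }

    static inline void set_cpu_vsrh(int n, TCGv_i64 src)
    {
        tcg_gen_st_i64(src, cpu_env, vsr64_offset(n, true));
    }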
2019-01-09  target/ppc: introduce get_avr64() and set_avr64() helpers for VMX register access  (Mark Cave-Ayland)
These helpers allow us to move AVR register values to/from the specified TCGv_i64 argument. To prevent VMX helpers accessing the cpu_avr{l,h} arrays directly, add extra TCG temporaries as required. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09  target/ppc: introduce get_fpr() and set_fpr() helpers for FP register access  (Mark Cave-Ayland)
These helpers allow us to move FP register values to/from the specified TCGv_i64 argument in the VSR helpers to be introduced shortly. To prevent FP helpers accessing the cpu_fpr array directly, add extra TCG temporaries as required. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-12-21  Changes requirement for "vsubsbs" instruction  (Paul A. Clarke)
Change the requirement for the "vsubsbs" instruction, which has been supported since ISA 2.03 (see section 5.9.1.2 of ISA 2.03). Reported-by: Paul A. Clarke <pc@us.ibm.com> Signed-off-by: Paul A. Clarke <pc@us.ibm.com> Signed-off-by: Leonardo Bras <leonardo@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-11-08  target/ppc: add external PID support  (Roman Kapl)
External PID is a mechanism present on BookE 2.06 that enables applications to store/load data from different address spaces. There are special versions of some instructions, which operate on an alternate address space specified in the EPLC/EPSC register. This implementation uses two additional MMU modes (mmu_idx) to provide the address space for the load and store instructions. The QEMU TLB fill code was modified to recognize these MMU modes and use the values in EPLC/EPSC to find the proper entry in the PPC TLB. These two QEMU TLBs are also flushed on each write to EPLC/EPSC. The following instructions are implemented: dcbfep dcbstep dcbtep dcbtstep dcbzep dcbzlep icbiep lbepx ldepx lfdepx lhepx lwepx stbepx stdepx stfdepx sthepx stwepx. The following vector instructions are not: evlddepx evstddepx lvepx lvepxl stvepx stvepxl. Signed-off-by: Roman Kapl <rka@sysgo.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-08-21  target/ppc: Use non-arithmetic conversions for fp load/store  (Richard Henderson)
Memory operations have no side effects on fp state. The use of a "real" conversions between float64 and float32 would raise exceptions for SNaN and out-of-range inputs. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-02-16  target/ppc: convert to DisasContextBase  (Emilio G. Cota)
A couple of notes:
- removed ctx->nip in favour of base->pc_next. Yes, it is annoying, but didn't want to waste its 4 bytes.
- ctx->singlestep_enabled does a lot more than base.singlestep_enabled; this confused me at first.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>