path: root/target/ppc/translate.c
2019-03-12target/ppc: improve avr64_offset() and use it to simplify get_avr64()/set_avr64()Mark Cave-Ayland
By using the VsrD macro in avr64_offset() the same offset calculation can be used regardless of host endianness. This allows get_avr64() and set_avr64() to be simplified accordingly. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Message-Id: <20190307180520.13868-6-mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
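A minimal sketch of the idea, assuming the VsrD definition and CPUPPCState layout of that era (illustrative, not a verbatim copy of the QEMU source):

    /* VsrD(i) picks doubleword i of a 128-bit VSR in PowerPC (big-endian)
     * element order, whatever the host byte order is. */
    #if defined(HOST_WORDS_BIGENDIAN)
    #define VsrD(i) u64[i]
    #else
    #define VsrD(i) u64[1 - (i)]
    #endif

    /* AVR n lives in VSR[32 + n]; the same expression now works on both
     * big- and little-endian hosts. */
    static inline long avr64_offset(int reg, bool high)
    {
        return offsetof(CPUPPCState, vsr[32 + reg].VsrD(high ? 0 : 1));
    }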
2019-03-12target/ppc: introduce single fpr_offset() functionMark Cave-Ayland
Instead of having multiple copies of the offset calculation logic, move it to a single fpr_offset() function. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190307180520.13868-2-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
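A hedged sketch of the shape this takes (FP register n is the most-significant doubleword of VSR[n]):

    static inline long fpr_offset(int i)
    {
        /* FPRs occupy doubleword 0 of the first 32 VSRs. */
        return offsetof(CPUPPCState, vsr[i].VsrD(0));
    }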
2019-03-12target/ppc: Implement large decrementer support for TCGSuraj Jitindar Singh
Prior to POWER9 the decrementer was a 32-bit register which decremented with each tick of the timebase. From POWER9 onwards the decrementer can be set to operate in a mode called large decrementer where it acts as an n-bit decrementing register which is visible as a 64-bit register, that is the value of the decrementer is sign extended to 64 bits (where n is implementation dependent). The mode in which the decrementer operates is controlled by the LPCR_LD bit in the logical partition control register (LPCR). From POWER9 onwards the HDEC (hypervisor decrementer) was enlarged to h bits, also sign extended to 64 bits (where h is implementation dependent). Note this isn't configurable and is always enabled. On POWER9 the large decrementer and hdec are both 56 bits, as represented by the lrg_decr_bits cpu class property. Since they are the same size we only add one property for now, which could be extended in the case they ever differ in the future. We also add the lrg_decr_bits property for POWER5+/7/8 since it is used to determine the size of the hdec, which is only generated on the POWER5+ processor and later. On these processors it is 32 bits. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20190301024317.22137-2-sjitindarsingh@gmail.com> [dwg: Small style fixes] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
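A minimal sketch of the sign-extension rule described above, assuming QEMU's sextract64() bitops helper (the function name dec_to_reg_value is hypothetical):

    #include "qemu/bitops.h"

    /* An n-bit large decrementer reads back through the 64-bit DEC register
     * as the n-bit value sign-extended to 64 bits (n = 56 on POWER9). */
    static uint64_t dec_to_reg_value(uint64_t decr, int nr_bits)
    {
        return sextract64(decr, 0, nr_bits);
    }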
2019-02-26target/ppc: Add POWER9 exception modelBenjamin Herrenschmidt
And use it to get the correct HILE bit in HID0 Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215161648.9600-7-clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26target/ppc: Fix support for "STOP light" states on POWER9Benjamin Herrenschmidt
STOP must act differently based on PSSCR:EC on POWER9. When set, it acts like the P7/P8 power management instructions and wakes up at 0x100 based on the wakeup conditions in LPCR. When PSSCR:EC is clear, however, it will wake up at the next instruction after STOP (if EE is clear) or take the corresponding interrupts (if EE is set). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215161648.9600-4-clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-26target/ppc: Fix nip on power management instructionsBenjamin Herrenschmidt
Those instructions currently raise an exception from within the helper. This tends to result in a bogus nip value in the env context (typically the beginning of the TB). Such a helper needs a gen_update_nip() first. This fixes it with a different approach: throw the exception from translate.c instead of the helper, using gen_exception_nip(), which does the right thing. Exception EXCP_HLT is also used instead of POWERPC_EXCP_STOP to effectively exit from the CPU execution loop. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [clg : modified the commit log to comment the use of EXCP_HLT instead of POWERPC_EXCP_STOP] Signed-off-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20190215161648.9600-2-clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
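A hedged one-line sketch of the pattern (the surrounding decode code is omitted; names follow those used elsewhere in this log):

    /* Raising the exception at translation time records the precise nip of
     * the power-management instruction, rather than the start of the TB. */
    gen_exception_nip(ctx, EXCP_HLT, ctx->base.pc_next);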
2019-02-18target/ppc: convert VMX logical instructions to use vector operationsMark Cave-Ayland
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20190215100058.20015-2-mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-17ppc: fix crash during branch steppingRoman Kapl
The PPC BRANCH exception could bubble up, but this is a QEMU-internal exception and QEMU then crashed. Instead it should trigger a TRACE exception, according to the PPC 2.07 book. It could happen only when using branch stepping, which is not commonly used. Change gen_prep_dbgex to trigger TRACE instead. The excp argument is now removed, since the type of exception can be inferred from the singlestep_enabled flags. Also removed the guards around gen_exception, since they are unnecessary. Fixes: 0e3bf48909 ("ppc: add DBCR based debugging"). Signed-off-by: Roman Kapl <rka@sysgo.com> Message-Id: <20190212121255.2279-1-rka@sysgo.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-17target/ppc: Fix msync to do what hardware doesBALATON Zoltan
According to BookE docs, invalid bits (while undefined behaviour) should not raise an exception but be ignored. This seems to be implementation dependent though, and QEMU currently does what e500 CPUs do and raises an exception for invalid bits. Unfortunately some versions of libstdc++ (and so all programs compiled with it) have lwsync on PPC440, which is invalid, but on real hardware it's just executed as msync ignoring the invalid bits (maybe that's why it went undetected); they fail on QEMU, however. This patch changes the invalid mask of msync to allow these programs to run, but keeps generating an exception on e500 cores to follow what the hardware does. Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09target/ppc: move FP and VMX registers into aligned vsr register arrayMark Cave-Ayland
The VSX register array is a block of 64 128-bit registers where the first 32 registers consist of the existing 64-bit FP registers extended to 128-bit using new VSR registers, and the last 32 registers are the VMX 128-bit registers as shown below:

            64-bit               64-bit
    +--------------------+--------------------+
    |        FP0         |                    | VSR0
    +--------------------+--------------------+
    |        FP1         |                    | VSR1
    +--------------------+--------------------+
    |        ...         |        ...         | ...
    +--------------------+--------------------+
    |        FP30        |                    | VSR30
    +--------------------+--------------------+
    |        FP31        |                    | VSR31
    +--------------------+--------------------+
    |                  VMX0                   | VSR32
    +-----------------------------------------+
    |                  VMX1                   | VSR33
    +-----------------------------------------+
    |                  ...                    | ...
    +-----------------------------------------+
    |                  VMX30                  | VSR62
    +-----------------------------------------+
    |                  VMX31                  | VSR63
    +-----------------------------------------+

In order to allow for future conversion of VSX instructions to use TCG vector operations, recreate the same layout using an aligned version of the existing vsr register array. Since the old fpr and avr register arrays are removed, the existing callers must also be updated to use the correct offset in the vsr register array. This also includes switching the relevant VMState fields over to using subarrays to make sure that migration is preserved. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
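A sketch of the data layout this describes; field names are assumed to follow QEMU's ppc_vsr_t/CPUPPCState of that era rather than copied verbatim:

    typedef union {
        uint8_t  u8[16];
        uint32_t u32[4];
        uint64_t u64[2];
        /* ... float and other element views elided ... */
    } ppc_vsr_t;

    /* In CPUPPCState: 64 x 128-bit registers, 16-byte aligned so the host
     * can use vector loads/stores on them. FPRs alias vsr[0..31] (high
     * doubleword), AVRs alias vsr[32..63]. */
    ppc_vsr_t vsr[64] QEMU_ALIGNED(16);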
2019-01-09target/ppc: switch FPR, VMX and VSX helpers to access data directly from cpu_envMark Cave-Ayland
Instead of accessing the FPR, VMX and VSX registers through static arrays of TCGv_i64 globals, remove them and change the helpers to load/store data directly within cpu_env. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
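A minimal sketch of the resulting access pattern (the FP case; the VMX/VSX accessors in the following entries work the same way), assuming the fpr_offset() helper shown earlier in this log:

    /* Load/store an FP register value directly from env at translation
     * time, instead of going through a static TCGv_i64 cpu_fpr[] global. */
    static void get_fpr(TCGv_i64 dst, int regno)
    {
        tcg_gen_ld_i64(dst, cpu_env, fpr_offset(regno));
    }

    static void set_fpr(int regno, TCGv_i64 src)
    {
        tcg_gen_st_i64(src, cpu_env, fpr_offset(regno));
    }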
2019-01-09target/ppc: introduce get_avr64() and set_avr64() helpers for VMX register accessMark Cave-Ayland
These helpers allow us to move AVR register values to/from the specified TCGv_i64 argument. To prevent VMX helpers accessing the cpu_avr{l,h} arrays directly, add extra TCG temporaries as required. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09target/ppc: introduce get_fpr() and set_fpr() helpers for FP register accessMark Cave-Ayland
These helpers allow us to move FP register values to/from the specified TCGv_i64 argument in the VSR helpers to be introduced shortly. To prevent FP helpers accessing the cpu_fpr array directly, add extra TCG temporaries as required. Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-12-21target/ppc: tcg: Implement addex instructionSuraj Jitindar Singh
Implement the addex instruction introduced in ISA V3.00 in QEMU TCG. The add extended using alternate carry bit (addex) instruction performs the same operation as the add extended (adde) instruction, but using the overflow (ov) field in the fixed point exception register (xer) as the carry in and out instead of the carry (ca) field. The instruction has a Z23-form, not an XO form, as follows:

     ------------------------------------------------------------------
    | 31 |  RT  |  RA  |  RB  | CY |        170         | 0 |
     ------------------------------------------------------------------
     0    6      11     16     21   23                   31  32

However since the only valid form of the instruction defined so far is CY = 0, we can treat this like an XO form instruction. There is no dot form (addex.) of the instruction and the summary overflow (so) bit in the xer is not modified by this instruction. For simplicity we reuse the gen_op_arith_add function and add a function argument to specify where the carry in input should come from and the carry out output be stored (note: it must be the same location). Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
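A hedged model of the carry semantics described above (illustrative C, not the TCG implementation; the function name addex_model is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /* addex: RT = RA + RB + XER.OV; the carry out goes back into XER.OV.
     * Unlike adde, XER.CA and XER.SO are untouched. */
    static uint64_t addex_model(uint64_t ra, uint64_t rb, bool *xer_ov)
    {
        uint64_t carry_in = *xer_ov ? 1 : 0;
        uint64_t rt = ra + rb + carry_in;

        /* Unsigned carry out of the 64-bit addition. */
        *xer_ov = (rt < ra) || (rt == ra && carry_in);
        return rt;
    }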
2018-11-08This patch fixes processing of rfi instructions in icount mode.Maria Klimushenkova
In this mode writing to interrupt/peripheral state is controlled by the can_do_io flag. This flag must be set explicitly before the helper function is invoked. Signed-off-by: Maria Klimushenkova <maria.klimushenkova@ispras.ru> Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-11-08target/ppc: fix mtmsr instruction for icountPavel Dovgalyuk
This patch fixes processing of mtmsr instructions in icount mode. In this mode writing to interrupt/peripheral state is controlled by the can_do_io flag. This flag must be set explicitly before the helper function is invoked. Signed-off-by: Maria Klimushenkova <maria.klimushenkova@ispras.ru> Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
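A hedged sketch of the icount pattern this implies around such helpers, using the APIs of that era (the exact placement inside QEMU's gen_* functions may differ; gen_helper_store_msr is used here as the example helper):

    /* With icount, an instruction that may touch I/O or interrupt state
     * must mark the start of an I/O region before the helper runs. */
    if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
        gen_io_start();
    }
    gen_helper_store_msr(cpu_env, cpu_gpr[rS(ctx->opcode)]);
    if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
        gen_io_end();
    }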
2018-11-08target/ppc: add external PID supportRoman Kapl
External PID is a mechanism present on BookE 2.06 that enables applications to store/load data from a different address space. There are special versions of some instructions, which operate on an alternate address space specified in the EPLC/EPSC registers. This implementation uses two additional MMU modes (mmu_idx) to provide the address space for the load and store instructions. The QEMU TLB fill code was modified to recognize these MMU modes and use the values in EPLC/EPSC to find the proper entry in the PPC TLB. These two QEMU TLBs are also flushed on each write to EPLC/EPSC. The following instructions are implemented: dcbfep dcbstep dcbtep dcbtstep dcbzep dcbzlep icbiep lbepx ldepx lfdepx lhepx lwepx stbepx stdepx stfdepx sthepx stwepx. The following vector instructions are not: evlddepx evstddepx lvepx lvepxl stvepx stvepxl. Signed-off-by: Roman Kapl <rka@sysgo.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-10-18target/ppc: Convert to HAVE_CMPXCHG128 and HAVE_ATOMIC128Richard Henderson
Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-08-21ppc: add DBCR based debuggingRoman Kapl
Add support for DBCR (debug control register) based debugging as used on BookE ppc. So far supports only branch and single-step events, but these are the important ones. GDB in Linux guest can now do single-stepping. Signed-off-by: Roman Kapl <rka@sysgo.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Relax reserved bitmask of indexed store instructionsBALATON Zoltan
The PPC440 User Manual says that if bit 31 is set, the contents of CR[CR0] are undefined for indexed store instructions but this form is not invalid. Other PPC variants conforming to a recent ISA, where this bit may be reserved, should ignore reserved bits and not raise an invalid instruction exception. In particular, MorphOS has an stwx instruction with bit 31 set and fails to boot currently because of this. With this patch it gets further. Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: set is_jmp on ppc_tr_breakpoint_checkEmilio G. Cota
The use of GDB breakpoints was broken by b0c2d52 ("target/ppc: convert to TranslatorOps", 2018-02-16). Fix it by setting is_jmp, so that we break from the translation loop as originally intended. Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
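A hedged sketch of the fix, assuming the TranslatorOps breakpoint-hook signature of that period (not a verbatim copy of the patch, which also adjusts pc_next for TB sizing):

    static bool ppc_tr_breakpoint_check(DisasContextBase *dcbase, CPUState *cs,
                                        const CPUBreakpoint *bp)
    {
        DisasContext *ctx = container_of(dcbase, DisasContext, base);

        gen_debug_exception(ctx);
        /* Tell the generic translator loop to stop here. */
        dcbase->is_jmp = DISAS_NORETURN;
        return true;
    }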
2018-07-03target/ppc: Implement the rest of gen_st_atomicRichard Henderson
The store twin case was stubbed out. For now, implement it only within a serial context, forcing parallel execution to synchronize. It would be possible to implement with a cmpxchg loop, if we care, but the loose alignment requirements (simply no crossing 32-byte boundary) might send us back to the serial context anyway. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Implement the rest of gen_ld_atomicRichard Henderson
These cases were stubbed out. For now, implement them only within a serial context, forcing parallel execution to synchronize. It would be possible to implement these with cmpxchg loops, if we care. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Use atomic min/max helpersRichard Henderson
These operations were previously unimplemented for ppc. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Use MO_ALIGN for EXIWX and ECOWXRichard Henderson
This avoids the need for gen_check_align entirely. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Split out gen_st_atomicRichard Henderson
Move the guts of ST_ATOMIC to a function. Use foo_tl for the operations instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an explicit call to gen_check_align. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Split out gen_ld_atomicRichard Henderson
Move the guts of LD_ATOMIC to a function. Use foo_tl for the operations instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an explicit call to gen_check_align. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Split out gen_load_lockedRichard Henderson
Leave only the minimal amount of code within the LDAR macro, moving the rest of the code into gen_load_locked. Use MO_ALIGN and remove the explicit call to gen_check_align. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
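A hedged sketch of the shape gen_load_locked takes after this change (simplified; the memop type and temp management follow the conventions of that era):

    static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
    {
        TCGv gpr = cpu_gpr[rD(ctx->opcode)];
        TCGv t0 = tcg_temp_new();

        gen_set_access_type(ctx, ACCESS_RES);
        gen_addr_reg_index(ctx, t0);
        /* MO_ALIGN replaces the explicit gen_check_align() call. */
        tcg_gen_qemu_ld_tl(gpr, t0, ctx->mem_idx, memop | MO_ALIGN);
        tcg_gen_mov_tl(cpu_reserve, t0);
        tcg_gen_mov_tl(cpu_reserve_val, gpr);
        tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
        tcg_temp_free(t0);
    }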
2018-07-03target/ppc: Tidy gen_conditional_storeRichard Henderson
Leave only the minimal amount of code within the STCX macro, moving the rest of the code into gen_conditional_store. Remove the explicit call to gen_check_align; the matching LDAX will have already checked alignment, and we verify the same address. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Remove POWERPC_EXCP_STCXRichard Henderson
Always use the gen_conditional_store implementation that uses atomic_cmpxchg. Make sure and clear reserve_addr across most interrupts crossing the cpu_loop. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
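A hedged sketch of the core of the cmpxchg-based gen_conditional_store (simplified from the surrounding register/CR0 handling; EA, t0, fail_label and memop are local names assumed from context, while cpu_reserve and cpu_reserve_val are the existing reservation globals):

    /* Only attempt the store if the address still matches the reservation;
     * the cmpxchg then succeeds only if memory still holds the value seen
     * by the matching load-and-reserve. */
    tcg_gen_brcond_tl(TCG_COND_NE, EA, cpu_reserve, fail_label);

    tcg_gen_atomic_cmpxchg_tl(t0, cpu_reserve, cpu_reserve_val,
                              cpu_gpr[rS(ctx->opcode)], ctx->mem_idx,
                              memop | MO_ALIGN);
    /* CR0.EQ is set when the old value matched, i.e. the store happened. */
    tcg_gen_setcond_tl(TCG_COND_EQ, t0, t0, cpu_reserve_val);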
2018-07-03target/ppc: Use atomic cmpxchg for STQCXRichard Henderson
When running in a parallel context, we must use a helper in order to perform the 128-bit atomic operation. When running in a serial context, do the compare before the store. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Use atomic store for STQRichard Henderson
Section 1.4 of the Power ISA v3.0B states that this insn is single-copy atomic. As we cannot (yet) issue 128-bit stores within TCG, use the generic helpers provided. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03target/ppc: Use atomic load for LQ and LQARXRichard Henderson
Section 1.4 of the Power ISA v3.0B states that both of these instructions are single-copy atomic. As we cannot (yet) issue 128-bit loads within TCG, use the generic helpers provided. Since TCG cannot (yet) return a 128-bit value, add a slot within CPUPPCState for returning the high half of a 128-bit return value. This solution is preferred to the helper assigning to architectural registers directly, as it avoids clobbering all TCG live values. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-06-21target/ppc: Add missing opcode for icbt on PPC440BALATON Zoltan
According to the PPC440 User Manual, the PPC440 has multiple opcodes for the icbt instruction: one for compatibility with older cores and two 440-specific opcodes, one of which is defined in BookE. QEMU only implements two of these, so add the missing one. Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-06-12target/ppc: extend eieio for POWER9Cédric Le Goater
POWER9 introduced a new variant of the eieio instruction using bit 6 as a hint to tell the CPU it is a store-forwarding barrier. The usage of this eieio extension was recently added in Linux 4.17 which activated the "support for a store forwarding barrier at kernel entry/exit". Unfortunately, it is not possible to insert this new eieio instruction without considerable change in ppc_tr_translate_insn(). So instead we loosen the QEMU eieio instruction mask and modify the gen_eieio() helper to test for bit 6. On non-POWER9 CPUs, bit 6 is just ignored but a warning is emitted as this is not an instruction software should be using. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
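A hedged sketch of the gen_eieio() logic described (bit 6 in IBM bit numbering corresponds to mask 0x02000000 of the 32-bit opcode; flag and constant names are assumed to match that era):

    static void gen_eieio(DisasContext *ctx)
    {
        TCGBar bar = TCG_MO_LD_ST;

        if (ctx->opcode & 0x02000000) {
            /* POWER9 store-forwarding-barrier form of eieio. */
            if (!(ctx->insns_flags2 & PPC2_ISA300)) {
                qemu_log_mask(LOG_GUEST_ERROR, "invalid eieio using bit 6 at "
                              TARGET_FMT_lx "\n", ctx->base.pc_next - 4);
            } else {
                bar = TCG_MO_ST_ST;
            }
        }

        tcg_gen_mb(bar | TCG_BAR_SC);
    }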
2018-06-12target/ppc: Use proper logging function for possible guest errorsThomas Huth
fprintf() and qemu_log_separate() are frowned upon these days for printing logging information in QEMU. Accessing the wrong SPRs indicates wrong guest behaviour in most cases, and we've got a proper way to log such situations, which is the qemu_log_mask(LOG_GUEST_ERROR, ...) function. So use this function now for logging the bad SPR accesses instead. Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Greg Kurz <groug@kaod.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
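For instance (the message text is illustrative and sprn/ctx are assumed local variables):

    qemu_log_mask(LOG_GUEST_ERROR, "Trying to read invalid spr %d (0x%03x) at "
                  TARGET_FMT_lx "\n", sprn, sprn, ctx->base.pc_next - 4);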
2018-06-01tcg: Pass tb and index to tcg_gen_exit_tb separatelyRichard Henderson
Do the cast to uintptr_t within the helper, so that the compiler can type check the pointer argument. We can also do some more sanity checking of the index argument. Reviewed-by: Laurent Vivier <laurent@vivier.eu> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
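Illustratively, for a caller such as the ppc goto-tb path (the exact call site is assumed):

    /* before: the caller had to build the token itself */
    tcg_gen_exit_tb((uintptr_t)ctx->base.tb + n);

    /* after: pass the tb and the index separately; the helper does the
     * cast and can sanity-check the index */
    tcg_gen_exit_tb(ctx->base.tb, n);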
2018-05-18target/ppc: Honor CPU_DUMP_FPURichard Henderson
Cc: Alexander Graf <agraf@suse.de> Cc: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-05-14Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into stagingPeter Maydell
* Don't silently truncate extremely long words in the command line * dtc configure fixes * MemoryRegionCache second try * Deprecated option removal * add support for Hyper-V reenlightenment MSRs # gpg: Signature made Fri 11 May 2018 13:33:46 BST # gpg: using RSA key BFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * remotes/bonzini/tags/for-upstream: (29 commits) rename included C files to foo.inc.c, remove osdep.h pc-dimm: fix error messages if no slots were defined build: Silence dtc directory creation shippable: Remove Debian 8 libfdt kludge configure: Display if libfdt is from system or git configure: Really use local libfdt if the system one is too old i386/kvm: add support for Hyper-V reenlightenment MSRs qemu-doc: provide details of supported build platforms qemu-options: Remove deprecated -no-kvm-irqchip qemu-options: Remove deprecated -no-kvm-pit-reinjection qemu-options: Bail out on unsupported options instead of silently ignoring them qemu-options: Remove remainders of the -tdf option qemu-options: Mark -virtioconsole as deprecated target/i386: sev: fix memory leaks opts: don't silently truncate long option values opts: don't silently truncate long parameter keys accel: use g_strsplit for parsing accelerator names update-linux-headers: drop hyperv.h qemu-thread: always keep the posix wrapper layer exec: reintroduce MemoryRegion caching ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-05-11rename included C files to foo.inc.c, remove osdep.hPaolo Bonzini
osdep.h is only needed for files that are compiled directly. Remove it from included C source files, and rename them to *.inc.c so that scripts/clean-includes knows to skip them. Cc: Eric Blake <eblake@redhat.com> Cc: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-05-09translator: merge max_insns into DisasContextBaseEmilio G. Cota
While at it, use int for both num_insns and max_insns to make sure we have same-type comparisons. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Michael Clark <mjc@sifive.com> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-05-04target/ppc: add basic support for PTCR on POWER9Cédric Le Goater
The Partition Table Control Register (PTCR) is a hypervisor privileged SPR. It contains the host real address of the Partition Table and its size. Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-04-27target/ppc: Get rid of POWERPC_MMU_VER() macrosDavid Gibson
These macros were introduced to deal with the fact that the mmu_model field has bit flags mixed in with what's otherwise an enum of various mmu types. We've now eliminated all those flags except for one, and that one - POWERPC_MMU_64 - is already included/compared in the MMU_VER macros. So, we can get rid of those macros and just directly compare mmu_model values in the places it was used. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org>
2018-04-27target/ppc: Fix reserved bit mask of dstst instructionBALATON Zoltan
According to the Vector/SIMD extension documentation bit 6 that is currently masked is valid (listed as transient bit) but bits 7 and 8 should be reserved instead. Fix the mask to match this. Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-04-10target/ppc: Initialize lazy_tlb_flush correctlyDavid Gibson
ppc_tr_init_disas_context() correctly sets lazy_tlb_flush to true on certain CPU models. However, it leaves it uninitialized, instead of setting it to false on all others. It wasn't caught before now because we didn't have examples in the tests that exercised this path. However it can now be caught using clang's undefined behaviour sanitizer and the sam460ex board. Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Greg Kurz <groug@kaod.org>
2018-03-18target/ppc: fix tlbsync to check privilege level depending on GTSECédric Le Goater
tlbsync also needs to check the Guest Translation Shootdown Enable (GTSE) bit in the Logical Partition Control Register (LPCR) to determine at which privilege level it is running. See commit c6fd28fd573d ("target/ppc: Update tlbie to check privilege level based on GTSE") Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-02-16target/ppc: convert to TranslatorOpsEmilio G. Cota
A few changes worth noting: - Didn't migrate ctx->exception to DISAS_* since the exception field is in many cases architecturally relevant. - Moved the cross-page check from the end of translate_insn to tb_start. - Removed the exit(1) after a TCG temp leak; changed the fprintf there to qemu_log. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-02-16target/ppc: convert to DisasContextBaseEmilio G. Cota
A couple of notes: - removed ctx->nip in favour of base->pc_next. Yes, it is annoying, but didn't want to waste its 4 bytes. - ctx->singlestep_enabled does a lot more than base.singlestep_enabled; this confused me at first. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-01-20target/ppc: add support for hypervisor doorbells on book3s CPUsCédric Le Goater
The hypervisor doorbells are used by skiboot and Linux on POWER9 processors to wake up secondaries. This adds processor control support to the Server architecture by reusing the Embedded support. They are very similar, only the bits definition of the CPU identifier differ. Still to be done is message broadcast to all threads of the same processor. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-01-20target-ppc: optimize cmp translationpbonzini@redhat.com
We know that only one bit (in addition to SO) is going to be set in the condition register, so do two movconds instead of three setconds, three shifts and two ORs. For ppc64-linux-user, the code size reduction is around 5% and the performance improvement slightly less than 10%. For softmmu, the improvement is around 5%. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>