path: root/target-ppc/translate.c
Age  Commit message  Author
2013-07-23  cpu: Move singlestep_enabled field from CPU_COMMON to CPUState  (Andreas Färber)
Prepares for changing cpu_single_step() argument to CPUState.
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09  target-ppc: Change gen_intermediate_code_internal() argument to PowerPCCPU  (Andreas Färber)
Also use bool type while at it. Prepares for moving singlestep_enabled field to CPUState.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28  cpu: Turn cpu_dump_{state,statistics}() into CPUState hooks  (Andreas Färber)
Make cpustats monitor command available unconditionally. Prepares for changing kvm_handle_internal_error() and kvm_cpu_exec() arguments to CPUState.
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28  kvm: Change cpu_synchronize_state() argument to CPUState  (Andreas Färber)
Change Monitor::mon_cpu to CPUState as well.
Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-08  PPC: Depend behavior of cmp instructions only on instruction encoding  (Alexander Graf)
When running an L=1 cmp instruction on a 64-bit PPC CPU with SF off, it still behaves identically to what it does when SF is on. Remove the implicit difference in the code. Also, on most 32-bit CPUs we should always treat the compare as a 32-bit compare, as the CPU will ignore the L bit. This is not true for e500mc, but that's up for a different patch.
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
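
In other words, the comparison width is determined solely by the L bit of the encoding, not by MSR[SF]. A minimal C sketch of that semantics (illustrative only, not the QEMU TCG code; the CR bit values LT=8, GT=4, EQ=2 follow the ISA):

    #include <stdint.h>

    /* Returns the LT/GT/EQ bits of the CR field for a signed cmp. */
    static int cmp_crf(uint64_t ra, uint64_t rb, int l_bit)
    {
        if (l_bit) {
            /* L=1: 64-bit signed comparison, regardless of MSR[SF] */
            int64_t a = (int64_t)ra, b = (int64_t)rb;
            return a < b ? 0x8 : a > b ? 0x4 : 0x2;
        } else {
            /* L=0: 32-bit signed comparison on the low words */
            int32_t a = (int32_t)ra, b = (int32_t)rb;
            return a < b ? 0x8 : a > b ? 0x4 : 0x2;
        }
    }
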
2013-05-08  PPC: Fix rldcl  (Alexander Graf)
The implementation for rldcl tried to always fetch its parameters from the opcode, even though the opcode was already passed in, decoded, in different forms. Use the parameters instead, fixing rldcl.
Reported-by: Torbjorn Granlund <tg@gmplib.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
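
For reference, rldcl (rotate left doubleword then clear left) rotates RS left by the low six bits of RB and clears the top MB bits. A hedged C sketch of the architectural semantics (not the translator code):

    #include <stdint.h>

    static uint64_t rldcl(uint64_t rs, uint64_t rb, unsigned mb)  /* mb in 0..63 */
    {
        unsigned n = rb & 63;                               /* rotate amount from RB */
        uint64_t r = (rs << n) | (n ? rs >> (64 - n) : 0);  /* ROTL64(rs, n)         */
        uint64_t mask = ~0ULL >> mb;                        /* clear the mb high bits */
        return r & mask;
    }
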
2013-05-06  target-ppc: Fix invalid SPR read/write warnings  (Anton Blanchard)
Invalid and privileged SPR warnings currently print the wrong address. While fixing that, also make it clear that we are printing both the decimal and hexadecimal SPR number.

Before:
    Trying to read invalid spr 896 380 at 0000000000000714
After:
    Trying to read invalid spr 896 (0x380) at 0000000000000710

Signed-off-by: Anton Blanchard <anton@au1.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-27  target-ppc: slightly optimize lfiwax  (Aurelien Jarno)
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2013-04-26  target-ppc: add support for extended mtfsf/mtfsfi forms  (Aurelien Jarno)
Power ISA 2.05 adds support for the extended mtfsf/mtfsfi forms, with a new W field to select the upper part of the FPSCR register. For that the helper is changed to handle 64-bit input values and a mask of up to 16 bits. The mtfsf/mtfsfi instructions no longer have the W bit marked as invalid; instead this is checked in the helper, which therefore needs access to insns/insns_flags2. These are added to the DisasContext struct. Finally, change all accesses to the opcode fields to go through extract helpers, prefixed with FP for consistency.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26  target-ppc: emulate store doubleword pair instructions  (Aurelien Jarno)
Needed for Power ISA version 2.05 compliance. The check for odd register pairs is done using the invalid bits.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26  target-ppc: emulate load doubleword pair instructions  (Aurelien Jarno)
Needed for Power ISA version 2.05 compliance. The check for odd register pairs is done using the invalid bits.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26  target-ppc: emulate lfiwax instruction  (Aurelien Jarno)
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: fix tcg debug error]
Signed-off-by: Alexander Graf <agraf@suse.de>
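
lfiwax (load floating-point as integer word algebraic indexed) fetches a 32-bit word, sign-extends it to 64 bits and places the raw bits in the target FPR with no FP conversion. A hedged sketch of that final step (illustrative, not the QEMU helper; the memory access is assumed to have happened already):

    #include <stdint.h>

    static void do_lfiwax(uint64_t *fpr_t, uint32_t mem_word)
    {
        /* sign-extend the 32-bit word and store the raw bits in the FPR */
        *fpr_t = (uint64_t)(int64_t)(int32_t)mem_word;
    }
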
2013-04-26  target-ppc: emulate fcpsgn instruction  (Aurelien Jarno)
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
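
fcpsgn combines the sign of one operand with the magnitude of the other, a pure bit operation on the 64-bit double encoding. An illustrative sketch (not the QEMU implementation):

    #include <stdint.h>

    /* result takes the sign bit of a and the magnitude bits of b */
    static uint64_t fcpsgn(uint64_t a, uint64_t b)
    {
        return (a & 0x8000000000000000ULL) | (b & 0x7fffffffffffffffULL);
    }
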
2013-04-26  target-ppc: emulate prtyw and prtyd instructions  (Aurelien Jarno)
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[agraf: fix 32-bit host compile, simplify code]
Signed-off-by: Alexander Graf <agraf@suse.de>
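
prtyd computes the parity of the least-significant bit of each byte of the source register; prtyw does the same per 32-bit word. A hedged C sketch of the doubleword variant (illustrative, not the QEMU helper):

    #include <stdint.h>

    static uint64_t prtyd(uint64_t rs)
    {
        uint64_t p = 0;
        for (int i = 0; i < 64; i += 8) {
            p ^= (rs >> i) & 1;   /* XOR the low bit of every byte */
        }
        return p;                 /* parity in bit 0, all other bits clear */
    }
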
2013-04-26  target-ppc: emulate cmpb instruction  (Aurelien Jarno)
Needed for Power ISA version 2.05 compliance.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
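
cmpb compares the two source registers byte by byte and writes 0xFF into each result byte that matches and 0x00 into each that does not. An illustrative sketch of the semantics (not the QEMU helper):

    #include <stdint.h>

    static uint64_t cmpb(uint64_t rs, uint64_t rb)
    {
        uint64_t ra = 0;
        for (int i = 0; i < 64; i += 8) {
            if (((rs >> i) & 0xff) == ((rb >> i) & 0xff)) {
                ra |= 0xffULL << i;   /* equal bytes yield 0xFF */
            }
        }
        return ra;
    }
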
2013-04-26  target-ppc: optimize fabs, fnabs, fneg  (Aurelien Jarno)
fabs, fnabs and fneg just change the sign bit of an FP register, so this can be implemented in TCG instead of using softfloat.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
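
Concretely, on the raw 64-bit double encoding these are single bit operations; a hedged sketch (illustrative, not the generated TCG ops):

    #include <stdint.h>

    #define F_SIGN 0x8000000000000000ULL

    static uint64_t do_fabs(uint64_t frb)  { return frb & ~F_SIGN; }  /* clear sign */
    static uint64_t do_fnabs(uint64_t frb) { return frb |  F_SIGN; }  /* set sign   */
    static uint64_t do_fneg(uint64_t frb)  { return frb ^  F_SIGN; }  /* flip sign  */
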
2013-04-26  target-ppc: Fix narrow-mode add/sub carry output  (Richard Henderson)
Broken in b5a73f8d8a57e940f9bbeb399a9e47897522ee9a, the carry itself was fixed in 79482e5ab38a05ca8869040b0d8b8f451f16ff62. But we still need to produce the full 64-bit addition. Simplify the conditions at the top of the functions for when we need a new temporary. Only plain addition is important enough to warrant avoiding the temporary, and the extra tcg move op that would come with it.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-04-26  target-ppc: fix nego and subf*o instructions  (Aurelien Jarno)
The overflow computation of the nego and subf*o instructions has been broken since commit ffe30937. Contrary to other targets, the instruction is a subtract-from, not a subtract, on PowerPC. This patch fixes the issue by using the correct argument in the xor computation. Thanks to Peter Maydell for the hint. With this change the PPC emulation passes the Gwenole Beauchesne testsuite again.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
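
For subf-style instructions the result is rd = rb - ra (subtract-from), so the operand that enters the sign comparison matters. A general sketch of signed-overflow detection for that form (illustrative; not the exact TCG sequence the patch changes):

    #include <stdint.h>

    /* Overflow for rd = rb - ra: the operands have different signs and the
     * result's sign differs from rb's. */
    static int subf_overflow(uint64_t ra, uint64_t rb)
    {
        uint64_t rd = rb - ra;
        return (int)(((ra ^ rb) & (rd ^ rb)) >> 63);
    }
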
2013-03-22  target-ppc: Use NARROW_MODE macro for tlbie  (Richard Henderson)
Removing conditional compilation in the process.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Use NARROW_MODE macro for addresses  (Richard Henderson)
Removing conditional compilation in the process.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Use NARROW_MODE macro for comparisons  (Richard Henderson)
Removing conditional compilation in the process.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Use NARROW_MODE macro for branches  (Richard Henderson)
Removing conditional compilation in the process.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Fix add and subf carry generation in narrow mode  (Richard Henderson)
The set of computations used in b5a73f8d8a57e940f9bbeb399a9e47897522ee9a is only valid if the current word size == target_long size. This failed to take ppc64 in 32-bit (narrow) mode into account. Add a NARROW_MODE macro to avoid conditional compilation.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
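
The underlying issue: in narrow (32-bit) mode the carry must come out of bit 32 even though the registers are 64 bits wide. A hedged C sketch of the idea (names are illustrative, not the NARROW_MODE macro itself):

    #include <stdint.h>

    static int add_carry(uint64_t a, uint64_t b, int narrow_mode)
    {
        if (narrow_mode) {
            /* carry out of the low 32 bits */
            uint64_t sum = (uint64_t)(uint32_t)a + (uint32_t)b;
            return (int)(sum >> 32);
        } else {
            /* carry out of the full 64-bit addition */
            return a + b < a;
        }
    }
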
2013-03-22  target-ppc: Remove vestigial PowerPC 620 support  (David Gibson)
The PowerPC 620 was the very first 64-bit PowerPC implementation, but hardly anyone ever actually used the chips. qemu notionally supports the 620, but since we don't actually have code to implement the segment table, the support is broken (quite likely in other ways too). This patch, therefore, removes all remaining pieces of 620 support, to stop it cluttering up the platforms we actually care about. This includes removing support for the ASR register, used only on segment table based machines.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-12  cpu: Move halted and interrupt_request fields to CPUState  (Andreas Färber)
Both fields are used in VMState, thus need to be moved together. Explicitly zero them on reset since they were located before breakpoints. Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-03  gen-icount.h: Rename gen_icount_start/end to gen_tb_start/end  (Peter Maydell)
The gen_icount_start/end functions are now somewhat misnamed since they are useful for generic "start/end of TB" code, used for more than just icount. Rename them to gen_tb_start/end.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-25  target-ppc: Fix SUBFE carry  (Richard Henderson)
While ~T0+T1+CF = T1-T0+CF-1 is true for the low 32 bits, it does not produce the correct carry-out to bit 33. Do exactly what the manual says.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
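
Following the ISA definition, subfe computes rd = ~ra + rb + CA and sets CA from the carry out of that full-width sum. A sketch for the 32-bit case, computed wide enough to observe the carry (illustrative, not the TCG code):

    #include <stdint.h>

    static uint32_t subfe32(uint32_t ra, uint32_t rb, int ca, int *ca_out)
    {
        uint64_t sum = (uint64_t)(uint32_t)~ra + rb + (unsigned)ca;
        *ca_out = (int)(sum >> 32);   /* carry out of bit 32 */
        return (uint32_t)sum;
    }
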
2013-02-23  target-ppc: Compute mullwo without branches  (Richard Henderson)
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
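
mullwo is a 32-bit multiply that must also report signed overflow; a branch-free way to get both is to form the full 64-bit product. A hedged sketch (illustrative, not the TCG sequence used by the commit):

    #include <stdint.h>

    static int32_t mullwo(int32_t a, int32_t b, int *ov)
    {
        int64_t prod = (int64_t)a * b;
        /* overflow iff the product does not fit in 32 signed bits */
        *ov = prod != (int64_t)(int32_t)prod;
        return (int32_t)prod;
    }
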
2013-02-23  target-ppc: Compute arithmetic shift carry without branches  (Richard Henderson)
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
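
For srawi, CA is set when a negative source loses 1 bits off the low end, which can be expressed without a branch. A hedged sketch for the immediate form, sh in 0..31 (illustrative, not the TCG code):

    #include <stdint.h>

    static int srawi_carry(int32_t rs, unsigned sh)
    {
        uint32_t lost = (uint32_t)rs & ((1u << sh) - 1);  /* bits shifted out */
        return (rs < 0) & (lost != 0);                    /* CA set iff both  */
    }
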
2013-02-23  target-ppc: Implement neg in terms of subf  (Richard Henderson)
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23  target-ppc: Use add2 for carry generation  (Richard Henderson)
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23  target-ppc: Compute addition carry with setcond  (Richard Henderson)
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23  target-ppc: Compute addition overflow without branches  (Richard Henderson)
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
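
Signed-add overflow can be computed without branches from the operand and result signs: it occurs exactly when both operands share a sign and the result's sign differs. A hedged sketch (illustrative, not the TCG ops emitted):

    #include <stdint.h>

    static int add_overflow(int64_t a, int64_t b)
    {
        int64_t r = (int64_t)((uint64_t)a + (uint64_t)b);
        /* same operand signs, different result sign => overflow */
        return (int)((uint64_t)(~(a ^ b) & (a ^ r)) >> 63);
    }
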
2013-02-23  target-ppc: Use setcond in gen_op_cmp  (Richard Henderson)
Which means that callers need not copy data into local tmps.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-23  target-ppc: Split out SO, OV, CA fields from XER  (Richard Henderson)
In preparation for more efficient setting of these fields.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
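
Keeping SO, OV and CA in separate variables means the full XER only has to be assembled when software reads it. A hedged sketch of that recomposition, using the conventional 32-bit XER bit positions (SO = bit 31, OV = bit 30, CA = bit 29); names are illustrative, not QEMU's:

    #include <stdint.h>

    #define XER_SO (1u << 31)
    #define XER_OV (1u << 30)
    #define XER_CA (1u << 29)

    static uint32_t read_xer(uint32_t xer_rest, int so, int ov, int ca)
    {
        return (xer_rest & ~(XER_SO | XER_OV | XER_CA))
             | (so ? XER_SO : 0)
             | (ov ? XER_OV : 0)
             | (ca ? XER_CA : 0);
    }
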
2013-02-23  target-ppc: Use mul*2 in mulh* insns  (Richard Henderson)
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-01  target-ppc: Fix build for PPC_DEBUG_DISAS  (Andreas Färber)
In r5949 / 76db3ba44ee8db671f804755f13b016eefd13288 (target-ppc: memory load/store rework) variable little_endian was replaced with ctx.le_mode. Update the debug code.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-02-01  PPC: Unify dcbzl code path  (Alexander Graf)
The bit that makes a dcbz instruction a dcbzl instruction was declared as reserved in ppc32 ISAs. However, hardware simply ignores the bit, making code valid if it simply invokes dcbzl instead of dcbz even on 750 and G4. Thus, mark the bit as unreserved so that we properly emulate a simple dcbz in case we're running on non-G5s. While at it, also refactor the code to check the 970 special case at runtime. This way we don't need to differentiate between a 970 dcbz and any other dcbz anymore. We also allow for future improvements to add e500mc dcbz handling.
Reported-by: Amadeusz Sławiński <amade@asmblr.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2012-12-19  misc: move include files to include/qemu/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19  exec: move include files to include/exec/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19  build: kill libdis, move disassemblers to disas/  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-08  TCG: Use gen_opc_instr_start from context instead of global variable.  (Evgeny Voevodin)
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-08  TCG: Use gen_opc_icount from context instead of global variable.  (Evgeny Voevodin)
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-12-08  TCG: Use gen_opc_pc from context instead of global variable.  (Evgeny Voevodin)
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-26  PPC: Fix missing TRACE exception  (Julio Guerra)
This patch fixes bug 1031698: https://bugs.launchpad.net/qemu/+bug/1031698

If we look at the (truncated) translation of the conditional branch instruction in the test submitted in the bug post, the call to the exception helper is missing in the "bne-false" chunk of translated code:

IN:
bne-   0x1800278

OUT:
0xb544236d:  jne    0xb5442396
0xb5442373:  mov    %ebp,(%esp)
0xb5442376:  mov    $0x44,%ebx
0xb544237b:  mov    %ebx,0x4(%esp)
0xb544237f:  mov    $0x1800278,%ebx
0xb5442384:  mov    %ebx,0x25c(%ebp)
0xb544238a:  call   0x827475a
             ^^^^^^^^^^^^^^^^^^
0xb5442396:  mov    %ebp,(%esp)
0xb5442399:  mov    $0x44,%ebx
0xb544239e:  mov    %ebx,0x4(%esp)
0xb54423a2:  mov    $0x1800270,%ebx
0xb54423a7:  mov    %ebx,0x25c(%ebp)

Indeed, gen_exception(ctx, excp), called by gen_goto_tb (called by gen_bcond), changes ctx->exception's value to excp's:

gen_bcond() {
    gen_goto_tb(ctx, 0, ctx->nip + li - 4); /* ctx->exception value is POWERPC_EXCP_BRANCH */
    gen_goto_tb(ctx, 1, ctx->nip);          /* ctx->exception value is now POWERPC_EXCP_TRACE */
}

Making the following test in gen_goto_tb() false during the second call:

if ((ctx->singlestep_enabled & (CPU_BRANCH_STEP | CPU_SINGLE_STEP)) &&
    ctx->exception == POWERPC_EXCP_BRANCH /* false... */) {
    target_ulong tmp = ctx->nip;
    ctx->nip = dest;
    /* ... and this is the missing call */
    gen_exception(ctx, POWERPC_EXCP_TRACE);
    ctx->nip = tmp;
}

So the patch simply adds the missing matching case, fixing our problem.

Signed-off-by: Julio Guerra <guerr@julio.in>
Signed-off-by: Alexander Graf <agraf@suse.de>
2012-11-17  TCG: Use gen_opc_buf from context instead of global variable.  (Evgeny Voevodin)
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-17  TCG: Use gen_opc_ptr from context instead of global variable.  (Evgeny Voevodin)
Signed-off-by: Evgeny Voevodin <e.voevodin@samsung.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-10  disas: avoid using cpu_single_env  (Blue Swirl)
Pass around CPUArchState instead of using global cpu_single_env.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Acked-by: Aurelien Jarno <aurelien@aurel32.net>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
2012-11-01  target-ppc: Extend FPU state for newer POWER CPUs  (David Gibson)
This patch adds some extra FPU state to CPUPPCState. Specifically, fpscr is extended to target_ulong width, since some recent (64-bit) CPUs now have more status bits than fit inside 32 bits. Also, we add the 32 VSR registers present on CPUs with VSX (these extend the standard FP regs, which together with the Altivec/VMX registers form a 64 x 128-bit register file for VSX). We don't actually support the instructions using these extra registers in TCG yet, but we still need a place to store the state so we can sync it with KVM and savevm/loadvm it. This patch updates the savevm code to not fail on the extended state, but also does not actually save it - that's a project for another patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
2012-09-27  Emit debug_insn for CPU_LOG_TB_OP_OPT as well.  (Richard Henderson)
For all targets that currently call tcg_gen_debug_insn_start, add CPU_LOG_TB_OP_OPT to the condition that gates it. This is useful for comparing optimization dumps, when the pre-optimization dump is merely noise.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>