path: root/tcg
Date  Commit message  Author
2014-03-27  tcg-arm: Avoid ldrd/strd for user-only emulation  (Richard Henderson)
The arm ldrd/strd insns must cause alignment traps, whereas, at least for armv7, ldr/str must handle unaligned operations. While this is hardly the only problem facing user-only emulation, this solves one problem for i386 on armv7 emulation. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reported-by: Huw Davies <huw@codeweavers.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Convert to new ldst opcodes  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Convert to new ldst helpers  (Richard Henderson)
All of the helpers with the explicit big/little endian option require the return address as a parameter. Acquire this via a trampoline. Move the load of areg0 into the trampoline. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Tidy tcg_out_tlb_load interface  (Richard Henderson)
Pass address registers explicitly, rather than as indices of args[]. It's two argument registers either way. Use more TCGReg as appropriate. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Use TCGMemOp within qemu_ldst routines  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Improve tcg_out_movi  (Richard Henderson)
If bits 31:13 are zero, reduce the insn count by one. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Don't handle constant arguments to ext32 ops  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Don't handle remainder  (Richard Henderson)
The generic fallback is exactly what we implemented. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Use intptr_t as appropriate  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Tidy call+jump patterns  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Fix tlb read  (Richard Henderson)
We were computing the full address into %o0 and then not using it. Adjust some of the computation to rely less on having to pull immediate values into registers. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Fix ld64 for 32-bit mode  (Richard Henderson)
Since we're not using an annulled branch, we need to put a nop in the delay slot. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-14  tcg-aarch64: Introduce tcg_out_insn_3405  (Richard Henderson)
Cleaning up the implementation of tcg_out_movi at the same time. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support div, rem  (Richard Henderson)
Clean up multiply at the same time. For remainder, generic code will produce mul+sub, whereas we can implement with msub. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
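A minimal C sketch (illustrative, not the patch itself) of the identity msub exploits: the AArch64 msub instruction computes r = a - q * b, so with q = a / b from sdiv/udiv the remainder costs only one extra instruction.

    #include <stdint.h>

    /* Illustrative only: the remainder identity behind sdiv + msub. */
    static int64_t rem_via_msub(int64_t a, int64_t b)
    {
        int64_t q = a / b;    /* sdiv q, a, b */
        return a - q * b;     /* msub r, q, b, a  ->  r = a - q * b */
    }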
2014-03-14  tcg-aarch64: Support muluh, mulsh  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support add2, sub2  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support deposit  (Richard Henderson)
Also tidy the implementation of ubfm, sbfm, extr in order to share code. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Use tcg_out_insn for setcond  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support movcond  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support andc, orc, eqv, not, neg  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Handle constant operands to and, or, xor  (Richard Henderson)
Handle a simplified set of logical immediates for the moment. The way gcc and binutils do it, with 52k worth of tables, and a binary search depth of log2(5334) = 13, seems slow for the most common cases. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
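A hedged sketch of the kind of simplified test meant here, not the actual backend code: accept only values whose set bits form one contiguous run (patterns such as 0...01...1 or 0..01..10..0), and leave everything else to a non-immediate path. The function name is illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative: accept a single contiguous run of ones.  All-zeros
     * and all-ones are not encodable as AArch64 logical immediates. */
    static bool is_simple_logical_imm(uint64_t val)
    {
        if (val == 0 || ~val == 0) {
            return false;
        }
        val += val & -val;              /* carry across the low run of ones */
        return (val & (val - 1)) == 0;  /* nothing else may remain set */
    }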
2014-03-14  tcg-aarch64: Handle constant operands to add, sub, and compare  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Implement mov with tcg_out_insn  (Richard Henderson)
Avoid the magic numbers in the current implementation. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Introduce tcg_out_insn_3401  (Richard Henderson)
This merges the implementation of tcg_out_addi and tcg_out_subi. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Convert shift insns to tcg_out_insn  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Introduce tcg_out_insn  (Richard Henderson)
Converting the add/sub (3.5.2) and logical shifted (3.5.10) instruction groups to the new scheme. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-08  tcg-aarch64: Remove nop from qemu_st slow path  (Richard Henderson)
Commit 023261ef851b22a04f6c5d76da870051031757a6 failed to remove a nop that's no longer required. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Simplify tcg_out_ldst_9 encoding  (Richard Henderson)
At first glance the code appears to be using 1's complement encoding, a la AArch32. Except that the constant is "off", creating a complicated split-field 2's complement encoding. Much clearer to just use a normal mask and shift. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
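A sketch of the plainer form described, assuming the standard AArch64 layout in which the 9-bit signed offset of the unscaled load/store encodings sits in bits [20:12]; the helper name is illustrative.

    #include <stdint.h>

    /* Illustrative: place a range-checked signed 9-bit offset into
     * bits [20:12] of the instruction word with a mask and a shift. */
    static uint32_t set_imm9(uint32_t insn, int32_t offset)
    {
        return insn | (((uint32_t)offset & 0x1ff) << 12);
    }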
2014-03-08  tcg-aarch64: Use intptr_t appropriately  (Richard Henderson)
As opposed to tcg_target_long. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Remove the shift_imm parameter from tcg_out_cmp  (Richard Henderson)
It was unused. Let's not overcomplicate things before we need them. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Hoist common argument loads in tcg_out_op  (Richard Henderson)
This reduces the code size of the function significantly. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Don't handle mov/movi in tcg_out_op  (Richard Henderson)
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Set ext based on TCG_OPF_64BIT  (Richard Henderson)
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Change all ext variables to TCGType  (Richard Henderson)
We assert that the values for _I32 and _I64 are 0 and 1 respectively. This will make a couple of functions declared by tcg.c cleaner. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Remove redundant CPU_TLB_ENTRY_BITS check  (Richard Henderson)
Removed from other targets in 56bbc2f967ce185fa1c5c39e1aeb5b68b26242e9. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-02  tcg: Fix typo in comment (dependancies -> dependencies)  (Stefan Weil)
Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-02-21  tcg/i386: Fix build for systems without working cpuid.h (MacOSX, Win32)  (Peter Maydell)
Win32 doesn't have a cpuid.h, and MacOSX may have one but without the __cpuid() function we use, which means that commit 9d2eec20 broke the build for those platforms. Fix this by tightening up our configure cpuid.h check to test that the functions we need are present, and adding some missing #ifdef guards in tcg/i386/tcg-target.c. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
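A hedged sketch of the sort of compile probe described: configure builds a snippet that actually calls the functions, so a cpuid.h lacking them fails the test instead of breaking the build later. The leaf numbers are illustrative.

    /* Illustrative probe: only compiles if cpuid.h provides the
     * __cpuid() and __cpuid_count() functions that are used. */
    #include <cpuid.h>

    int main(void)
    {
        unsigned a, b, c, d;
        __cpuid(0, a, b, c, d);
        __cpuid_count(7, 0, a, b, c, d);
        return (int)(a + b + c + d);
    }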
2014-02-17  tcg/i386: Use SHLX/SHRX/SARX instructions  (Richard Henderson)
These three-operand shift instructions do not require the shift count to be placed into ECX. This reduces the number of mov insns required, with the mere addition of a new register constraint. Don't attempt to get rid of the matching constraint, as that's impossible to manipulate with just a new constraint. In addition, constant shifts still need the matching constraint. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/i386: Use ANDN instruction  (Richard Henderson)
Note that the optimizer cannot simplify ANDC X,Y,C to AND X,Y,~C so we must handle constants in the implementation of andc. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
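For reference, andc computes dest = a1 & ~a2, so when a2 is a constant c the backend can emit a plain AND with ~c itself. A trivial illustrative model, not the patch:

    #include <stdint.h>

    /* Illustrative: the value the constant fallback must produce. */
    static uint64_t andc_with_const(uint64_t a1, uint64_t c)
    {
        return a1 & ~c;   /* same result ANDN gives with c in a register */
    }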
2014-02-17  tcg/i386: Add tcg_out_vex_modrm  (Richard Henderson)
Prepare for emitting BMI insns which require VEX encoding. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/i386: Move TCG_CT_CONST_* to tcg-target.c  (Richard Henderson)
These are not needed by users of tcg-target.h. No need to recompile when we adjust them. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: Add more identity simplifications  (Richard Henderson)
Recognize 0 operand to andc, and -1 operands to and, orc, eqv. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
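An illustrative model of why those constants make the ops identities (operation definitions only, not optimizer code):

    #include <stdint.h>

    static uint64_t op_and (uint64_t x, uint64_t y) { return x & y; }    /* and(x, -1)  == x */
    static uint64_t op_andc(uint64_t x, uint64_t y) { return x & ~y; }   /* andc(x, 0)  == x */
    static uint64_t op_orc (uint64_t x, uint64_t y) { return x | ~y; }   /* orc(x, -1)  == x */
    static uint64_t op_eqv (uint64_t x, uint64_t y) { return ~(x ^ y); } /* eqv(x, -1)  == x */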
2014-02-17  tcg/optimize: Optimize ANDC X,Y,Y to MOV X,0  (Richard Henderson)
Like we already do for SUB and XOR. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: Simplify some logical ops to NOT  (Richard Henderson)
Given, of course, an appropriate constant. These could be generated from the "canonical" operation for inversion on the guest, or via other optimizations. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
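Illustrative examples of such constants (whether the optimizer matches every one of these forms is not stated here); each expression below evaluates to ~x:

    #include <stdint.h>

    static uint64_t via_xor (uint64_t x) { return x ^ ~(uint64_t)0; }    /* xor(x, -1)  */
    static uint64_t via_eqv (uint64_t x) { return ~(x ^ 0); }            /* eqv(x, 0)   */
    static uint64_t via_nand(uint64_t x) { return ~(x & ~(uint64_t)0); } /* nand(x, -1) */
    static uint64_t via_nor (uint64_t x) { return ~(x | 0); }            /* nor(x, 0)   */
    static uint64_t via_andc(uint64_t x) { return ~(uint64_t)0 & ~x; }   /* andc(-1, x) */
    static uint64_t via_orc (uint64_t x) { return (uint64_t)0 | ~x; }    /* orc(0, x)   */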
2014-02-17  tcg/optimize: Handle known-zeros masks for ANDC  (Richard Henderson)
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: add known-zero bits compute for load ops  (Aurelien Jarno)
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: improve known-zero bits for 32-bit ops  (Aurelien Jarno)
The shl_i32 op might set some of the unused high 32 bits of the mask. Fix that by clearing the unused high 32 bits for all 32-bit ops except load/store, which operate on tl values. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
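A minimal sketch of the rule, with an illustrative helper name: for a 32-bit op the high half of the known-bits mask is meaningless, so it is cleared before being recorded.

    #include <stdint.h>

    /* Illustrative: drop mask bits above bit 31 for 32-bit ops, so that
     * ops such as shl_i32 cannot leave stale bits in the high half. */
    static uint64_t canonicalize_mask_i32(uint64_t mask)
    {
        return mask & 0xffffffffu;
    }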
2014-02-17  tcg/optimize: fix known-zero bits optimization  (Aurelien Jarno)
Known-zero bits optimization is a great idea that helps to generate more optimized code. However, the current implementation only works in very few cases, as the computed mask is not saved. Fix this so that it actually works. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
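A toy model of the bug and the fix, not the real QEMU structures (names are illustrative): the mask computed for an op's result must be written back to the destination temp's bookkeeping, otherwise every later op still sees a fully unknown value.

    #include <stdint.h>

    /* Toy model: per-temp known-bits bookkeeping. */
    struct temp_info {
        uint64_t mask;   /* bits that may still be non-zero */
    };

    static void record_result_mask(struct temp_info *temps, int dest,
                                   uint64_t mask)
    {
        temps[dest].mask = mask;   /* the store that was previously missing */
    }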
2014-02-17  tcg/optimize: fix known-zero bits for right shift ops  (Aurelien Jarno)
32-bit versions of the sar and shr ops should not propagate known-zero bits from the unused high 32 bits. For sar it could even lead to wrong code being generated. Cc: qemu-stable@nongnu.org Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
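A hedged sketch of the corrected behaviour for the 32-bit forms, using the same "bits that may be set" convention as above (helper names illustrative): the shift is computed on the low 32 bits only, and for sar a possibly-set sign bit replicates.

    #include <stdint.h>

    /* Illustrative: 32-bit right-shift mask computation that cannot pull
     * stale high-half bits down into the result (count assumed < 32). */
    static uint64_t shr_i32_mask(uint64_t mask, unsigned count)
    {
        return (uint32_t)mask >> count;
    }

    static uint64_t sar_i32_mask(uint64_t mask, unsigned count)
    {
        return (uint32_t)((int32_t)(uint32_t)mask >> count);
    }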
2014-02-17  tcg-arm: The shift count of op_rotl_i32 is in args[2], not args[1]  (Huw Davies)
It's this that should be subtracted from 0x20 when converting to a right rotate. Cc: qemu-stable@nongnu.org Signed-off-by: Huw Davies <huw@codeweavers.com> Signed-off-by: Richard Henderson <rth@twiddle.net>