path: root/tcg/i386/tcg-target.c.inc
Age  Commit message  (Author)
2023-10-22  tcg/i386: Use tcg_use_softmmu  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-10-07  tcg: Correct invalid mentions of 'softmmu' by 'system-mode'  (Philippe Mathieu-Daudé)
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-ID: <20231004090629.37473-6-philmd@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-09-16  tcg: Add tcg_out_tb_start backend hook  (Richard Henderson)
This hook may emit code at the beginning of the TB. Suggested-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-09-15  tcg: pass vece to tcg_target_const_match()  (Jiajie Chen)
Pass vece to tcg_target_const_match() to allow correct interpretation of const args of vector ops. Signed-off-by: Jiajie Chen <c@jia.je> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20230908022302.180442-4-c@jia.je> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  tcg/i386: Implement negsetcond_*  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  tcg/i386: Use shift in tcg_out_setcond  (Richard Henderson)
For LT/GE vs zero, shift down the sign bit. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
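A minimal C sketch of the equivalence behind this (illustrative, not taken from the patch): for a signed comparison against zero, the setcond result is exactly the sign bit.

    #include <stdint.h>

    /* setcond dst, src, 0, TCG_COND_LT: the result is exactly the sign bit. */
    static inline uint32_t setcond_lt_zero(uint32_t src)
    {
        return src >> 31;            /* logical shift brings the sign bit down */
    }

    /* setcond dst, src, 0, TCG_COND_GE: complement first, then shift. */
    static inline uint32_t setcond_ge_zero(uint32_t src)
    {
        return (~src) >> 31;
    }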
2023-08-24  tcg/i386: Clear dest first in tcg_out_setcond if possible  (Richard Henderson)
Using XOR first is both smaller and more efficient, though cannot be applied if it clobbers an input. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
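Sketching the constraint in C (the predicate name is invented for illustration): SETcc writes only the low byte, so pre-clearing the destination with XOR avoids a later zero-extension, but only when the destination is not also a live input to the comparison.

    #include <stdbool.h>

    /* Clearing dest with XOR before the comparison is only safe when dest is
     * not one of the comparison's remaining register inputs. */
    static bool can_clear_dest_first(int dest, int arg1, int arg2, bool arg2_is_const)
    {
        return dest != arg1 && (arg2_is_const || dest != arg2);
    }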
2023-08-24  tcg/i386: Use CMP+SBB in tcg_out_setcond  (Richard Henderson)
Use the carry bit to optimize some forms of setcond. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
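A C model of the CMP+SBB idiom (illustrative only): after CMP, the carry flag holds the unsigned less-than result, and SBB of a register with itself materializes it as 0 or -1.

    #include <stdint.h>

    /* After "cmp a, b", CF = (a < b) unsigned.  "sbb r, r" then leaves
     * r - r - CF = -CF in r: all-ones if the condition holds, else zero. */
    static uint32_t setcond_ltu(uint32_t a, uint32_t b)
    {
        uint32_t cf  = (a < b);      /* carry flag after the CMP */
        uint32_t sbb = 0 - cf;       /* what SBB r,r leaves behind: 0 or -1 */
        return 0 - sbb;              /* a final NEG gives the 1/0 setcond result */
    }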
2023-08-24  tcg/i386: Merge tcg_out_movcond{32,64}  (Richard Henderson)
Pass a rexw parameter instead of duplicating the functions. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  tcg/i386: Merge tcg_out_setcond{32,64}  (Richard Henderson)
Pass a rexw parameter instead of duplicating the functions. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  tcg/i386: Merge tcg_out_brcond{32,64}  (Richard Henderson)
Pass a rexw parameter instead of duplicating the functions. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-24  tcg/i386: Allow immediate as input to deposit_*  (Richard Henderson)
We can use MOVB and MOVW with an immediate just as easily as with a register input. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
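For reference, the deposit operation being accelerated has the usual TCG bitfield-insert semantics, sketched in C below; when (ofs, len) selects a whole byte or 16-bit word, a MOVB/MOVW into the matching subregister implements it directly, whether the source is a register or an immediate.

    #include <stdint.h>

    /* Result = arg1 with bits [ofs, ofs+len) replaced by the low bits of arg2. */
    static uint32_t deposit32_sketch(uint32_t arg1, int ofs, int len, uint32_t arg2)
    {
        uint32_t mask = (len == 32 ? ~0u : ((1u << len) - 1)) << ofs;
        return (arg1 & ~mask) | ((arg2 << ofs) & mask);
    }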
2023-08-24  tcg/i386: Drop BYTEH deposits for 64-bit  (Richard Henderson)
It is more useful to allow low-part deposits into all registers than to restrict allocation for high-byte deposits. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-08-12  tcg/i386: Output %gs prefix in tcg_out_vex_opc  (Richard Henderson)
Missing the segment prefix means that user-only fails to add guest_base for some 128-bit load/store. Fixes: 098d0fc10d2 ("tcg/i386: Support 128-bit load/store") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1763 Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-07-23  tcg/{i386, s390x}: Add earlyclobber to the op_add2's first output  (Ilya Leoshkevich)
i386 and s390x implementations of op_add2 require an earlyclobber, which is currently missing. This breaks VCKSM in s390x guests. E.g., on x86_64 the following op:

    add2_i32 tmp2,tmp3,tmp2,tmp3,tmp3,tmp2   dead: 0 2 3 4 5  pref=none,0xffff

is translated to:

    addl %ebx, %r12d
    adcl %r12d, %ebx

Introduce a new C_N1_O1_I4 constraint, and make sure that earlyclobber of aliased outputs is honored. Cc: qemu-stable@nongnu.org Fixes: 82790a870992 ("tcg: Add markup for output requires new register") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230719221310.1968845-7-iii@linux.ibm.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
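To see why the aliased output needs to be earlyclobbered, compare with the intended semantics of the op, sketched in C below: the instruction producing the low half must not overwrite a register that the following add-with-carry still reads.

    #include <stdint.h>

    /* Intended semantics of add2_i32: a 64-bit add expressed as two 32-bit halves. */
    static void add2_i32_sketch(uint32_t *out_lo, uint32_t *out_hi,
                                uint32_t al, uint32_t ah, uint32_t bl, uint32_t bh)
    {
        uint32_t lo = al + bl;
        uint32_t carry = lo < al;        /* carry out of the low-half addition */
        *out_lo = lo;
        *out_hi = ah + bh + carry;       /* must still see the original ah/bh */
    }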
2023-06-05  tcg: Add tlb_fast_offset to TCGContext  (Richard Henderson)
Disconnect the layout of ArchCPU from TCG compilation. Pass the relative offset of 'env' and 'neg.tlb.f' as a parameter. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-30  tcg/i386: Support 128-bit load/store  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-23  tcg/i386: Use host/cpuinfo.h  (Richard Henderson)
Use the CPUINFO_* bits instead of the individual boolean variables that we had been using. Remove all of the init code that was moved over to cpuinfo-i386.c. Note that have_avx512* check both AVX512{F,VL}, as we had previously done during tcg_target_init. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Add tlb_dyn_max_bits to TCGContext  (Richard Henderson)
Disconnect guest tlb parameters from TCG compilation. Reviewed-by: Anton Johansson <anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Add page_bits and page_mask to TCGContext  (Richard Henderson)
Disconnect guest page size from TCG compilation. While this could be done via exec/target_page.h, we want to cache the value across multiple memory access operations, so we might as well initialize this early. The changes within tcg/ are entirely mechanical:

    sed -i s/TARGET_PAGE_BITS/s->page_bits/g
    sed -i s/TARGET_PAGE_MASK/s->page_mask/g

Reviewed-by: Anton Johansson <anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg/i386: Remove TARGET_LONG_BITS, TCG_TYPE_TL  (Richard Henderson)
All uses can be inferred from the INDEX_op_qemu_*_a{32,64}_* opcode being used. Add a field into TCGLabelQemuLdst to record the usage. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg/i386: Adjust type of tlb_mask  (Richard Henderson)
Because of its use in tgen_arithi, this value must be a signed 32-bit quantity, as that is what may be encoded in the insn. The truncation of the value to unsigned for 32-bit guests is handled through the REX bit, via 'trexw'. This removes the only uses of target_ulong from this tcg backend. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg/i386: Conditionalize tcg_out_extu_i32_i64  (Richard Henderson)
Since TCG_TYPE_I32 values are kept zero-extended in registers, via omission of the REXW bit, we need not extend if the register matches. This is already relied upon by qemu_{ld,st}. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
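A sketch of the shape this description implies (not necessarily the exact code of the patch; TCGContext, TCGReg and tcg_out_ext32u are QEMU tcg/ names assumed to be in scope):

    static void extu_i32_i64_sketch(TCGContext *s, TCGReg dest, TCGReg src)
    {
        if (dest != src) {
            tcg_out_ext32u(s, dest, src);   /* 32-bit mov, which zero-extends */
        }
        /* else: a TCG_TYPE_I32 value already has the high half clear */
    }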
2023-05-16  tcg: Split INDEX_op_qemu_{ld,st}* for guest address size  (Richard Henderson)
For 32-bit hosts, we cannot simply rely on TCGContext.addr_bits, as we need one or two host registers to represent the guest address. Create the new opcodes and update all users. Since we have not yet eliminated TARGET_LONG_BITS, only one of the two opcodes will ever be used, so we can get away with treating them the same in the backends. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg/i386: Use atom_and_align_for_opc  (Richard Henderson)
No change to the ultimate load/store routines yet, so some atomicity conditions are not yet honored, but this plumbs the change to alignment through the relevant functions. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Introduce tcg_target_has_memory_bswap  (Richard Henderson)
Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro with a function with a memop argument. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg/i386: Use full load/store helpers in user-only mode  (Richard Henderson)
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg/i386: Add have_atomic16  (Richard Henderson)
Notice when Intel or AMD have guaranteed that vmovdqa is atomic. The new variable will also be used in generated code. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg: Unify helper_{be,le}_{ld,st}*  (Richard Henderson)
With the current structure of cputlb.c, there is no difference between the little-endian and big-endian entry points, aside from the assert. Unify the pairs of functions. Hoist the qemu_{ld,st}_helpers arrays to tcg.c. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-16  tcg/i386: Set P_REXW in tcg_out_addi_ptr  (Richard Henderson)
The REXW bit must be set to produce a 64-bit pointer result; the bit is disabled in 32-bit mode, so we can do this unconditionally. Fixes: 7d9e1ee424b0 ("tcg/i386: Adjust assert in tcg_out_addi_ptr") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1592 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1642 Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11  tcg/i386: Convert tcg_out_qemu_st_slow_path  (Richard Henderson)
Use tcg_out_st_helper_args. This eliminates the use of a tail call to the store helper. This may or may not be an improvement, depending on the call/return branch prediction of the host microarchitecture. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11  tcg/i386: Convert tcg_out_qemu_ld_slow_path  (Richard Henderson)
Use tcg_out_ld_helper_args and tcg_out_ld_helper_ret. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-11  tcg/i386: Use indexed addressing for softmmu fast path  (Richard Henderson)
Since tcg_out_{ld,st}_helper_args, the slow path no longer requires the address argument to be set up by the tlb load sequence. Use a plain load for the addend and indexed addressing with the original input address register. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
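The address computation itself is the usual softmmu one, restated in C below: the host address is the guest address plus the TLB entry's addend, which x86 indexed addressing can fold into the load itself.

    #include <stdint.h>

    /* Host address = guest virtual address + TLB addend; with base+index
     * addressing the addition happens inside the memory operand, so the
     * guest address register is left untouched for the slow path. */
    static uint8_t softmmu_fast_load_byte(uintptr_t guest_addr, uintptr_t tlb_addend)
    {
        return *(uint8_t *)(guest_addr + tlb_addend);
    }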
2023-05-11  tcg/i386: Introduce prepare_host_addr  (Richard Henderson)
Merge tcg_out_tlb_load, add_qemu_ldst_label, tcg_out_test_alignment, and some code that lived in both tcg_out_qemu_ld and tcg_out_qemu_st into one function that returns HostAddress and TCGLabelQemuLdst structures. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05  tcg/i386: Introduce tcg_out_testi  (Richard Henderson)
Split out a helper for choosing testb vs testl. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05  tcg/i386: Drop r0+r1 local variables from tcg_out_tlb_load  (Richard Henderson)
Use TCG_REG_L[01] constants directly. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05  tcg/i386: Introduce HostAddress  (Richard Henderson)
Collect the 4 potential parts of the host address into a struct. Reorg tcg_out_qemu_{ld,st}_direct to use it. Reorg guest_base handling to use it. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
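The four parts are the base register, index register, constant displacement, and segment override; a sketch of such a struct (field names are illustrative, not necessarily those used in the patch):

    #include <stdint.h>

    typedef struct HostAddressSketch {
        int      base;    /* base register, or -1 if none */
        int      index;   /* index register, or -1 if none */
        intptr_t ofs;     /* constant displacement */
        int      seg;     /* segment-override prefix (e.g. %gs for guest_base), or 0 */
    } HostAddressSketch;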
2023-05-05  tcg/i386: Generalize multi-part load overlap test  (Richard Henderson)
Test for both base and index; use datahi as a temporary, overwritten by the final load. Always perform the loads in ascending order, so that any (user-only) fault sees the correct address. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-05  tcg/i386: Rationalize args to tcg_out_qemu_{ld,st}  (Richard Henderson)
Interpret the variable argument placement in the caller. Pass data_type instead of is64 -- there are several places where we already convert back from bool to type. Clean things up by using type throughout. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-05-02  tcg: Introduce tcg_out_movext2  (Richard Henderson)
This is common code in most qemu_{ld,st} slow paths, moving two registers when there may be overlap between sources and destinations. At present, this is only used by 32-bit hosts for 64-bit data, but will shortly be used for more than that. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
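A minimal sketch of the overlap handling such a helper needs (emit_mov and plain-int register numbers are hypothetical, for illustration only):

    extern void emit_mov(int dst, int src);   /* hypothetical single-register move */

    /* Move src1->dst1 and src2->dst2 when the register sets may overlap. */
    static void move_two(int dst1, int src1, int dst2, int src2, int scratch)
    {
        if (dst1 != src2) {
            emit_mov(dst1, src1);        /* dst1 does not hold a pending source */
            emit_mov(dst2, src2);
        } else if (dst2 != src1) {
            emit_mov(dst2, src2);        /* doing the moves in the other order avoids the clobber */
            emit_mov(dst1, src1);
        } else {
            emit_mov(scratch, src2);     /* true swap: go through a scratch register */
            emit_mov(dst1, src1);
            emit_mov(dst2, scratch);
        }
    }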
2023-04-23  tcg: Introduce tcg_out_xchg  (Richard Henderson)
We will want a backend interface for register swapping. This is only properly defined for x86; all others get a stub version that always indicates failure. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Introduce tcg_out_movext  (Richard Henderson)
This is common code in most qemu_{ld,st} slow paths, extending the input value for the store helper data argument or extending the return value from the load helper. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Split out tcg_out_extrl_i64_i32  (Richard Henderson)
We will need a backend interface for type truncation. For those backends that did not enable TCG_TARGET_HAS_extrl_i64_i32, use tcg_out_mov. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Split out tcg_out_extu_i32_i64  (Richard Henderson)
We will need a backend interface for type extension with zero. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Split out tcg_out_exts_i32_i64  (Richard Henderson)
We will need a backend interface for type extension with sign. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Split out tcg_out_ext32u  (Richard Henderson)
We will need a backend interface for performing 32-bit zero-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Split out tcg_out_ext32s  (Richard Henderson)
We will need a backend interface for performing 32-bit sign-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Split out tcg_out_ext16u  (Richard Henderson)
We will need a backend interface for performing 16-bit zero-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Split out tcg_out_ext16s  (Richard Henderson)
We will need a backend interface for performing 16-bit sign-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2023-04-23  tcg: Split out tcg_out_ext8u  (Richard Henderson)
We will need a backend interface for performing 8-bit zero-extend. Use it in tcg_reg_alloc_op in the meantime. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>