path: root/tcg
Age  Commit message  Author
2020-11-04  tcg: Revert "tcg/optimize: Flush data at labels not TCG_OPF_BB_END"  (Richard Henderson)
This reverts commit cd0372c515c4732d8bd3777cdd995c139c7ed7ea. The patch is incorrect in that it retains copies between globals and non-local temps, and non-local temps still die at the end of the BB.

Failing test case for hppa:

        .globl _start
    _start:
        cmpiclr,=  0x24,%r19,%r0
        cmpiclr,<> 0x2f,%r19,%r19

     ---- 00010057 0001005b
     movi_i32 tmp0,$0x24
     sub_i32 tmp1,tmp0,r19
     mov_i32 tmp2,tmp0
     mov_i32 tmp3,r19
     movi_i32 tmp1,$0x0

     ---- 0001005b 0001005f
     brcond_i32 tmp2,tmp3,eq,$L1
     movi_i32 tmp0,$0x2f
     sub_i32 tmp1,tmp0,r19
     mov_i32 tmp2,tmp0
     mov_i32 tmp3,r19
     movi_i32 tmp1,$0x0
     mov_i32 r19,tmp1
     setcond_i32 psw_n,tmp2,tmp3,ne
     set_label $L1

In this case, both copies of "mov_i32 tmp3,r19" are removed. The second because opt thought it was redundant. The first is removed later by liveness because tmp3 is known to be dead. This leaves the setcond_i32 with an uninitialized input.

Revert the entire patch for 5.2, and a proper optimization across the branch may be considered for the next development cycle.

Reported-by: qemu@igor2.repo.hu
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-11-04  tcg: Remove assert from set_jmp_reset_offset  (Richard Henderson)
Since 6e6c4efed99, there has been a more appropriate range check done later at the end of tcg_gen_code. There, a failing range check results in a returned error code, which causes the TB to be restarted at half the size. Reported-by: Sai Pavan Boddu <saipava@xilinx.com> Tested-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-27  tcg/optimize: Flush data at labels not TCG_OPF_BB_END  (Richard Henderson)
We can easily propagate temp values through the entire extended basic block (in this case, the set of blocks connected by fallthru), simply by not discarding the register state at the branch. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-27  tcg: Do not kill globals at conditional branches  (Richard Henderson)
We can easily register allocate the entire extended basic block (in this case, the set of blocks connected by fallthru), simply by not discarding the register state at the branch. This does not help blocks starting with a label, as they are reached via a taken branch, and that would require saving the complete register state at the branch. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg: Remove TCG_TARGET_HAS_cmp_vec  (Richard Henderson)
The cmp_vec opcode is mandatory; this symbol is unused. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg/optimize: Fold dup2_vec  (Richard Henderson)
When the two arguments are identical, this can be reduced to dup_vec or to mov_vec from a tcg_constant_vec. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg: Fix generation of dupi_vec for 32-bit host  (Richard Henderson)
The definition of INDEX_op_dupi_vec is that it operates on units of tcg_target_ulong -- in this case 32 bits. It does not work to use this for a uint64_t value that happens to be small enough to fit in tcg_target_ulong. Fixes: d2fd745fe8b Fixes: db432672dc5 Cc: qemu-stable@nongnu.org Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg/i386: Fix dupi for avx2 32-bit hosts  (Richard Henderson)
The previous change wrongly stated that 32-bit avx2 should have used VPBROADCASTW. But that's a 16-bit broadcast and we want a 32-bit broadcast. Fixes: 7b60ef3264e Cc: qemu-stable@nongnu.org Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg: Move some TCG_CT_* bits to TCGArgConstraint bitfields  (Richard Henderson)
These are easier to set and test when they have their own fields. Reduce the size of alias_index and sort_index to 4 bits, which is sufficient for TCG_MAX_OP_ARGS. This leaves only the bits indicating constants within the ct field. Move all initialization to allocation time, rather than initializing individual fields in process_op_defs. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg: Remove TCG_CT_REG  (Richard Henderson)
This wasn't actually used for anything, really. All variable operands must accept registers, which are indicated by the set in TCGArgConstraint.regs. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg: Move sorted_args into TCGArgConstraint.sort_index  (Richard Henderson)
This uses an existing hole in the TCGArgConstraint structure and will be convenient for keeping the data in one place. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg: Drop union from TCGArgConstraint  (Richard Henderson)
The union is unused; let "regs" appear in the main structure without the "u.regs" wrapping. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-10-08  tcg: Adjust simd_desc size encoding  (Richard Henderson)
With larger vector sizes, it turns out oprsz == maxsz, and we only need to represent mismatch for oprsz <= 32. We do, however, need to represent larger oprsz and do so without reducing SIMD_DATA_BITS. Reduce the size of the oprsz field and increase the maxsz field. Steal the oprsz value of 24 to indicate equality with maxsz. Tested-by: Frank Chang <frank.chang@sifive.com> Reviewed-by: Frank Chang <frank.chang@sifive.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
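A minimal sketch of the scheme described above; the helper names and the exact field layout are assumptions, not the actual tcg-gvec-desc.h definitions:

    #include <stdint.h>

    /* oprsz is stored in 2 bits as size/8 - 1 (8 -> 0, 16 -> 1, 32 -> 3);
     * the code that used to mean 24 bytes is repurposed to mean
     * "oprsz == maxsz", the only case needed for the larger sizes. */
    static inline uint32_t encode_oprsz(uint32_t oprsz, uint32_t maxsz)
    {
        return oprsz == maxsz ? 2 : oprsz / 8 - 1;
    }

    static inline uint32_t decode_oprsz(uint32_t field, uint32_t maxsz)
    {
        return field == 2 ? maxsz : (field + 1) * 8;
    }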
2020-10-03  disas: Move host asm annotations to tb_gen_code  (Richard Henderson)
Instead of creating GStrings and passing them into log_disas, just print the annotations directly in tb_gen_code. Fix the annotations for the slow paths of the TB, after the part implementing the final guest instruction. Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-09-23  qemu/atomic.h: rename atomic_ to qatomic_  (Stefan Hajnoczi)
clang's C11 atomic_fetch_*() functions only take a C11 atomic type pointer argument. QEMU uses direct types (int, etc) and this causes a compiler error when QEMU code calls these functions in a source file that also includes <stdatomic.h> via a system header file:

    $ CC=clang CXX=clang++ ./configure ... && make
    ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)

Avoid using atomic_*() names in QEMU's atomic.h since that namespace is used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h and <stdatomic.h> can co-exist. I checked /usr/include on my machine and searched GitHub for existing "qatomic_" users but there seem to be none.

This patch was generated using:

    $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
      sort -u >/tmp/changed_identifiers
    $ for identifier in $(</tmp/changed_identifiers); do
          sed -i "s%\<$identifier\>%q$identifier%g" \
              $(git grep -I -l "\<$identifier\>")
      done

I manually fixed line-wrap issues and misaligned rST tables.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
2020-09-03  tcg: Implement 256-bit dup for tcg_gen_gvec_dup_mem  (Richard Henderson)
We already support duplication of 128-bit blocks. This extends that support to 256-bit blocks. This will be needed by SVE2. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-09-03  tcg: Eliminate one store for in-place 128-bit dup_mem  (Richard Henderson)
Do not store back to the exact memory from which we just loaded. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-09-03  tcg: Fix tcg gen for vectorized absolute value  (Stephen Long)
The fallback inline expansion for vectorized absolute value, used when the host doesn't support such an insn, was flawed. E.g. when a vector of bytes has all elements negative, mask will be 0xffff_ffff_ffff_ffff. Subtracting mask only adds 1 to the low element instead of all elements, because -mask is 1 and not 0x0101_0101_0101_0101. Signed-off-by: Stephen Long <steplong@quicinc.com> Message-Id: <20200813161818.190-1-steplong@quicinc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
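To make the failure mode concrete, here is a stand-alone C sketch over packed byte lanes (an illustration, not QEMU's gvec expansion): a single 64-bit subtraction of the mask only fixes up the low lane, while the correction has to be applied per lane.

    #include <stdint.h>

    /* 0xff in every byte lane whose sign bit is set, 0x00 elsewhere. */
    static uint64_t byte_sign_mask(uint64_t x)
    {
        return ((x & 0x8080808080808080ull) >> 7) * 0xff;
    }

    /* Broken: with all lanes negative, mask is all-ones, so subtracting it
       is the same as adding 1 to the 64-bit value; only the low lane moves. */
    static uint64_t abs_bytes_broken(uint64_t x)
    {
        uint64_t m = byte_sign_mask(x);
        return (x ^ m) - m;
    }

    /* Lane-wise: add 1 to each negative lane; (x ^ m) <= 0x7f in those lanes,
       so the additions cannot carry into a neighbouring lane. */
    static uint64_t abs_bytes_fixed(uint64_t x)
    {
        uint64_t m = byte_sign_mask(x);
        return (x ^ m) + ((m >> 7) & 0x0101010101010101ull);
    }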
2020-08-24  Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-5.2-20200818' into staging  (Peter Maydell)
ppc patch queue 2020-08-18

Here's my first pull request for qemu-5.2, which has quite a few accumulated things. Highlights are:

 * Preliminary support for POWER10 (Power ISA 3.1) instruction emulation
 * Add documentation on the (very confusing) pseries NUMA configuration
 * Fix some bugs handling edge cases with XICS, XIVE and kernel_irqchip
 * Fix icount for a number of POWER registers
 * Many cleanups to error handling in XIVE code
 * Validate size of -prom-env data

# gpg: Signature made Tue 18 Aug 2020 05:18:36 BST
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-5.2-20200818: (40 commits)
  spapr/xive: Use xive_source_esb_len()
  nvram: Exit QEMU if NVRAM cannot contain all -prom-env data
  spapr/xive: Simplify error handling of kvmppc_xive_cpu_synchronize_state()
  ppc/xive: Simplify error handling in xive_tctx_realize()
  spapr/xive: Simplify error handling in kvmppc_xive_connect()
  ppc/xive: Fix error handling in vmstate_xive_tctx_*() callbacks
  spapr/xive: Fix error handling in kvmppc_xive_post_load()
  spapr/kvm: Fix error handling in kvmppc_xive_pre_save()
  spapr/xive: Rework error handling of kvmppc_xive_set_source_config()
  spapr/xive: Rework error handling in kvmppc_xive_get_queues()
  spapr/xive: Rework error handling of kvmppc_xive_[gs]et_queue_config()
  spapr/xive: Rework error handling of kvmppc_xive_cpu_[gs]et_state()
  spapr/xive: Rework error handling of kvmppc_xive_mmap()
  spapr/xive: Rework error handling of kvmppc_xive_source_reset()
  spapr/xive: Rework error handling of kvmppc_xive_cpu_connect()
  spapr: Simplify error handling in spapr_phb_realize()
  spapr/xive: Convert KVM device fd checks to assert()
  ppc/xive: Introduce dedicated kvm_irqchip_in_kernel() wrappers
  ppc/xive: Rework setup of XiveSource::esb_mmio
  target/ppc: Integrate icount to purr, vtb, and tbu40
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-08-21  meson: rename included C source files to .c.inc  (Paolo Bonzini)
With Makefiles that have automatically generated dependencies, your generated includes are set as dependencies of the Makefile, so that they are built before everything else and they are available when first building the .c files. Alternatively you can use a fine-grained dependency, e.g.

    target/arm/translate.o: target/arm/decode-neon-shared.inc.c

With Meson you have only one choice and it is a third option, namely "build at the beginning of the corresponding target"; the way you express it is to list the includes in the sources of that target. The problem is that Meson decides if something is a source vs. a generated include by looking at the extension: '.c', '.cc', '.m', '.C' are sources, while everything else is considered an include---including '.inc.c'. Use '.c.inc' to avoid this, as it is consistent with our other convention of using '.rst.inc' for included reStructuredText files. The editorconfig file is adjusted.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-08-12  target/ppc: add vmulld to INDEX_op_mul_vec case  (Lijun Pan)
Group vmuluwm and vmulld. Make vmulld-specific changes since it belongs to the new ISA 3.1. Signed-off-by: Lijun Pan <ljp@linux.ibm.com> Message-Id: <20200724045845.89976-3-ljp@linux.ibm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2020-07-16  tcg: Save/restore vecop_list around minmax fallback  (Richard Henderson)
Forgetting this triggers an assertion when tcg_gen_cmp_vec is called from within tcg_gen_cmpsel_vec. Fixes: 72b4c792c7a Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-07-13  tcg/riscv: Remove superfluous breaks  (Liao Pingfang)
Remove superfluous breaks, as there is a "return" before them. Signed-off-by: Liao Pingfang <liao.pingfang@zte.com.cn> Signed-off-by: Yi Wang <wang.yi59@zte.com.cn> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <1594600421-22942-1-git-send-email-wang.yi59@zte.com.cn> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2020-07-06  tcg: Fix do_nonatomic_op_* vs signed operations  (Richard Henderson)
The smin/smax/umin/umax operations require the operands to be properly sign extended. Do not drop the MO_SIGN bit from the load, and additionally extend the val input. Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Reported-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200701165646.1901320-1-richard.henderson@linaro.org>
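A plain C illustration of why the extension matters (not the TCG helpers themselves): comparing the raw zero-extended 8-bit values picks the wrong signed maximum, while sign-extending both operands first gives the intended result.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t mem = 0x80;   /* the 8-bit value -128, zero-extended */
        uint64_t val = 0x01;   /* the 8-bit value +1 */

        /* Without sign extension: 0x80 > 0x01, so "smax" keeps -128. */
        uint64_t wrong = mem > val ? mem : val;

        /* With both operands sign-extended, smax correctly keeps +1. */
        int64_t smem = (int8_t)mem, sval = (int8_t)val;
        int64_t right = smem > sval ? smem : sval;

        printf("unsigned compare: %#" PRIx64 ", signed compare: %" PRId64 "\n",
               wrong, right);
        return 0;
    }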
2020-07-06  tcg/ppc: Sanitize immediate shifts  (Catherine A. Frederick)
Sanitize shift constants so that shift operations with large constants don't generate invalid instructions. Signed-off-by: Catherine A. Frederick <chocola@animebitch.es> Message-Id: <20200607211100.22858-1-agrecascino123@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
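A sketch of what "sanitize" amounts to, under the assumption that the fix masks the immediate to the element width (the helper name is hypothetical, not the actual tcg/ppc change):

    /* Keep a shift immediate within [0, element_bits) for power-of-two widths,
       so the emitted instruction never carries an out-of-range count. */
    static inline unsigned sanitize_shift_imm(unsigned count, unsigned element_bits)
    {
        return count & (element_bits - 1);
    }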
2020-06-16  tcg: call qemu_spin_destroy for tb->jmp_lock  (Emilio G. Cota)
Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Robert Foley <robert.foley@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> [RF: minor changes + remove tb_destroy_func] Message-Id: <20200609200738.445-7-robert.foley@linaro.org> Message-Id: <20200612190237.30436-10-alex.bennee@linaro.org>
2020-06-02  tcg: Improve move ops in liveness_pass_2  (Richard Henderson)
If the output of the move is dead, then the last use is in the store. If we propagate the input to the store, then we can remove the move opcode entirely. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02  tcg/ppc: Implement INDEX_op_rot[lr]v_vec  (Richard Henderson)
We already had support for rotlv, using a target-specific opcode; convert to use the generic opcode. Handle rotrv via simple negation. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02  tcg/aarch64: Implement INDEX_op_rotl{i,v}_vec  (Richard Henderson)
For immediate rotate, we can implement this in two instructions, using SLI. For variable rotate, the oddness of aarch64 right-shift-as-negative-left-shift means a backend-specific expansion works best. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02  tcg/i386: Implement INDEX_op_rotl{i,s,v}_vec  (Richard Henderson)
For immediates, we must continue the special casing of 8-bit elements. The other element sizes and shift types are trivially implemented with shifts. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02  tcg: Implement gvec support for rotate by scalar  (Richard Henderson)
No host backend support yet, but the interfaces for rotls are in place. Only implement left-rotate for now, as the only known use of vector rotate by scalar is s390x, so any right-rotate would be unused and untestable. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02  tcg: Remove expansion to shift by vector from do_shifts  (Richard Henderson)
We do not reflect this expansion in tcg_can_emit_vecop_list, so it is unused and unusable. However, we actually perform the same expansion in do_gvec_shifts, so it is also unneeded. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-06-02  tcg: Implement gvec support for rotate by vector  (Richard Henderson)
No host backend support yet, but the interfaces for rotlv and rotrv are in place.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

---
v3: Drop the generic expansion from rot to shift; we can do better for each backend, and then this code becomes unused.
2020-06-02  tcg: Implement gvec support for rotate by immediate  (Richard Henderson)
No host backend support yet, but the interfaces for rotli are in place. Canonicalize immediate rotate to the left, based on a survey of architectures, but provide both left and right shift interfaces to the translators. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
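A per-element illustration in plain C of the canonicalization (the gvec interfaces themselves are not shown): a right rotate is expressed as a left rotate by the complementary amount.

    #include <stdint.h>

    static inline uint32_t rotl32(uint32_t x, unsigned c)
    {
        c &= 31;
        return c ? (x << c) | (x >> (32 - c)) : x;
    }

    /* Right rotate canonicalized onto the left-rotate primitive. */
    static inline uint32_t rotr32(uint32_t x, unsigned c)
    {
        return rotl32(x, (32 - c) & 31);
    }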
2020-05-15  disas: include an optional note for the start of disassembly  (Alex Bennée)
This will become useful shortly for providing more information about output assembly inline. While there, fix up the indenting and code formatting in disas(). Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200513175134.19619-9-alex.bennee@linaro.org>
2020-05-06  tcg: Fix integral argument type to tcg_gen_rot[rl]i_i{32,64}  (Richard Henderson)
For the benefit of compatibility of function pointer types, we have standardized on int32_t and int64_t as the integral argument to tcg expanders. We converted most of them in 474b2e8f0f7, but missed the rotates. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-05-06  tcg: Add load_dest parameter to GVecGen2  (Richard Henderson)
We have this same parameter for GVecGen2i, GVecGen3, and GVecGen3i. This will make some SVE2 insns easier to parameterize. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-05-06  tcg: Improve vector tail clearing  (Richard Henderson)
Better handling of non-power-of-2 tails as seen with Arm 8-byte vector operations. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-05-06  tcg: Remove tcg_gen_gvec_dup{8,16,32,64}i  (Richard Henderson)
These interfaces are now unused. Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-05-06  tcg: Use tcg_gen_gvec_dup_imm in logical simplifications  (Richard Henderson)
Replace the outgoing interface. Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-05-06  tcg: Add tcg_gen_gvec_dup_imm  (Richard Henderson)
Add a version of tcg_gen_dup_* that takes both an immediate and a vector element size operand. This will replace the set of tcg_gen_gvec_dup{8,16,32,64}i functions that encode the element size within the function name. Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-04-12  tcg/mips: mips sync* encode error  (lixinyu)
OPC_SYNC_WMB, OPC_SYNC_MB, OPC_SYNC_ACQUIRE, OPC_SYNC_RELEASE and OPC_SYNC_RMB have the wrong encoding. According to the MIPS manual, their encoding should be 'OPC_SYNC | 0x?? << 6' rather than 'OPC_SYNC | 0x?? << 5'. The wrong encoding can lead to illegal instruction errors. These instructions often appear in multi-threaded simulation. Fixes: 6f0b99104a3 ("tcg/mips: Add support for fence") Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: lixinyu <precinct@mail.ustc.edu.cn> Message-Id: <20200411124612.12560-1-precinct@mail.ustc.edu.cn> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
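A sketch of the corrected definitions, with stype values as listed in the MIPS manual (the exact constants in tcg/mips may differ): the stype field occupies bits 10..6 of the SYNC instruction word, hence the shift by 6.

    /* SYNC is the SPECIAL opcode with function field 0x0f; stype is bits 10..6. */
    enum {
        OPC_SYNC         = 0x0000000f,
        OPC_SYNC_WMB     = OPC_SYNC | (0x04 << 6),   /* was (0x04 << 5): wrong */
        OPC_SYNC_MB      = OPC_SYNC | (0x10 << 6),
        OPC_SYNC_ACQUIRE = OPC_SYNC | (0x11 << 6),
        OPC_SYNC_RELEASE = OPC_SYNC | (0x12 << 6),
        OPC_SYNC_RMB     = OPC_SYNC | (0x13 << 6),
    };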
2020-04-07  tcg/i386: Fix %r12 guest_base initialization  (Richard Henderson)
When %gs cannot be used, we use register offset addressing. This path is almost never used, so it was clearly not tested. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200406174803.8192-1-richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2020-03-30  tcg/i386: Fix INDEX_op_dup2_vec  (Richard Henderson)
We were only constructing the 64-bit element, and not replicating the 64-bit element across the rest of the vector. Cc: qemu-stable@nongnu.org Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-03-17  tcg/i386: Bound shift count expanding sari_vec  (Richard Henderson)
A given RISU testcase for SVE can produce

    tcg-op-vec.c:511: do_shifti: Assertion `i >= 0 && i < (8 << vece)' failed.

because expand_vec_sari gave a shift count of 32 to a MO_32 vector shift.

In 44f1441dbe1, we changed from direct expansion of vector opcodes to re-use of the tcg expanders. So while the comment correctly notes that the hw will handle such a shift count, we now have to take our own sanity checks into account. Which is easy in this particular case.

Fixes: 44f1441dbe1
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
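The observation that makes the bound easy, shown as scalar C (not the backend code): for an arithmetic right shift, any count at or beyond the element width produces the same value as a shift by width - 1, so the count can be clamped before it reaches do_shifti's assertion.

    #include <stdint.h>

    /* A 32-bit arithmetic shift by >= 32 is undefined in C and rejected by
       do_shifti; clamping to 31 still replicates the sign bit across the
       element, which is the intended result. */
    static inline int32_t sar32_clamped(int32_t x, unsigned c)
    {
        return x >> (c < 32 ? c : 31);
    }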
2020-02-28  tcg/arm: Expand epilogue inline  (Richard Henderson)
It is, after all, just two instructions. Profiling on a cortex-a15, using -d nochain to increase the number of exit_tb that are executed, shows a minor improvement of 0.5%. Signed-off-by: Richard Henderson <rth@twiddle.net>
2020-02-28  tcg/arm: Split out tcg_out_epilogue  (Richard Henderson)
We will shortly use this function from tcg_out_op as well. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2020-02-25  tcg: save vaddr temp for plugin usage  (Alex Bennée)
While do_gen_mem_cb does copy (via extu_tl_i64) vaddr into a new temp, this won't help if the vaddr temp gets clobbered by the actual load/store op. To avoid this clobbering we explicitly copy vaddr before the op to ensure it is live by the time we do the instrumentation. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Cc: qemu-stable@nongnu.org Message-Id: <20200225124710.14152-18-alex.bennee@linaro.org>
2020-02-12  tcg: Add tcg_gen_gvec_5_ptr  (Richard Henderson)
Extend the vector generator infrastructure to handle 5 vector arguments. Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Taylor Simpson <tsimpson@quicinc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-01-15  tcg: Move TCG headers to include/tcg/  (Philippe Mathieu-Daudé)
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Stefan Weil <sw@weilnetz.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20200101112303.20724-4-philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>