path: root/target/i386/tcg
Age         Commit message                                                      Author
2022-10-18  target/i386: add CPUID feature checks to new decoder  (Paolo Bonzini)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: add CPUID[EAX=7,ECX=0].ECX to DisasContext  (Paolo Bonzini)
TCG will shortly implement VAES instructions, so add the relevant feature word to the DisasContext. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: add ALU load/writeback core  (Paolo Bonzini)
Add generic code generation that takes care of preparing operands around calls to decode.e.gen in a table-driven manner, so that ALU operations need not take care of that. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
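The load/op/writeback split described here can be pictured with a small sketch. This is illustrative C with invented names that evaluates values directly instead of emitting TCG ops; it is not QEMU's actual decoder code, only the shape of the pattern:

    /* Illustrative sketch only -- invented names, not QEMU's decoder. */
    #include <stdio.h>

    typedef struct { long t0, t1; } Ops;
    typedef void (*GenFn)(Ops *o);      /* body computes only T0 = f(T0, T1) */

    static void gen_add(Ops *o) { o->t0 += o->t1; }
    static void gen_and(Ops *o) { o->t0 &= o->t1; }

    /* Common wrapper: prepare operands, run the ALU body, write back T0. */
    static void gen_alu(GenFn gen, long *dst, long src)
    {
        Ops o = { .t0 = *dst, .t1 = src };  /* common operand load       */
        gen(&o);                            /* instruction-specific part */
        *dst = o.t0;                        /* common writeback          */
    }

    int main(void)
    {
        long rax = 6;
        gen_alu(gen_add, &rax, 4);          /* rax = 10 */
        gen_alu(gen_and, &rax, 8);          /* rax = 8  */
        printf("%ld\n", rax);
        return 0;
    }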
2022-10-18  target/i386: add core of new i386 decoder  (Paolo Bonzini)
The new decoder is based on three principles:

- use mostly table-driven decoding, with tables derived as much as possible from the Intel manual. Centralizing the decoding of the operands makes it more homogeneous; for example, all immediates are signed. All modrm handling is in one function, and can be shared between SSE and ALU instructions (including XMM<->GPR instructions). The SSE/AVX decoder will also not have duplicated code between the 0F, 0F38 and 0F3A tables.

- keep the code as "non-branchy" as possible. Generally, the code for the new decoder is more verbose, but the control flow is simpler. Conditionals are not nested and have small bodies. All instruction groups are resolved even before operands are decoded, and code generation is separated as much as possible within small functions that only handle one instruction each.

- keep address generation and (for ALU operands) memory loads and writeback as much in common code as possible. All ALU operations, for example, are implemented as T0=f(T0,T1). For non-ALU instructions, read-modify-write memory operations are rare, but registers do not have TCGv equivalents: therefore, the common logic sets up pointer temporaries with the operands, while load and writeback are handled by gvec or by helpers.

These principles make future code review and extensibility simpler, at the cost of having a relatively large amount of code in the form of this patch. Even EVEX should not be _too_ hard to implement (it's just a crazy large amount of possibilities).

This patch introduces the main decoder flow and integrates the old decoder with the new one. The old decoder takes care of parsing prefixes and then optionally drops to the new one. The changes to the old decoder are minimal and allow it to be replaced incrementally with the new one.

There is a debugging mechanism through a "LIMIT" environment variable. In user-mode emulation, the variable is the number of instructions decoded by the new decoder before permanently switching to the old one. In system emulation, the variable is the highest opcode that is decoded by the new decoder (this is less friendly, but it's the best that can be done without requiring deterministic execution).

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
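A toy illustration of the table-driven principle (invented types, two opcodes only, not QEMU's actual tables): each opcode maps to a small entry whose operand shape is decoded centrally before the per-instruction generator runs.

    /* Illustrative sketch only: toy table-driven decode, not QEMU code. */
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { OP_NONE, OP_IMM8 } OpType;

    typedef struct {
        const char *name;
        OpType op;                    /* operand shape, decoded centrally */
        void (*gen)(int imm);         /* small per-instruction generator  */
    } Entry;

    static void gen_nop(int imm) { (void)imm; puts("nop"); }
    static void gen_int(int imm) { printf("int 0x%02x\n", imm); }

    static const Entry table[256] = {
        [0x90] = { "nop", OP_NONE, gen_nop },
        [0xcd] = { "int", OP_IMM8, gen_int },
    };

    static int decode_one(const uint8_t *code)
    {
        const Entry *e = &table[code[0]];
        if (!e->gen) {
            puts("undefined");
            return 1;
        }
        int imm = (e->op == OP_IMM8) ? code[1] : 0;  /* central operand decode */
        e->gen(imm);                                 /* per-insn generation    */
        return (e->op == OP_IMM8) ? 2 : 1;           /* instruction length     */
    }

    int main(void)
    {
        const uint8_t prog[] = { 0x90, 0xcd, 0x80 };
        for (int i = 0; i < (int)sizeof(prog); ) {
            i += decode_one(&prog[i]);
        }
        return 0;
    }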
2022-10-18  target/i386: make rex_w available even in 32-bit mode  (Paolo Bonzini)
REX.W can be used even in 32-bit mode by AVX instructions, where it is retroactively renamed to VEX.W. Make the field available even in 32-bit mode but keep the REX_W() macro as it was; this way, the handling of dflag does not use it by mistake and the AVX code more clearly points at the special VEX behavior of the bit. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: make ldo/sto operations consistent with ldq  (Paolo Bonzini)
ldq takes a pointer to the first byte into which the 64-bit word is loaded; ldo takes a pointer to the first byte of the ZMMReg. Make them consistent, which will be useful in the new SSE decoder's load/writeback routines. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use probe_access_full for final stage2 translation  (Richard Henderson)
Rather than recurse directly on mmu_translate, go through the same softmmu lookup that we did for the page table walk. This centralizes all knowledge of MMU_NESTED_IDX, with respect to setup of TranslationParams, to get_physical_address. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-10-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use atomic operations for pte updates  (Richard Henderson)
Use probe_access_full in order to resolve to a host address, which then lets us use a host cmpxchg to update the pte. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/279 Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-9-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
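The accessed/dirty-bit update that this enables can be pictured as a host compare-and-swap step. A minimal sketch with C11 atomics, assuming the architectural x86 A/D bit positions; this is not the actual QEMU helper:

    /* Minimal sketch, C11 atomics; not QEMU's code.  PG_ACCESSED and
     * PG_DIRTY are the architectural x86 PTE bits 5 and 6. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define PG_ACCESSED  (UINT64_C(1) << 5)
    #define PG_DIRTY     (UINT64_C(1) << 6)

    static bool pte_set_flags(_Atomic uint64_t *pte, uint64_t expected,
                              uint64_t flags)
    {
        uint64_t desired = expected | flags;
        /* Returns false if another CPU changed the PTE meanwhile; the
         * caller would then re-walk the page tables and retry. */
        return atomic_compare_exchange_strong(pte, &expected, desired);
    }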
2022-10-18  target/i386: Combine 5 sets of variables in mmu_translate  (Richard Henderson)
We don't need one variable set per translation level, which requires copying into pte/pte_addr for huge pages. Standardize on pte/pte_addr for all levels. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-8-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use MMU_NESTED_IDX for vmload/vmsave  (Richard Henderson)
Use MMU_NESTED_IDX for each memory access, rather than just a single translation to physical. Adjust svm_save_seg and svm_load_seg to pass in mmu_idx. This removes the last use of get_hphys so remove it. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-7-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX  (Richard Henderson)
These new mmu indexes will be helpful for improving paging and code throughout the target. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-6-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Reorg GET_HPHYS  (Richard Henderson)
Replace with PTE_HPHYS for the page table walk, and a direct call to mmu_translate for the final stage2 translation. Hoist the check for HF2_NPT_MASK out to get_physical_address, which avoids the recursive call when stage2 is disabled. We can now return all the way out to x86_cpu_tlb_fill before raising an exception, which means probe works. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-5-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Introduce structures for mmu_translate  (Richard Henderson)
Create TranslateParams for inputs, TranslateResults for successful outputs, and TranslateFault for error outputs; return true on success. Move stage1 error paths from handle_mmu_fault to x86_cpu_tlb_fill; reorg the rest of handle_mmu_fault into get_physical_address. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-4-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
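A sketch of the shape these structures suggest; the struct names come from the message above, but every field below is a guess for illustration, not the actual QEMU definition:

    /* Hypothetical field layout -- illustration only. */
    #include <stdint.h>

    typedef struct {
        uint64_t addr;         /* virtual address to translate */
        int access_type;       /* read / write / fetch         */
        int mmu_idx;
    } TranslateParams;

    typedef struct {
        uint64_t paddr;        /* resulting physical address   */
        int prot;              /* page protection bits         */
        int page_size;
    } TranslateResults;

    typedef struct {
        int exception_index;   /* e.g. page fault              */
        int error_code;
        uint64_t cr2;          /* faulting address             */
    } TranslateFault;

    /* mmu_translate() would then look roughly like:
     *   bool mmu_translate(const TranslateParams *in,
     *                      TranslateResults *out, TranslateFault *fault);
     * returning true on success, as described above. */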
2022-10-18  target/i386: Direct call get_hphys from mmu_translate  (Richard Henderson)
Use a boolean to control the call to get_hphys instead of passing a null function pointer. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-3-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use MMUAccessType across excp_helper.c  (Richard Henderson)
Replace int is_write1 and magic numbers with the proper MMUAccessType access_type and enumerators. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221002172956.265735-2-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Save and restore pc_save before tcg_remove_ops_after  (Richard Henderson)
Restore pc_save while undoing any state change that may have happened while decoding the instruction. Leave a TODO about removing all of that when the table-based decoder is complete. Cc: Paolo Bonzini <pbonzini@redhat.com> Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221016222303.288551-1-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  linux-user: i386/signal: support XSAVE/XRSTOR for signal frame fpstate  (Paolo Bonzini)
Add support for saving/restoring extended save states when signals are delivered. This allows using AVX, MPX or PKRU registers in signal handlers. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  x86: Implement MSR_CORE_THREAD_COUNT MSR  (Alexander Graf)
Intel CPUs starting with Haswell-E implement a new MSR called MSR_CORE_THREAD_COUNT which exposes the number of threads and cores inside of a package. This MSR is used by XNU to populate internal data structures and not implementing it prevents virtual machines with more than 1 vCPU from booting if the emulated CPU generation is at least Haswell-E. This patch propagates the existing hvf logic from patch 027ac0cb516 ("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") to TCG. Signed-off-by: Alexander Graf <agraf@csgraf.de> Message-Id: <20221004225643.65036-2-agraf@csgraf.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
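For reference, MSR 0x35 packs the thread count into bits 15:0 and the core count into bits 31:16 (per the Intel SDM). A hypothetical helper composing the value; this is not the QEMU or hvf implementation:

    /* Hypothetical helper -- illustration of the bit layout only. */
    #include <stdint.h>

    static uint64_t msr_core_thread_count(uint32_t cores, uint32_t threads)
    {
        return ((uint64_t)(cores & 0xffff) << 16) | (threads & 0xffff);
    }
    /* e.g. a 4-core / 8-thread package reads back 0x00040008 */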
2022-10-11  target/i386: Enable TARGET_TB_PCREL  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-27-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Inline gen_jmp_im  (Richard Henderson)
Expand this function at each of its callers. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-26-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Add cpu_eip  (Richard Henderson)
Create a tcg global temp for this, and use it instead of explicit stores. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-25-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Create eip_cur_tl  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-24-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Merge gen_jmp_tb and gen_goto_tb into gen_jmp_rel  (Richard Henderson)
These functions have only one caller, and the logic is more obvious this way. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-23-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Remove MemOp argument to gen_op_j*_ecx  (Richard Henderson)
These functions are always passed aflag, so we might as well read it from DisasContext directly. While we're at it, use a common subroutine for these two functions. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-22-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Use gen_jmp_rel for DISAS_TOO_MANY  (Richard Henderson)
With gen_jmp_rel, we may chain between two translation blocks which may only be separated because of TB size limits. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-21-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Use gen_jmp_rel for gen_jcc  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-20-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Use gen_jmp_rel for loop, repz, jecxz insns  (Richard Henderson)
With gen_jmp_rel, we may chain to the next tb instead of merely writing to eip and exiting. For repz, subtract cur_insn_len to restart the current insn. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-19-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Create gen_jmp_rel  (Richard Henderson)
Create a common helper for pc-relative branches. The jmp jb insn was missing a mask for CODE32. In all cases the CODE64 check was incorrectly placed, allowing PREFIX_DATA to truncate %rip to 16 bits. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-18-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
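The masking rule implied above can be sketched as plain arithmetic (illustrative only; the real gen_jmp_rel emits TCG ops rather than computing the value directly): outside 64-bit mode the destination wraps to the operand size, while in 64-bit mode a 66h data prefix must not truncate %rip.

    /* Illustrative sketch of the masking, not the actual gen_jmp_rel. */
    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t jmp_dest(uint64_t pc_next, int64_t rel,
                             bool code64, bool data16)
    {
        uint64_t dest = pc_next + rel;
        if (code64) {
            return dest;                       /* never truncate %rip      */
        }
        return data16 ? (dest & 0xffff)        /* 16-bit operand size      */
                      : (dest & 0xffffffffu);  /* 32-bit code segment mask */
    }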
2022-10-11  target/i386: Use DISAS_TOO_MANY to exit after gen_io_start  (Richard Henderson)
We can set is_jmp early, using only one if, and let that be overwritten by gen_rep*'s calls to gen_jmp_tb. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-17-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Create eip_next_*  (Richard Henderson)
Create helpers for loading the address of the next insn. Use tcg_constant_* in adjacent code where convenient. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-16-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Truncate values for lcall_real to i32  (Richard Henderson)
Use i32 not int or tl for eip and cs arguments. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-15-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Introduce DISAS_JUMP  (Richard Henderson)
Drop the unused dest argument to gen_jr(). Remove most of the calls to gen_jr, and use DISAS_JUMP. Remove some unused loads of eip for lcall and ljmp. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-14-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Remove cur_eip, next_eip arguments to gen_repz*  (Richard Henderson)
All callers pass s->base.pc_next and s->pc, which we can just as well compute within the functions. Pull out common helpers and reduce the amount of code under macros. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-13-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Create cur_insn_len, cur_insn_len_i32  (Richard Henderson)
Create common routines for computing the length of the insn. Use tcg_constant_i32 in the new function, while we're at it. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-12-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Use DISAS_EOB_ONLY  (Richard Henderson)
Replace lone calls to gen_eob() with the new enumerator. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-11-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Use DISAS_EOB_NEXT  (Richard Henderson)
Replace sequences of gen_update_cc_op, gen_update_eip_next, and gen_eob with the new is_jmp enumerator. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-10-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Use DISAS_EOB* in gen_movl_seg_T0  (Richard Henderson)
Set is_jmp properly in gen_movl_seg_T0, so that the callers need do nothing special. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-9-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Introduce DISAS_EOB*  (Richard Henderson)
Add a few DISAS_TARGET_* aliases to reduce the number of calls to gen_eob() and gen_eob_inhibit_irq(). So far, only update i386_tr_translate_insn for exiting the block because of single-step or previous inhibit irq. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-8-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Create gen_update_eip_next  (Richard Henderson)
Sync EIP before exiting a translation block. Replace all gen_jmp_im that use s->pc. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-7-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Create gen_update_eip_cur  (Richard Henderson)
Like gen_update_cc_op, sync EIP before doing something that could raise an exception. Replace all gen_jmp_im that use s->base.pc_next. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-6-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Remove cur_eip, next_eip arguments to gen_interrupt  (Richard Henderson)
All callers pass s->base.pc_next and s->pc, which we can just as well compute within the function. Adjust to use tcg_constant_i32 while we're at it. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-5-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Remove cur_eip argument to gen_exception  (Richard Henderson)
All callers pass s->base.pc_next - s->cs_base, which we can just as well compute within the function. Note the special case of EXCP_VSYSCALL in which s->cs_base wasn't subtracted, but cs_base is always zero in 64-bit mode, when vsyscall is used. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-4-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Return bool from disas_insn  (Richard Henderson)
Instead of returning the new pc, which is present in DisasContext, return true if an insn was translated. This is false when we detect a page crossing and must undo the insn under translation. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221001140935.465607-3-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-11  target/i386: Remove pc_start  (Richard Henderson)
The DisasContext member and the disas_insn local variable of the same name are identical to DisasContextBase.pc_next. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20221001140935.465607-2-richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-04  accel/tcg: Introduce tb_pc and log_pc  (Richard Henderson)
The availability of tb->pc will shortly be conditional. Introduce accessor functions to minimize ifdefs. Pass around a known pc to places like tcg_gen_code, where the caller must already have the value. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-09-19  target/i386: introduce insn_get_addr  (Paolo Bonzini)
The "O" operand type in the Intel SDM needs to load an 8- to 64-bit unsigned value, while insn_get is limited to 32 bits. Extract the code out of disas_insn and into a separate function. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-19  target/i386: REPZ and REPNZ are mutually exclusive  (Paolo Bonzini)
If both prefixes are present, the later one wins; make that visible in s->prefix too. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
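A minimal sketch of "the later prefix wins" during prefix parsing; the flag names mirror the idea, but the values are illustrative rather than necessarily QEMU's:

    /* Illustrative prefix accumulation -- recording one REP prefix
     * clears the other, so only the last one survives. */
    #define PREFIX_REPZ  0x01
    #define PREFIX_REPNZ 0x02

    static int add_prefix(int prefixes, unsigned char byte)
    {
        switch (byte) {
        case 0xf3:  /* REP/REPE/REPZ */
            return (prefixes & ~PREFIX_REPNZ) | PREFIX_REPZ;
        case 0xf2:  /* REPNE/REPNZ   */
            return (prefixes & ~PREFIX_REPZ) | PREFIX_REPNZ;
        default:
            return prefixes;
        }
    }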
2022-09-19  target/i386: fix INSERTQ implementation  (Paolo Bonzini)
INSERTQ is defined to not modify any bits in the lower 64 bits of the destination, other than the ones being replaced with bits from the source operand. QEMU instead is using unshifted bits from the source for those bits. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-18  target/i386: Raise #GP on unaligned m128 accesses when required.  (Paolo Bonzini)
Many instructions which load/store 128-bit values are supposed to raise #GP when the memory operand isn't 16-byte aligned. This includes:

- Instructions explicitly requiring memory alignment (Exceptions Type 1 in the "AVX and SSE Instruction Exception Specification" section of the SDM)
- Legacy SSE instructions that load/store 128-bit values (Exceptions Types 2 and 4).

This change sets MO_ALIGN_16 on 128-bit memory accesses that require 16-byte alignment. It adds cpu_record_sigbus and cpu_do_unaligned_access hooks that simulate a #GP exception in qemu-user and qemu-system, respectively.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Message-Id: <20220830034816.57091-2-ricky@rzhou.org>
[Do not bother checking PREFIX_VEX, since AVX is not supported. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
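For reference, the alignment condition being enforced is simply that the low four bits of the effective address are zero; a hypothetical check (the actual change works through MO_ALIGN_16 and the unaligned-access hooks rather than an explicit test):

    /* Hypothetical alignment predicate -- illustration only. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool m128_operand_aligned(uint64_t vaddr)
    {
        return (vaddr & 0xf) == 0;   /* misaligned -> raise #GP(0) */
    }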
2022-09-06  target/i386: Make translator stop before the end of a page  (Ilya Leoshkevich)
Right now translator stops right *after* the end of a page, which breaks reporting of fault locations when the last instruction of a multi-insn translation block crosses a page boundary.

An implementation, like the one arm and s390x have, would require an i386 length disassembler, which is burdensome to maintain. Another alternative would be to single-step at the end of a guest page, but this may come with a performance impact.

Fix by snapshotting disassembly state and restoring it after we figure out we crossed a page boundary. This includes rolling back cc_op updates and emitted ops.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1143
Message-Id: <20220817150506.592862-4-iii@linux.ibm.com>
[rth: Simplify end-of-insn cross-page checks.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>