path: root/target
Age  Commit message  Author
2022-10-26  target/hexagon: Convert to tcg_ops restore_state_to_opc  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26  target/cris: Convert to tcg_ops restore_state_to_opc  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26  target/avr: Convert to tcg_ops restore_state_to_opc  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26  target/arm: Convert to tcg_ops restore_state_to_opc  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26  target/alpha: Convert to tcg_ops restore_state_to_opc  (Richard Henderson)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2022-10-26  accel/tcg: Simplify page_get/alloc_target_data  (Richard Henderson)
Since the only user, Arm MTE, always requires allocation, merge the get and alloc functions to always produce a non-null result. Also assume that the user has already checked page validity. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
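To illustrate the shape of the merged accessor, here is a minimal sketch; PageRecord and its target_data field are hypothetical names, and TARGET_PAGE_DATA_SIZE is the constant introduced by the following commit.

    #include <glib.h>

    /* Hypothetical per-page record; only the lazily allocated data pointer
     * matters for this sketch. */
    typedef struct PageRecord {
        void *target_data;
    } PageRecord;

    /* The combined lookup always returns usable storage, allocating the
     * fixed-size block on first use.  Callers are assumed to have already
     * validated the page, so they never need a NULL check. */
    static void *page_get_target_data_sketch(PageRecord *p)
    {
        if (p->target_data == NULL) {
            p->target_data = g_malloc0(TARGET_PAGE_DATA_SIZE);
        }
        return p->target_data;
    }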
2022-10-26  accel/tcg: Make page_alloc_target_data allocation constant  (Richard Henderson)
Use a constant target data allocation size for all pages. This will be necessary to reduce overhead of page tracking. Since TARGET_PAGE_DATA_SIZE is now required, we can use this to omit data tracking for targets that don't require it. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
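For scale, a definition along these lines would size the block for Arm MTE, which stores one 4-bit allocation tag per 16-byte granule; the exact macro placement and shift are assumptions, not a quote of the patch.

    /* Two 4-bit tags per byte, one tag per 16-byte granule: a page needs
     * TARGET_PAGE_SIZE / 32 bytes of tag storage.  Targets that leave
     * TARGET_PAGE_DATA_SIZE undefined get no per-page data tracking at all. */
    #define TARGET_PAGE_DATA_SIZE  (TARGET_PAGE_SIZE >> 5)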
2022-10-25  Merge tag 'trivial-branch-for-7.2-pull-request' of https://gitlab.com/laurent_vivier/qemu into staging  (Stefan Hajnoczi)
Pull request

# gpg: Signature made Tue 25 Oct 2022 03:53:08 EDT
# gpg: using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
# gpg: issuer "laurent@vivier.eu"
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [full]
# gpg: aka "Laurent Vivier <laurent@vivier.eu>" [full]
# gpg: aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [full]
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F 5173 F30C 38BD 3F2F BE3C

* tag 'trivial-branch-for-7.2-pull-request' of https://gitlab.com/laurent_vivier/qemu:
  accel/tcg/tcg-accel-ops-rr: fix trivial typo
  ui: remove useless typecasts
  treewide: Remove the unnecessary space before semicolon
  include/hw/scsi/scsi.h: Remove unused scsi_legacy_handle_cmdline() prototype
  vmstate-static-checker:remove this redundant return
  tests/qtest: vhost-user-test: Fix [-Werror=format-overflow=] build warning
  tests/qtest: migration-test: Fix [-Werror=format-overflow=] build warning
  Drop useless casts from g_malloc() & friends to pointer
  elf2dmp: free memory in failure
  hw/core: Tidy up unnecessary casting away of const
  .gitignore: add multiple items to .gitignore

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-24  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Stefan Hajnoczi)
* target/i386: new decoder bugfix
* target/i386: complete x86-v3 support for TCG

# gpg: Signature made Sat 22 Oct 2022 03:07:16 EDT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  target/i386: implement FMA instructions
  target/i386: implement F16C instructions
  target/i386: introduce function to set rounding mode from FPCW or MXCSR bits
  target/i386: decode-new: avoid out-of-bounds access to xmm_regs[-1]

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-24  treewide: Remove the unnecessary space before semicolon  (Bin Meng)
%s/return ;/return; Signed-off-by: Bin Meng <bmeng@tinylab.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com> Message-Id: <20221024072802.457832-1-bmeng@tinylab.org> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
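The change itself is purely mechanical; an illustrative before/after in C:

    /* Before the cleanup: a stray space before the semicolon. */
    static void noop_before(void)
    {
        return ;
    }

    /* After the cleanup. */
    static void noop_after(void)
    {
        return;
    }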
2022-10-22  Drop useless casts from g_malloc() & friends to pointer  (Markus Armbruster)
These memory allocation functions return void *, and casting to another pointer type is useless clutter. Drop these casts. If you really want another pointer type, consider g_new(). Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <20220923120025.448759-3-armbru@redhat.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
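A before/after illustration, with Foo standing in as a placeholder type:

    #include <glib.h>

    typedef struct Foo { int x; } Foo;

    static void example(void)
    {
        /* Before: casting g_malloc()'s void * return is useless clutter. */
        Foo *a = (Foo *)g_malloc(sizeof(Foo));

        /* After: drop the cast, or better, use the type-safe g_new() macro. */
        Foo *b = g_malloc(sizeof(Foo));
        Foo *c = g_new(Foo, 1);

        g_free(a);
        g_free(b);
        g_free(c);
    }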
2022-10-22  target/i386: implement FMA instructions  (Paolo Bonzini)
The only issue with FMA instructions is that there are _a lot_ of them (30 opcodes, each of which comes in up to 4 versions depending on VEX.W and VEX.L; a total of 96 possibilities). However, they can be implemented with only 6 helpers, two for scalar operations and four for packed operations. (Scalar versions do not do any merging; they only affect the bottom 32 or 64 bits of the output operand. Therefore, there are no separate XMM and YMM versions of the scalar helpers). First, we can reduce the number of helpers to one third by passing four operands (one output and three inputs); the reordering of which operands go to the multiply and which go to the add is done in emit.c. Second, the different instructions also dispatch to the same softfloat function, so the flags for float32_muladd and float64_muladd are passed in the helper as int arguments, with a little extra complication to handle FMADDSUB and FMSUBADD. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
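As a hedged sketch of that scheme (the helper name is invented; ZMM_S and float32_muladd are existing QEMU accessors/softfloat calls), a single packed-single helper can cover FMADD/FMSUB/FNMADD/FNMSUB by forwarding the float_muladd_* flag bits:

    /* One packed-single helper for all FMA forms: 'flags' carries bits such
     * as float_muladd_negate_c or float_muladd_negate_product, so the opcode
     * only selects operand routing and the flag value (XMM case shown). */
    void helper_fma_ps_sketch(CPUX86State *env, ZMMReg *d, ZMMReg *v1,
                              ZMMReg *v2, ZMMReg *v3, uint32_t flags)
    {
        for (int i = 0; i < 4; i++) {
            d->ZMM_S(i) = float32_muladd(v1->ZMM_S(i), v2->ZMM_S(i),
                                         v3->ZMM_S(i), flags, &env->sse_status);
        }
    }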
2022-10-20  target/i386: implement F16C instructions  (Paolo Bonzini)
F16C only consists of two instructions, which are a bit peculiar nevertheless. First, they access only the low half of an YMM or XMM register for the packed-half operand; the exact size still depends on the VEX.L flag. This is similar to the existing avx_movx flag, but not exactly because avx_movx is hardcoded to affect operand 2. To this end I added a "ph" format name; it's possible to reuse this approach for the VPMOVSX and VPMOVZX instructions, though that would also require adding two more formats for the low-quarter and low-eighth of an operand. Second, VCVTPS2PH is somewhat weird because it *stores* the result of the instruction into memory rather than loading it. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-20  target/i386: introduce function to set rounding mode from FPCW or MXCSR bits  (Paolo Bonzini)
VROUND, FSTCW and STMXCSR all have to perform the same conversion from x86 rounding modes to softfloat constants. Since the ISA is consistent on the meaning of the two-bit rounding modes, extract the common code into a wrapper for set_float_rounding_mode. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
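A hedged illustration of such a wrapper; the function name is made up, while the two-bit encoding and the softfloat constants are the standard ones:

    /* x86 rounding control: 00 = nearest-even, 01 = toward -inf,
     * 10 = toward +inf, 11 = toward zero.  The same 2-bit field appears in
     * FPCW[11:10] and MXCSR[14:13], so one mapping serves both. */
    static void set_x86_rounding_mode_sketch(unsigned rc, float_status *s)
    {
        static const FloatRoundMode map[4] = {
            float_round_nearest_even,
            float_round_down,
            float_round_up,
            float_round_to_zero,
        };
        set_float_rounding_mode(map[rc & 3], s);
    }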
2022-10-20  target/i386: decode-new: avoid out-of-bounds access to xmm_regs[-1]  (Paolo Bonzini)
If the destination is a memory operand, op->n is -1. Going through the tcg_gen_gvec_dup_imm path is both useless (the value has been stored by the gen_* function already) and wrong because of the out-of-bounds access. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-20  target/arm: Enable TARGET_TB_PCREL  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-10-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Introduce gen_pc_plus_diff for aarch32  (Richard Henderson)
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Introduce gen_pc_plus_diff for aarch64  (Richard Henderson)
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Change gen_jmp* to work on displacements  (Richard Henderson)
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Remove gen_exception_internal_insn pc argument  (Richard Henderson)
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values. Since we always pass dc->pc_curr, fold the arithmetic to zero displacement. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Change gen_exception_insn* to work on displacements  (Richard Henderson)
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Change gen_*set_pc_im to gen_*update_pc  (Richard Henderson)
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values by passing in pc difference. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Change gen_goto_tb to work on displacements  (Richard Henderson)
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Introduce curr_insn_len  (Richard Henderson)
A simple helper to retrieve the length of the current insn. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221020030641.2066807-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
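A plausible shape for the helper, given that the translator already tracks the current instruction's address and the next one; this is a sketch, not necessarily the committed code:

    /* Length in bytes of the instruction currently being translated:
     * 2 for a 16-bit Thumb encoding, 4 otherwise. */
    static inline int curr_insn_len(DisasContext *s)
    {
        return s->base.pc_next - s->pc_curr;
    }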
2022-10-20  target/arm: Use bool consistently for get_phys_addr subroutines  (Richard Henderson)
The return type of the functions is already bool, but in a few instances we used an integer type with the return statement. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Split out get_phys_addr_twostage  (Richard Henderson)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Use softmmu tlbs for page table walking  (Richard Henderson)
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and arm_ldq_ptw. Use probe_access_full to find the host address, and if one is found, use a host load. If the probe fails, we've got our fault info already. On the off chance that page tables are not in RAM, continue to use the address_space_ld* functions. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Move be test for regime into S1TranslateResult  (Richard Henderson)
Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Plumb debug into S1Translate  (Richard Henderson)
Before using softmmu page tables for the ptw, plumb down a debug parameter so that we can query page table entries from gdbstub without modifying cpu state. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Split out S1Translate type  (Richard Henderson)
Consolidate most of the inputs and outputs of S1_ptw_translate into a single structure. Plumb this through arm_ld*_ptw from the controlling get_phys_addr_* routine. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Restrict tlb flush from vttbr_write to vmid change  (Richard Henderson)
Compare only the VMID field when considering whether we need to flush. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
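A hedged sketch of what the write hook's check looks like; the 16-bit VMID field position in VTTBR_EL2 and the helper names are assumptions based on the surrounding series, not a quote of the patch:

    static void vttbr_write_sketch(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
    {
        /* Only a change of VMID (VTTBR_EL2 bits [63:48]) invalidates cached
         * EL1&0 translations; rewriting the BADDR field alone needs no flush. */
        if (extract64(raw_read(env, ri), 48, 16) != extract64(value, 48, 16)) {
            tlb_flush_by_mmuidx(env_cpu(env), alle1_tlbmask(env));
        }
        raw_write(env, ri, value);
    }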
2022-10-20  target/arm: Move ARMMMUIdx_Stage2 to a real tlb mmu_idx  (Richard Henderson)
We had been marking this ARM_MMU_IDX_NOTLB, move it to a real tlb. Flush the tlb when invalidating stage 1+2 translations. Re-use alle1_tlbmask() for other instances of EL1&0 + Stage2. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Add ARMMMUIdx_Phys_{S,NS}  (Richard Henderson)
Not yet used, but add mmu indexes for 1-1 mapping to physical addresses. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Use probe_access_full for BTI  (Richard Henderson)
Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit. In is_guarded_page, use probe_access_full instead of just guessing that the tlb entry is still present. Also handles the FIXME about executing from device memory. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Use probe_access_full for MTE  (Richard Henderson)
The CPUTLBEntryFull structure now stores the original pte attributes, as well as the physical address. Therefore, we no longer need a separate bit in MemTxAttrs, nor do we need to walk the tree of memory regions. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: Enable TARGET_PAGE_ENTRY_EXTRA  (Richard Henderson)
Copy attrs and shareability into the TLB. This will eventually be used by S1_ptw_translate to report stage1 translation failures, and by do_ats_write to fill in PAR_EL1. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20221011031911.2408754-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2022-10-20  target/arm: update the cortex-a15 MIDR to latest rev  (Alex Bennée)
QEMU doesn't model micro-architectural details, which includes most chip errata. The ARM_ERRATA_798181 workaround in the Linux kernel (see erratum_a15_798181_init) currently detects QEMU's cortex-a15 as broken and triggers additional expensive TLB flushes as a result. Change the MIDR to report what the latest silicon would (r4p0). We explicitly set the IMPDEF revidr bits to 0 because we don't need to set anything other than the silicon revision to indicate these flushes are not needed. This cuts about 5s from my Debian kernel boot with the latest 6.0rc1 kernel (29s->24s). Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Anders Roxell <anders.roxell@linaro.org> Message-id: 20221010153225.506394-1-alex.bennee@linaro.org Cc: Arnd Bergmann <arnd@linaro.org> Cc: Anders Roxell <anders.roxell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Anders Roxell <anders.roxell@linaro.org> Message-Id: <20220906172257.2776521-1-alex.bennee@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
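Concretely, the CPU model ends up advertising something like the following; 0x414fc0f0 is the architectural MIDR encoding for an r4p0 Cortex-A15, while the exact init-function context is an assumption:

    /* MIDR fields: implementer 0x41 (Arm), variant 0x4 (r4), architecture 0xf,
     * part number 0xc0f (Cortex-A15), revision 0x0 (p0).  REVIDR is left 0 so
     * no IMPDEF erratum fixes are advertised as required. */
    static void cortex_a15_initfn_excerpt(ARMCPU *cpu)
    {
        /* ... other model setup elided ... */
        cpu->midr = 0x414fc0f0;
        cpu->revidr = 0;
    }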
2022-10-18  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Stefan Hajnoczi)
* configure: don't enable firmware for targets that are not built
* configure: don't use strings(1)
* scsi, target/i386: switch from device_legacy_reset() to device_cold_reset()
* target/i386: AVX support for TCG
* target/i386: fix SynIC SINT assertion failure on guest reset
* target/i386: Use atomic operations for pte updates and other cleanups
* tests/tcg: extend SSE tests to AVX
* virtio-scsi: send "REPORTED LUNS CHANGED" sense data upon disk hotplug events

# gpg: Signature made Tue 18 Oct 2022 07:58:31 EDT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (53 commits)
  target/i386: remove old SSE decoder
  target/i386: move 3DNow to the new decoder
  tests/tcg: extend SSE tests to AVX
  target/i386: Enable AVX cpuid bits when using TCG
  target/i386: implement VLDMXCSR/VSTMXCSR
  target/i386: implement XSAVE and XRSTOR of AVX registers
  target/i386: reimplement 0x0f 0x28-0x2f, add AVX
  target/i386: reimplement 0x0f 0x10-0x17, add AVX
  target/i386: reimplement 0x0f 0xc2, 0xc4-0xc6, add AVX
  target/i386: reimplement 0x0f 0x38, add AVX
  target/i386: Use tcg gvec ops for pmovmskb
  target/i386: reimplement 0x0f 0x3a, add AVX
  target/i386: clarify (un)signedness of immediates from 0F3Ah opcodes
  target/i386: reimplement 0x0f 0xd0-0xd7, 0xe0-0xe7, 0xf0-0xf7, add AVX
  target/i386: reimplement 0x0f 0x70-0x77, add AVX
  target/i386: reimplement 0x0f 0x78-0x7f, add AVX
  target/i386: reimplement 0x0f 0x50-0x5f, add AVX
  target/i386: reimplement 0x0f 0xd8-0xdf, 0xe8-0xef, 0xf8-0xff, add AVX
  target/i386: reimplement 0x0f 0x60-0x6f, add AVX
  target/i386: Introduce 256-bit vector helpers
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-18  Merge tag 'pull-ppc-20221017' of https://gitlab.com/danielhb/qemu into staging  (Stefan Hajnoczi)
ppc patch queue for 2022-10-18:

This queue contains improvements in the e500 and ppc4xx boards, changes in the maintainership of the project, a new QMP/HMP command and bug fixes:

- Cedric is stepping back from qemu-ppc maintainership;
- ppc4xx_sdram: QOMification and clean ups;
- e500: add new types of flash and clean ups;
- QMP/HMP: introduce dumpdtb command;
- spapr_pci, booke doorbell interrupt and xvcmp* bit fixes;

The 'dumpdtb' implementation is also making changes to RISC-V files that were acked by Alistair Francis and are being included in this queue.

# gpg: Signature made Mon 17 Oct 2022 15:16:34 EDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164

* tag 'pull-ppc-20221017' of https://gitlab.com/danielhb/qemu: (38 commits)
  hw/riscv: set machine->fdt in spike_board_init()
  hw/riscv: set machine->fdt in sifive_u_machine_init()
  hw/ppc: set machine->fdt in spapr machine
  hw/ppc: set machine->fdt in pnv_reset()
  hw/ppc: set machine->fdt in pegasos2_machine_reset()
  hw/ppc: set machine->fdt in xilinx_load_device_tree()
  hw/ppc: set machine->fdt in sam460ex_load_device_tree()
  hw/ppc: set machine->fdt in bamboo_load_device_tree()
  hw/nios2: set machine->fdt in nios2_load_dtb()
  qmp/hmp, device_tree.c: introduce dumpdtb
  hw/ppc/spapr_pci.c: Use device_cold_reset() rather than device_legacy_reset()
  target/ppc: Fix xvcmp* clearing FI bit
  hw/ppc/e500: Remove if statement which is now always true
  hw/ppc/mpc8544ds: Add platform bus
  hw/ppc/mpc8544ds: Rename wrongly named method
  hw/ppc/e500: Reduce usage of sysbus API
  docs/system/ppc/ppce500: Add heading for networking chapter
  hw/gpio/meson: Introduce dedicated config switch for hw/gpio/mpc8xxx
  hw/ppc/meson: Allow e500 boards to be enabled separately
  ppc440_uc.c: Remove unneeded parenthesis
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2022-10-18  target/i386: remove old SSE decoder  (Paolo Bonzini)
With all SSE (and AVX!) instructions now implemented in disas_insn_new, it's possible to remove gen_sse, as well as the helpers for instructions that now use gvec. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: move 3DNow to the new decoder  (Paolo Bonzini)
This adds another kind of weirdness when you thought you had seen it all: an opcode byte that comes _after_ the address, not before. It's not worth adding a new X86_SPECIAL_* constant for it, but it's actually not unlike VCMP; so, forgive me for exploiting the similarity and just deciding to dispatch to the right gen_helper_* call in a single code generation function. In fact, the old decoder had a bug where s->rip_offset should have been set to 1 for 3DNow! instructions, and it's fixed now. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Enable AVX cpuid bits when using TCG  (Paul Brook)
Include AVX, AVX2 and VAES in the guest cpuid features supported by TCG. Signed-off-by: Paul Brook <paul@nowt.org> Message-Id: <20220424220204.2493824-40-paul@nowt.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: implement VLDMXCSR/VSTMXCSR  (Paolo Bonzini)
These are exactly the same as the non-VEX version, but one has to be careful that only VEX.L=0 is allowed. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: implement XSAVE and XRSTOR of AVX registers  (Paolo Bonzini)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: reimplement 0x0f 0x28-0x2f, add AVX  (Paolo Bonzini)
Here the code is a bit uglier due to the truncation and extension of registers to and from 32-bit. There is also a mistake in the manual with respect to the size of the memory operand of CVTPS2PI and CVTTPS2PI, reported by Ricky Zhou. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: reimplement 0x0f 0x10-0x17, add AVX  (Paolo Bonzini)
These are mostly moves, and yet are a total pain. The main issue is that:

1) some instructions are selected by mod==11 (register operand) vs. mod=00/01/10 (memory operand)

2) stores to memory are two-operand operations, while the 3-register and load-from-memory versions operate on the entire contents of the destination; this makes it easier to separate the gen_* function for the store case

3) it's inefficient to load into xmm_T0 only to move the value out again, so the gen_* function for the load case is separated too

The manual also has various mistakes in the operands here, for example the store case of MOVHPS operates on a 128-bit source (albeit discarding the bottom 64 bits) and therefore should be Mq,Vdq rather than Mq,Vq. Likewise for the destination and source of MOVHLPS.

VUNPCK?PS and VUNPCK?PD are the same as VUNPCK?DQ and VUNPCK?QDQ, but encoded as prefixes rather than separate operands. The helpers can be reused however.

For MOVSLDUP, MOVSHDUP and MOVDDUP I chose to reimplement them as helpers. I named the helper for MOVDDUP "movdldup" in preparation for possible future introduction of MOVDHDUP and to clarify the similarity with MOVSLDUP.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: reimplement 0x0f 0xc2, 0xc4-0xc6, add AVX  (Paolo Bonzini)
Nothing special going on here, for once. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: reimplement 0x0f 0x38, add AVX  (Paolo Bonzini)
There are several special cases here:

1) extending moves have different widths for the helpers vs. for the memory loads, and the width for memory loads depends on VEX.L too. This is represented by X86_SPECIAL_AVXExtMov.

2) some instructions, such as variable-width shifts, select the vector element size via REX.W.

3) VSIB instructions (VGATHERxPy, VPGATHERxy) are also part of this group, and they have (among other things) two output operands.

4) the macros for 4-operand blends (which are under 0x0f 0x3a) have to be extended to support 2-operand blends. The 2-operand variant actually came a few years earlier, but it is clearer to implement them in the opposite order.

X86_TYPE_WM, introduced earlier for unaligned loads, is reused for helpers that accept a Reg* but have an M argument.

These three-byte opcodes also include AVX new instructions, for which the helpers were originally implemented by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: Use tcg gvec ops for pmovmskb  (Richard Henderson)
As pmovmskb is used by strlen et al, this is the third highest overhead SSE operation, at 0.8%. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> [Reorganize to generate code for any vector size. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-10-18  target/i386: reimplement 0x0f 0x3a, add AVX  (Paolo Bonzini)
The more complicated operations here are insertions and extractions. Otherwise, there are just more entries than usual because the PS/PD/SS/SD variations are encoded in the opcode rather than in the prefixes. These three-byte opcodes also include AVX new instructions, whose implementation in the helpers was originally done by Paul Brook <paul@nowt.org>. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>