path: root/target
Age  Commit message  Author
2021-09-01  target/riscv: Use {get,dest}_gpr for RVA  (Richard Henderson)
Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210823195529.560295-20-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Reorg csr instructions  (Richard Henderson)
Introduce csrr and csrw helpers, for read-only and write-only insns. Note that we do not properly implement this in riscv_csrrw, in that we cannot distinguish a true read-only access (rs1 == 0) from a zero write_mask produced by another source register whose value happens to be zero -- the latter should still raise an exception for read-only registers. Only issue gen_io_start for CF_USE_ICOUNT. Use ctx->zero for csrrc. Use get_gpr and dest_gpr. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210823195529.560295-19-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Fix hgeie, hgeip  (Richard Henderson)
We failed to write into *val for these read functions; replace them with read_zero. Only warn about an unsupported non-zero value when a non-zero value is actually written. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210823195529.560295-18-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
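For reference, the read-only-zero handler pattern this refers to is tiny; a sketch, assuming the usual target/riscv/csr.c handler signature (treat the exact types as an assumption):

    static RISCVException read_zero(CPURISCVState *env, int csrno,
                                    target_ulong *val)
    {
        *val = 0;                  /* always store a result, even when it is zero */
        return RISCV_EXCP_NONE;
    }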
2021-09-01  target/riscv: Fix rmw_sip, rmw_vsip, rmw_hsip vs write-only operation  (Richard Henderson)
We distinguish write-only by passing ret_value as NULL. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210823195529.560295-17-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
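A sketch of the convention, assuming the riscv_csrrw() signature with a ret_value/new_value/write_mask triple; the variable names are illustrative:

    /* Caller side: a write-only access passes NULL, so no read-back is wanted. */
    riscv_csrrw(env, csrno, NULL, new_value, write_mask);

    /* Handler side: only report the old value when the caller asked for it. */
    if (ret_value) {
        *ret_value = old_value;
    }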
2021-09-01  target/riscv: Use {get, dest}_gpr for integer load/store  (Richard Henderson)
Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-16-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Use get_gpr in branches  (Richard Henderson)
Narrow the scope of t0 in trans_jalr. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-15-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Use extracts for sraiw and srliw  (Richard Henderson)
These operations can be done in one instruction on some hosts. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210823195529.560295-14-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
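For example, sraiw can be expressed as a single sign-extracting field extract, which TCG maps to one instruction on hosts that have it; a sketch with an illustrative function name:

    /* sraiw rd, rs1, shamt: arithmetic-shift the low 32 bits right by shamt,
     * then sign-extend to the register width -- which is exactly a signed
     * extract of bits [shamt, 31]. */
    static void gen_sraiw(TCGv dst, TCGv src, int shamt)
    {
        tcg_gen_sextract_tl(dst, src, shamt, 32 - shamt);
    }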
2021-09-01  target/riscv: Use DisasExtend in shift operations  (Richard Henderson)
These operations are greatly simplified by ctx->w, which allows us to fold gen_shiftw into gen_shift. Split gen_shifti into gen_shift_imm_{fn,tl} like we do for gen_arith_imm_{fn,tl}. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-13-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Add DisasExtend to gen_unary  (Richard Henderson)
Use ctx->w for ctpopw, which is the only one that can re-use the generic algorithm for the narrow operation. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-12-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Move gen_* helpers for RVB  (Richard Henderson)
Move these helpers near their use by the trans_* functions within insn_trans/trans_rvb.c.inc. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-11-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Move gen_* helpers for RVM  (Richard Henderson)
Move these helpers near their use by the trans_* functions within insn_trans/trans_rvm.c.inc. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-10-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Use gen_arith for mulh and mulhu  (Richard Henderson)
Split out gen_mulh and gen_mulhu and use the common helper. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-9-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Remove gen_arith_div*  (Richard Henderson)
Use ctx->w and the enhanced gen_arith function. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-8-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Add DisasExtend to gen_arith*  (Richard Henderson)
Most arithmetic does not require extending the inputs. Exceptions include division, comparison and minmax. Begin using ctx->w, which allows elimination of gen_addw, gen_subw, gen_mulw. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-7-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Introduce DisasExtend and new helpers  (Richard Henderson)
Introduce get_gpr, dest_gpr, temp_new -- new helpers that do not force tcg globals into temps, returning a constant 0 for $zero as source and a new temp for $zero as destination. Introduce ctx->w for simplifying word operations, such as addw. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-6-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
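A rough sketch of the idea behind the new helpers; the exact signatures and field names in the patch may differ:

    /* Reads of x0 return a shared constant zero rather than a writable temp;
     * writes to x0 (or results that still need word extension) go to a temp. */
    static TCGv get_gpr(DisasContext *ctx, int reg_num, DisasExtend ext)
    {
        if (reg_num == 0) {
            return ctx->zero;              /* e.g. tcg_constant_tl(0), never written */
        }
        /* otherwise hand back cpu_gpr[reg_num], extended into a temp if 'ext' asks */
        return cpu_gpr[reg_num];
    }

    static TCGv dest_gpr(DisasContext *ctx, int reg_num)
    {
        if (reg_num == 0 || ctx->w) {
            return temp_new(ctx);          /* result is discarded or extended later */
        }
        return cpu_gpr[reg_num];
    }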
2021-09-01  target/riscv: Add DisasContext to gen_get_gpr, gen_set_gpr  (Richard Henderson)
We will require the context to handle RV64 word operations. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-5-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Clean up division helpers  (Richard Henderson)
Utilize the condition in the movcond more; this allows some of the setcond operations that were feeding into movcond to be removed. Do not write into source1 and source2. Rename "condN" to "tempN" and use the temporaries for more than holding conditions. Tested-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-4-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Use tcg_constant_*  (Richard Henderson)
Replace those uses of tcg_const_* where the allocation and the free are close together. Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823195529.560295-2-richard.henderson@linaro.org Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
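The conversion pattern, illustrated with a generic add (register and variable names are illustrative):

    /* Before: tcg_const_tl() allocates a mutable temp that must be freed. */
    TCGv t = tcg_const_tl(imm);
    tcg_gen_add_tl(dest, src, t);
    tcg_temp_free(t);

    /* After: tcg_constant_tl() returns a lifetime-managed constant that is
     * never written to and never freed. */
    tcg_gen_add_tl(dest, src, tcg_constant_tl(imm));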
2021-09-01  target/riscv: Add User CSRs read-only check  (LIU Zhiwei)
For U-mode CSRs, the read-only check is also needed. Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Message-id: 20210810014552.4884-1-zhiwei_liu@c-sky.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
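The check itself is an address-range test: the RISC-V privileged spec encodes read-only CSRs with both top bits of the CSR number set (csrno[11:10] == 0b11), regardless of privilege level. A small sketch of that predicate (helper name illustrative):

    /* A CSR whose number has bits [11:10] == 0b11 is architecturally read-only,
     * so any access with a non-zero write mask must raise an illegal
     * instruction exception, e.g.:
     *     if (write_mask && csr_is_read_only(csrno)) {
     *         return RISCV_EXCP_ILLEGAL_INST;
     *     }
     */
    static bool csr_is_read_only(int csrno)
    {
        return (csrno & 0xC00) == 0xC00;
    }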
2021-09-01  target/riscv: Don't wrongly override isa version  (LIU Zhiwei)
For some CPUs, the isa version has already been set in the cpu init function. Thus only override the isa version when the isa version is not set, or when users explicitly set a different isa version via cpu parameters. Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com> Reviewed-by: Bin Meng <bmeng.cn@gmail.com> Message-id: 20210811144612.68674-1-zhiwei_liu@c-sky.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01  target/riscv: Correct a comment in riscv_csrrw()  (Bin Meng)
When the privilege check fails, RISCV_EXCP_ILLEGAL_INST is returned, not -1 (RISCV_EXCP_NONE). Signed-off-by: Bin Meng <bmeng.cn@gmail.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210807141025.31808-1-bmeng.cn@gmail.com Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-08-27  Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.2-20210827' into staging  (Peter Maydell)
ppc patch queue 2021-08-27

First ppc pull request for qemu-6.2. As usual, there's a fair bit here, since it's been queued during the 6.1 freeze. Highlights are:
 * Some fixes for 128 bit arithmetic and some vector opcodes that use them
 * Significant improvements to the powernv to support POWER10 cpus (more to come though)
 * Several cleanups to the ppc softmmu code
 * A few other assorted fixes

# gpg: Signature made Fri 27 Aug 2021 08:09:12 BST
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* remotes/dg-gitlab/tags/ppc-for-6.2-20210827:
  target/ppc: fix vector registers access in gdbstub for little-endian
  include/qemu/int128.h: introduce bswap128s
  target/ppc: fix vextu[bhw][lr]x helpers
  include/qemu/int128.h: define struct Int128 according to the host endianness
  ppc/xive: Export xive_presenter_notify()
  ppc/xive: Export PQ get/set routines
  ppc/pnv: add a chip topology index for POWER10
  ppc/pnv: Distribute RAM among the chips
  ppc/pnv: Use a simple incrementing index for the chip-id
  ppc/pnv: powerpc_excp: Do not discard HDECR exception when entering power-saving mode
  ppc/pnv: Change the POWER10 machine to support DD2 only
  ppc: Add a POWER10 DD2 CPU
  ppc/pnv: update skiboot to commit 820d43c0a775.
  target/ppc: moved store_40x_sler to helper_regs.c
  target/ppc: moved ppc_store_sdr1 to mmu_common.c
  target/ppc: divided mmu_helper.c in 2 files
  spapr_pci: Fix leak in spapr_phb_vfio_get_loc_code() with g_autofree
  xive: Remove extra '0x' prefix in trace events

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-27  Merge remote-tracking branch 'remotes/armbru/tags/pull-error-2021-08-26' into staging  (Peter Maydell)
Error reporting patches for 2021-08-26

# gpg: Signature made Thu 26 Aug 2021 16:17:05 BST
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-error-2021-08-26:
  vl: Clean up -smp error handling
  Remove superfluous ERRP_GUARD()
  vhost: Clean up how VhostOpts method vhost_backend_init() fails
  vhost: Clean up how VhostOpts method vhost_get_config() fails
  microvm: Drop dead error handling in microvm_machine_state_init()
  migration: Handle migration_incoming_setup() errors consistently
  migration: Unify failure check for migrate_add_blocker()
  whpx nvmm: Drop useless migrate_del_blocker()
  vfio: Avoid error_propagate() after migrate_add_blocker()
  i386: Never free migration blocker objects instead of sometimes
  vhost-scsi: Plug memory leak on migrate_add_blocker() failure
  multi-process: Fix pci_proxy_dev_realize() error handling
  spapr: Explain purpose of ->fwnmi_migration_blocker more clearly
  spapr: Plug memory leak when we can't add a migration blocker
  error: Use error_fatal to simplify obvious fatal errors (again)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-27  target/ppc: fix vector registers access in gdbstub for little-endian  (Matheus Ferst)
As vector registers are stored in host endianness, we shouldn't swap their 64-bit elements in user mode. Add a 16-byte case in ppc_maybe_bswap_register to handle the reordering of elements in softmmu, and remove avr_need_swap, which is now unused. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20210826145656.2507213-3-matheus.ferst@eldorado.org.br> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27  target/ppc: fix vextu[bhw][lr]x helpers  (Matheus Ferst)
These helpers shouldn't depend on the host endianness, as they only use shifts, ands, and int128_* methods. Fixes: 60caf2216bf0 ("target-ppc: add vextu[bhw][lr]x instructions") Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20210826141446.2488609-3-matheus.ferst@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27  ppc/pnv: powerpc_excp: Do not discard HDECR exception when entering power-saving mode  (Cédric Le Goater)
The Hypervisor Decrementer exception should not be generated while the CPU is in power-saving mode (see cpu_ppc_hdecr_excp()). However, discarding the exception before entering the power-saving mode is wrong since we would lose a previously generated HDEC. Fixes: 4b236b621bf0 ("ppc: Initial HDEC support") Signed-off-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20210809134547.689560-4-clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27  ppc: Add a POWER10 DD2 CPU  (Cédric Le Goater)
The POWER10 DD2 CPU adds an extra LPCR[HAIL] bit. DD1 doesn't have HAIL, but since this does not break the modeling and we don't plan to support DD1, modify the LPCR mask of the whole POWER10 family. Setting the HAIL bit is a requirement to support the scv instruction on PowerNV POWER10 platforms since glibc-2.33. Signed-off-by: Cédric Le Goater <clg@kaod.org> Message-Id: <20210809134547.689560-2-clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27  target/ppc: moved store_40x_sler to helper_regs.c  (Lucas Mateus Castro (alqotel))
Moved store_40x_sler from mmu_common.c to helper_regs.c, as it is a function that stores a value in a special purpose register, so a file focused on special register manipulation is a more appropriate home for it. Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20210723175627.72847-4-lucas.araujo@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27  target/ppc: moved ppc_store_sdr1 to mmu_common.c  (Lucas Mateus Castro (alqotel))
ppc_store_sdr1 was originally in mmu_helper.c and was moved out as part of the patches to enable the disable-tcg option; it is now being moved back to a file that will be compiled with that option. Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20210723175627.72847-3-lucas.araujo@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-27  target/ppc: divided mmu_helper.c in 2 files  (Lucas Mateus Castro (alqotel))
Divided mmu_helper.c into 2 files: functions inside #ifdef CONFIG_SOFTMMU stayed in mmu_helper.c, other functions moved to mmu_common.c. Updated meson.build to compile mmu_common.c and to only compile mmu_helper.c when CONFIG_TCG is set. Moved function declarations, #defines and structs used by both files to internal.h, except for functions that use structures defined in cpu.h; those were moved to cpu.h. Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20210723175627.72847-2-lucas.araujo@eldorado.org.br> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-08-26  target/arm: Do hflags rebuild in cpsr_write()  (Peter Maydell)
Currently we rely on all the callsites of cpsr_write() to rebuild the cached hflags if they change one of the CPSR bits which we use as a TB flag and cache in hflags. This is a bit awkward when we want to change the set of CPSR bits that we cache, because it means we need to re-audit all the cpsr_write() callsites to see which flags they are writing and whether they now need to rebuild the hflags. Switch instead to making cpsr_write() call arm_rebuild_hflags() itself if one of the bits being changed is a cached bit. We don't do the rebuild for the CPSRWriteRaw write type, because that kind of write is generally doing something special anyway. For the CPSRWriteRaw callsites in the KVM code and inbound migration we definitely don't want to recalculate the hflags; the callsites in boot.c and arm-powerctl.c have to do a rebuild-hflags call themselves anyway because of other CPU state changes they make. This allows us to drop explicit arm_rebuild_hflags() calls in a couple of places where the only reason we needed to call it was the CPSR write. This fixes a bug where we were incorrectly failing to rebuild hflags in the code path for a gdbstub write to CPSR, which meant that you could make QEMU assert by breaking into a running guest, altering the CPSR to change the value of, for example, CPSR.E, and then continuing. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210817201843.3829-1-peter.maydell@linaro.org
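A rough sketch of the shape of the change; CACHED_CPSR_BITS stands in for whatever mask of CPSR bits is mirrored in hflags and is an assumption here, not necessarily the name used in the patch:

    void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
                    CPSRWriteType write_type)
    {
        uint32_t old = env->uncached_cpsr;

        /* ... existing CPSR field updates ... */

        /* Rebuild the cached hflags whenever a cached CPSR bit changed,
         * except for raw writes (KVM sync, inbound migration). */
        if (write_type != CPSRWriteRaw &&
            ((old ^ env->uncached_cpsr) & CACHED_CPSR_BITS)) {
            arm_rebuild_hflags(env);
        }
    }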
2021-08-26  target/arm: Implement HSTR.TJDBX  (Peter Maydell)
In v7A, the HSTR register has a TJDBX bit which traps NS EL0/EL1 access to the JOSCR and JMCR trivial Jazelle registers, and also BXJ. Implement these traps. In v8A this HSTR bit doesn't exist, so don't trap for v8A CPUs. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210816180305.20137-3-peter.maydell@linaro.org
2021-08-26  target/arm: Implement HSTR.TTEE  (Peter Maydell)
In v7, the HSTR register has a TTEE bit which allows EL0/EL1 accesses to the Thumb2EE TEECR and TEEHBR registers to be trapped to the hypervisor. Implement these traps. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210816180305.20137-2-peter.maydell@linaro.org
2021-08-26  target/arm: Avoid assertion trying to use KVM and multiple ASes  (Peter Maydell)
KVM cannot support multiple address spaces per CPU; if you try to create more than one then cpu_address_space_init() will assert. In the Arm CPU realize function, detect the configurations which would cause us to need more than one AS, and cleanly fail the realize rather than blundering on into the assertion.

This turns this:
  $ qemu-system-aarch64 -enable-kvm -display none -cpu max -machine raspi3b
  qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
  Aborted
into:
  $ qemu-system-aarch64 -enable-kvm -display none -machine raspi3b
  qemu-system-aarch64: Cannot enable KVM when guest CPU has EL3 enabled

and this:
  $ qemu-system-aarch64 -enable-kvm -display none -machine mps3-an524
  qemu-system-aarch64: ../../softmmu/physmem.c:747: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed.
  Aborted
into:
  $ qemu-system-aarch64 -enable-kvm -display none -machine mps3-an524
  qemu-system-aarch64: Cannot enable KVM when using an M-profile guest CPU

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/528
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210816135842.25302-3-peter.maydell@linaro.org
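A sketch of the kind of check involved, using the generic error API in the realize function; the exact conditions tested in the patch may differ:

    /* Refuse configurations that would need more than one address space per
     * CPU under KVM, instead of hitting the assertion later. */
    if (kvm_enabled()) {
        if (cpu->has_el3) {
            error_setg(errp, "Cannot enable KVM when guest CPU has EL3 enabled");
            return;
        }
        if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
            error_setg(errp, "Cannot enable KVM when using an M-profile guest CPU");
            return;
        }
    }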
2021-08-26  arch_init.h: Don't include arch_init.h unnecessarily  (Peter Maydell)
arch_init.h only defines the QEMU_ARCH_* enumeration and the arch_type global. Don't include it in files that don't use those. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20210730105947.28215-8-peter.maydell@linaro.org
2021-08-26  target/arm/cpu64: Validate sve vector lengths are supported  (Andrew Jones)
Future CPU types may specify which vector lengths are supported. We can apply nearly the same logic to validate those lengths as we do for KVM's supported vector lengths. We merge the code where we can, but unfortunately can't completely merge it because KVM requires all vector lengths, power-of-two or not, smaller than the maximum enabled length to also be enabled. The architecture only requires all the power-of-two lengths, though, so TCG will only enforce that. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-5-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26  target/arm/cpu64: Replace kvm_supported with sve_vq_supported  (Andrew Jones)
Now that we have an ARMCPU member sve_vq_supported we no longer need the local kvm_supported bitmap for KVM's supported vector lengths. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-4-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26  target/arm/kvm64: Ensure sve vls map is completely clear  (Andrew Jones)
bitmap_clear() only clears the given range. While the given range should be sufficient in this case we might as well be 100% sure all bits are zeroed by using bitmap_zero(). Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-3-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
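The difference between the two bitmap helpers, for reference (the map name and size are illustrative):

    DECLARE_BITMAP(vls, 48);      /* 48 bits, stored in one 64-bit word */

    bitmap_clear(vls, 0, 48);     /* clears bits [0, 48) only; the padding
                                   * bits of the underlying word are untouched */
    bitmap_zero(vls, 48);         /* zeroes every allocated word of the map */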
2021-08-26  target/arm/cpu: Introduce sve_vq_supported bitmap  (Andrew Jones)
Allow CPUs that support SVE to specify which SVE vector lengths they support by setting them in this bitmap. Currently only the 'max' and 'host' CPU types support SVE, and 'host' requires KVM, which obtains its supported bitmap from the host. So, we only need to initialize the bitmap for 'max' with TCG. And, since 'max' should support all SVE vector lengths, we simply fill the bitmap. Future CPU types may have less trivial maps though. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20210823160647.34028-2-drjones@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-26  migration: Unify failure check for migrate_add_blocker()  (Markus Armbruster)
Most callers check the return value. Some check whether it set an error. Functionally equivalent, but the former tends to be easier on the eyes, so do that everywhere. Prior art: commit c6ecec43b2 "qemu-option: Check return value instead of @err where convenient". Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20210720125408.387910-10-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com>
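The preferred pattern, sketched with illustrative variable names:

    Error *local_err = NULL;

    /* Branch on the return value ... */
    if (migrate_add_blocker(my_blocker, &local_err) < 0) {
        error_propagate(errp, local_err);
        return;
    }
    /* ... rather than testing afterwards whether local_err got set. */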
2021-08-26  whpx nvmm: Drop useless migrate_del_blocker()  (Markus Armbruster)
There is nothing to delete after migrate_add_blocker() failed. Trying anyway is safe, but useless. Don't. Cc: Sunil Muthuswamy <sunilmut@microsoft.com> Cc: Kamil Rytarowski <kamil@netbsd.org> Cc: Reinoud Zandijk <reinoud@netbsd.org> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20210720125408.387910-9-armbru@redhat.com> Reviewed-by: Reinoud Zandijk <reinoud@NetBSD.org> Acked-by: Michael S. Tsirkin <mst@redhat.com>
2021-08-26  i386: Never free migration blocker objects instead of sometimes  (Markus Armbruster)
invtsc_mig_blocker has static storage duration. When a CPU with certain features is initialized, and invtsc_mig_blocker is still null, we add a migration blocker and store it in invtsc_mig_blocker. The object is freed when migrate_add_blocker() fails, leaving invtsc_mig_blocker dangling. It is not freed on later failures. Same for hv_passthrough_mig_blocker and hv_no_nonarch_cs_mig_blocker. All failures are actually fatal, so whether we free or not doesn't really matter, except as bad examples to be copied / imitated. Clean this up in a minimal way: never free these blocker objects. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20210720125408.387910-7-armbru@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com>
2021-08-26  error: Use error_fatal to simplify obvious fatal errors (again)  (Markus Armbruster)
We did this with scripts/coccinelle/use-error_fatal.cocci before, in commit 50beeb68094 and 007b06578ab. This commit cleans up rarer variations that don't seem worth matching with Coccinelle. Cc: Thomas Huth <thuth@redhat.com> Cc: Cornelia Huck <cornelia.huck@de.ibm.com> Cc: Peter Xu <peterx@redhat.com> Cc: Juan Quintela <quintela@redhat.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Marc-André Lureau <marcandre.lureau@redhat.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20210720125408.387910-2-armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2021-08-26  Merge remote-tracking branch 'remotes/ehabkost-gl/tags/x86-next-pull-request' into staging  (Peter Maydell)
x86 queue, 2021-08-25

Bug fixes:
 * Remove split lock detect in Snowridge CPU model (Chenyi Qiang)
 * Remove AVX_VNNI feature from Cooperlake cpu model (Yang Zhong)

# gpg: Signature made Wed 25 Aug 2021 20:53:59 BST
# gpg: using RSA key 5A322FD5ABC4D3DBACCFD1AA2807936F984DC5A6
# gpg: issuer "ehabkost@redhat.com"
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6

* remotes/ehabkost-gl/tags/x86-next-pull-request:
  i386/cpu: Remove AVX_VNNI feature from Cooperlake cpu model
  target/i386: Remove split lock detect in Snowridge CPU model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-25  Merge remote-tracking branch 'remotes/philmd/tags/mips-20210825' into staging  (Peter Maydell)
MIPS patches queue
 - minor simplifications in PREF / JR opcodes
 - merge 32-bit/64-bit Release6 decodetree definitions
 - converted NEC Vr54xx extension opcodes to decodetree
 - housekeeping in gen_helper() macros
 - replace TARGET_WORDS_BIGENDIAN #ifdef'ry by cpu_is_bigendian()
 - allow Loongson 3A1000 to use up to 48-bit VAddr

# gpg: Signature made Wed 25 Aug 2021 12:04:31 BST
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE

* remotes/philmd/tags/mips-20210825: (28 commits)
  target/mips: Replace TARGET_WORDS_BIGENDIAN by cpu_is_bigendian()
  target/mips: Store CP0_Config0 in DisasContext
  target/mips: Replace GET_LMASK64() macro by get_lmask(64) function
  target/mips: Replace GET_LMASK() macro by get_lmask(32) function
  target/mips: Call cpu_is_bigendian & inline GET_OFFSET in ld/st helpers
  target/mips: Define gen_helper() macros in translate.h
  target/mips: Use tcg_constant_i32() in generate_exception_err()
  target/mips: Inline gen_helper_0e0i()
  target/mips: Inline gen_helper_1e1i() call in op_ld_INSN() macros
  target/mips: Simplify gen_helper() macros by using tcg_constant_i32()
  target/mips: Use tcg_constant_i32() in gen_helper_0e2i()
  target/mips: Remove gen_helper_1e2i()
  target/mips: Remove gen_helper_0e3i()
  target/mips: Remove duplicated check_cp1_enabled() calls in Loongson EXT
  target/mips: Allow Loongson 3A1000 to use up to 48-bit VAddr
  target/mips: Document Loongson-3A CPU definitions
  target/mips: Convert Vr54xx MSA* opcodes to decodetree
  target/mips: Convert Vr54xx MUL* opcodes to decodetree
  target/mips: Convert Vr54xx MACC* opcodes to decodetree
  target/mips: Introduce decodetree structure for NEC Vr54xx extension
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-08-25  i386/cpu: Remove AVX_VNNI feature from Cooperlake cpu model  (Yang Zhong)
The AVX_VNNI feature is not present on the Cooperlake platform; remove it from the cpu model. Signed-off-by: Yang Zhong <yang.zhong@intel.com> Message-Id: <20210820054611.84303-1-yang.zhong@intel.com> Fixes: c1826ea6a052 ("i386/cpu: Expose AVX_VNNI instruction to guest") Cc: qemu-stable@nongnu.org Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2021-08-25  target/i386: Remove split lock detect in Snowridge CPU model  (Chenyi Qiang)
At present, there's no mechanism intelligent enough to virtualize split lock detection correctly. Remove it from the Snowridge CPU model to avoid exposing the feature. Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com> Message-Id: <20210630012053.10098-1-chenyi.qiang@intel.com> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2021-08-25  target/mips: Replace TARGET_WORDS_BIGENDIAN by cpu_is_bigendian()  (Philippe Mathieu-Daudé)
Add the inlined cpu_is_bigendian() function in "translate.h". Replace the TARGET_WORDS_BIGENDIAN #ifdef'ry by calls to cpu_is_bigendian(). Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210818164321.2474534-6-f4bug@amsat.org>
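The helper presumably boils down to reading the BigEndian bit out of the copy of Config0 cached in DisasContext (see the CP0_Config0 change listed just below); a sketch:

    /* CP0C0_BE is the existing Config0 bit-position define; the DisasContext
     * field name is taken from the companion patch. */
    static inline bool cpu_is_bigendian(DisasContext *ctx)
    {
        return extract32(ctx->CP0_Config0, CP0C0_BE, 1);
    }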
2021-08-25  target/mips: Store CP0_Config0 in DisasContext  (Philippe Mathieu-Daudé)
Most TCG helpers only have access to a DisasContext pointer, not CPUMIPSState. Store a copy of CPUMIPSState::CP0_Config0 in DisasContext so we can access it from TCG helpers. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210818164321.2474534-5-f4bug@amsat.org>
2021-08-25  target/mips: Replace GET_LMASK64() macro by get_lmask(64) function  (Philippe Mathieu-Daudé)
The target endianness information is stored in the BigEndian bit of the Config0 register in CP0. Replace the GET_LMASK() macro by an inlined get_lmask() function, passing CPUMIPSState and the word size as arguments. We can remove another use of the TARGET_WORDS_BIGENDIAN definition. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210818215517.2560994-4-f4bug@amsat.org>
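A sketch of what such a function might look like, generalized over the word size; the exact signature and condition used in the patch may differ:

    /* Byte-lane index for LWL/LWR-style partial accesses: on big-endian
     * targets it is just the low address bits; on little-endian targets the
     * lanes are mirrored, hence the XOR with the mask. */
    static inline target_ulong get_lmask(CPUMIPSState *env, target_ulong addr,
                                         unsigned bits)
    {
        target_ulong mask = (bits / 8) - 1;     /* 3 for 32-bit, 7 for 64-bit */
        target_ulong lane = addr & mask;

        if (!(env->CP0_Config0 & (1 << CP0C0_BE))) {
            lane ^= mask;                       /* little-endian: mirror the lane */
        }
        return lane;
    }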