path: root/target
2024-07-18  target/riscv: Start counters from both mhpmcounter and mcountinhibit  (Rajnesh Kanwal)
Currently we start the timer counter from the write_mhpmcounter path only, without checking the mcountinhibit bit. This change adds the mcountinhibit check and also programs the counter from write_mcountinhibit. When a counter is stopped using mcountinhibit we simply update the value of the counter based on the current host ticks and save it for future reads. We don't need to disable the running timer, as pmu_timer_trigger_irq will discard the interrupt if the counter has been inhibited.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240711-smcntrpmf_v7-v8-10-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
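A minimal C sketch of the control flow described above; all names (PMUSketch, pmu_ctr_update, riscv_pmu_setup_timer) are illustrative stand-ins, not the actual QEMU helpers:

    #include <stdint.h>

    #define CTR_BIT(i) (1u << (i))

    typedef struct {
        uint32_t mcountinhibit;     /* bit i inhibits counter i */
        uint64_t saved_val[32];     /* value snapshotted when stopped */
    } PMUSketch;

    /* Reached from both the mhpmcounter and the mcountinhibit
     * write paths. */
    static void pmu_ctr_update(PMUSketch *pmu, int idx, uint64_t host_ticks)
    {
        if (!(pmu->mcountinhibit & CTR_BIT(idx))) {
            /* counter is running: (re)program the overflow timer */
            /* riscv_pmu_setup_timer(...);  -- hypothetical helper */
        } else {
            /* counter stopped: snapshot the value for future reads; a
             * pending timer may stay armed, since pmu_timer_trigger_irq
             * discards the interrupt for inhibited counters */
            pmu->saved_val[idx] = host_ticks;
        }
    }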
2024-07-18  target/riscv: Enforce WARL behavior for scounteren/hcounteren  (Atish Patra)
scounteren/hcounteren are also WARL registers, similar to mcounteren. Only set the bits for the available counters during a write, to preserve the WARL behavior.

Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-9-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
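The WARL behavior amounts to masking the written value with the set of implemented counters. A sketch, where COUNTEREN_MASK is a hypothetical stand-in for the mask QEMU derives from the available counters:

    #include <stdint.h>

    /* Hypothetical mask of implemented counters: CY, TM, IR plus one
     * hpmcounter. Bits for counters that don't exist stay zero. */
    #define COUNTEREN_MASK 0x0000000fu

    static uint32_t counteren_write(uint32_t val)
    {
        /* WARL: writes outside the implemented-counter mask are dropped,
         * so those bits always read back as zero */
        return val & COUNTEREN_MASK;
    }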
2024-07-18  target/riscv: Save counter values during countinhibit update  (Atish Patra)
Currently, if a counter monitoring cycle/instret is stopped via mcountinhibit we just update the state, while the value is saved during the next read. This is not accurate, as the read may happen many cycles after the counter is stopped. Ideally, the read should return the value saved when the counter was stopped. Thus, save the value of the counter during the inhibit update operation and return that value during a read if the corresponding bit in mcountinhibit is set.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-8-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
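A sketch of the resulting read path, with illustrative field names: an inhibited counter returns the value saved at inhibit time rather than one recomputed from the current tick count:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool inhibited;        /* corresponding mcountinhibit bit */
        uint64_t saved_val;    /* snapshot taken when inhibited */
        uint64_t base;         /* tick count when the counter started */
    } CtrSketch;

    static uint64_t ctr_read(const CtrSketch *c, uint64_t host_ticks)
    {
        if (c->inhibited) {
            return c->saved_val;          /* frozen at inhibit time */
        }
        return host_ticks - c->base;      /* still running */
    }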
2024-07-18  target/riscv: Implement privilege mode filtering for cycle/instret  (Atish Patra)
Privilege mode filtering can also be emulated for cycle/instret by tracking host_ticks/icount during each privilege mode switch. This patch implements that for both cycle/instret and the mhpmcounters. The former requires Smcntrpmf while the latter requires Sscofpmf to be enabled. cycle/instret are still computed using host ticks when icount is not enabled. Otherwise, they are computed using the raw icount, which is more accurate in icount mode.

Co-Developed-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-7-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
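The tracking can be pictured as charging elapsed ticks to the mode being left on every privilege switch. A sketch with illustrative names (the real code additionally distinguishes host ticks from icount, as described above):

    #include <stdint.h>

    enum { PRV_U, PRV_S, PRV_M, PRV_VU, PRV_VS, PRV_COUNT };

    typedef struct {
        int cur_mode;
        uint64_t last_switch;        /* tick stamp of last mode change */
        uint64_t ticks[PRV_COUNT];   /* per-mode accumulated ticks */
    } ModeTicksSketch;

    static void mode_switch(ModeTicksSketch *s, int new_mode, uint64_t now)
    {
        s->ticks[s->cur_mode] += now - s->last_switch; /* charge old mode */
        s->last_switch = now;
        s->cur_mode = new_mode;
    }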
2024-07-18  target/riscv: Only set INH fields if priv mode is available  (Atish Patra)
Currently, the INH fields in mhpmevent are set unconditionally, without checking whether a particular priv mode is supported or not.

Suggested-by: Alistair Francis <alistair23@gmail.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-6-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18  target/riscv: Add cycle & instret privilege mode filtering support  (Kaiwen Xue)
QEMU only calculates dummy cycles and instructions, so there is no actual means to stop the icount in QEMU. Hence this patch merely adds the functionality of accessing the cfg registers, and has no actual effect on the counting of the cycle and instret counters.

Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-5-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18  target/riscv: Add cycle & instret privilege mode filtering definitions  (Kaiwen Xue)
This adds the definitions for the ISA extension smcntrpmf.

Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-4-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

2024-07-18  target/riscv: Add cycle & instret privilege mode filtering properties  (Kaiwen Xue)
This adds the properties for the ISA extension smcntrpmf. Patches implementing it will follow.

Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-3-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18  target/riscv: Fix the predicate functions for mhpmeventhX CSRs  (Atish Patra)
The mhpmeventhX CSRs are available for RV32. The predicate function should check that first, before checking the sscofpmf extension.

Fixes: 14664483457b ("target/riscv: Add sscofpmf extension support")
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-ID: <20240711-smcntrpmf_v7-v8-2-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18  target/riscv: Combine set_mode and set_virt functions.  (Rajnesh Kanwal)
Combine the riscv_cpu_set_virt_enabled() and riscv_cpu_set_mode() functions. This makes the complete mode-change information available through a single function, which allows us to easily differentiate between HS->VS, VS->HS and VS->VS transitions when executing state-update code. For example, one use case that inspired this change is updating the mode-specific instruction and cycle counters, which requires information about both the previous and the current mode.

Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20240711-smcntrpmf_v7-v8-1-b7c38ae7b263@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18  target/riscv/kvm: update KVM regs to Linux 6.10-rc5  (Daniel Henrique Barboza)
Two new regs added: ztso and zacas.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709085431.455541-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18  target/riscv: Validate the mode in write_vstvec  (Jiayi Li)
Based on the riscv-privileged spec, vstvec substitutes for the usual stvec. Therefore, the encoding of the MODE field should likewise be restricted to 0 and 1.

Signed-off-by: Jiayi Li <lijiayi@eswincomputing.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-ID: <20240701022553.1982-1-lijiayi@eswincomputing.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
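A sketch of the WARL check, mirroring how QEMU already treats stvec; the macro name is illustrative:

    #include <stdint.h>

    #define TVEC_MODE 0x3ull   /* low two bits of xtvec encode MODE */

    static uint64_t vstvec_write(uint64_t old, uint64_t val)
    {
        /* Only Direct (0) and Vectored (1) are legal; a write with a
         * reserved MODE is ignored, keeping the previous value. */
        return ((val & TVEC_MODE) <= 1) ? val : old;
    }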
2024-07-18  target/riscv: Expose zabha extension as a cpu property  (LIU Zhiwei)
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-11-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

2024-07-18  target/riscv: Add amocas.[b|h] for Zabha  (LIU Zhiwei)
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-10-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

2024-07-18  target/riscv: Move gen_cmpxchg before adding amocas.[b|h]  (LIU Zhiwei)
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-9-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

2024-07-18  target/riscv: Add AMO instructions for Zabha  (LIU Zhiwei)
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-8-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

2024-07-18  target/riscv: Move gen_amo before implement Zabha  (LIU Zhiwei)
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-7-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18  target/riscv: Support Zama16b extension  (LIU Zhiwei)
Zama16b is the property that misaligned loads/stores/atomics within a naturally aligned 16-byte region are atomic. According to the specification, Zama16b applies only to AMOs, loads and stores defined in the base ISAs, and loads and stores of no more than XLEN bits defined in the F, D, and Q extensions. Thus it does not apply to Zacas or RVC instructions. For an instruction in that set, if all accessed bytes lie within the same 16-byte granule, the instruction will not raise an exception for reasons of address alignment, and it will give rise to only one memory operation for the purposes of RVWMO, i.e. it will execute atomically.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20240709113652.1239-6-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
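The granule condition itself is a one-liner; a sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* True iff all bytes of an access of `size` bytes at `addr` fall
     * inside one naturally aligned 16-byte region. */
    static bool within_16b_granule(uint64_t addr, unsigned size)
    {
        return (addr & ~(uint64_t)0xf) ==
               ((addr + size - 1) & ~(uint64_t)0xf);
    }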
2024-07-18  target/riscv: Add zcmop extension  (LIU Zhiwei)
Zcmop defines eight 16-bit MOP instructions named C.MOP.n, where n is an odd integer between 1 and 15, inclusive. C.MOP.n is encoded in the reserved encoding space corresponding to C.LUI xn, 0. Unlike the MOPs defined in the Zimop extension, the C.MOP.n instructions are defined not to write any register. In the current implementation, C.MOP.n only has a check function, without any further behavior.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Message-ID: <20240709113652.1239-4-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2024-07-18  target/riscv: Add zimop extension  (LIU Zhiwei)
The Zimop extension defines an encoding space for 40 MOPs. It defines 32 MOP instructions named MOP.R.n, where n is an integer between 0 and 31, inclusive, and additionally defines 8 MOP instructions named MOP.RR.n, where n is an integer between 0 and 7. These 40 MOPs are initially defined to simply write zero to x[rd], but are designed to be redefined by later extensions to perform some other action.

Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Message-ID: <20240709113652.1239-2-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
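A sketch of what such a translation hook can look like, modeled on QEMU's RISC-V decodetree conventions; the argument type and config field are assumptions, not the actual patch:

    /* Sketch only: relies on target/riscv translation context types. */
    static bool trans_mop_r_n(DisasContext *ctx, arg_mop_r_n *a)
    {
        if (!ctx->cfg_ptr->ext_zimop) {
            return false;                 /* Zimop not enabled */
        }
        /* All 40 MOPs currently just write zero to x[rd]; later
         * extensions may redefine them. */
        gen_set_gpr(ctx, a->rd, tcg_constant_tl(0));
        return true;
    }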
2024-07-18  Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging  (Richard Henderson)
trivial patches for 2024-07-17

# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCAAdFiEEe3O61ovnosKJMUsicBtPaxppPlkFAmaXpakACgkQcBtPaxpp
# Plnvvwf8DdybFjyhAVmiG6+6WhB5s0hJhZRiWzUY6ieMbgPzCUgWzfr/pJh6q44x
# rw+aVfe2kf1ysycx3DjcJpucrC1rQD/qV6dB3IA1rxidBOZfCb8iZwoaB6yS9Epp
# 4uXIdfje4zO6oCMN17MTXvuQIEUK3ZHN0EQOs7vsA2d8/pHqBqRoixjz9KnKHlpk
# P6kyIXceZ4wLAtwFJqa/mBBRnpcSdaWuQpzpBsg1E3BXRXXfeuXJ8WmGp0kEOpzQ
# k7+2sPpuah2z7D+jNFBW0+3ZYDvO9Z4pomQ4al4w+DHDyWBF49WnnSdDSDbWwxI5
# K0vUlsDVU8yTnIEgN8BL82F8eub5Ug==
# =ZYHJ
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 17 Jul 2024 09:06:17 PM AEST
# gpg: using RSA key 7B73BAD68BE7A2C289314B22701B4F6B1A693E59
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>" [full]
# gpg: aka "Michael Tokarev <mjt@debian.org>" [full]
# gpg: aka "Michael Tokarev <mjt@corpit.ru>" [full]

* tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu:
  meson: Update meson-buildoptions.sh
  backends/rng-random: Get rid of qemu_open_old()
  backends/iommufd: Get rid of qemu_open_old()
  backends/hostmem-epc: Get rid of qemu_open_old()
  hw/vfio/container: Get rid of qemu_open_old()
  hw/usb/u2f-passthru: Get rid of qemu_open_old()
  hw/usb/host-libusb: Get rid of qemu_open_old()
  hw/i386/sgx: Get rid of qemu_open_old()
  tests/avocado: Remove the non-working virtio_check_params test
  doc/net/l2tpv3: Update boolean fields' description to avoid short-form use
  target/hexagon/imported/mmvec: Fix superfluous trailing semicolon
  util/oslib-posix: Fix superfluous trailing semicolon
  hw/i386/x86: Fix superfluous trailing semicolon
  accel/kvm/kvm-all: Fix superfluous trailing semicolon
  README.rst: add the missing punctuations
  block/curl: rewrite http header parsing function

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-17  target/hexagon/imported/mmvec: Fix superfluous trailing semicolon  (Zhao Liu)
Fix the superfluous trailing semicolon in target/hexagon/imported/mmvec/ext.idef.

Cc: Brian Cain <bcain@quicinc.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2024-07-18  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  (Richard Henderson)
* target/i386/tcg: fixes for seg_helper.c
* SEV: Don't allow automatic fallback to legacy KVM_SEV_INIT, but also don't use it by default
* scsi: honor bootindex again for legacy drives
* hpet, utils, scsi, build, cpu: miscellaneous bugfixes

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmaWoP0UHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroOqfggAg3jxUp6B8dFTEid5aV6qvT4M6nwD
# TAYcAl5kRqTOklEmXiPCoA5PeS0rbr+5xzWLAKgkumjCVXbxMoYSr0xJHVuDwQWv
# XunUm4kpxJBLKK3uTGAIW9A21thOaA5eAoLIcqu2smBMU953TBevMqA7T67h22rp
# y8NnZWWdyQRH0RAaWsCBaHVkkf+DuHSG5LHMYhkdyxzno+UWkTADFppVhaDO78Ba
# Egk49oMO+G6of4+dY//p1OtAkAf4bEHePKgxnbZePInJrkgHzr0TJWf9gERWFzdK
# JiM0q6DeqopZm+vENxS+WOx7AyDzdN0qOrf6t9bziXMg0Rr2Z8bu01yBCQ==
# =cZhV
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 17 Jul 2024 02:34:05 AM AEST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  target/i386/tcg: save current task state before loading new one
  target/i386/tcg: use X86Access for TSS access
  target/i386/tcg: check for correct busy state before switching to a new task
  target/i386/tcg: Compute MMU index once
  target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl
  target/i386/tcg: Reorg push/pop within seg_helper.c
  target/i386/tcg: use PUSHL/PUSHW for error code
  target/i386/tcg: Allow IRET from user mode to user mode with SMAP
  target/i386/tcg: Remove SEG_ADDL
  target/i386/tcg: fix POP to memory in long mode
  hpet: fix HPET_TN_SETVAL for high 32-bits of the comparator
  hpet: fix clamping of period
  docs: Update description of 'user=username' for '-run-with'
  qemu/timer: Add host ticks function for LoongArch
  scsi: fix regression and honor bootindex again for legacy drives
  hw/scsi/lsi53c895a: bump instruction limit in scripts processing to fix regression
  disas: Fix build against Capstone v6
  cpu: Free queued CPU work
  Revert "qemu-char: do not operate on sources from finalize callbacks"
  i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-07-16  accel/tcg: Make cpu_exec_interrupt hook mandatory  (Peter Maydell)
The TCGCPUOps::cpu_exec_interrupt hook is currently not mandatory; if it is left NULL then we treat it as if it had returned false. However, since pretty much every architecture needs to handle interrupts, almost every target we have provides the hook. The one exception is Tricore, which doesn't currently implement the architectural interrupt handling. Add a "do nothing" implementation of cpu_exec_interrupt for Tricore, assert on startup that the CPU does provide the hook, and remove the runtime NULL check before calling it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240712113949.4146855-1-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-16  target/i386/tcg: save current task state before loading new one  (Paolo Bonzini)
This is how the steps are ordered in the manual. EFLAGS.NT is overwritten after the fact in the saved image.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: use X86Access for TSS access  (Paolo Bonzini)
This takes care of probing the vaddr range in advance, and is also faster because it avoids repeated TLB lookups. It also matches the Intel manual better, as it says "Checks that the current (old) TSS, new TSS, and all segment descriptors used in the task switch are paged into system memory"; note however that it's not clear how the processor checks for segment descriptors, and this check is not included in the AMD manual.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: check for correct busy state before switching to a new task  (Paolo Bonzini)
This step is listed in the Intel manual: "Checks that the new task is available (call, jump, exception, or interrupt) or busy (IRET return)". The AMD manual lists the same operation under the "Preventing recursion" paragraph of "12.3.4 Nesting Tasks", though it is not clear if the processor checks the busy bit in the IRET case.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: Compute MMU index once  (Paolo Bonzini)
Add the MMU index to the StackAccess struct, so that it can be cached or (in the next patch) computed from information that is not in CPUX86State.

Co-developed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl  (Richard Henderson)
Disconnect mmu index computation from the current pl as stored in env->hflags.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-2-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16  target/i386/tcg: Reorg push/pop within seg_helper.c  (Richard Henderson)
Interrupts and call gates should use accesses with the DPL as the privilege level. While computing the applicable MMU index is easy, the harder part is plumbing it through the code. One possibility would be to add a single argument to the PUSH* macros for the privilege level, but this is repetitive and risks confusion between the privilege levels involved. Another possibility is to pass both CPL and DPL, and to adjust both PUSH* and POP* to use specific privilege levels (instead of using cpu_{ld,st}*_data). This makes the code more symmetric. However, a more involved but much nicer approach is to use a structure to contain the stack parameters, env, and the unwind return address, and to rewrite the macros into functions. The struct provides an easy home for the MMU index as well, as sketched below.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
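A sketch of the structure-based approach, using field names taken from the description above; the push helper is illustrative, not necessarily the final code:

    /* Sketch: stack parameters bundled once, so push/pop helpers pick
     * up the right MMU index (and hence privilege level) implicitly. */
    typedef struct StackAccess {
        CPUX86State *env;
        uintptr_t ra;              /* unwind return address */
        target_ulong ss_base;
        target_ulong sp;
        target_ulong sp_mask;
        int mmu_index;             /* computed once from CPL/DPL */
    } StackAccess;

    static void pushl(StackAccess *sa, uint32_t val)
    {
        sa->sp -= 4;
        cpu_stl_mmuidx_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask),
                          val, sa->mmu_index, sa->ra);
    }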
2024-07-16  target/i386/tcg: use PUSHL/PUSHW for error code  (Paolo Bonzini)
Do not pre-decrement esp; let the macros subtract the appropriate operand size.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16  target/i386/tcg: Allow IRET from user mode to user mode with SMAP  (Paolo Bonzini)
This fixes a bug wherein i386/tcg assumed an interrupt return using the IRET instruction was always returning from kernel mode to either kernel mode or user mode. This assumption is violated when IRET is used as a clever way to restore thread state, as for example in the dotnet runtime; there, IRET returns from user mode to user mode. The fix is that stack accesses from IRET and RETF, as well as accesses to the parameters in a call gate, are normal data accesses using the current CPL. The bug manifested itself as a page fault in the guest Linux kernel due to SMAP preventing the access, and it appears to have been in QEMU since the beginning.

Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Co-developed-by: Robert R. Henry <rrh.henry@gmail.com>
Signed-off-by: Robert R. Henry <rrh.henry@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16  target/i386/tcg: Remove SEG_ADDL  (Richard Henderson)
This truncation is now handled by MMU_*32_IDX. The introduction of MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses with a high segment base (e.g. big real mode or vm86 mode), which did not use SEG_ADDL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2024-07-16  target/i386/tcg: fix POP to memory in long mode  (Paolo Bonzini)
In long mode, POP to memory will write a full 64-bit value. However, the call to gen_writeback() in gen_POP will use MO_32 because the decoding table is incorrect. The bug was latent until commit aea49fbb01a ("target/i386: use gen_writeback() within gen_POP()", 2024-06-08), and then became visible because gen_op_st_v now receives op->ot instead of the "ot" returned by gen_pop_T0.

Analyzed-by: Clément Chigot <chigot@adacore.com>
Fixes: 5e9e21bcc4d ("target/i386: move 60-BF opcodes to new decoder", 2024-05-07)
Tested-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16  i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT  (Michael Roth)
Currently, if the 'legacy-vm-type' property of the sev-guest object is 'off', QEMU will attempt to use the newer KVM_SEV_INIT2 kernel interface in conjunction with the newer KVM_X86_SEV_VM and KVM_X86_SEV_ES_VM KVM VM types. This can lead to measurement changes if, for instance, an SEV guest was created on a host that originally had an older kernel that didn't support KVM_SEV_INIT2, but is booted on the same host later on after the host kernel was upgraded.

Instead, if legacy-vm-type is 'off', QEMU should fail if the KVM_SEV_INIT2 interface is not provided by the current host kernel. Modify the fallback handling accordingly.

In the future, VMSA features and other flags might be added to QEMU which will require legacy-vm-type to be 'off' because they will rely on the newer KVM_SEV_INIT2 interface. It may be difficult to convey to users which values of legacy-vm-type are compatible with which features/options, so as part of this rework, switch legacy-vm-type to a tri-state OnOffAuto option. 'auto' in this case will automatically switch to using the newer KVM_SEV_INIT2, but only if it is required to make use of new VMSA features or other options only available via KVM_SEV_INIT2. Defining 'auto' this way avoids inadvertently breaking compatibility with older kernels, since it is only used when users opt into newer features that are only available via KVM_SEV_INIT2 and newer kernels, and it provides better default behavior than the legacy-vm-type=off behavior that was previously in place, so make it the default for 9.1+ machine types.

Cc: Daniel P. Berrangé <berrange@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Michael Roth <michael.roth@amd.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Link: https://lore.kernel.org/r/20240710041005.83720-1-michael.roth@amd.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
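A sketch of the tri-state decision described above, using QEMU's OnOffAuto enum; the function, constants, and probe flags are hypothetical:

    enum { USE_LEGACY_INIT, USE_INIT2 };

    /* Sketch only, not the actual QEMU code. */
    static int sev_choose_init(OnOffAuto legacy_vm_type, bool need_init2,
                               bool have_init2, Error **errp)
    {
        switch (legacy_vm_type) {
        case ON_OFF_AUTO_OFF:
            if (!have_init2) {
                error_setg(errp, "KVM_SEV_INIT2 not supported by kernel");
                return -1;            /* fail hard, no silent fallback */
            }
            return USE_INIT2;
        case ON_OFF_AUTO_ON:
            return USE_LEGACY_INIT;   /* explicit legacy request */
        case ON_OFF_AUTO_AUTO:
            /* default: legacy path unless a requested feature (e.g. a
             * new VMSA flag) needs KVM_SEV_INIT2 */
            return need_init2 ? USE_INIT2 : USE_LEGACY_INIT;
        default:
            g_assert_not_reached();
        }
    }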
2024-07-12  target/loongarch: Fix cpu_reset set wrong CSR_CRMD  (Song Gao)
After cpu_reset, DATF in CSR_CRMD is 0 and DATM is 0. See section 6.4 of the manual [1].

[1]: https://github.com/loongson/LoongArch-Documentation/releases/download/2023.04.20/LoongArch-Vol1-v1.10-EN.pdf

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240705021839.1004374-2-gaosong@loongson.cn>
2024-07-12  target/loongarch: Set CSR_PRCFG1 and CSR_PRCFG2 values  (Song Gao)
We set the value of register CSR_PRCFG3 but left out CSR_PRCFG1 and CSR_PRCFG2. Set CSR_PRCFG1 and CSR_PRCFG2 according to the default values of the physical machine.

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Message-Id: <20240705021839.1004374-1-gaosong@loongson.cn>
2024-07-12  target/loongarch: Remove avail_64 in trans_srai_w() and simplify it  (Feiyang Chen)
Since srai.w is a valid instruction on la32, remove the avail_64 check and simplify trans_srai_w().

Fixes: c0c0461e3a06 ("target/loongarch: Add avail_64 to check la64-only instructions")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Feiyang Chen <chris.chenfeiyang@gmail.com>
Message-Id: <20240628033357.50027-1-chris.chenfeiyang@gmail.com>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12  target/loongarch/kvm: Add software breakpoint support  (Bibo Mao)
With KVM virtualization, a debug exception for a normal break instruction is injected into the guest kernel rather than the host. Here a hypercall instruction with a special code is used for software breakpoints, and the exact instruction comes from the KVM kernel via the user API KVM_REG_LOONGARCH_DEBUG_INST. For now only software breakpoints are supported; they can be inserted and removed, so the guest kernel can be debugged with gdb once it is loaded. Hardware breakpoint support will be added later.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Tested-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240607035016.2975799-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
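A sketch of how such an insert typically looks in QEMU KVM code, following the description above; KVM_REG_LOONGARCH_DEBUG_INST is the real register name, the surrounding helper is illustrative:

    static int insert_sw_breakpoint_sketch(CPUState *cs,
                                           struct kvm_sw_breakpoint *bp)
    {
        uint64_t brk_insn;

        /* Ask the kernel which instruction encodes the sw breakpoint. */
        if (kvm_get_one_reg(cs, KVM_REG_LOONGARCH_DEBUG_INST, &brk_insn)) {
            return -EINVAL;
        }
        /* Save the original instruction, then patch in the breakpoint. */
        if (cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&bp->saved_insn,
                                4, 0) ||
            cpu_memory_rw_debug(cs, bp->pc, (uint8_t *)&brk_insn, 4, 1)) {
            return -EINVAL;
        }
        return 0;
    }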
2024-07-11  target/arm: Convert PMULL to decodetree  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert ADDHN, SUBHN, RADDHN, RSUBHN to decodetree  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert SADDW, SSUBW, UADDW, USUBW to decodetree  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert SQDMULL, SQDMLAL, SQDMLSL to decodetree  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert SADDL, SSUBL, SABDL, SABAL, and unsigned to decodetree  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2024-07-11  target/arm: Convert SMULL, UMULL, SMLAL, UMLAL, SMLSL, UMLSL to decodetree  (Richard Henderson)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240709000610.382391-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  target: Set TCGCPUOps::cpu_exec_halt to target's has_work implementation  (Peter Maydell)
Currently the TCGCPUOps::cpu_exec_halt method is optional, and if it is not set then the default is to call the CPUClass::has_work method (which has an identical function signature). We would like to make the cpu_exec_halt method mandatory so we can remove the runtime check and fallback handling. In preparation for that, make all the targets which don't need special handling in their cpu_exec_halt set it to their cpu_has_work implementation instead of leaving it unset. (This is every target except for arm and i386.) In the riscv case this requires us to make the function not be local to the source file it's defined in.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
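Per target the change amounts to one line in its TCGCPUOps table; a sketch using riscv as the example, with the rest of the table elided:

    /* Sketch: reuse the existing has_work implementation. */
    bool riscv_cpu_has_work(CPUState *cs);     /* no longer file-local */

    static const TCGCPUOps riscv_tcg_ops = {
        /* ... other hooks unchanged ... */
        .cpu_exec_halt = riscv_cpu_has_work,   /* previously left unset */
    };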
2024-07-11  target/arm: Set arm_v7m_tcg_ops cpu_exec_halt to arm_cpu_exec_halt()  (Peter Maydell)
In commit a96edb687e76 we set the cpu_exec_halt field of the TCGCPUOps arm_tcg_ops to arm_cpu_exec_halt(), but we left the arm_v7m_tcg_ops struct unchanged. That isn't wrong, because for M-profile FEAT_WFxT doesn't exist and the default handling for "no cpu_exec_halt method" is correct, but it's perhaps a little confusing. We would also like to make setting the cpu_exec_halt method mandatory. Initialize arm_v7m_tcg_ops cpu_exec_halt to the same function we use for A-profile. (On M-profile we never set up the wfxt timer, so there is no change in behaviour here.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-11  target/arm: Use cpu_env in cpu_untagged_addr  (Richard Henderson)
In a completely artificial memset benchmark, object_dynamic_cast_assert dominates the profile, even above guest address resolution and the underlying host memset.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240702154911.1667418-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  target/arm: Allow FPCR bits that aren't in FPSCR  (Peter Maydell)
In order to allow FPCR bits that aren't in the FPSCR (like the new bits that are defined for FEAT_AFP), we need to make sure that writes to the FPSCR only write to the bits of FPCR that are architecturally mapped, and not the others. Implement this with a new function vfp_set_fpcr_masked() which takes a mask of which bits to update. (We could do the same for FPSR, but we leave that until we actually are likely to need it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-10-peter.maydell@linaro.org
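A sketch of the masked-update helper named above; the propagation step is elided and the field layout is illustrative:

    /* Sketch: update only the FPCR bits selected by `mask`, so that
     * FPSCR writes can be limited to the architecturally mapped bits. */
    static void vfp_set_fpcr_masked(CPUARMState *env, uint32_t val,
                                    uint32_t mask)
    {
        env->vfp.fpcr = (env->vfp.fpcr & ~mask) | (val & mask);
        /* ... re-derive softfloat rounding mode, FZ, DN, etc. ... */
    }

An FPSCR write then becomes vfp_set_fpcr_masked(env, val, FPSCR_FPCR_MASK), leaving FPCR-only bits untouched.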
2024-07-11  target/arm: Rename FPSR_MASK and FPCR_MASK and define them symbolically  (Peter Maydell)
Now that we store FPSR and FPCR separately, the FPSR_MASK and FPCR_MASK macros are slightly confusingly named and the comment describing them is out of date. Rename them to FPSCR_FPSR_MASK and FPSCR_FPCR_MASK, document that they are the masks of which FPSCR bits are architecturally mapped to which AArch64 register, and define them symbolically rather than as hex values. (This latter requires defining some extra macros for bits which we haven't previously defined.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-9-peter.maydell@linaro.org
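The symbolic form looks along these lines (a sketch assuming the usual FPSR_*/FPCR_* bit macros; the exact list in QEMU may differ):

    /* Sketch: which FPSCR bits map to FPSR vs FPCR on AArch64. */
    #define FPSCR_FPSR_MASK (FPSR_IOC | FPSR_DZC | FPSR_OFC | FPSR_UFC | \
                             FPSR_IXC | FPSR_IDC | FPSR_QC | FPSR_NZCV_MASK)
    #define FPSCR_FPCR_MASK (FPCR_IOE | FPCR_DZE | FPCR_OFE | FPCR_UFE | \
                             FPCR_IXE | FPCR_IDE | FPCR_LEN_MASK | \
                             FPCR_FZ16 | FPCR_STRIDE_MASK | \
                             FPCR_RMODE_MASK | FPCR_FZ | FPCR_DN | FPCR_AHP)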