FEAT_XSAVE_XSS_HI leafs
The values of the FEAT_XSAVE_XCR0_HI and FEAT_XSAVE_XSS_HI leaves also
need to be masked by the XCR0 and XSS masks respectively, to be
logically correct.
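As a minimal standalone sketch of that masking (illustrative names, not the
actual cpu_x86_cpuid() code), assuming a 64-bit supported-component mask whose
halves feed the *_LO and *_HI leaves:
    #include <stdint.h>
    /* Apply a 64-bit supported-component mask to the two CPUID halves. */
    static void mask_xsave_leaf_halves(uint64_t supported,
                                       uint32_t *lo, uint32_t *hi)
    {
        *lo &= (uint32_t)supported;          /* e.g. FEAT_XSAVE_XSS_LO */
        *hi &= (uint32_t)(supported >> 32);  /* e.g. FEAT_XSAVE_XSS_HI */
    }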
Fixes: 301e90675c3f ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240115091325.1904229-3-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a11a365159b944e05be76f3ec3b98c8b38cb70fd)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The FEAT_XSAVE_XSS_LO and FEAT_XSAVE_XSS_HI leaves also need to be
cleared when CPUID_EXT_XSAVE is not set.
Fixes: 301e90675c3f ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240115091325.1904229-2-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 81f5cad3858f27623b1b14467926032d229b76cc)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
ARM_FEATURE_PMU
It doesn't make sense to read the value of MDCR_EL2 on a non-A-profile
CPU, and in fact if you try to do it we will assert:
#6 0x00007ffff4b95e96 in __GI___assert_fail
(assertion=0x5555565a8c70 "!arm_feature(env, ARM_FEATURE_M)", file=0x5555565a6e5c "../../target/arm/helper.c", line=12600, function=0x5555565a9560 <__PRETTY_FUNCTION__.0> "arm_security_space_below_el3") at ./assert/assert.c:101
#7 0x0000555555ebf412 in arm_security_space_below_el3 (env=0x555557bc8190) at ../../target/arm/helper.c:12600
#8 0x0000555555ea6f89 in arm_is_el2_enabled (env=0x555557bc8190) at ../../target/arm/cpu.h:2595
#9 0x0000555555ea942f in arm_mdcr_el2_eff (env=0x555557bc8190) at ../../target/arm/internals.h:1512
We might call pmu_counter_enabled() on an M-profile CPU (for example
from the migration pre/post hooks in machine.c); this should always
return false because these CPUs don't set ARM_FEATURE_PMU.
Avoid the assertion by not calling arm_mdcr_el2_eff() before we
have done the early return for "PMU not present".
This fixes an assertion failure if you try to do a loadvm or
savevm for an M-profile board.
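A sketch of the corrected ordering (names and the final check are illustrative,
not QEMU's actual pmu_counter_enabled()):
    #include <stdbool.h>
    #include <stdint.h>
    static bool pmu_counter_enabled_sketch(bool feature_pmu, uint64_t mdcr_el2_eff)
    {
        if (!feature_pmu) {
            return false;   /* early return: PMU not present (e.g. M-profile) */
        }
        /* Only past this point is it safe to consult the MDCR_EL2 value. */
        return (mdcr_el2_eff & (1ull << 7)) != 0;   /* illustrative HPME test */
    }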
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2155
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240208153346.970021-1-peter.maydell@linaro.org
(cherry picked from commit ac1d88e9e7ca0bed83e91e07ce6d0597f10cc77d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The TBI and TCMA bits are located within mtedesc, not desc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 855f94eca80c85a99f459e36684ea2f98f6a3243)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
These functions "use the standard load helpers", but
fail to clean_data_tbi or populate mtedesc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 623507ccfcfebb0f10229ae5de3f85a27fb615a7)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Share code that creates mtedesc and embeds it within simd_desc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 96fcc9982b4aad7aced7fbff046048bbccc6cb0c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
When we added SVE_MTEDESC_SHIFT, we effectively limited the
maximum size of MTEDESC. Adjust SIZEM1 to consume the remaining
bits (32 - 10 - 5 - 12 == 5). Assert that the data to be stored
fits within the field (expecting 8 * 4 - 1 == 31, exact fit).
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit b12a7671b6099a26ce5d5ab09701f151e21c112c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The field is encoded as [0-3], which is convenient for
indexing our array of function pointers, but the true
value is [1-4]. Adjust before calling do_mem_zpa.
Add an assert, and move the comment re passing ZT to
the helper back next to the relevant code.
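A tiny illustration of the adjustment and assert, with hypothetical names (the
real change sits in the SVE decode path feeding do_mem_zpa):
    #include <assert.h>
    static int true_nregs(int encoded)
    {
        int nregs = encoded + 1;          /* field holds 0..3, real count is 1..4 */
        assert(nregs >= 1 && nregs <= 4);
        return nregs;
    }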
Cc: qemu-stable@nongnu.org
Fixes: 206adacfb8d ("target/arm: Add mte helpers for sve scalar + int loads")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 64c6e7444dff64b42d11b836b9aec9acfbe8ecc2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
In commit 4315f7c614743 we restructured the logic for creating the
VFP related properties to avoid testing the aa32_simd_r32 feature on
AArch64 CPUs. However in the process we accidentally stopped
exposing the "vfp" QOM property on AArch32 TCG CPUs.
This mostly hasn't had any ill effects because not many people want
to disable VFP, but it wasn't intentional. Reinstate the property.
Cc: qemu-stable@nongnu.org
Fixes: 4315f7c614743 ("target/arm: Restructure has_vfp_d32 test")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2098
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240126193432.2210558-1-peter.maydell@linaro.org
(cherry picked from commit 185e3fdf8d106cb2f7d234d5e6453939c66db2a9)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Debug exceptions that target AArch32 Hyp mode are reported differently
from those on AArch64. Internally, QEMU uses the AArch64 syndromes. Therefore
such exceptions need to be either converted to a prefetch abort
(breakpoints, vector catch) or a data abort (watchpoints).
Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Klötzke <jan.kloetzke@kernkonzept.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240127202758.3326381-1-jan.kloetzke@kernkonzept.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit f670be1aad33e801779af580398895b9455747ee)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
A typo in the implementation of isar_feature_aa64_tidcp1() means we
were checking the field in the wrong ID register, so we might have
provided the feature on CPUs that don't have it and not provided
it on CPUs that should have it. Correct this bug.
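For illustration, a standalone version of the corrected check, assuming
FEAT_TIDCP1 is reported in ID_AA64MMFR1_EL1 bits [55:52] (field position quoted
from memory; the real code uses QEMU's FIELD_EX64 helpers):
    #include <stdbool.h>
    #include <stdint.h>
    static bool isar_feature_aa64_tidcp1_sketch(uint64_t id_aa64mmfr1)
    {
        return ((id_aa64mmfr1 >> 52) & 0xf) != 0;   /* assumed TIDCP1 field */
    }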
Cc: qemu-stable@nongnu.org
Fixes: 9cd0c0dec97be9 "target/arm: Implement FEAT_TIDCP1"
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2120
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240123160333.958841-1-peter.maydell@linaro.org
(cherry picked from commit ee0a2e3c9d2991a11c13ffadb15e4d0add43c257)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
In commit 1b7bc9b5c8bf374dd we changed handle_vec_simd_sqshrn() so
that instead of starting with a 0 value and depositing in each new
element from the narrowing operation, it starts with the raw
result of the narrowing operation on the first element.
This is fine in the vector case, because the deposit operations for
the second and subsequent elements will always overwrite any higher
bits that might have been in the first element's result value in
tcg_rd. However in the scalar case we only go through this loop
once. The effect is that for a signed narrowing operation, if the
result is negative then we will now return a value where the bits
above the first element are incorrectly 1 (because the narrowfn
returns a sign-extended result, not one that is truncated to the
element size).
Fix this by using an extract operation to get exactly the correct
bits of the output of the narrowfn for element 1, instead of a
plain move.
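The failure mode can be shown in plain C: a signed narrowing result is
sign-extended, so in the scalar case the bits above the element must be
extracted or masked away (8-bit example with illustrative saturation):
    #include <stdint.h>
    static uint64_t scalar_sqxtn_to_byte(int64_t wide)
    {
        int8_t narrowed = wide > 127 ? 127 : wide < -128 ? -128 : (int8_t)wide;
        /* Without the mask, a negative result would also set bits 8..63. */
        return (uint64_t)(int64_t)narrowed & 0xffu;
    }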
Cc: qemu-stable@nongnu.org
Fixes: 1b7bc9b5c8bf374dd3 ("target/arm: Avoid tcg_const_ptr in handle_vec_simd_sqshrn")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2089
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240123153416.877308-1-peter.maydell@linaro.org
(cherry picked from commit 6fffc8378562c7fea6290c430b4f653f830a4c1a)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The r[id]tlb[01] and [iw][id]tlb opcodes use a TLB way index passed in a
register by the guest. The host uses 3 bits of the index for ITLB indexing
and 4 bits for DTLB, but there are only 7 entries in the ITLB array and 10
in the DTLB array, so a malicious guest may trigger out-of-bounds accesses
to these arrays.
Change the split_tlb_entry_spec return type to bool to indicate whether the
TLB way passed to it is valid. Change get_tlb_entry to return NULL when an
invalid TLB way is requested. Add an assertion to xtensa_tlb_get_entry that
the requested TLB way and entry indices are valid. Add checks to the
[rwi]tlb helpers that the requested TLB way is valid, and return 0 or do
nothing when it is not.
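An illustrative bounds check matching the array sizes quoted above (7 ITLB
ways, 10 DTLB ways); the actual helpers also validate the entry index:
    #include <stdbool.h>
    #include <stdint.h>
    static bool tlb_way_valid(bool dtlb, uint32_t wi)
    {
        return wi < (dtlb ? 10u : 7u);   /* reject out-of-range ways */
    }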
Cc: qemu-stable@nongnu.org
Fixes: b67ea0cd7441 ("target-xtensa: implement memory protection options")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20231215120307.545381-1-jcmvbkbc@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 604927e357c2b292c70826e4ce42574ad126ef32)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
For PC-relative translation blocks, env->eip changes during the
execution of a translation block. Therefore, QEMU must be able to
recover an instruction's PC just from the TranslationBlock struct and
the instruction data. Because a TB will not span two pages, QEMU
stores all the low bits of EIP in the instruction data and replaces them
in x86_restore_state_to_opc. Bits 12 and higher (which may vary between
executions of a PCREL TB, since these only use the physical address in
the hash key) are kept unmodified from env->eip. The assumption is that
these bits of EIP, unlike bits 0-11, will not change as the translation
block executes.
Unfortunately, this is incorrect when the CS base is not aligned to a page.
Then the linear address of the instructions (i.e. the one with the
CS base added) indeed will never span two pages, but bits 12+ of EIP
can actually change. For example, if CS base is 0x80262200 and EIP =
0x6FF4, the first instruction in the translation block will be at linear
address 0x802691F4. Even a very small TB will cross to EIP = 0x7xxx,
while the linear addresses will remain comfortably within a single page.
The fix is simply to use the low bits of the linear address for data[0],
since those don't change. Then x86_restore_state_to_opc uses tb->cs_base
to compute a temporary linear address (referring to some unknown
instruction in the TB, but with the correct values of bits 12 and higher);
the low bits are replaced with data[0], and EIP is obtained by subtracting
again the CS base.
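Expressed as a standalone sketch (names and the 4 KiB page mask are
illustrative; the real code lives in x86_restore_state_to_opc):
    #include <stdint.h>
    /* approx_eip: any EIP within the TB, so bits 12+ are already correct.
     * data0: the page-offset bits of the linear address recorded at
     * translation time. */
    static uint64_t recover_eip(uint64_t approx_eip, uint64_t cs_base,
                                uint64_t data0)
    {
        uint64_t lin = approx_eip + cs_base;            /* temporary linear address */
        lin = (lin & ~0xfffULL) | (data0 & 0xfffULL);   /* splice in the low bits */
        return lin - cs_base;                           /* back to EIP */
    }
With the example above (CS base 0x80262200), every linear address in the TB
stays within the 0x80269xxx page, so splicing the recorded low bits and
subtracting the CS base recovers the correct EIP even though bits 12+ of EIP
changed.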
Huge thanks to Mark Cave-Ayland for the image and initial debugging,
and to Gitlab user @kjliew for help with bisecting another occurrence
of (hopefully!) the same bug.
It should be relatively easy to write a testcase that performs MMIO on
an EIP with different bits 12+ than the first instruction of the translation
block; any help is welcome.
Fixes: e3a79e0e878 ("target/i386: Enable TARGET_TB_PCREL", 2022-10-11)
Cc: qemu-stable@nongnu.org
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1759
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1964
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2012
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 729ba8e933f8af5800c3a92b37e630e9bdaa9f1e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The PCREL patches introduced a bug when updating EIP in the !CF_PCREL case.
Using s->pc in gen_update_eip_next() solves the problem.
Cc: qemu-stable@nongnu.org
Fixes: b5e0d5d22fbf ("target/i386: Fix 32-bit wrapping of pc/eip computation")
Signed-off-by: guoguangyao <guoguangyao18@mails.ucas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240115020804.30272-1-guoguangyao18@mails.ucas.ac.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2926eab8969908bc068629e973062a0fb6ff3759)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
With PCREL, we have a page-relative view of EIP, and an
approximation of PC = EIP+CSBASE that is good enough to
detect page crossings. If we try to recompute PC after
masking EIP, we will mess up that approximation and write
a corrupt value to EIP.
We already handled masking properly for PCREL, so the
fix in b5e0d5d2 was only needed for the !PCREL path.
Cc: qemu-stable@nongnu.org
Fixes: b5e0d5d22fbf ("target/i386: Fix 32-bit wrapping of pc/eip computation")
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240101230617.129349-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a58506b748b8988a95f4fa1a2420ac5c17038b30)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Put correct values (depending on CPU arch) into IOR and ISR on fault.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 31efbe72c6cc54b9cbc2505d78870a8a87a8d392)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Put correct values (depending on CPU arch) into IOR and ISR on fault.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 910ada0225d17530188aa45afcb9412c17267f46)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Move the functionality to set IOR and ISR on fault into its own
function. This will be used by follow-up patches.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 3824e0d643f34ee09e0cc75190c0c4b60928b78c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The value of unwind_breg may reference register %r0, but we need to avoid
accessing gr0 directly and use the value 0 instead.
At runtime I've seen unwind_breg being zero with the Linux kernel when
rfi is used to jump to smp_callin().
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Bruno Haible <bruno@clisp.org>
(cherry picked from commit 5915b67013eb8c3a84e3ef05e6ba4eae55ccd173)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Fix the address translation for PDC space on PA2.0 if PSW.W=0.
Basically, for any address in the 32-bit PDC range from 0xf0000000 to
0xf1000000, keep the lower 32 bits and just set the upper 32 bits to
0xfffffff0.
This mapping fixes the emulated power button in PDC space for 32- and
64-bit machines and is how the physical C3700 machine seems to map
PDC.
Figures H-10 and H-11 in the parisc2.0 spec [1] show that the 32-bit
region will be mapped somewhere into a higher and bigger 64-bit PDC
space. The start and end of this 64-bit space are defined by the
physical address bits. But the figures don't specify where exactly the
mapping will start inside that region. Tests on a real HP C3700
regarding the address of the power button indicate that the lower
32 bits stay the same, though.
[1] https://parisc.wiki.kernel.org/images-parisc/7/73/Parisc2.0.pdf
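A standalone sketch of that mapping (range and constants as described above;
the real check additionally depends on PA2.0 and PSW.W=0):
    #include <stdint.h>
    static uint64_t map_pdc_addr(uint64_t addr)
    {
        if (addr >= 0xf0000000ULL && addr < 0xf1000000ULL) {
            /* keep the lower 32 bits, force the upper 32 bits to 0xfffffff0 */
            addr = (0xfffffff0ULL << 32) | (uint32_t)addr;
        }
        return addr;
    }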
Signed-off-by: Helge Deller <deller@gmx.de>
Tested-by: Bruno Haible <bruno@clisp.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 6ce18d530638f6e4eb87ef8737c634e34362ad2b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
LAE should set the access register corresponding to the first operand;
instead, it always modifies access register 1.
Co-developed-by: Ido Plat <Ido.Plat@ibm.com>
Cc: qemu-stable@nongnu.org
Fixes: a1c7610a6879 ("target-s390x: implement LAY and LAEY instructions")
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20240111092328.929421-2-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit e358a25a97c71c39e3513d9b869cdb82052e50b8)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The mcycle/minstret counter's stop flag is mistakenly updated on a copy
on the stack. Thus the counter increments even when the CY/IR bit in the
mcountinhibit register is set. This commit corrects the behavior.
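The general shape of the bug and fix, with placeholder types and field names
rather than the actual target/riscv definitions:
    #include <stdbool.h>
    #include <stdint.h>
    typedef struct { bool stopped; uint64_t val; } CtrState;   /* placeholder */
    static void stop_counter_buggy(CtrState *ctrs, int i)
    {
        CtrState c = ctrs[i];   /* copy on the stack */
        c.stopped = true;       /* update is lost when the function returns */
    }
    static void stop_counter_fixed(CtrState *ctrs, int i)
    {
        CtrState *c = &ctrs[i]; /* pointer into the real counter state */
        c->stopped = true;      /* update persists */
    }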
Fixes: 3780e33732f88 (target/riscv: Support mcycle/minstret write operation)
Signed-off-by: Xu Lu <luxu.kernel@bytedance.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 5cb0e7abe1635cb82e0033260dac2b910d142f8c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
strerrorname_np is non-portable and breaks building with musl libc.
Use strerror(errno) instead, like we do in other places.
Cc: qemu-stable@nongnu.org
Fixes: commit 082e9e4a58ba (target/riscv/kvm: improve 'init_multiext_cfg' error msg)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2041
Buglink: https://gitlab.alpinelinux.org/alpine/aports/-/issues/15541
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit d424db235434b8356c6b2d9420b846c7ddcc83ea)
|
|
In 32-bit mode, pc = eip + cs_base is also 32-bit, and must wrap.
Failure to do so results in incorrect memory exceptions to the guest.
Before 732d548732ed, this was implicitly done via truncation to
target_ulong but only in qemu-system-i386, not qemu-system-x86_64.
To fix this, we must add conditional zero-extensions.
Since we have to test for 32 vs 64-bit anyway, note that cs_base
is always zero in 64-bit mode.
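The idea in a hedged, standalone form (the CODE64-style test and names are
illustrative):
    #include <stdint.h>
    static uint64_t linear_pc(int code64, uint64_t eip, uint64_t cs_base)
    {
        if (code64) {
            return eip;                      /* cs_base is always zero here */
        }
        return (uint32_t)(eip + cs_base);    /* zero-extend: wrap to 32 bits */
    }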
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2022
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20231212172510.103305-1-richard.henderson@linaro.org>
|
|
Commit 7191f24c7fcf ("accel/kvm/kvm-all: Handle register access errors")
added error checking for KVM_SET_SREGS/KVM_SET_SREGS2. In doing so, it
exposed a long-running bug in current KVM support for SEV-ES where the
kernel assumes that MSR_EFER_LMA will be set explicitly by the guest
kernel, in which case EFER write traps would result in KVM eventually
seeing MSR_EFER_LMA get set and recording it in such a way that it would
be subsequently visible when accessing it via KVM_GET_SREGS/etc.
However, guest kernels currently rely on MSR_EFER_LMA getting set
automatically when MSR_EFER_LME is set and paging is enabled via
CR0_PG_MASK. As a result, the EFER write traps don't actually expose the
MSR_EFER_LMA bit, even though it is set internally, and when QEMU
subsequently tries to pass this EFER value back to KVM via
KVM_SET_SREGS* it will fail various sanity checks and return -EINVAL,
which is now considered fatal due to the aforementioned QEMU commit.
This can be addressed by inferring the MSR_EFER_LMA bit being set when
paging is enabled and MSR_EFER_LME is set, and synthesizing it to ensure
the expected bits are all present in subsequent handling on the host
side.
Ultimately, this handling will be implemented in the host kernel, but to
avoid breaking QEMU's SEV-ES support when using older host kernels, the
same handling can be done in QEMU just after fetching the register
values via KVM_GET_SREGS*. Implement that here.
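A standalone sketch of that inference, using the architectural bit positions
(CR0.PG bit 31, EFER.LME bit 8, EFER.LMA bit 10); the in-tree change applies
this right after fetching the registers:
    #include <stdint.h>
    #define CR0_PG_MASK  (1ull << 31)
    #define EFER_LME     (1ull << 8)
    #define EFER_LMA     (1ull << 10)
    static uint64_t sev_es_fixup_efer(uint64_t cr0, uint64_t efer)
    {
        if ((cr0 & CR0_PG_MASK) && (efer & EFER_LME)) {
            efer |= EFER_LMA;   /* synthesize the bit the EFER trap never exposed */
        }
        return efer;
    }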
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Lara Lazier <laramglazier@gmail.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Cc: <kvm@vger.kernel.org>
Fixes: 7191f24c7fcf ("accel/kvm/kvm-all: Handle register access errors")
Signed-off-by: Michael Roth <michael.roth@amd.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231206155821.1194551-1-michael.roth@amd.com>
|
|
Misc fixes for 8.2
- memory: Avoid unaligned accesses (Patrick)
- target/riscv: Fix variable shadowing (Daniel)
- tests/avocado: Update URL, skip flaky test (Alex, Phil)
* tag 'misc-fixes-20231204' of https://github.com/philmd/qemu:
tests/avocado: mark ReplayKernelNormal.test_mips64el_malta as flaky
tests/avocado: Update yamon-bin-02.22.zip URL
target/riscv/kvm: fix shadowing in kvm_riscv_(get|put)_regs_csr
system/memory: use ldn_he_p/stn_he_p
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* Turn off SME if SVE is turned off (this combination doesn't
currently work and QEMU will assert if you try it)
* tag 'pull-target-arm-20231204-1' of https://git.linaro.org/people/pmaydell/qemu-arm:
target/arm: Disable SME if SVE is disabled
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
KVM_RISCV_GET_CSR() and KVM_RISCV_SET_CSR() use an 'int ret' variable
that is used to do an early 'return' if ret > 0. Both are being called
in functions that are also declaring a 'ret' integer, initialized with
'0', and this integer is used as return of the function.
The result is that the compiler is less than pleased and points out
shadowing errors:
../target/riscv/kvm/kvm-cpu.c: In function 'kvm_riscv_get_regs_csr':
../target/riscv/kvm/kvm-cpu.c:90:13: error: declaration of 'ret' shadows a previous local [-Werror=shadow=compatible-local]
90 | int ret = kvm_get_one_reg(cs, RISCV_CSR_REG(env, csr), &reg); \
| ^~~
../target/riscv/kvm/kvm-cpu.c:539:5: note: in expansion of macro 'KVM_RISCV_GET_CSR'
539 | KVM_RISCV_GET_CSR(cs, env, sstatus, env->mstatus);
| ^~~~~~~~~~~~~~~~~
../target/riscv/kvm/kvm-cpu.c:536:9: note: shadowed declaration is here
536 | int ret = 0;
| ^~~
../target/riscv/kvm/kvm-cpu.c: In function 'kvm_riscv_put_regs_csr':
../target/riscv/kvm/kvm-cpu.c:98:13: error: declaration of 'ret' shadows a previous local [-Werror=shadow=compatible-local]
98 | int ret = kvm_set_one_reg(cs, RISCV_CSR_REG(env, csr), &reg); \
| ^~~
../target/riscv/kvm/kvm-cpu.c:556:5: note: in expansion of macro 'KVM_RISCV_SET_CSR'
556 | KVM_RISCV_SET_CSR(cs, env, sstatus, env->mstatus);
| ^~~~~~~~~~~~~~~~~
../target/riscv/kvm/kvm-cpu.c:553:9: note: shadowed declaration is here
553 | int ret = 0;
| ^~~
The macros are doing early returns for non-zero returns and the local
'ret' variable for both functions is used just to do 'return 0', so
remove them from kvm_riscv_get_regs_csr() and kvm_riscv_put_regs_csr()
and do a straight 'return 0' at the end.
For good measure let's also rename the 'ret' variables in
KVM_RISCV_GET_CSR() and KVM_RISCV_SET_CSR() to '_ret' to make them more
resilient to this kind of error.
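Based on the macro body visible in the compiler output above, the reworked
getter plausibly looks like this (the setter is analogous):
    #define KVM_RISCV_GET_CSR(cs, env, csr, reg)                            \
        do {                                                                \
            int _ret = kvm_get_one_reg(cs, RISCV_CSR_REG(env, csr), &reg);  \
            if (_ret) {                                                     \
                return _ret;                                                \
            }                                                               \
        } while (0)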
Fixes: 937f0b4512 ("target/riscv: Implement kvm_arch_get_registers")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231123101338.1040134-1-dbarboza@ventanamicro.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
|
Replace TABs with spaces to ensure a consistent coding
style with an indentation of 4 spaces in the SH4 subsystem.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/376
Signed-off-by: Yihuan Pan <xun794@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20231124044554.513752-1-xun794@gmail.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
There is no architectural requirement that SME implies SVE, but
our implementation currently assumes it. (FEAT_SME_FA64 does
imply SVE.) So if you try to run a CPU with e.g. "-cpu max,sve=off"
you quickly run into an assert when the guest tries to write to
SMCR_EL1:
#6 0x00007ffff4b38e96 in __GI___assert_fail
(assertion=0x5555566e69cb "sm", file=0x5555566e5b24 "../../target/arm/helper.c", line=6865, function=0x5555566e82f0 <__PRETTY_FUNCTION__.31> "sve_vqm1_for_el_sm") at ./assert/assert.c:101
#7 0x0000555555ee33aa in sve_vqm1_for_el_sm (env=0x555557d291f0, el=2, sm=false) at ../../target/arm/helper.c:6865
#8 0x0000555555ee3407 in sve_vqm1_for_el (env=0x555557d291f0, el=2) at ../../target/arm/helper.c:6871
#9 0x0000555555ee3724 in smcr_write (env=0x555557d291f0, ri=0x555557da23b0, value=2147483663) at ../../target/arm/helper.c:6995
#10 0x0000555555fd1dba in helper_set_cp_reg64 (env=0x555557d291f0, rip=0x555557da23b0, value=2147483663) at ../../target/arm/tcg/op_helper.c:839
#11 0x00007fff60056781 in code_gen_buffer ()
Avoid this unsupported and slightly odd combination by
disabling SME when SVE is not present.
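A hedged fragment of what the adjustment looks like, assuming QEMU's usual
cpu_isar_feature()/FIELD_DP64 helpers and the SME field in ID_AA64PFR1; it may
not match the committed code exactly:
    if (!cpu_isar_feature(aa64_sve, cpu)) {
        cpu->isar.id_aa64pfr1 =
            FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, SME, 0);
    }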
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2005
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231127173318.674758-1-peter.maydell@linaro.org
|
|
Misc fixes for 8.2
* buildsys: Invoke bash via 'env' (Samuel)
* doc: Fix example in s390-cpu-topology.rst (Zhao)
* HW: Fix AVR ATMega reset stack (Gihun) and VT82C686 IRQ routing (Zoltan)
* tag 'misc-next-20231128' of https://github.com/philmd/qemu:
docs/s390: Fix wrong command example in s390-cpu-topology.rst
hw/avr/atmega: Fix wrong initial value of stack pointer
hw/audio/via-ac97: Route interrupts using via_isa_set_irq()
hw/isa/vt82c686: Route PIRQ inputs using via_isa_set_irq()
hw/usb/vt82c686-uhci-pci: Use ISA instead of PCI interrupts
hw/isa/vt82c686: Bring back via_isa_set_irq()
target/hexagon/idef-parser/prepare: use env to invoke bash
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
The current implementation initializes the stack pointer of AVR devices
to 0. Although older AVR devices used to be like that, newer ones set
it to RAMEND.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1525
Signed-off-by: Gihun Nam <gihun.nam@outlook.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <PH0P222MB0010877445B594724D40C924DEBDA@PH0P222MB0010.NAMP222.PROD.OUTLOOK.COM>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
|
This file is the only one involved in the compilation process which
still uses the /bin/bash path.
Signed-off-by: Samuel Tardieu <sam@rfc1149.net>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-ID: <20231123211506.636533-1-sam@rfc1149.net>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
|
In commit edac4d8a168 back in 2015 when we added support for
the virtual timer offset CNTVOFF_EL2, we didn't correctly update
the timer-recalculation code that figures out when the timer
interrupt is next going to change state. We got it wrong in
two ways:
* for the 0->1 transition, we didn't notice that gt->cval + offset
can overflow a uint64_t
* for the 1->0 transition, we didn't notice that the transition
might now happen before the count rolls over, if offset > count
In the former case, we end up trying to set the next interrupt
for a time in the past, which results in QEMU hanging as the
timer fires continuously.
In the latter case, we would fail to update the interrupt
status when we are supposed to.
Fix the calculations in both cases.
The test case is Alex Bennée's from the bug report, and tests
the 0->1 transition overflow case.
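For the 0->1 overflow case, a standalone sketch of a saturating deadline
computation (QEMU's gt_recalc_timer() differs in detail):
    #include <stdint.h>
    static uint64_t next_fire_time(uint64_t cval, uint64_t offset)
    {
        uint64_t t = cval + offset;
        if (t < cval) {           /* the uint64_t addition wrapped */
            return UINT64_MAX;    /* saturate rather than arm a past deadline */
        }
        return t;
    }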
Fixes: edac4d8a168 ("target-arm: Add CNTVOFF_EL2")
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/60
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231120173506.3729884-1-peter.maydell@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The syndrome register value always has an IL field at bit 25, which
is 0 for a trap on a 16 bit instruction, and 1 for a trap on a 32
bit instruction (or for exceptions which aren't traps on a known
instruction, like PC alignment faults). This means that our
syn_*() functions should always either take an is_16bit argument to
determine whether to set the IL bit, or else unconditionally set it.
We missed setting the IL bit for the syndrome for three kinds of trap:
* an SVE access exception
* a pointer authentication check failure
* a BTI (branch target identification) check failure
All of these traps are AArch64 only, and so the instruction causing
the trap is always 64 bit. This means we can unconditionally set
the IL bit in the syn_*() function.
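As an illustration, one of the three syndrome builders with the IL bit set
unconditionally; the constants are restated locally (EC in bits [31:26], IL at
bit 25, SVE access trap EC 0x19) rather than taken from target/arm/syndrome.h:
    #include <stdint.h>
    #define ARM_EL_EC_SHIFT   26
    #define ARM_EL_IL         (1u << 25)    /* 32-bit instruction length */
    #define EC_SVEACCESSTRAP  0x19u
    static inline uint32_t syn_sve_access_trap(void)
    {
        return (EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL;
    }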
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231120150121.3458408-1-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
According to the RISC-V specification, sect. 9.5 on two-stage translation,
when V=1 the vsstatus (mstatus in QEMU's terms) field MXR, which makes
execute-only pages readable, only overrides VS-stage page protection.
Setting MXR at HS-level (mstatus_hs), however, overrides both VS-stage
and G-stage execute-only permissions.
The hypervisor extension changes the behavior of the MXR/MPV/MPRV bits.
According to RISC-V specification sect. 9.4.1, when MPRV=1, explicit memory
accesses are translated and protected, and endianness is applied, as
though the current virtualization mode were set to MPV and the current
nominal privilege mode were set to MPP. vsstatus.MXR makes readable
those pages marked executable at the VS translation stage.
Fixes: 36a18664ba ("target/riscv: Implement second stage MMU")
Signed-off-by: Ivan Klokov <ivan.klokov@syntacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231121071757.7178-3-ivan.klokov@syntacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
According to the RISC-V privileged spec, sect. 5.3.2 Virtual Address Translation
Process, access-fault exceptions may be raised only after the PMA/PMP check.
The current implementation generates an access fault for mbare mode even if
there were no PMA/PMP errors. This patch removes the erroneous MMU mode check
and generates an access-fault exception based on the pmp_violation flag only.
Fixes: 1448689c7b ("target/riscv: Allow specifying MMU stage")
Signed-off-by: Ivan Klokov <ivan.klokov@syntacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-ID: <20231121071757.7178-2-ivan.klokov@syntacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
The zicntr and zihpm extensions were officially added in the privileged
instruction set specification 1.12. However, QEMU has implemented them
since long before that, and thus they are forced on during CPU
initialization to ensure compatibility (see riscv_cpu_init).
riscv_cpu_disable_priv_spec_isa_exts was not updated when the above
behavior was introduced, resulting in these extensions being disabled
after all.
Signed-off-by: Clément Chigot <chigot@adacore.com>
Fixes: c004099330 ("target/riscv: add zicntr extension flag for TCG")
Fixes: 0824121660 ("target/riscv: add zihpm extension flag for TCG")
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231114123913.536194-1-chigot@adacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
|
|
https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* enable FEAT_RNG on Neoverse-N2
* hw/intc/arm_gicv3: ICC_PMR_EL1 high bits should be RAZ
* Fix SME FMOPA (16-bit), BFMOPA
* hw/core/machine: Constify MachineClass::valid_cpu_types[]
* stm32f* machines: Report error when user asks for wrong CPU type
* hw/arm/fsl-imx: Do not ignore Error argument
* tag 'pull-target-arm-20231121' of https://git.linaro.org/people/pmaydell/qemu-arm:
hw/arm/fsl-imx: Do not ignore Error argument
hw/arm/stm32f100: Report error when incorrect CPU is used
hw/arm/stm32f205: Report error when incorrect CPU is used
hw/arm/stm32f405: Report error when incorrect CPU is used
hw/core/machine: Constify MachineClass::valid_cpu_types[]
target/arm: Fix SME FMOPA (16-bit), BFMOPA
hw/intc/arm_gicv3: ICC_PMR_EL1 high bits should be RAZ
target/arm: enable FEAT_RNG on Neoverse-N2
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
The patch below fixes a bug in the VSX_CVT_FP_TO_INT and VSX_CVT_FP_TO_INT2
macros in target/ppc/fpu_helper.c where a non-NaN floating point value from the
source vector is incorrectly converted to 0, 0x80000000, or 0x8000000000000000
instead of the expected value if a preceding source floating point value from
the same source vector was a NaN.
The bug in the VSX_CVT_FP_TO_INT and VSX_CVT_FP_TO_INT2 macros in
target/ppc/fpu_helper.c was introduced with commit c3f24257e3c0.
This patch also adds a new vsx_f2i_nan test in tests/tcg/ppc64 that checks that
the VSX xvcvspsxws, xvcvspuxws, xvcvspsxds, xvcvspuxds, xvcvdpsxws, xvcvdpuxws,
xvcvdpsxds, and xvcvdpuxds instructions correctly convert non-NaN floating point
values to integer values if the source vector contains NaN floating point values.
Fixes: c3f24257e3c0 ("target/ppc: Clear fpstatus flags on helpers missing it")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1941
Signed-off-by: John Platts <john_platts@hotmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
|
|
Perform the loop increment unconditionally, not nested
within the predication.
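The general shape of the fix, with illustrative names, stride and predicate
handling; only the per-element work stays under the predicate, while the column
index advances on every iteration:
    #include <stdint.h>
    static void fmopa_columns(unsigned oprsz, unsigned esz,
                              const uint8_t *pred, void (*elem)(unsigned col))
    {
        for (unsigned col = 0; col < oprsz; col += esz) {
            if (pred[col / esz] & 1) {
                elem(col);   /* predicated work */
            }
        }
    }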
Cc: qemu-stable@nongnu.org
Fixes: 3916841ac75 ("target/arm: Implement FMOPA, FMOPS (widening)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1985
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20231117193135.1180657-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
I noticed that Neoverse-V1 has FEAT_RNG enabled, so let's enable it on
Neoverse-N2 as well.
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231114103443.1652308-1-marcin.juszkiewicz@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
https://github.com/hdeller/qemu-hppa into staging
HPPA64-PATCHES-for-8.2
Two patches for 8.2.
The SHRPD patch fixes a real translation bug which then allows booting
the 64-bit Linux kernels of the Debian 11 and Debian 12 installation CDs.
The second patch adds the instruction byte sequence to the
assembly log. This is not an actual bug fix, but it's important since
it helps a lot when trying to fix QEMU translation bugs on hppa.
* tag 'hppa64-fixes-pull-request' of https://github.com/hdeller/qemu-hppa:
disas/hppa: Show hexcode of instruction along with disassembly
target/hppa: Fix 64-bit SHRPD instruction
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Error reporting patches for 2023-11-17
* tag 'pull-error-2023-11-17' of https://repo.or.cz/qemu/armbru:
target/i386/cpu: Improve error message for property "vendor"
balloon: Fix a misleading error message
net: Fix a misleading error message
ui/qmp-cmds: Improve two error messages
qga: Improve guest-exec-status error message
hmp: Improve sync-profile error message
spapr/pci: Correct "does not support hotplugging error messages
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
trivial patches for 2023-11-16
* tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu: (27 commits)
util/range.c: spelling fix: inbetween
util/filemonitor-inotify.c: spelling fix: kenel
tests/qtest/ufs-test.c: spelling fix: tranfer
tests/qtest/migration-test.c: spelling fix: bandwith
target/riscv/cpu.h: spelling fix: separatly
include/hw/virtio/vhost.h: spelling fix: sate
include/hw/hyperv/dynmem-proto.h: spelling fix: nunber, atleast
include/block/ufs.h: spelling fix: setted
hw/net/cadence_gem.c: spelling fixes: Octects
hw/mem/memory-device.c: spelling fix: ontaining
contrib/vhost-user-gpu/virgl.c: spelling fix: mesage
migration/rdma.c: spelling fix: asume
target/hppa: spelling fixes: Indicies, Truely
target/arm/tcg: spelling fixes: alse, addreses
docs/system/arm/emulation.rst: spelling fix: Enhacements
docs/devel/migration.rst: spelling fixes: doen't, diferent, responsability, recomend
docs/about/deprecated.rst: spelling fix: becase
gdbstub: spelling fix: respectivelly
hw/cxl: spelling fixes: limitaions, potentialy, intialized
linux-user: spelling fixes: othe, necesary
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
When shifting the two joined 64-bit registers right, shift the upper
64-bit register to the left and the lower 64-bit register to the right
before merging them with OR.
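In plain C, with the shift amount assumed to be in 1..63 so neither shift
count reaches 64:
    #include <stdint.h>
    static uint64_t shrpd(uint64_t hi, uint64_t lo, unsigned sa)
    {
        return (hi << (64 - sa)) | (lo >> sa);
    }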
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
|
|
Improve
$ qemu-system-x86_64 -device max-x86_64-cpu,vendor=me
qemu-system-x86_64: -device max-x86_64-cpu,vendor=me: Property '.vendor' doesn't take value 'me'
to
qemu-system-x86_64: -device max-x86_64-cpu,vendor=0123456789abc: value of property 'vendor' must consist of exactly 12 characters
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20231031111059.3407803-8-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
[Typo corrected]
|
|
Fixes: 40336d5b1d4c "target/riscv: Add HS-mode virtual interrupt and IRQ filtering support."
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Fixes: bb67ec32a0bb "target/hppa: Include PSW_P in tb flags and mmu index"
Fixes: d7553f3591bb "target/hppa: Populate an interval tree with valid tlb entries"
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|