The low bit of MMU indices for x86 TCG indicates whether the processor is
in 32-bit mode and therefore linear addresses have to be masked to 32 bits.
However, the index was computed incorrectly, leading to possible conflicts
in the TLB for any address above 4G.
Analyzed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Fixes: b1661801c18 ("target/i386: Fix physical address truncation", 2024-02-28)
Fixes: a28b6b4e743 ("target/i386: Fix physical address truncation" in stable-8.2)
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2206
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2cc68629a6fc198f4a972698bdd6477f883aedfb)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: move changes for x86_cpu_mmu_index() to cpu_mmu_index() due to missing
v8.2.0-1030-gace0c5fe59 "target/i386: Populate CPUClass.mmu_index")
|
|
Accesses from a 32-bit environment (32-bit code segment for instruction
accesses, EFER.LMA==0 for processor accesses) have to mask away the
upper 32 bits of the address. While a bit wasteful, the easiest way
to do so is to use separate MMU indexes; these days QEMU is compiled
with a fixed value for NB_MMU_MODES anyway. Split MMU_USER_IDX,
MMU_KSMAP_IDX and MMU_KNOSMAP_IDX into two indexes each.
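As a minimal illustration of the idea (the encoding below is hypothetical, not QEMU's actual definitions), each split index can carry a bit that says whether the access is 32-bit, so 32-bit and 64-bit translations never share a TLB entry:

    #include <stdbool.h>

    /* Hypothetical encoding: the low bit of the MMU index means "32-bit
     * access", matching the convention described in the entry above. */
    static inline int mmu_index_sketch(int base_idx, bool is_32bit)
    {
        return (base_idx << 1) | (is_32bit ? 1 : 0);
    }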
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 90f641531c782c873a05895f411c05fbbbef3c49)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: move changes for x86_cpu_mmu_index() to cpu_mmu_index() due to missing
v8.2.0-1030-gace0c5fe59 "target/i386: Populate CPUClass.mmu_index")
|
|
Remove knowledge of specific MMU indexes (other than MMU_NESTED_IDX and
MMU_PHYS_IDX) from mmu_translate(). This will make it possible to split
32-bit and 64-bit MMU indexes.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 5f97afe2543f09160a8d123ab6e2e8c6d98fa9ce)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: context fixup in target/i386/cpu.h due to other changes in that area)
|
|
While the 8-bit input elements are sequential in the input vector,
the 32-bit output elements are not sequential in the output matrix.
Do not attempt to compute two 32-bit outputs at the same time.
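A simplified sketch of the corrected per-element computation (illustrative only; the real SME helpers operate on the ZA tile state and handle predication):

    #include <stddef.h>
    #include <stdint.h>

    /* Each 32-bit accumulator is the dot product of four sequential 8-bit
     * inputs; compute one output element at a time, since the outputs are
     * not adjacent in the result matrix. */
    static void smopa_sketch(int32_t *za, const int8_t *zn, const int8_t *zm,
                             size_t n)
    {
        for (size_t row = 0; row < n; row++) {
            for (size_t col = 0; col < n; col++) {
                int32_t sum = za[row * n + col];
                for (size_t k = 0; k < 4; k++) {
                    sum += (int32_t)zn[row * 4 + k] * (int32_t)zm[col * 4 + k];
                }
                za[row * n + col] = sum;
            }
        }
    }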
Cc: qemu-stable@nongnu.org
Fixes: 23a5e3859f5 ("target/arm: Implement SME integer outer product")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2083
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240305163931.242795-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit d572bcb222010b38b382871a23b2f38e2c3f4d2d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The A20 mask is only applied to the final memory access. Nested
page tables are always walked with the raw guest-physical address.
Unlike the previous patch, in this one the masking must be kept, but
it was done too early.
Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b5a9de3259f4c791bde2faff086dd5737625e41e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
If ptw_translate() does a MMU_PHYS_IDX access, the A20 mask is already
applied in get_physical_address(), which is called via probe_access_full()
and x86_cpu_tlb_fill().
If ptw_translate() on the other hand does a MMU_NESTED_IDX access,
the A20 mask must not be applied to the address that is looked up in
the nested page tables; it must be applied only to the addresses that
hold the NPT entries (which is achieved via MMU_PHYS_IDX, per the
previous paragraph).
Therefore, we can remove A20 masking from the computation of the page
table entry's address, and let get_physical_address() or mmu_translate()
apply it when they know they are returning a host-physical address.
Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a28fe7dc1939333c81b895cdced81c69eb7c5ad0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The address translation logic in get_physical_address() will currently
truncate physical addresses to 32 bits unless long mode is enabled.
This is incorrect when using physical address extensions (PAE) outside
of long mode, with the result that a 32-bit operating system using PAE
to access memory above 4G will experience undefined behaviour.
The truncation code was originally introduced in commit 33dfdb5 ("x86:
only allow real mode to access 32bit without LMA"), where it applied
only to translations performed while paging is disabled (and so cannot
affect guests using PAE).
Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
rearranged the code such that the truncation also applied to the use
of MMU_PHYS_IDX and MMU_NESTED_IDX. Commit 4a1e9d4 ("target/i386: Use
atomic operations for pte updates") brought this truncation into scope
for page table entry accesses, and is the first commit for which a
Windows 10 32-bit guest will reliably fail to boot if memory above 4G
is present.
The truncation code however is not completely redundant. Even though the
maximum address size for any executed instruction is 32 bits, helpers for
operations such as BOUND, FSAVE or XSAVE may ask get_physical_address()
to translate an address outside of the 32-bit range, if invoked with an
argument that is close to the 4G boundary. Likewise for processor
accesses, for example TSS or IDT accesses, when EFER.LMA==0.
So, move the address truncation in get_physical_address() so that it
applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and MMU_NESTED_IDX.
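A minimal sketch of where the truncation now sits (names and structure are illustrative, not the actual get_physical_address() code):

    #include <stdbool.h>
    #include <stdint.h>

    enum { SK_MMU_PHYS_IDX, SK_MMU_NESTED_IDX };   /* hypothetical values */

    /* Page-table and nested-page-table walks see the full address; only
     * the 32-bit MMU indexes are masked to 32 bits. */
    static uint64_t addr_for_translation(uint64_t addr, int mmu_idx,
                                         bool idx_is_64bit)
    {
        if (mmu_idx == SK_MMU_PHYS_IDX || mmu_idx == SK_MMU_NESTED_IDX ||
            idx_is_64bit) {
            return addr;
        }
        return (uint32_t)addr;
    }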
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Cc: qemu-stable@nongnu.org
Co-developed-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Michael Brown <mcb30@ipxe.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b1661801c184119a10ad6cbc3b80330fc22e7b2c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: drop unrelated change in target/i386/cpu.c)
|
|
MSR_VM_HSAVE_PA bits 0-11 are reserved, as are the bits above the
maximum physical address width of the processor. Setting them to
1 causes a #GP (see "15.30.4 VM_HSAVE_PA MSR" in the AMD manual).
The same is true of VMCB addresses passed to VMRUN/VMLOAD/VMSAVE,
even though the manual is not clear on that.
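The check amounts to something like this (an illustrative helper, not the actual SVM code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Bits 0-11 and all bits at or above the physical address width are
     * reserved; a non-zero reserved bit makes the value invalid (#GP).
     * Assumes phys_bits < 64, which always holds for x86. */
    static bool hsave_pa_valid(uint64_t val, unsigned phys_bits)
    {
        uint64_t reserved = 0xfffULL | ~((1ULL << phys_bits) - 1);
        return (val & reserved) == 0;
    }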
Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d09c79010ffd880dc69e7a21e3cfdef90b928fb8)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
CR3 bits 63:32 are ignored in 32-bit mode (either legacy 2-level
paging or PAE paging). Do this in mmu_translate() to remove the last
case where get_physical_address() meaningfully drops the high bits of
the address.
Cc: qemu-stable@nongnu.org
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 68fb78d7d5723066ec2cacee7d25d67a4143b42f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
is_prefix_insn_excp() loads the first word of the instruction at the
address which caused the exception, to determine whether or not it was
prefixed, so that the prefix bit can be set in [H]SRR1.
This works if the instruction image can be loaded, but if the exception
was caused by an ifetch, this load could fail and cause a recursive
exception and crash. Machine checks caused by ifetch are not excluded
from the prefix check and can crash (see issue 2108 for an example).
Fix this by excluding machine checks caused by ifetch from the prefix
check.
Cc: qemu-stable@nongnu.org
Acked-by: Cédric Le Goater <clg@kaod.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2108
Fixes: 55a7fa34f89 ("target/ppc: Machine check on invalid real address access on POWER9/10")
Fixes: 5a5d3b23cb2 ("target/ppc: Add SRR1 prefix indication to interrupt handlers")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
(cherry picked from commit c8fd9667e5975fe2e70a906e125a758737eab707)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The move to decodetree flipped the inequality test for the VEC / VSX
MSR facility check.
This caused application crashes under Linux, where these facility
unavailable interrupts are used for lazy-switching of VEC/VSX register
sets. Getting the incorrect interrupt would result in wrong registers
being loaded, potentially overwriting live values and/or exposing
stale ones.
Cc: qemu-stable@nongnu.org
Reported-by: Joel Stanley <joel@jms.id.au>
Fixes: 70426b5bb738 ("target/ppc: moved stxvx and lxvx from legacy to decodtree")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1769
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Tested-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
(cherry picked from commit 2cc0e449d17310877fb28a942d4627ad22bb68ea)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
As specified by the Intel Manual Vol. 2 (3-180), cmp instructions are
not allowed to have a lock prefix, and a #UD should be raised. Without
this patch, s1->T0 will be uninitialized and then used in the OP_CMPL
case.
Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-ID: <20240215095015.570748-2-ziqiaokong@gmail.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 99d0dcd7f102c07a510200d768cae65e5db25d23)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
CPUID leaf 7 was grouped together with SGX leaf 0x12 by commit
b9edbadefb9e ("i386: Propagate SGX CPUID sub-leafs to KVM") by mistake.
SGX leaf 0x12 has its own logic for deciding whether a subleaf (starting
from 2) is valid: bits 3:0 of the corresponding EAX must be 1.
Leaf 7 instead follows the rule that EAX of subleaf 0 enumerates the
maximum valid subleaf.
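The two enumeration rules can be sketched as follows (illustrative helpers, not QEMU's code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Leaf 7: EAX of subleaf 0 gives the maximum valid subleaf. */
    static bool leaf7_subleaf_valid(uint32_t eax_subleaf0, uint32_t subleaf)
    {
        return subleaf <= eax_subleaf0;
    }

    /* SGX leaf 0x12: from subleaf 2 onwards, bits 3:0 of that subleaf's
     * EAX must be 1 for the subleaf to be valid. */
    static bool sgx_leaf12_subleaf_valid(uint32_t eax, uint32_t subleaf)
    {
        return subleaf < 2 || (eax & 0xf) == 1;
    }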
Fixes: b9edbadefb9e ("i386: Propagate SGX CPUID sub-leafs to KVM")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240125024016.2521244-4-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0729857c707535847d7fe31d3d91eb8b2a118e3c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The existing code misses a decrement of cpuid_i when it skips leaf 0x1F,
leaving a blank CPUID entry (with leaf and subleaf as 0, and all fields
stuffed with 0s) in the CPUID array. This conflicts with the correct
CPUID leaf 0.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240125024016.2521244-2-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 10f92799af8ba3c3cef2352adcd4780f13fbab31)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The values of the FEAT_XSAVE_XCR0_HI and FEAT_XSAVE_XSS_HI leaves also
need to be masked by the XCR0 and XSS masks, respectively, to be
logically correct.
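A minimal sketch of the masking (assuming the *_HI leaves hold bits 63:32 of the respective masks; names are illustrative):

    #include <stdint.h>

    static void mask_xsave_hi_leaves(uint32_t *xcr0_hi, uint32_t *xss_hi,
                                     uint64_t xcr0_mask, uint64_t xss_mask)
    {
        *xcr0_hi &= (uint32_t)(xcr0_mask >> 32);
        *xss_hi  &= (uint32_t)(xss_mask >> 32);
    }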
Fixes: 301e90675c3f ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240115091325.1904229-3-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a11a365159b944e05be76f3ec3b98c8b38cb70fd)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The FEAT_XSAVE_XSS_LO and FEAT_XSAVE_XSS_HI leaves also need to be
cleared when CPUID_EXT_XSAVE is not set.
Fixes: 301e90675c3f ("target/i386: Enable support for XSAVES based features")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Yang Weijiang <weijiang.yang@intel.com>
Message-ID: <20240115091325.1904229-2-xiaoyao.li@intel.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 81f5cad3858f27623b1b14467926032d229b76cc)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
It doesn't make sense to read the value of MDCR_EL2 on a non-A-profile
CPU, and in fact if you try to do it we will assert:
#6 0x00007ffff4b95e96 in __GI___assert_fail
(assertion=0x5555565a8c70 "!arm_feature(env, ARM_FEATURE_M)", file=0x5555565a6e5c "../../target/arm/helper.c", line=12600, function=0x5555565a9560 <__PRETTY_FUNCTION__.0> "arm_security_space_below_el3") at ./assert/assert.c:101
#7 0x0000555555ebf412 in arm_security_space_below_el3 (env=0x555557bc8190) at ../../target/arm/helper.c:12600
#8 0x0000555555ea6f89 in arm_is_el2_enabled (env=0x555557bc8190) at ../../target/arm/cpu.h:2595
#9 0x0000555555ea942f in arm_mdcr_el2_eff (env=0x555557bc8190) at ../../target/arm/internals.h:1512
We might call pmu_counter_enabled() on an M-profile CPU (for example
from the migration pre/post hooks in machine.c); this should always
return false because these CPUs don't set ARM_FEATURE_PMU.
Avoid the assertion by not calling arm_mdcr_el2_eff() before we
have done the early return for "PMU not present".
This fixes an assertion failure if you try to do a loadvm or
savevm for an M-profile board.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2155
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240208153346.970021-1-peter.maydell@linaro.org
(cherry picked from commit ac1d88e9e7ca0bed83e91e07ce6d0597f10cc77d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The TBI and TCMA bits are located within mtedesc, not desc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 855f94eca80c85a99f459e36684ea2f98f6a3243)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
These functions "use the standard load helpers", but
fail to clean_data_tbi or populate mtedesc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 623507ccfcfebb0f10229ae5de3f85a27fb615a7)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Share the code that creates mtedesc and embeds it within simd_desc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 96fcc9982b4aad7aced7fbff046048bbccc6cb0c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
When we added SVE_MTEDESC_SHIFT, we effectively limited the
maximum size of MTEDESC. Adjust SIZEM1 to consume the remaining
bits (32 - 10 - 5 - 12 == 5). Assert that the data to be stored
fits within the field (expecting 8 * 4 - 1 == 31, exact fit).
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit b12a7671b6099a26ce5d5ab09701f151e21c112c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The field is encoded as [0-3], which is convenient for
indexing our array of function pointers, but the true
value is [1-4]. Adjust before calling do_mem_zpa.
Add an assert, and move the comment re passing ZT to
the helper back next to the relevant code.
Cc: qemu-stable@nongnu.org
Fixes: 206adacfb8d ("target/arm: Add mte helpers for sve scalar + int loads")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 64c6e7444dff64b42d11b836b9aec9acfbe8ecc2)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
In commit 4315f7c614743 we restructured the logic for creating the
VFP related properties to avoid testing the aa32_simd_r32 feature on
AArch64 CPUs. However in the process we accidentally stopped
exposing the "vfp" QOM property on AArch32 TCG CPUs.
This mostly hasn't had any ill effects because not many people want
to disable VFP, but it wasn't intentional. Reinstate the property.
Cc: qemu-stable@nongnu.org
Fixes: 4315f7c614743 ("target/arm: Restructure has_vfp_d32 test")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2098
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240126193432.2210558-1-peter.maydell@linaro.org
(cherry picked from commit 185e3fdf8d106cb2f7d234d5e6453939c66db2a9)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Debug exceptions that target AArch32 Hyp mode are reported differently
than on AArch64. Internally, QEMU uses the AArch64 syndromes; therefore
such exceptions need to be converted to either a prefetch abort
(breakpoints, vector catch) or a data abort (watchpoints).
Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Klötzke <jan.kloetzke@kernkonzept.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240127202758.3326381-1-jan.kloetzke@kernkonzept.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit f670be1aad33e801779af580398895b9455747ee)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
A typo in the implementation of isar_feature_aa64_tidcp1() means we
were checking the field in the wrong ID register, so we might have
provided the feature on CPUs that don't have it and not provided
it on CPUs that should have it. Correct this bug.
Cc: qemu-stable@nongnu.org
Fixes: 9cd0c0dec97be9 "target/arm: Implement FEAT_TIDCP1"
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2120
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240123160333.958841-1-peter.maydell@linaro.org
(cherry picked from commit ee0a2e3c9d2991a11c13ffadb15e4d0add43c257)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
In commit 1b7bc9b5c8bf374dd we changed handle_vec_simd_sqshrn() so
that instead of starting with a 0 value and depositing in each new
element from the narrowing operation, it instead started with the raw
result of the narrowing operation of the first element.
This is fine in the vector case, because the deposit operations for
the second and subsequent elements will always overwrite any higher
bits that might have been in the first element's result value in
tcg_rd. However in the scalar case we only go through this loop
once. The effect is that for a signed narrowing operation, if the
result is negative then we will now return a value where the bits
above the first element are incorrectly 1 (because the narrowfn
returns a sign-extended result, not one that is truncated to the
element size).
Fix this by using an extract operation to get exactly the correct
bits of the output of the narrowfn for element 1, instead of a
plain move.
Cc: qemu-stable@nongnu.org
Fixes: 1b7bc9b5c8bf374dd3 ("target/arm: Avoid tcg_const_ptr in handle_vec_simd_sqshrn")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2089
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240123153416.877308-1-peter.maydell@linaro.org
(cherry picked from commit 6fffc8378562c7fea6290c430b4f653f830a4c1a)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The r[id]tlb[01] and [iw][id]tlb opcodes use a TLB way index passed in a
register by the guest. The host uses 3 bits of the index for ITLB
indexing and 4 bits for DTLB, but there are only 7 entries in the ITLB
array and 10 in the DTLB array, so a malicious guest may trigger
out-of-bounds accesses to these arrays.
Change the split_tlb_entry_spec return type to bool to indicate whether
the TLB way passed to it is valid. Change get_tlb_entry to return NULL
when an invalid TLB way is requested. Add an assertion to
xtensa_tlb_get_entry that the requested TLB way and entry indices are
valid. Add checks to the [rwi]tlb helpers that the requested TLB way is
valid, and return 0 or do nothing when it is not.
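The added bounds check boils down to something like this (array sizes taken from the text above; the helper name is illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* The ITLB has 7 ways and the DTLB has 10, but the guest-controlled
     * field can encode up to 8 and 16 values respectively. */
    static bool tlb_way_valid(uint32_t way, bool is_dtlb)
    {
        return way < (is_dtlb ? 10u : 7u);
    }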
Cc: qemu-stable@nongnu.org
Fixes: b67ea0cd7441 ("target-xtensa: implement memory protection options")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20231215120307.545381-1-jcmvbkbc@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 604927e357c2b292c70826e4ce42574ad126ef32)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
For PC-relative translation blocks, env->eip changes during the
execution of a translation block. Therefore, QEMU must be able to
recover an instruction's PC just from the TranslationBlock struct and
the instruction data. Because a TB will not span two pages, QEMU
stores all the low bits of EIP in the instruction data and replaces them
in x86_restore_state_to_opc. Bits 12 and higher (which may vary between
executions of a PCREL TB, since these only use the physical address in
the hash key) are kept unmodified from env->eip. The assumption is that
these bits of EIP, unlike bits 0-11, will not change as the translation
block executes.
Unfortunately, this is incorrect when the CS base is not aligned to a page.
Then the linear address of the instructions (i.e. the one with the
CS base added) indeed will never span two pages, but bits 12+ of EIP
can actually change. For example, if CS base is 0x80262200 and EIP =
0x6FF4, the first instruction in the translation block will be at linear
address 0x802691F4. Even a very small TB will cross to EIP = 0x7xxx,
while the linear addresses will remain comfortably within a single page.
The fix is simply to use the low bits of the linear address for data[0],
since those don't change. Then x86_restore_state_to_opc uses tb->cs_base
to compute a temporary linear address (referring to some unknown
instruction in the TB, but with the correct values of bits 12 and higher);
the low bits are replaced with data[0], and EIP is obtained by subtracting
again the CS base.
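The recovery step can be sketched as follows (a simplified illustration of the description above; the number of low bits kept in data[0] is an implementation detail, 12 is used here):

    #include <stdint.h>

    static uint64_t restore_eip_sketch(uint64_t cur_eip, uint64_t cs_base,
                                       uint64_t data0)
    {
        uint64_t linear = cur_eip + cs_base;        /* bits 12+ are correct */
        linear = (linear & ~(uint64_t)0xfff) | (data0 & 0xfff);
        return linear - cs_base;                    /* back to EIP */
    }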
Huge thanks to Mark Cave-Ayland for the image and initial debugging,
and to Gitlab user @kjliew for help with bisecting another occurrence
of (hopefully!) the same bug.
It should be relatively easy to write a testcase that performs MMIO on
an EIP with different bits 12+ than the first instruction of the translation
block; any help is welcome.
Fixes: e3a79e0e878 ("target/i386: Enable TARGET_TB_PCREL", 2022-10-11)
Cc: qemu-stable@nongnu.org
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1759
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1964
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2012
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 729ba8e933f8af5800c3a92b37e630e9bdaa9f1e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The PCREL patches introduced a bug when updating EIP in the !CF_PCREL
case. Using s->pc in gen_update_eip_next() solves the problem.
Cc: qemu-stable@nongnu.org
Fixes: b5e0d5d22fbf ("target/i386: Fix 32-bit wrapping of pc/eip computation")
Signed-off-by: guoguangyao <guoguangyao18@mails.ucas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240115020804.30272-1-guoguangyao18@mails.ucas.ac.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 2926eab8969908bc068629e973062a0fb6ff3759)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
With PCREL, we have a page-relative view of EIP, and an
approximation of PC = EIP+CSBASE that is good enough to
detect page crossings. If we try to recompute PC after
masking EIP, we will mess up that approximation and write
a corrupt value to EIP.
We already handled masking properly for PCREL, so the
fix in b5e0d5d2 was only needed for the !PCREL path.
Cc: qemu-stable@nongnu.org
Fixes: b5e0d5d22fbf ("target/i386: Fix 32-bit wrapping of pc/eip computation")
Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20240101230617.129349-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit a58506b748b8988a95f4fa1a2420ac5c17038b30)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Put correct values (depending on CPU arch) into IOR and ISR on fault.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 31efbe72c6cc54b9cbc2505d78870a8a87a8d392)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Put correct values (depending on CPU arch) into IOR and ISR on fault.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 910ada0225d17530188aa45afcb9412c17267f46)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Move functionality to set IOR and ISR on fault into own
function. This will be used by follow-up patches.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 3824e0d643f34ee09e0cc75190c0c4b60928b78c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The value of unwind_breg may reference register %r0, but we need to avoid
accessing gr0 directly and use the value 0 instead.
At runtime I've seen unwind_breg being zero with the Linux kernel when
rfi is used to jump to smp_callin().
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Bruno Haible <bruno@clisp.org>
(cherry picked from commit 5915b67013eb8c3a84e3ef05e6ba4eae55ccd173)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Fix the address translation for PDC space on PA2.0 if PSW.W=0.
Basically, for any address in the 32-bit PDC range from 0xf0000000 to
0xf1000000 keep the lower 32-bits and just set the upper 32-bits to
0xfffffff0.
This mapping fixes the emulated power button in PDC space for 32- and
64-bit machines and is how the physical C3700 machine seems to map
PDC.
Figures H-10 and H-11 in the parisc2.0 spec [1] show that the 32-bit
region will be mapped somewhere into a higher and bigger 64-bit PDC
space. The start and end of this 64-bit space is defined by the
physical address bits. But the figures don't specify where exactly the
mapping will start inside that region. Tests on a real HP C3700
regarding the address of the power button indicate that the lower
32 bits will stay the same, though.
[1] https://parisc.wiki.kernel.org/images-parisc/7/73/Parisc2.0.pdf
Signed-off-by: Helge Deller <deller@gmx.de>
Tested-by: Bruno Haible <bruno@clisp.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 6ce18d530638f6e4eb87ef8737c634e34362ad2b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
LAE should set the access register corresponding to the first operand;
instead, it always modifies access register 1.
Co-developed-by: Ido Plat <Ido.Plat@ibm.com>
Cc: qemu-stable@nongnu.org
Fixes: a1c7610a6879 ("target-s390x: implement LAY and LAEY instructions")
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-ID: <20240111092328.929421-2-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit e358a25a97c71c39e3513d9b869cdb82052e50b8)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
The mcycle/minstret counter's stop flag is mistakenly updated on a copy
on the stack, so the counter keeps incrementing even when the CY/IR bit
in the mcountinhibit register is set. This commit corrects the behavior.
Fixes: 3780e33732f88 (target/riscv: Support mcycle/minstret write operation)
Signed-off-by: Xu Lu <luxu.kernel@bytedance.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 5cb0e7abe1635cb82e0033260dac2b910d142f8c)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
strerrorname_np() is non-portable and breaks building with musl libc.
Use strerror(errno) instead, as we do in other places.
Cc: qemu-stable@nongnu.org
Fixes: commit 082e9e4a58ba (target/riscv/kvm: improve 'init_multiext_cfg' error msg)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2041
Buglink: https://gitlab.alpinelinux.org/alpine/aports/-/issues/15541
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit d424db235434b8356c6b2d9420b846c7ddcc83ea)
|
|
In 32-bit mode, pc = eip + cs_base is also 32-bit, and must wrap.
Failure to do so results in incorrect memory exceptions to the guest.
Before 732d548732ed, this was implicitly done via truncation to
target_ulong but only in qemu-system-i386, not qemu-system-x86_64.
To fix this, we must add conditional zero-extensions.
Since we have to test for 32 vs 64-bit anyway, note that cs_base
is always zero in 64-bit mode.
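A minimal sketch of the conditional zero-extension (illustrative; in the translator this is done on TCG values rather than host integers):

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t compute_pc_sketch(uint64_t eip, uint64_t cs_base, bool code64)
    {
        uint64_t pc = eip + cs_base;        /* cs_base is always 0 in 64-bit mode */
        return code64 ? pc : (uint32_t)pc;  /* wrap to 32 bits otherwise */
    }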
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2022
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20231212172510.103305-1-richard.henderson@linaro.org>
|
|
Commit 7191f24c7fcf ("accel/kvm/kvm-all: Handle register access errors")
added error checking for KVM_SET_SREGS/KVM_SET_SREGS2. In doing so, it
exposed a long-running bug in current KVM support for SEV-ES where the
kernel assumes that MSR_EFER_LMA will be set explicitly by the guest
kernel, in which case EFER write traps would result in KVM eventually
seeing MSR_EFER_LMA get set and recording it in such a way that it would
be subsequently visible when accessing it via KVM_GET_SREGS/etc.
However, guest kernels currently rely on MSR_EFER_LMA getting set
automatically when MSR_EFER_LME is set and paging is enabled via
CR0_PG_MASK. As a result, the EFER write traps don't actually expose the
MSR_EFER_LMA bit, even though it is set internally, and when QEMU
subsequently tries to pass this EFER value back to KVM via
KVM_SET_SREGS* it will fail various sanity checks and return -EINVAL,
which is now considered fatal due to the aforementioned QEMU commit.
This can be addressed by inferring the MSR_EFER_LMA bit being set when
paging is enabled and MSR_EFER_LME is set, and synthesizing it to ensure
the expected bits are all present in subsequent handling on the host
side.
Ultimately, this handling will be implemented in the host kernel, but to
avoid breaking QEMU's SEV-ES support when using older host kernels, the
same handling can be done in QEMU just after fetching the register
values via KVM_GET_SREGS*. Implement that here.
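The workaround amounts to something like the following after fetching the registers (a sketch with locally defined bit masks; the real change lives in QEMU's KVM/SEV code path and is guarded appropriately):

    #include <linux/kvm.h>          /* struct kvm_sregs */

    #define SK_CR0_PG   (1ULL << 31)
    #define SK_EFER_LME (1ULL << 8)
    #define SK_EFER_LMA (1ULL << 10)

    /* If paging is enabled and LME is set, LMA is architecturally implied;
     * synthesize it so that KVM_SET_SREGS* sees a consistent EFER. */
    static void sev_es_sync_efer_lma(struct kvm_sregs *sregs)
    {
        if ((sregs->cr0 & SK_CR0_PG) && (sregs->efer & SK_EFER_LME)) {
            sregs->efer |= SK_EFER_LMA;
        }
    }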
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Lara Lazier <laramglazier@gmail.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Cc: <kvm@vger.kernel.org>
Fixes: 7191f24c7fcf ("accel/kvm/kvm-all: Handle register access errors")
Signed-off-by: Michael Roth <michael.roth@amd.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231206155821.1194551-1-michael.roth@amd.com>
|
|
Misc fixes for 8.2
- memory: Avoid unaligned accesses (Patrick)
- target/riscv: Fix variable shadowing (Daniel)
- tests/avocado: Update URL, skip flaky test (Alex, Phil)
* tag 'misc-fixes-20231204' of https://github.com/philmd/qemu:
tests/avocado: mark ReplayKernelNormal.test_mips64el_malta as flaky
tests/avocado: Update yamon-bin-02.22.zip URL
target/riscv/kvm: fix shadowing in kvm_riscv_(get|put)_regs_csr
system/memory: use ldn_he_p/stn_he_p
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Merge tag 'pull-target-arm-20231204-1' of
https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* Turn off SME if SVE is turned off (this combination doesn't
currently work and QEMU will assert if you try it)
* tag 'pull-target-arm-20231204-1' of https://git.linaro.org/people/pmaydell/qemu-arm:
target/arm: Disable SME if SVE is disabled
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
KVM_RISCV_GET_CSR() and KVM_RISCV_SET_CSR() use an 'int ret' variable
that is used to do an early 'return' if ret > 0. Both are being called
in functions that are also declaring a 'ret' integer, initialized with
'0', and this integer is used as return of the function.
The result is that the compiler is less than pleased and is pointing
shadowing errors:
../target/riscv/kvm/kvm-cpu.c: In function 'kvm_riscv_get_regs_csr':
../target/riscv/kvm/kvm-cpu.c:90:13: error: declaration of 'ret' shadows a previous local [-Werror=shadow=compatible-local]
90 | int ret = kvm_get_one_reg(cs, RISCV_CSR_REG(env, csr), &reg); \
| ^~~
../target/riscv/kvm/kvm-cpu.c:539:5: note: in expansion of macro 'KVM_RISCV_GET_CSR'
539 | KVM_RISCV_GET_CSR(cs, env, sstatus, env->mstatus);
| ^~~~~~~~~~~~~~~~~
../target/riscv/kvm/kvm-cpu.c:536:9: note: shadowed declaration is here
536 | int ret = 0;
| ^~~
../target/riscv/kvm/kvm-cpu.c: In function 'kvm_riscv_put_regs_csr':
../target/riscv/kvm/kvm-cpu.c:98:13: error: declaration of 'ret' shadows a previous local [-Werror=shadow=compatible-local]
98 | int ret = kvm_set_one_reg(cs, RISCV_CSR_REG(env, csr), &reg); \
| ^~~
../target/riscv/kvm/kvm-cpu.c:556:5: note: in expansion of macro 'KVM_RISCV_SET_CSR'
556 | KVM_RISCV_SET_CSR(cs, env, sstatus, env->mstatus);
| ^~~~~~~~~~~~~~~~~
../target/riscv/kvm/kvm-cpu.c:553:9: note: shadowed declaration is here
553 | int ret = 0;
| ^~~
The macros are doing early returns for non-zero returns and the local
'ret' variable for both functions is used just to do 'return 0', so
remove them from kvm_riscv_get_regs_csr() and kvm_riscv_put_regs_csr()
and do a straight 'return 0' in the end.
For good measure, let's also rename the 'ret' variables in
KVM_RISCV_GET_CSR() and KVM_RISCV_SET_CSR() to '_ret' to make them more
resilient to this kind of error.
Fixes: 937f0b4512 ("target/riscv: Implement kvm_arch_get_registers")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20231123101338.1040134-1-dbarboza@ventanamicro.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
|
Replace TABs with spaces to ensure a consistent coding style with an
indentation of 4 spaces in the SH4 subsystem.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/376
Signed-off-by: Yihuan Pan <xun794@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20231124044554.513752-1-xun794@gmail.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
|
There is no architectural requirement that SME implies SVE, but
our implementation currently assumes it. (FEAT_SME_FA64 does
imply SVE.) So if you try to run a CPU with eg "-cpu max,sve=off"
you quickly run into an assert when the guest tries to write to
SMCR_EL1:
#6 0x00007ffff4b38e96 in __GI___assert_fail
(assertion=0x5555566e69cb "sm", file=0x5555566e5b24 "../../target/arm/helper.c", line=6865, function=0x5555566e82f0 <__PRETTY_FUNCTION__.31> "sve_vqm1_for_el_sm") at ./assert/assert.c:101
#7 0x0000555555ee33aa in sve_vqm1_for_el_sm (env=0x555557d291f0, el=2, sm=false) at ../../target/arm/helper.c:6865
#8 0x0000555555ee3407 in sve_vqm1_for_el (env=0x555557d291f0, el=2) at ../../target/arm/helper.c:6871
#9 0x0000555555ee3724 in smcr_write (env=0x555557d291f0, ri=0x555557da23b0, value=2147483663) at ../../target/arm/helper.c:6995
#10 0x0000555555fd1dba in helper_set_cp_reg64 (env=0x555557d291f0, rip=0x555557da23b0, value=2147483663) at ../../target/arm/tcg/op_helper.c:839
#11 0x00007fff60056781 in code_gen_buffer ()
Avoid this unsupported and slightly odd combination by
disabling SME when SVE is not present.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2005
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231127173318.674758-1-peter.maydell@linaro.org
|
|
Misc fixes for 8.2
* buildsys: Invoke bash via 'env' (Samuel)
* doc: Fix example in s390-cpu-topology.rst (Zhao)
* HW: Fix AVR ATMega reset stack (Gihun) and VT82C686 IRQ routing (Zoltan)
* tag 'misc-next-20231128' of https://github.com/philmd/qemu:
docs/s390: Fix wrong command example in s390-cpu-topology.rst
hw/avr/atmega: Fix wrong initial value of stack pointer
hw/audio/via-ac97: Route interrupts using via_isa_set_irq()
hw/isa/vt82c686: Route PIRQ inputs using via_isa_set_irq()
hw/usb/vt82c686-uhci-pci: Use ISA instead of PCI interrupts
hw/isa/vt82c686: Bring back via_isa_set_irq()
target/hexagon/idef-parser/prepare: use env to invoke bash
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
The current implementation initializes the stack pointer of AVR devices
to 0. Although older AVR devices used to be like that, newer ones set
it to RAMEND.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1525
Signed-off-by: Gihun Nam <gihun.nam@outlook.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <PH0P222MB0010877445B594724D40C924DEBDA@PH0P222MB0010.NAMP222.PROD.OUTLOOK.COM>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
|
This file is the only one involved in the compilation process which
still uses the /bin/bash path.
Signed-off-by: Samuel Tardieu <sam@rfc1149.net>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-ID: <20231123211506.636533-1-sam@rfc1149.net>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
|
In commit edac4d8a168 back in 2015 when we added support for
the virtual timer offset CNTVOFF_EL2, we didn't correctly update
the timer-recalculation code that figures out when the timer
interrupt is next going to change state. We got it wrong in
two ways:
* for the 0->1 transition, we didn't notice that gt->cval + offset
can overflow a uint64_t
* for the 1->0 transition, we didn't notice that the transition
might now happen before the count rolls over, if offset > count
In the former case, we end up trying to set the next interrupt
for a time in the past, which results in QEMU hanging as the
timer fires continuously.
In the latter case, we would fail to update the interrupt
status when we are supposed to.
Fix the calculations in both cases.
The test case is Alex Bennée's from the bug report, and tests
the 0->1 transition overflow case.
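The 0->1 overflow hazard can be guarded with a check along these lines (a generic sketch, not the exact helper QEMU uses):

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns true if cval + offset wrapped around; in that case the next
     * 0->1 transition is effectively "never" (clamp to UINT64_MAX). */
    static bool next_tick_overflows(uint64_t cval, uint64_t offset,
                                    uint64_t *next)
    {
        *next = cval + offset;
        return *next < cval;
    }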
Fixes: edac4d8a168 ("target-arm: Add CNTVOFF_EL2")
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/60
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231120173506.3729884-1-peter.maydell@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The syndrome register value always has an IL field at bit 25, which
is 0 for a trap on a 16 bit instruction, and 1 for a trap on a 32
bit instruction (or for exceptions which aren't traps on a known
instruction, like PC alignment faults). This means that our
syn_*() functions should always either take an is_16bit argument to
determine whether to set the IL bit, or else unconditionally set it.
We missed setting the IL bit for the syndrome for three kinds of trap:
* an SVE access exception
* a pointer authentication check failure
* a BTI (branch target identification) check failure
All of these traps are AArch64 only, and so the instruction causing
the trap is always 64 bit. This means we can unconditionally set
the IL bit in the syn_*() function.
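For reference, the convention is simply bit 25 of the syndrome (a sketch with locally defined constants; QEMU has its own syn_*() helpers and EC values):

    #include <stdint.h>

    #define SK_SYN_IL_SHIFT 25      /* IL: 1 = trap on a 32-bit instruction */
    #define SK_SYN_EC_SHIFT 26

    /* Build a syndrome for a trap on a (necessarily 32-bit) AArch64 insn. */
    static uint32_t syn_aarch64_trap(uint32_t ec, uint32_t iss)
    {
        return (ec << SK_SYN_EC_SHIFT) | (1u << SK_SYN_IL_SHIFT) |
               (iss & 0x1ffffff);
    }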
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231120150121.3458408-1-peter.maydell@linaro.org
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
|