path: root/target
Age  Commit message  Author
2021-12-17  target/ppc: Add helpers for fadds, fsubs, fdivs  [Richard Henderson]
Use float64r32_{add,sub,div}. Fixes a double-rounding issue caused by performing the computation in float64 and then rounding afterward. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-31-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
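As a rough illustration of the pattern this series moves to, here is a minimal sketch of what an fadds-style helper might look like with the single-rounding routine. The helper name and the absence of explicit error handling are assumptions for illustration, not code copied from the patch.

    /* Sketch only: float64r32_add does the arithmetic in float64 but rounds
     * once, directly to float32 precision and range, so no second rounding
     * step is needed. Exception flags accumulate in env->fp_status and are
     * converted to FPSCR bits later by the float_check_status path. */
    float64 helper_fadds(CPUPPCState *env, float64 arg1, float64 arg2)
    {
        return float64r32_add(arg1, arg2, &env->fp_status);
    }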
2021-12-17  target/ppc: Add helper for fsqrts  [Richard Henderson]
Use float64r32_sqrt. Fixes a double-rounding issue caused by performing the computation in float64 and then rounding afterward. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-30-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Add helpers for fmadds et al  [Richard Henderson]
Use float64r32_muladd. Fixes a double-rounding issue caused by performing the computation in float64 and then rounding afterward. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-29-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Update fre to new flags  [Richard Henderson]
Use float_flag_invalid_snan instead of recomputing the snan-ness of the operand. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-27-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Update xsrqpi and xsrqpxp to new flags  [Richard Henderson]
Use float_flag_invalid_snan instead of recomputing the snan-ness of the operand. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-26-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
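For both this and the preceding fre change, the new-flags pattern looks roughly like the following sketch. This is a hedged approximation: the helper shape and the float_invalid_op_vxsnan call are assumptions based on the surrounding target/ppc code, not text from the patch.

    /* Sketch only: read the invalid-operation detail flags that softfloat
     * now records, instead of re-testing the operand for SNaN-ness. */
    static void check_invalid_snan(CPUPPCState *env, uintptr_t retaddr)
    {
        int flags = get_float_exception_flags(&env->fp_status);

        if (unlikely(flags & float_flag_invalid_snan)) {
            float_invalid_op_vxsnan(env, retaddr);
        }
    }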
2021-12-17  target/ppc: Update sqrt for new flags  [Richard Henderson]
Now that vxsqrt and vxsnan are computed directly by softfloat, we don't need to recompute them. Split out float_invalid_op_sqrt to be used in several places. This fixes VSX_SQRT, which did not order its tests correctly to eliminate NaN with sign set. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-25-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Use helper_todouble in do_frsp  [Richard Henderson]
We only needed one IEEE arithmetic operation to raise exceptions. To convert back to register form, we can use our simpler non-arithmetic function. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-24-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Update do_frsp for new flags  [Richard Henderson]
Now that vxsnan is computed directly by softfloat, we don't need to recompute it. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-23-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Split out do_frsp  [Richard Henderson]
Calling helper_frsp directly from other helpers generates the incorrect retaddr. Split out a helper that takes the retaddr as a parameter. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-22-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
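Combined with the two preceding entries, the shape of the split is roughly the following sketch. The signatures and the exact exception handling are assumptions; the point is only that GETPC() is captured once at the outermost helper and internal callers pass their own retaddr.

    /* Sketch: internal callers call do_frsp with their own retaddr, while
     * the public helper captures GETPC() exactly once at its entry. */
    static uint64_t do_frsp(CPUPPCState *env, uint64_t arg, uintptr_t retaddr)
    {
        float32 f32 = float64_to_float32(arg, &env->fp_status);

        if (unlikely(get_float_exception_flags(&env->fp_status)
                     & float_flag_invalid_snan)) {
            float_invalid_op_vxsnan(env, retaddr);
        }
        /* Non-arithmetic conversion back to the 64-bit register form. */
        return helper_todouble(f32);
    }

    uint64_t helper_frsp(CPUPPCState *env, uint64_t arg)
    {
        return do_frsp(env, arg, GETPC());
    }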
2021-12-17  target/ppc: Do not call do_float_check_status from do_fmadd  [Richard Henderson]
We will process flags other than invalid in helper_float_check_status, which is invoked after the writeback to FRT. Fixes a bug in which FRT is not written when OE/UE/XE are enabled. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-21-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Split out do_fmadd  [Richard Henderson]
Create a common function for all of the madd helpers. Let the compiler tail call or inline as it chooses. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-20-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Update fmadd for new flags  [Richard Henderson]
Now that vximz, vxisi, and vxsnan are computed directly by softfloat, we don't need to recompute them. This replaces the separate float{32,64}_maddsub_update_excp functions with a single float_invalid_op_madd function. Fix VSX_MADD by passing sfprf to float_invalid_op_madd, whereas the previous *_maddsub_update_excp assumed it to be true. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-19-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Clean up do_fri  [Richard Henderson]
Let float64_round_to_int detect and silence snans. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-18-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
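A hedged sketch of the simplified rounding path follows; the rounding-mode save/restore shown here is an assumption about how the surrounding do_fri is structured, not the exact patch.

    /* Sketch: float64_round_to_int silences any SNaN itself and raises
     * float_flag_invalid_snan, so do_fri no longer tests for SNaNs. */
    static uint64_t do_fri(CPUPPCState *env, uint64_t arg, FloatRoundMode rmode)
    {
        FloatRoundMode old_rmode = get_float_rounding_mode(&env->fp_status);
        float64 ret;

        set_float_rounding_mode(rmode, &env->fp_status);
        ret = float64_round_to_int(arg, &env->fp_status);
        set_float_rounding_mode(old_rmode, &env->fp_status);
        return ret;
    }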
2021-12-17  target/ppc: Tidy inexact handling in do_fri  [Richard Henderson]
In GEN_FLOAT_B, we called helper_reset_fpstatus immediately before calling helper_fri*. Therefore get_float_exception_flags is known to be zero, and this code can be simplified. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-17-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Use FloatRoundMode in do_fri  [Richard Henderson]
This is the proper type for the enumeration. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20211119160502.17432-16-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Remove inline from do_fri  [Richard Henderson]
There's no reason the callers can't tail call to one function. Leave it up to the compiler either way. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20211119160502.17432-15-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Fix VXCVI return value  [Richard Henderson]
We were returning nanval for any instance of invalid being set, but that is incorrect for VXCVI. This failure can be seen in the float_convs tests. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-14-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Update float_invalid_cvt for new flags  [Richard Henderson]
Now that vxsnan is computed directly by softfloat, we don't need to recompute it via classes. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-13-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Move float_check_status from FPU_FCTI to translate  [Richard Henderson]
Fixes a bug in which, e.g., enabling XE causes inexact to be raised before the writeback to the architectural register. All of the users of GEN_FLOAT_B either set set_fprf, or are one of the convert-to-integer instructions that require this behaviour. Split out the two gen_helper_* calls in gen_compute_fprf_float64 and protect only the first with set_fprf. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-12-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
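A hedged sketch of the resulting translate-time ordering follows. The wrapper name and TCG variable names are placeholders; only the writeback-then-check ordering and the set_fprf guard come from the commit message.

    /* Sketch: by the time this is emitted, FRT has already been written
     * back. FPRF computation is guarded by set_fprf, and float_check_status
     * runs for all users, so enabled exceptions fire only after writeback. */
    static void gen_fprf_and_check_status(TCGv_i64 frt, bool set_fprf)
    {
        if (set_fprf) {
            gen_helper_compute_fprf_float64(cpu_env, frt);
        }
        gen_helper_float_check_status(cpu_env);
    }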
2021-12-17  target/ppc: Update float_invalid_op_div for new flags  [Richard Henderson]
Now that vxidi, vxzdz, and vxsnan are computed directly by softfloat, we don't need to recompute them via classes. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-11-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Update float_invalid_op_mul for new flags  [Richard Henderson]
Now that vximz and vxsnan are computed directly by softfloat, we don't need to recompute them via classes. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-10-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Update float_invalid_op_addsub for new flags  [Richard Henderson]
Now that vxisi and vxsnan are computed directly by softfloat, we don't need to recompute them via classes. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211119160502.17432-9-richard.henderson@linaro.org> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Implement Vector Mask Move insns  [Matheus Ferst]
Implement the following PowerISA v3.1 instructions:
  mtvsrbm:  Move to VSR Byte Mask
  mtvsrhm:  Move to VSR Halfword Mask
  mtvsrwm:  Move to VSR Word Mask
  mtvsrdm:  Move to VSR Doubleword Mask
  mtvsrqm:  Move to VSR Quadword Mask
  mtvsrbmi: Move to VSR Byte Mask Immediate
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20211203194229.746275-4-matheus.ferst@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Implement Vector Extract Mask  [Matheus Ferst]
Implement the following PowerISA v3.1 instructions:
  vextractbm: Vector Extract Byte Mask
  vextracthm: Vector Extract Halfword Mask
  vextractwm: Vector Extract Word Mask
  vextractdm: Vector Extract Doubleword Mask
  vextractqm: Vector Extract Quadword Mask
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211203194229.746275-3-matheus.ferst@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
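For reference, the byte-element variant's architectural effect can be sketched in plain C. This is illustrative semantics only, not the QEMU TCG/gvec implementation, and element/bit ordering is simplified.

    #include <stdint.h>

    /* vextractbm reference: collect the most-significant bit of each of the
     * 16 byte elements of VRB into a 16-bit mask placed in GPR RT. */
    static uint16_t vextractbm_ref(const uint8_t vrb[16])
    {
        uint16_t mask = 0;

        for (int i = 0; i < 16; i++) {
            mask = (mask << 1) | (vrb[i] >> 7);
        }
        return mask;
    }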
2021-12-17  target/ppc: Implement Vector Expand Mask  [Matheus Ferst]
Implement the following PowerISA v3.1 instructions:
  vexpandbm: Vector Expand Byte Mask
  vexpandhm: Vector Expand Halfword Mask
  vexpandwm: Vector Expand Word Mask
  vexpanddm: Vector Expand Doubleword Mask
  vexpandqm: Vector Expand Quadword Mask
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br> Message-Id: <20211203194229.746275-2-matheus.ferst@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
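The expand-mask family is roughly the inverse of extract-mask: each result element becomes all-ones or all-zeroes depending on the sign bit of the corresponding source element. Illustrative semantics only (plain C, element ordering simplified, not the QEMU implementation):

    #include <stdint.h>

    /* vexpandbm reference: replicate each source byte's high bit across the
     * whole corresponding result byte. */
    static void vexpandbm_ref(uint8_t vrt[16], const uint8_t vrb[16])
    {
        for (int i = 0; i < 16; i++) {
            vrt[i] = (vrb[i] & 0x80) ? 0xFF : 0x00;
        }
    }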
2021-12-17  target/ppc: ppc_store_fpscr doesn't update bits 0 to 28 and 52  [Lucas Mateus Castro (alqotel)]
This commit fixes the difference reported in the bug for reserved bit 52 by adding that bit to the mask of bits that the ppc_store_fpscr function must not alter directly (the hardware used for comparison with QEMU was a Power9). Bits 0 to 27 were also added to the mask, as they are marked as reserved in the PowerISA. Bit 28 is a reserved extension of the DRN field (bits 29:31): unlike the other DRN bits, it cannot be set using mtfsfi, so it was added to the mask as well. Although this difference is reported in the bug, the bit is reserved and may be a "don't care" case, as put in the bug report; the ISA does not explicitly say this bit can't be set, as it does for FEX and VX, so it is unclear whether this change is necessary. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/266 Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20211201163808.440385-4-lucas.araujo@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/ppc: Fixed call to deferred exception  [Lucas Mateus Castro (alqotel)]
The mtfsf, mtfsfi and mtfsb1 instructions call helper_float_check_status after updating the value of FPSCR. However, helper_float_check_status checks fp_status, which is not updated from FPSCR, and since fp_status is reset earlier in the instruction its value is always 0. Because of this, helper_float_check_status would clear the FI bit (which records whether the last operation was inexact), as float_flag_inexact is always 0. These instructions also did not raise exceptions correctly, since helper_float_check_status raises exceptions based on fp_status. This commit creates a new helper, helper_fpscr_check_status, that checks the FPSCR value instead of fp_status and checks for a larger variety of exceptions than do_float_check_status. Since fp_status isn't used, gen_reset_fpstatus() was removed. The hardware used to compare QEMU's behaviour against was a Power9. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br> Message-Id: <20211201163808.440385-2-lucas.araujo@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-12-17  target/i386/kvm: Replace use of __u32 type  [Philippe Mathieu-Daudé]
QEMU coding style mandates not using Linux kernel internal types for scalar types. Replace __u32 with uint32_t. Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20211116193955.2793171-1-philmd@redhat.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2021-12-17  s390: kvm: adjust diag318 resets to retain data  [Collin Walling]
The CPNC portion of the diag318 data is erroneously reset during an initial CPU reset caused by SIGP. Let's go ahead and relocate the diag318_info field within the CPUS390XState struct such that it is only zeroed during a clear reset. This way, the CPNC will be retained for each VCPU in the configuration after the diag318 instruction has been invoked. The s390_machine_reset code already takes care of zeroing the diag318 data on VM resets, which also cover resets caused by diag308. Fixes: fabdada9357b ("s390: guest support for diagnose 0x318") Reported-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Collin Walling <walling@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com> Message-Id: <20211117152303.627969-1-walling@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-12-15  target/arm: Correct calculation of tlb range invalidate length  [Peter Maydell]
The calculation of the length of TLB range invalidate operations in tlbi_aa64_range_get_length() is incorrect in two ways:
 * the NUM field is 5 bits, but we read only 4 bits
 * we miscalculate the page_shift value, because of an off-by-one error:
     TG 0b00 is invalid
     TG 0b01 is 4K granule size == 4096 == 2^12
     TG 0b10 is 16K granule size == 16384 == 2^14
     TG 0b11 is 64K granule size == 65536 == 2^16
   so page_shift should be (TG - 1) * 2 + 12
Thanks to the bug report submitter Cha HyunSoo for identifying both these errors. Fixes: 84940ed82552d3c ("target/arm: Add support for FEAT_TLBIRANGE") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/734 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20211130173257.1274194-1-peter.maydell@linaro.org
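A hedged sketch of the corrected calculation follows. The bit positions of TG/SCALE/NUM are taken from the Arm ARM TLBI range encoding and should be treated as assumptions here rather than as the exact QEMU code.

    static uint64_t tlbi_range_length(uint64_t value)
    {
        unsigned tg = extract64(value, 46, 2);     /* translation granule */
        unsigned scale = extract64(value, 44, 2);
        unsigned num = extract64(value, 39, 5);    /* read all 5 bits of NUM */
        unsigned page_shift;

        if (tg == 0) {
            return 0;                              /* reserved encoding */
        }
        page_shift = (tg - 1) * 2 + 12;            /* 4K/16K/64K -> 12/14/16 */

        /* (NUM + 1) * 2^(5*SCALE + 1) pages, converted to bytes. */
        return (uint64_t)(num + 1) << (5 * scale + 1 + page_shift);
    }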
2021-12-15  target/rx/cpu.h: Don't include qemu-common.h  [Peter Maydell]
The qemu-common.h header is not supposed to be included from any other header files, only from .c files (as documented in a comment at the start of it). Nothing actually relies on target/rx/cpu.h including it, so we can just drop the include. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Taylor Simpson <tsimpson@quicinc.com> Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp> Message-id: 20211129200510.1233037-4-peter.maydell@linaro.org
2021-12-15  target/hexagon/cpu.h: don't include qemu-common.h  [Peter Maydell]
The qemu-common.h header is not supposed to be included from any other header files, only from .c files (as documented in a comment at the start of it). Move the include to linux-user/hexagon/cpu_loop.c, which needs it for the declaration of cpu_exec_step_atomic(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Taylor Simpson <tsimpson@quicinc.com> Message-id: 20211129200510.1233037-3-peter.maydell@linaro.org
2021-12-15  target/i386: Use assert() to sanity-check b1 in SSE decode  [Peter Maydell]
In the SSE decode function gen_sse(), we combine a byte 'b' and a value 'b1' which can be [0..3], and switch on them:
    b |= (b1 << 8);
    switch (b) {
    ...
    default:
    unknown_op:
        gen_unknown_opcode(env, s);
        return;
    }
In three cases inside this switch, we were then also checking for "if (b1 >= 2) { goto unknown_op; }". However, this can never happen, because the 'case' values in each place are 0x0nn or 0x1nn and the switch will have directed the b1 == (2, 3) cases to the default already. This check was added in commit c045af25a52e9 in 2010; the added code was unnecessary then as well, and was apparently intended only to ensure that we never accidentally ended up indexing off the end of an sse_op_table with only 2 entries as a result of future bugs in the decode logic. Change the checks to assert() instead, and make sure they're always immediately before the array access they are protecting. Fixes: Coverity CID 1460207 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-12-15  target/arm: Suppress bp for exceptions with more priority  [Richard Henderson]
Both single-step and pc alignment faults have priority over breakpoint exceptions. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15  target/arm: Assert thumb pc is aligned  [Richard Henderson]
Misaligned thumb PC is architecturally impossible. Assert is better than proceeding, in case we've missed something somewhere. Expand a comment about aligning the pc in gdbstub. Fail an incoming migrate if a thumb pc is misaligned. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15  target/arm: Take an exception if PC is misaligned  [Richard Henderson]
For A64, any input to an indirect branch can cause this. For A32, many indirect branch paths force the branch to be aligned, but BXWritePC does not. This includes the BX instruction but also other interworking changes to PC. Prior to v8, this case is UNDEFINED. With v8, this is CONSTRAINED UNPREDICTABLE and may either raise an exception or force align the PC. We choose to raise an exception because we have the infrastructure, it makes the generated code for gen_bx simpler, and it has the possibility of catching more guest bugs. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15  target/arm: Split compute_fsr_fsc out of arm_deliver_fault  [Richard Henderson]
We will reuse this section of arm_deliver_fault for raising pc alignment faults. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15  target/arm: Advance pc for arch single-step exception  [Richard Henderson]
The size of the code covered by a TranslationBlock cannot be 0; this is checked via assert in tb_gen_code. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15  target/arm: Split arm_pre_translate_insn  [Richard Henderson]
Create arm_check_ss_active and arm_check_kernelpage. Reverse the order of the tests. While it doesn't matter in practice, because only user-only has a kernel page and user-only never sets ss_active, ss_active has priority over execution exceptions and it is best to keep them in the proper order. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15  target/arm: Hoist pc_next to a local variable in thumb_tr_translate_insn  [Richard Henderson]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15  target/arm: Hoist pc_next to a local variable in arm_tr_translate_insn  [Richard Henderson]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-12-15  target/arm: Hoist pc_next to a local variable in aarch64_tr_translate_insn  [Richard Henderson]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-11-29  target/ppc: fix Hash64 MMU update of PTE bit R  [Leandro Lupori]
When updating the R bit of a PTE, the Hash64 MMU was using a wrong byte offset, causing the first byte of the adjacent PTE to be corrupted. This caused a panic when booting FreeBSD using the Hash MMU. Fixes: a2dd4e83e76b ("ppc/hash64: Rework R and C bit updates") Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br> Signed-off-by: Cédric Le Goater <clg@kaod.org>
2021-11-22  Revert "arm: tcg: Adhere to SMCCC 1.3 section 5.2"  [Peter Maydell]
This reverts commit 9fcd15b9193e819b6cc2fd0a45e3506148812bb4. This change turns out to cause regressions, for instance on the imx6ul boards as described here: https://lore.kernel.org/qemu-devel/c8b89685-7490-328b-51a3-48711c140a84@tribudubois.net/
The primary cause of that regression is that the guest code running at EL3 expects SMCs (not related to PSCI) to do what they would if our PSCI emulation was not present at all, but after this change they instead set a value in R0/X0 and continue. We could fix that by a refactoring that allowed us to only turn on the PSCI emulation if we weren't booting the guest at EL3, but there is a more tangled problem with the highbank board, which:
 (1) wants to enable PSCI emulation
 (2) has a bit of guest code that it wants to run at EL3 and to perform SMC calls that trap to the monitor vector table: this is the boot stub code that is written to memory by arm_write_secure_board_setup_dummy_smc() and which the highbank board enables by setting bootinfo->secure_board_setup
We can't satisfy both of those and also have the PSCI emulation handle all SMC instruction executions regardless of function identifier value. This is too tricky to try to sort out before 6.2 is released; revert this commit so we can take the time to get it right in the 7.0 release. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20211119163419.557623-1-peter.maydell@linaro.org
2021-11-19  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging  [Richard Henderson]
Bugfixes for 6.2.
# gpg: Signature made Fri 19 Nov 2021 10:33:29 AM CET
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  chardev/wctable: don't free the instance in wctablet_chr_finalize
  meson.build: Support ncurses on MacOS and OpenBSD
  docs: Spell QEMU all caps
  qtest/am53c974-test: add test for reset before transfer
  esp: ensure that async_len is reset to 0 during esp_hard_reset()
  nvmm: Fix support for stable version
  meson: fix botched compile check conversions
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-11-19  nvmm: Fix support for stable version  [nia]
NVMM user version 1 is the version being shipped with netbsd-9, which is the most recent stable branch of NetBSD. This makes it possible to use the NVMM accelerator on the most recent NetBSD release, 9.2, which lacks nvmm_cpu_stop. (CC'ing maintainers) Signed-off-by: Nia Alarie <nia@NetBSD.org> Reviewed-by: Kamil Rytarowski <kamil@netbsd.org> Message-Id: <YWblCe2J8GwCaV9U@homeworld.netbsd.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-11-18  target/i386/sev: Replace qemu_map_ram_ptr with address_space_map  [Dov Murik]
Use address_space_map/unmap and check for errors. Signed-off-by: Dov Murik <dovmurik@linux.ibm.com> Acked-by: Brijesh Singh <brijesh.singh@amd.com> [Two lines wrapped for length - Daniel] Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
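A minimal sketch of the map-check-unmap pattern this switches to is below. The wrapper, its arguments, and the error message are hypothetical; only address_space_map/unmap themselves are the real API being adopted.

    static bool copy_to_guest(hwaddr gpa, const void *data, hwaddr len,
                              Error **errp)
    {
        hwaddr mapped_len = len;
        void *hva = address_space_map(&address_space_memory, gpa, &mapped_len,
                                      true, MEMTXATTRS_UNSPECIFIED);

        if (!hva || mapped_len != len) {
            if (hva) {
                /* Mapped a shorter region than requested; nothing written. */
                address_space_unmap(&address_space_memory, hva, mapped_len,
                                    false, 0);
            }
            error_setg(errp, "failed to map guest memory at 0x%" HWADDR_PRIx,
                       gpa);
            return false;
        }
        memcpy(hva, data, len);
        address_space_unmap(&address_space_memory, hva, mapped_len, true, len);
        return true;
    }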
2021-11-18  target/i386/sev: Perform padding calculations at compile-time  [Dov Murik]
In sev_add_kernel_loader_hashes, the sizes of structs are known at compile-time, so calculate needed padding at compile-time. No functional change intended. Signed-off-by: Dov Murik <dovmurik@linux.ibm.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
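A hedged illustration of the idea: because both operands are compile-time constants, the padding amount folds at compile time instead of being computed with runtime arithmetic. The SevHashTable name and the 16-byte alignment are assumptions used for illustration.

    /* Padding needed to align the hashes table, folded by the compiler. */
    #define ALIGNED_HASH_TABLE_LEN  ROUND_UP(sizeof(SevHashTable), 16)
    #define HASH_TABLE_PADDING_LEN  (ALIGNED_HASH_TABLE_LEN - sizeof(SevHashTable))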
2021-11-18  target/i386/sev: Fail when invalid hashes table area detected  [Dov Murik]
Commit cff03145ed3c ("sev/i386: Introduce sev_add_kernel_loader_hashes for measured linux boot", 2021-09-30) introduced measured direct boot with -kernel, using an OVMF-designated hashes table which QEMU fills. However, no checks are performed on the validity of the hashes area designated by OVMF. Specifically, if OVMF publishes the SEV_HASH_TABLE_RV_GUID entry but it is filled with zeroes, this will cause QEMU to write the hashes entries over the first page of the guest's memory (GPA 0). Add validity checks to the published area. If the hashes table area's base address is zero, or its size is too small to fit the aligned hashes table, display an error and stop the guest launch. In that case, the following error is displayed:
    qemu-system-x86_64: SEV: guest firmware hashes table area is invalid (base=0x0 size=0x0)
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com> Reported-by: Brijesh Singh <brijesh.singh@amd.com> Acked-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
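A hedged sketch of the added check follows; the descriptor struct and field names approximate the commit-message description rather than the exact sev.c code.

    static bool validate_hashes_area(const SevHashTableDescriptor *area,
                                     size_t needed_size, Error **errp)
    {
        /* Reject a zero base or an area too small for the aligned table. */
        if (area->base == 0 || area->size < needed_size) {
            error_setg(errp, "SEV: guest firmware hashes table area is invalid "
                       "(base=0x%x size=0x%x)", area->base, area->size);
            return false;
        }
        return true;
    }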
2021-11-18  target/i386/sev: Rephrase error message when no hashes table in guest firmware  [Dov Murik]
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com> Acked-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>