path: root/target
2018-05-01  tcg: Improve TCGv_ptr support  (Richard Henderson)

Drop TCGV_PTR_TO_NAT and TCGV_NAT_TO_PTR internal macros. Add tcg_temp_local_new_ptr, tcg_gen_brcondi_ptr, tcg_gen_ext_i32_ptr, tcg_gen_trunc_i64_ptr, tcg_gen_extu_ptr_i64, tcg_gen_trunc_ptr_i32. Use inlines instead of macros where possible.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

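As an illustration only (not taken from the commit), a translator-style fragment assuming the usual TCG frontend context (cpu_env, a TCGv_i32 index idx and a TCGv_i64 destination val already exist) could use the new pointer conversions instead of the old *_TO_NAT casts:

    TCGv_i32 off = tcg_temp_new_i32();
    TCGv_ptr ptr = tcg_temp_new_ptr();

    tcg_gen_shli_i32(off, idx, 3);        /* byte offset = idx * sizeof(uint64_t) */
    tcg_gen_ext_i32_ptr(ptr, off);        /* i32 -> ptr, one of the new conversions */
    tcg_gen_add_ptr(ptr, ptr, cpu_env);   /* pointer arithmetic on env */
    tcg_gen_ld_i64(val, ptr, 0);          /* load through the computed pointer */
    tcg_temp_free_ptr(ptr);
    tcg_temp_free_i32(off);
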
2018-05-01  Merge remote-tracking branch 'remotes/vivier/tags/m68k-for-2.13-pull-request' into staging  (Peter Maydell)

# gpg: Signature made Tue 01 May 2018 14:53:58 BST
# gpg: using RSA key F30C38BD3F2FBE3C
# gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>"
# gpg: aka "Laurent Vivier <laurent@vivier.eu>"
# gpg: aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>"
# Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F 5173 F30C 38BD 3F2F BE3C

* remotes/vivier/tags/m68k-for-2.13-pull-request:
  hw/m68k/mcf5208: Fix trivial typo in board description
  m68k: remove dead code (Coverity CID1390617)
  m68k: Fix floatx80_lognp1 (Coverity CID1390587)
  m68k: fix subx mem, mem instruction

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-05-01  m68k: remove dead code (Coverity CID1390617)  (Laurent Vivier)

floatx80_sin() and floatx80_cos() are derived from a single sincos() function, and both still contain unused code left over from that common origin. Remove it.

Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180430170156.1860-2-laurent@vivier.eu>

2018-05-01  m68k: Fix floatx80_lognp1 (Coverity CID1390587)  (Laurent Vivier)

Return the result of packFloatx80() instead of dropping it.

Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180430170156.1860-1-laurent@vivier.eu>

2018-04-30  target-microblaze: mmu: Make the TLBX MISS bit read-only  (Edgar E. Iglesias)

Make the TLBX MISS bit read-only.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

2018-04-30  target-microblaze: mmu: Make TLBSX write-only  (Edgar E. Iglesias)

Make TLBSX write-only and log guest reads from it as guest errors.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

2018-04-30  target-microblaze: Don't clobber the IMM reg for ld/st reversed  (Edgar E. Iglesias)

Do not clobber the IMM register on reversed load/stores.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

2018-04-30  target-microblaze: Fix trap checks for FPU insns  (Edgar E. Iglesias)

Fix trap checks for FPU insns when extended FPU insns are enabled.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

2018-04-30  target-microblaze: Respect MSR.PVR as read-only  (Edgar E. Iglesias)

Respect MSR.PVR as read-only. We were wrongly overwriting the PVR bit.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>

2018-04-30  m68k: fix subx mem, mem instruction  (Pavel Dovgalyuk)

This patch fixes the decrement of the pointers for subx mem, mem instructions. Without it, the pointers are decremented by the OS_* constant value itself instead of by the corresponding data size.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180418064152.24606.71975.stgit@pasha-VirtualBox>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

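Illustrative sketch only (variable names assumed, not copied from the patch): target/m68k's translate code derives the size in bytes with opsize_bytes(), which is what the predecrement should use rather than the OS_* enumerator itself.

    /* before (buggy): the address is stepped back by the raw OS_* enum value */
    tcg_gen_subi_i32(addr_src, addr_src, opsize);
    /* after (fixed): step back by the operand's size in bytes */
    tcg_gen_subi_i32(addr_src, addr_src, opsize_bytes(opsize));
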
2018-04-27  target/ppc: Don't bother with MSR_EP in cpu_ppc_set_papr()  (David Gibson)

cpu_ppc_set_papr() removes the EP and HV bits from the MSR mask. While removing the HV bit makes sense (a cpu in PAPR mode should never be emulated in hypervisor mode), the EP bit is just bizarre. Although it's true that a papr mode guest shouldn't be able to change the exception prefix, the MSR[EP] bit doesn't even exist on the cpus supported for PAPR mode, so it's pointless to do anything with it here.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>

2018-04-27  target/ppc: Fold slb_nr into PPCHash64Options  (David Gibson)

The env->slb_nr field gives the size of the SLB (Segment Lookaside Buffer). This is another static-after-initialization parameter of the specific version of the 64-bit hash MMU in the CPU. So, this patch folds the field into PPCHash64Options with the other hash MMU options.

This is a bit more complicated than the things previously put in there, because slb_nr was foolishly included in the migration stream. So we need some of the usual dance to handle backwards-compatible migration.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Get rid of POWERPC_MMU_VER() macros  (David Gibson)

These macros were introduced to deal with the fact that the mmu_model field has bit flags mixed in with what's otherwise an enum of various mmu types. We've now eliminated all those flags except for one, and that one - POWERPC_MMU_64 - is already included/compared in the MMU_VER macros. So, we can get rid of those macros and just directly compare mmu_model values in the places where they were used.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Remove unnecessary POWERPC_MMU_V3 flag from mmu_model  (David Gibson)

The only place we test this flag is in conjunction with ppc64_use_proc_tbl(). That checks for the LPCR_UPRT bit, which we already ensure can't be set except on a machine with a v3 MMU (i.e. POWER9).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Fold ci_large_pages flag into PPCHash64Options  (David Gibson)

The ci_large_pages boolean in CPUPPCState is only relevant to 64-bit hash MMU machines, indicating whether it's possible to map large (> 4kiB) pages as cache-inhibited (i.e. for IO, rather than memory). Fold it as another flag into the PPCHash64Options structure.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Move 1T segment and AMR options to PPCHash64Options  (David Gibson)

Currently env->mmu_model is a bit of an unholy mess of an enum of distinct MMU types, with various flag bits as well. This makes it pretty confusing which bits of the field should be compared. Make a start on cleaning that up by moving two of the flag bits - POWERPC_MMU_1TSEG and POWERPC_MMU_AMR - which are specific to the 64-bit hash MMU, into a new flags field in the PPCHash64Options structure.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Make hash64_opts field mandatory for 64-bit hash MMUs  (David Gibson)

Currently some cpus set the hash64_opts field in the class structure, with specific details of their variant of the 64-bit hash mmu. For the remaining cpus with that mmu, ppc_hash64_realize() fills in defaults. But there are only a couple of cpus that use those fallbacks, so just have them set the hash64_opts field explicitly instead, simplifying the logic.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Split page size information into a separate allocation  (David Gibson)

env->sps contains page size encoding information as an embedded structure. Since this information is specific to 64-bit hash MMUs, split it out into a separately allocated structure, to reduce the basic env size for other cpus. Along the way we make a few other cleanups:

* Rename to PPCHash64Options, which is more in line with qemu naming conventions, and reflects that we're going to merge some more hash64 mmu specific details in there in future. Also rename its substructures to match qemu conventions.
* Move structure definitions to the mmu-hash64.[ch] files.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>

2018-04-27  target/ppc: Move page size setup to helper function  (David Gibson)

Initialization of the env->sps structure at the end of instance_init is specific to the 64-bit hash MMU, so move the code into a helper function in mmu-hash64.c. We also create a corresponding function to be called at finalize time - it's empty for now, but we'll need it shortly.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Remove fallback 64k pagesize information  (David Gibson)

CPU definitions for cpus with the 64-bit hash MMU can include a table of available pagesizes. If this isn't supplied, ppc_cpu_instance_init() will fill in a fallback table based on the POWERPC_MMU_64K bit in mmu_model.

However, it turns out all the cpus which support 64K pages already include an explicit table of page sizes, so there's no point in the fallback table including 64K pages. That removes the only place which tests POWERPC_MMU_64K, so we can remove it. Which in turn allows some logic to be removed from kvm_fixup_page_sizes().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Avoid taking "env" parameter to mmu-hash64 functions  (David Gibson)

In most cases we prefer to pass a PowerPCCPU rather than the (embedded) CPUPPCState. For ppc_hash64_update_{rmls,vrma}() change to take "cpu" instead of "env". For ppc_hash64_set_{dsi,isi}() remove the redundant "env" parameter.

In theory this makes more work for the functions, but since "cs", "cpu" and "env" are related by at most constant offsets, the compiler should be able to optimize out the difference at effectively zero cost. helper_*() functions are left alone - since they're more closely tied to the TCG generated code, passing "env" is still the standard there.

While we're there, fix an incorrect indentation.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>

2018-04-27  target/ppc: Simplify cpu valid check in ppc_cpu_realize  (David Gibson)

The #if isn't necessary, because there's a suitable one inside ppc_cpu_is_valid(). We've already filtered for suitable cpu models in the functions that search and register them. So by the time we get to realize, an invalid model indicates a code error, not a user error, and an assert() is more appropriate than error_setg().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  target/ppc: Standardize instance_init and realize function names  (David Gibson)

Because of the various hooks called some variant of "init" - and the rather greater number that used to exist - I'm always left wondering when a function called simply "*_init" or "*_initfn" will actually be invoked. To make it easier on myself, and maybe others, rename the instance_init hooks for ppc cpus to *_instance_init().

While we're at it, rename the realize-time hooks to *_realize() (from *_realizefn()), which seems to be the more common current convention.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-27  Add host_memory_backend_pagesize() helper  (David Gibson)

There are a couple of places (one generic, one target-specific) where we need to get the host page size associated with a particular memory backend. I have some upcoming code which will add another place that wants this. So, for convenience, add a helper function to calculate it: host_memory_backend_pagesize() returns the host pagesize for a given HostMemoryBackend object.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>

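A plausible shape for such a helper, reconstructed from the description rather than taken from backends/hostmem.c (property lookup and error handling assumed):

    size_t host_memory_backend_pagesize(HostMemoryBackend *memdev)
    {
        /* "mem-path" is only set for file-backed backends; for RAM backends the
         * path is NULL, which the following entry teaches
         * qemu_mempath_getpagesize() to treat as anonymous memory. */
        char *path = object_property_get_str(OBJECT(memdev), "mem-path", NULL);
        size_t pagesize = qemu_mempath_getpagesize(path);

        g_free(path);
        return pagesize;
    }
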
2018-04-27  Make qemu_mempath_getpagesize() accept NULL  (David Gibson)

qemu_mempath_getpagesize() gets the effective (host side) page size for a block of memory backed by an mmap()ed file on the host. It requires the mem_path parameter to be non-NULL. This ends up meaning all the callers need a different case for handling anonymous memory (for memory-backend-ram, or for default memory when -mem-path is not specified).

We can make all those callers a little simpler by having qemu_mempath_getpagesize() accept NULL, and treat that as the anonymous memory case.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>

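Caller-side effect, sketched (variable names assumed): the anonymous-memory special case disappears.

    /* before: */
    psize = mem_path ? qemu_mempath_getpagesize(mem_path) : getpagesize();
    /* after: NULL simply means anonymous memory, handled inside the function */
    psize = qemu_mempath_getpagesize(mem_path);
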
2018-04-27  target/ppc: Fix reserved bit mask of dstst instruction  (BALATON Zoltan)

According to the Vector/SIMD extension documentation, bit 6, which is currently masked, is valid (listed as a transient bit), while bits 7 and 8 should be reserved instead. Fix the mask to match this.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

2018-04-27  ppc: Fix size of ppc64 xer register  (Michael Matz)

The normal gdb definition of the XER register is only 32 bits, and that's what the current version of power64-core.xml also says (it seems to have been copied from gdb's). But qemu's idea of the XER register is target_ulong (in CPUPPCState, ppc_gdb_register_len and ppc_cpu_gdb_read_register). That mismatch leads to the following message when attaching with gdb:

  Truncated register 32 in remote 'g' packet

(after which qemu stops responding). The simple fix is to tell the truth in the .xml file. But the better fix is to actually make it 32 bits on the wire, as old gdbs don't support XML files for describing registers. Also, the XER state in qemu doesn't seem to use the high 32 bits, so sending them off to gdb doesn't seem worthwhile.

Signed-off-by: Michael Matz <matz@suse.de>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

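Sketch of the fix direction only (register slot and helper names assumed, not verified against the patch): marshal XER as 32 bits in the gdbstub handlers regardless of target_ulong width.

    /* in ppc_cpu_gdb_read_register(), for the XER slot: */
    gdb_get_reg32(mem_buf, (uint32_t)cpu_read_xer(env));
    return 4;   /* 4 bytes on the wire, matching the 32-bit XML description */
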
2018-04-26  target/arm: Make PMOVSCLR and PMUSERENR 64 bits wide  (Aaron Lindsay)

This is a bug fix to ensure 64-bit reads of these registers don't read adjacent data.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-13-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-26  target/arm: Fix bitmask for PMCCFILTR writes  (Aaron Lindsay)

The bitmask was shifted left by one bit too few.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-10-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-26  target/arm: Allow EL change hooks to do IO  (Aaron Lindsay)

During code generation, surround CPSR writes and exception returns which call the EL change hooks with gen_io_start/end. The immediate need is for the PMU to access the clock and icount during EL change to support mode filtering.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-9-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-26  target/arm: Add pre-EL change hooks  (Aaron Lindsay)

Because the design of the PMU requires that the counter values be converted between their delta and guest-visible forms for mode filtering, an additional hook which occurs before the EL is changed is necessary.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-8-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-26  target/arm: Support multiple EL change hooks  (Aaron Lindsay)

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-7-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-26  target/arm: Fetch GICv3 state directly from CPUARMState  (Aaron Lindsay)

This eliminates the need for fetching it from el_change_hook_opaque, and allows for supporting multiple el_change_hooks without having to hack something together to find the registered opaque belonging to GICv3.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-6-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-26  target/arm: Mask PMU register writes based on PMCR_EL0.N  (Aaron Lindsay)

This is in preparation for enabling counters other than PMCCNTR.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-5-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-26  target/arm: Treat PMCCNTR as alias of PMCCNTR_EL0  (Aaron Lindsay)

They share the same underlying state.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-3-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-26  target/arm: Check PMCNTEN for whether PMCCNTR is enabled  (Aaron Lindsay)

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-2-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

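A minimal sketch of the kind of check the subject implies (field and macro names assumed from target/arm/helper.c, not verified against the patch):

    static bool arm_ccnt_enabled(CPUARMState *env)
    {
        /* the cycle counter ticks only if PMCR.E is set and the PMCNTEN
         * bit for the cycle counter (bit 31) is set */
        return (env->cp15.c9_pmcr & PMCRE) &&
               (env->cp15.c9_pmcnten & (1U << 31));
    }
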
2018-04-26  target/arm: Use v7m_stack_read() for reading the frame signature  (Peter Maydell)

In commit 95695effe8caa552b8f2 we changed the v7M/v8M stack pop code to use a new v7m_stack_read() function that checks whether the read should fail due to an MPU or bus abort. We missed one call though, the one which reads the signature word for the callee-saved register part of the frame. Correct the omission.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180419142106.9694-1-peter.maydell@linaro.org

2018-04-26  target/arm: Remove stale TODO comment  (Peter Maydell)

Remove a stale TODO comment -- we have now made the arm_ldl_ptw() and arm_ldq_ptw() functions propagate physical memory read errors out to their callers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180419142151.9862-1-peter.maydell@linaro.org

2018-04-16  i386: Don't automatically enable FEAT_KVM_HINTS bits  (Eduardo Habkost)

The assumption in the cpu->max_features code is that anything enabled on GET_SUPPORTED_CPUID should be enabled on "-cpu host". This shouldn't be the case for FEAT_KVM_HINTS. This adds a new FeatureWordInfo::no_autoenable_flags field that can be used to prevent FEAT_KVM_HINTS bits from being enabled automatically.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180410211534.26079-1-ehabkost@redhat.com>
Tested-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

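Rough sketch of how such a mask would be consumed when expanding "-cpu host" features (loop context and field names assumed; the actual patch may differ in detail):

    if (cpu->max_features) {
        for (w = 0; w < FEATURE_WORDS; w++) {
            /* skip bits the feature word marks as never-auto-enable,
             * e.g. the KVM hint bits */
            env->features[w] |=
                x86_cpu_get_supported_feature_word(w, cpu->migratable) &
                ~feature_word_info[w].no_autoenable_flags;
        }
    }
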
2018-04-15  m68k: fix exception stack frame for 68000  (Pavel Dovgalyuk)

68000 CPUs do not save the format word in the exception stack frame. This patch adds a feature check to avoid saving the format word on 68000. m68k_ret() already includes this check; this patch fixes the exception-processing function too.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180413133041.29509.59064.stgit@pasha-VirtualBox>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

2018-04-11  icount: fix cpu_restore_state_from_tb for non-tb-exit cases  (Pavel Dovgalyuk)

In icount mode, instructions that access io memory spaces in the middle of the translation block invoke TB recompilation. After recompilation, such instructions become last in the TB and are allowed to access io memory spaces.

When the code includes an instruction like the i386 'xchg eax, 0xffffd080', which accesses the APIC, QEMU goes into an infinite recompilation loop. This instruction includes two memory accesses - one read and one write. After the first access, the APIC calls cpu_report_tpr_access, which restores the CPU state to get the current eip. But cpu_restore_state_from_tb resets the cpu->can_do_io flag, which makes the second memory access invalid. Therefore the second memory access causes a recompilation of the block. Then these operations repeat again and again.

This patch moves the resetting of the cpu->can_do_io flag from cpu_restore_state_from_tb to the cpu_loop_exit* functions. It also adds a parameter to cpu_restore_state which controls restoring icount. There is no need to restore icount when we only query CPU state without breaking the TB; restoring it in such cases leads to an incorrect flow of the virtual time. In most cases the new parameter is true (icount should be recalculated). But there are two cases, in i386 and openrisc, where the CPU state is only queried without the need to break the TB. This patch fixes both of these cases.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Message-Id: <20180409091320.12504.35329.stgit@pasha-VirtualBox>
[rth: Make can_do_io setting unconditional; move from cpu_exec; make cpu_loop_exit_{noexc,restore} call cpu_loop_exit.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

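The shape of the API change, reconstructed from the text above (prototype and call site are assumptions, not quotations from the patch):

    /* new parameter: true when we are about to leave the TB, so icount must be
     * rewound; false for pure state queries that keep executing the TB */
    bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc, bool will_exit);

    /* e.g. the i386 TPR-access reporting path only needs the current eip: */
    cpu_restore_state(cs, retaddr, false);
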
2018-04-10  Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.12-20180410' into staging  (Peter Maydell)

ppc patch queue 2018-04-10

Here's a rather late pull request with a handful of fixes for 2.12. These have been blocked for some time, because I wasn't able to complete my usual test set due to the SCSI problem fixed in 37c5174 "scsi-disk: Don't enlarge min_io_size to max_io_size".

Since we're in hard freeze, these are all bugfixes. Most are also regressions, although in one case it's only a "regression" because a longstanding bug has been exposed by a new machine type (sam460ex) in the testcases. There are also a couple of sam460ex fixes that aren't regressions, since the board didn't exist before. On the flip side, though, they're low risk because they only touch board-specific code for a board that doesn't exist in any released version.

# gpg: Signature made Tue 10 Apr 2018 08:13:52 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-2.12-20180410:
  roms/u-boot-sam460ex: Change to qemu git mirror and update
  sam460ex: Fix timer frequency and clock multipliers
  tests/boot-serial: Test the sam460ex board
  spapr: Initialize reserved areas list in FDT in H_CAS handler
  target/ppc: Fix backwards migration of msr_mask
  hw/misc/macio: Fix crash when listing device properties of macio device
  target/ppc: Initialize lazy_tlb_flush correctly

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-10  tcg: Introduce tcg_set_insn_start_param  (Richard Henderson)

The parameters for tcg_gen_insn_start are target_ulong, which may be split into two TCGArg parameters for storage in the opcode on 32-bit hosts. Fixes the ARM target and its direct use of tcg_set_insn_param, which would set the wrong argument in the 64-on-32 case.

Cc: qemu-stable@nongnu.org
Reported-by: alarson@ddci.com
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180410003558.2470-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

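A sketch of what such a wrapper looks like, reconstructed from the description rather than copied from the patch:

    static inline void tcg_set_insn_start_param(TCGOp *op, int arg, target_ulong v)
    {
    #if TARGET_LONG_BITS <= TCG_TARGET_REG_BITS
        tcg_set_insn_param(op, arg, v);
    #else
        /* a 64-bit target_ulong occupies two TCGArg slots on a 32-bit host */
        tcg_set_insn_param(op, arg * 2, v);
        tcg_set_insn_param(op, arg * 2 + 1, v >> 32);
    #endif
    }
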
2018-04-10  target/arm: Report unsupported MPU region sizes more clearly  (Peter Maydell)

Currently our PMSAv7 and ARMv7M MPU implementation cannot handle MPU region sizes smaller than our TARGET_PAGE_SIZE. However we report that in a slightly confusing way:

  DRSR[3]: No support for MPU (sub)region alignment of 9 bits. Minimum is 10

The problem is not the alignment of the region, but its size; tweak the error message to say so:

  DRSR[3]: No support for MPU (sub)region size of 512 bytes. Minimum is 1024.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180405172554.27401-1-peter.maydell@linaro.org

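Sketch of the reworded check (variable names and surrounding loop assumed):

    if (rsize < TARGET_PAGE_BITS) {
        qemu_log_mask(LOG_UNIMP,
                      "DRSR[%d]: No support for MPU (sub)region size of"
                      " %" PRIu32 " bytes. Minimum is %d.\n",
                      n, (uint32_t)(1 << rsize), TARGET_PAGE_SIZE);
        continue;   /* treat the too-small region as not present */
    }
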
2018-04-10  target-arm: Check undefined opcodes for SWP in A32 decoder  (Onur Sahin)

Make sure we are not treating architecturally Undefined instructions as a SWP, by verifying the opcodes as per section A8.8.229 of the ARMv7-A specification. Bits [21:20] must be zero for this to be a SWP or SWPB. We also choose to UNDEF for the architecturally UNPREDICTABLE case of bits [11:8] not being zero.

Signed-off-by: Onur Sahin <onursahin08@gmail.com>
[PMM: tweaked commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

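Illustrative check in the A32 decoder (exact insertion point and surrounding code assumed):

    /* SWP/SWPB requires bits [21:20] == 0; also UNDEF if the architecturally
     * UNPREDICTABLE bits [11:8] are nonzero. */
    if (extract32(insn, 20, 2) != 0 || extract32(insn, 8, 4) != 0) {
        goto illegal_op;
    }
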
2018-04-10  target/ppc: Fix backwards migration of msr_mask  (David Gibson)

21b786f "PowerPC: Add TS bits into msr_mask" added the transaction states to msr_mask for recent POWER CPUs to allow correct migration of machines that are in certain interim transactional memory states. This was correct, but unfortunately breaks backwards migration for pseries-2.7 and earlier machine types, which (stupidly) transferred the msr_mask in the migration stream and failed if it wasn't equal on each end.

This works around the problem by masking out the new MSR bits in the compatibility code when sending the msr_mask on old machine types.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Tested-by: Lukáš Doktor <ldoktor@redhat.com>

2018-04-10  target/ppc: Initialize lazy_tlb_flush correctly  (David Gibson)

ppc_tr_init_disas_context() correctly sets lazy_tlb_flush to true on certain CPU models. However, it leaves the field uninitialized on all other CPU models, instead of setting it to false. It wasn't caught before now because we didn't have examples in the tests that exercised this path. However, it can now be caught using clang's undefined behaviour sanitizer and the sam460ex board.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>

2018-04-09  Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20180409' into staging  (Peter Maydell)

Fixes for s390x: kvm, vfio-ccw, ipl code, bios. Includes a rebuild of s390-ccw.img and s390-netboot.img.

# gpg: Signature made Mon 09 Apr 2018 16:08:19 BST
# gpg: using RSA key DECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF

* remotes/cohuck/tags/s390x-20180409:
  s390x: load_psw() should only exchange the PSW for KVM
  s390x/mmu: don't overwrite pending exception in mmu translate
  vfio-ccw: fix memory leaks in vfio_ccw_realize()
  pc-bios/s390: update images
  s390: Do not pass inofficial IPL type to the guest
  s390: Ensure IPL from SCSI works as expected
  s390: Refactor IPL parameter block generation
  s390x/kvm: call cpu_synchronize_state() on every kvm_arch_handle_exit()

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2018-04-09  Add missing bit for SSE instr in VEX decoding  (Eugene Minibaev)

The 2-byte VEX prefix implies a leading 0Fh opcode byte.

Signed-off-by: Eugene Minibaev <mail@kitsu.me>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

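The described fix, sketched (fetch helper name and surrounding decoder code assumed): ORing in 0x100 routes the following opcode byte through the 0F-escaped opcode table.

    /* 0xC5: two-byte VEX prefix; the 0x0F escape byte is implicit */
    b = x86_ldub_code(env, s) | 0x100;
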
2018-04-09  i386/hyperv: error out if features requested but unsupported  (Roman Kagan)

In order to guarantee compatibility on migration, QEMU should have complete control over the features it announces to the guest via CPUID. However, for a number of Hyper-V-related cpu properties, if the corresponding feature is not supported by the underlying KVM, the property is silently ignored and the feature is not announced to the guest. Refuse to start with an error instead.

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20180330170209.20627-3-rkagan@virtuozzo.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>