path: root/target
Age         Commit message                                                Author
2017-10-06  target/arm: Fix calculation of secure mm_idx values  (Peter Maydell)

In cpu_mmu_index() we try to do this:

    if (env->v7m.secure) {
        mmu_idx += ARMMMUIdx_MSUser;
    }

but it will give the wrong answer, because ARMMMUIdx_MSUser includes the 0x40 ARM_MMU_IDX_M field, and so does the mmu_idx we're adding to, and we'll end up with 0x8n rather than 0x4n.

This error is then nullified by the call to arm_to_core_mmu_idx(), which masks out the high part, but we're about to factor out the code that calculates the ARMMMUIdx values so it can be used without passing it through arm_to_core_mmu_idx(), so fix this bug first.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-16-git-send-email-peter.maydell@linaro.org
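To see why the addition double-counts the M-profile marker bit, here is a minimal standalone sketch; the numeric encodings are illustrative (the real QEMU enum values may differ), and the last step shows one plausible fix, adding only the offset between the two index values:

    #include <assert.h>

    #define ARM_MMU_IDX_M     0x40  /* marker bit: "this is an M-profile index" */
    /* illustrative encodings only */
    #define ARMMMUIdx_MUser   (0 | ARM_MMU_IDX_M)
    #define ARMMMUIdx_MSUser  (3 | ARM_MMU_IDX_M)

    int main(void)
    {
        int mmu_idx = ARMMMUIdx_MUser;       /* already carries bit 0x40 */

        /* buggy: both operands carry ARM_MMU_IDX_M, so the marker bit is
         * counted twice and the sum carries into 0x80 */
        assert(((mmu_idx + ARMMMUIdx_MSUser) & 0xc0) == 0x80);

        /* fixed: add only the delta between the two index values */
        mmu_idx += ARMMMUIdx_MSUser - ARMMMUIdx_MUser;
        assert(mmu_idx == ARMMMUIdx_MSUser);
        return 0;
    }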
2017-10-06  target/arm: Implement security attribute lookups for memory accesses  (Peter Maydell)

Implement the security attribute lookups for memory accesses in the get_phys_addr() functions, causing these to generate various kinds of SecureFault for bad accesses.

The major subtlety in this code relates to handling of the case when the security attributes the SAU assigns to the address don't match the current security state of the CPU. In the ARM ARM pseudocode for validating instruction accesses, the security attributes of the address determine whether the Secure or NonSecure MPU state is used. At face value, handling this would require us to encode the relevant bits of state into mmu_idx for both S and NS at once, which would result in our needing 16 mmu indexes. Fortunately we don't actually need to do this, because a mismatch between address attributes and CPU state means either:
 * some kind of fault (usually a SecureFault, but in theory perhaps a UserFault for unaligned access to Device memory)
 * execution of the SG instruction in NS state from a Secure & NonSecure code region

The purpose of SG is simply to flip the CPU into Secure state, so we can handle it by emulating execution of that instruction directly in arm_v7m_cpu_do_interrupt(), which means we can treat all the mismatch cases as "throw an exception" and we don't need to encode the state of the other MPU bank into our mmu_idx values.

This commit doesn't include the actual emulation of SG; it also doesn't include implementation of the IDAU, which is a per-board way to specify hard-coded memory attributes for addresses, which override the CPU-internal SAU if they specify a more secure setting than the SAU is programmed to.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-15-git-send-email-peter.maydell@linaro.org
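For intuition, a deliberately simplified sketch of an SAU-style attribute lookup follows. Types and rules are condensed and assumed: the real code also folds in IDAU results, and the NSC ("Secure, Non-secure callable") attribute is what drives the SG case described above.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t base, limit;   /* region covers [base, limit] inclusive */
        bool enabled;
        bool nsc;               /* Secure, Non-secure callable */
    } SAURegion;

    typedef enum { ATTR_SECURE, ATTR_NS, ATTR_NSC } SecAttr;

    static SecAttr sau_lookup(const SAURegion *r, int nregions, uint32_t addr)
    {
        int hit = -1;

        for (int i = 0; i < nregions; i++) {
            if (r[i].enabled && addr >= r[i].base && addr <= r[i].limit) {
                if (hit >= 0) {
                    return ATTR_SECURE;   /* overlapping hits: treat as Secure */
                }
                hit = i;
            }
        }
        if (hit < 0) {
            return ATTR_SECURE;           /* background is Secure */
        }
        return r[hit].nsc ? ATTR_NSC : ATTR_NS;
    }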
2017-10-06  nvic: Implement Security Attribution Unit registers  (Peter Maydell)

Implement the register interface for the SAU: SAU_CTRL, SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the actual behaviour is implemented here; registers just read back as written.

When the CPU definition for Cortex-M33 is eventually added, its initfn will set cpu->sau_sregion, in the same way that we currently set cpu->pmsav7_dregion for the M3 and M4.

Number of SAU regions is typically a configurable CPU parameter, but this patch doesn't provide a QEMU CPU property for it. We can easily add one when we have a board that requires it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-14-git-send-email-peter.maydell@linaro.org
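Since the registers just read back as written, the model is essentially a handful of stored words selected by SAU_RNR. A sketch of the write side, with illustrative offsets, field masks and struct layout (the real NVIC code also checks that the access comes from the Secure state):

    #include <stdint.h>

    typedef struct {
        uint32_t ctrl, rnr;
        uint32_t rbar[8], rlar[8];   /* illustratively 8 regions */
        int sau_sregion;             /* number of implemented regions */
    } SAUState;

    static void sau_write(SAUState *s, uint32_t offset, uint32_t value)
    {
        switch (offset) {
        case 0xdd0: /* SAU_CTRL: ENABLE and ALLNS */
            s->ctrl = value & 3;
            break;
        case 0xdd8: /* SAU_RNR: selects the region RBAR/RLAR refer to */
            if (value < (uint32_t)s->sau_sregion) {
                s->rnr = value;
            }
            break;
        case 0xddc: /* SAU_RBAR: region base, 32-byte granular */
            s->rbar[s->rnr] = value & ~0x1fu;
            break;
        case 0xde0: /* SAU_RLAR: region limit plus ENABLE and NSC bits */
            s->rlar[s->rnr] = value & ~0x1cu;
            break;
        /* SAU_TYPE (0xdd4) is read-only: it reports sau_sregion */
        }
    }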
2017-10-06  target/arm: Add v8M support to exception entry code  (Peter Maydell)

Add support for v8M and in particular the security extension to the exception entry code. This requires changes to:
 * calculation of the exception-return magic LR value
 * push the callee-saves registers in certain cases
 * clear registers when taking non-secure exceptions to avoid leaking information from the interrupted secure code
 * switch to the correct security state on entry
 * use the vector table for the security state we're targeting

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-13-git-send-email-peter.maydell@linaro.org
2017-10-06  target/arm: Add support for restoring v8M additional state context  (Peter Maydell)

For v8M, exceptions from Secure to Non-Secure state will save callee-saves registers to the exception frame as well as the caller-saved registers. Add support for unstacking these registers in exception exit when necessary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-12-git-send-email-peter.maydell@linaro.org
2017-10-06  target/arm: Update excret sanity checks for v8M  (Peter Maydell)

In v8M, more bits are defined in the exception-return magic values; update the code that checks these so we accept the v8M values when the CPU permits them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-11-git-send-email-peter.maydell@linaro.org
2017-10-06  target/arm: Add new-in-v8M SFSR and SFAR  (Peter Maydell)

Add the new M profile Secure Fault Status Register and Secure Fault Address Register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-10-git-send-email-peter.maydell@linaro.org
2017-10-06  target/arm: Don't warn about exception return with PC low bit set for v8M  (Peter Maydell)

In the v8M architecture, return from an exception to a PC which has bit 0 set is not UNPREDICTABLE; it is defined that bit 0 is discarded [R_HRJH]. Restrict our complaint about this to v7M.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-9-git-send-email-peter.maydell@linaro.org
2017-10-06  target/arm: Warn about restoring to unaligned stack  (Peter Maydell)

Attempting to do an exception return with an exception frame that is not 8-aligned is UNPREDICTABLE in v8M; warn about this. (It is not UNPREDICTABLE in v7M, and our implementation can handle the merely-4-aligned case fine, so we don't need to do anything except warn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-8-git-send-email-peter.maydell@linaro.org
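A minimal sketch of such a warning, assuming a frameptr variable as in the 'frame pointer' rework further down this log (the v8M check and message wording are illustrative):

    if (arm_feature(env, ARM_FEATURE_V8) && (frameptr & 7)) {
        qemu_log_mask(LOG_GUEST_ERROR,
                      "M profile exception return with non-8-aligned "
                      "exception frame\n");
    }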
2017-10-06  target/arm: Check for xPSR mismatch usage faults earlier for v8M  (Peter Maydell)

ARM v8M specifies that the INVPC usage fault for a mismatched xPSR exception field and handler mode bit should be checked before updating the PSR and SP, so that the fault is taken with the existing stack frame rather than by pushing a new one. Perform this check in the right place for v8M.

Since v7M specifies in its pseudocode that this usage fault check should happen later, we have to retain the original code for that check rather than being able to merge the two. (The distinction is architecturally visible, but only in very obscure corner cases like attempting an invalid exception return with an exception frame in read-only memory.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-7-git-send-email-peter.maydell@linaro.org
2017-10-06  target/arm: Restore SPSEL to correct CONTROL register on exception return  (Peter Maydell)

On exception return for v8M, the SPSEL bit in the EXC_RETURN magic value should be restored to the SPSEL bit in the CONTROL register bank specified by the EXC_RETURN.ES bit.

Add write_v7m_control_spsel_for_secstate(), which behaves like write_v7m_control_spsel() but allows the caller to specify which CONTROL bank to use, reimplement write_v7m_control_spsel() in terms of it, and use it in exception return.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-6-git-send-email-peter.maydell@linaro.org
2017-10-06  target/arm: Restore security state on exception return  (Peter Maydell)

Now that we can handle the CONTROL.SPSEL bit not necessarily being in sync with the current stack pointer, we can restore the correct security state on exception return. This happens before we start to read registers off the stack frame, but after we have taken possible usage faults for bad exception return magic values and updated CONTROL.SPSEL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-5-git-send-email-peter.maydell@linaro.org
2017-10-06  target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode  (Peter Maydell)

In the v7M architecture, there is an invariant that if the CPU is in Handler mode then the CONTROL.SPSEL bit cannot be nonzero. This in turn means that the current stack pointer is always indicated by CONTROL.SPSEL, even though Handler mode always uses the Main stack pointer.

In v8M, this invariant is removed, and CONTROL.SPSEL may now be nonzero in Handler mode (though Handler mode still always uses the Main stack pointer).

In preparation for this change, change how we handle this bit: rename switch_v7m_sp() to the now more accurate write_v7m_control_spsel(), and make it check both the handler mode state and the SPSEL bit.

Note that this implicitly changes the point at which we switch active SP on exception exit from before we pop the exception frame to after it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-4-git-send-email-peter.maydell@linaro.org
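A sketch of the resulting helper, per the description (field and helper names are simplified; the real function also has to deal with the banked CONTROL registers that v8M security adds):

    static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
    {
        bool handler = arm_v7m_is_handler_mode(env);
        /* the active SP is the Process stack only in Thread mode with SPSEL set */
        bool old_active_psp = !handler && (env->v7m.control & R_V7M_CONTROL_SPSEL_MASK);
        bool new_active_psp = !handler && new_spsel;

        if (old_active_psp != new_active_psp) {
            uint32_t tmp = env->v7m.other_sp;   /* swap live SP with the banked one */
            env->v7m.other_sp = env->regs[13];
            env->regs[13] = tmp;
        }
        env->v7m.control = deposit32(env->v7m.control, 1, 1, new_spsel);
    }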
2017-10-06  target/arm: Don't switch to target stack early in v7M exception return  (Peter Maydell)

Currently our M profile exception return code switches to the target stack pointer relatively early in the process, before it tries to pop the exception frame off the stack. This is awkward for v8M for two reasons:
 * in v8M the process vs main stack pointer is not selected purely by the value of CONTROL.SPSEL, so updating SPSEL and relying on that to switch to the right stack pointer won't work
 * the stack we should be reading the stack frame from and the stack we will eventually switch to might not be the same if the guest is doing strange things

Change our exception return code to use a 'frame pointer' to read the exception frame rather than assuming that we can switch the live stack pointer this early.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-3-git-send-email-peter.maydell@linaro.org
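The shape of the change, sketched with illustrative names (the stacked-frame offsets are the architected ones; error handling and the xPSR-driven SP realignment are omitted):

    /* read the frame through a frame pointer instead of the live SP */
    uint32_t frameptr = return_to_sp_process ? env->v7m.other_sp
                                             : env->regs[13];

    uint32_t pc   = ldl_phys(cs->as, frameptr + 0x18);  /* ReturnAddress */
    uint32_t xpsr = ldl_phys(cs->as, frameptr + 0x1c);
    /* ... pop r0-r3, r12, lr similarly ... */

    env->regs[13] = frameptr + 0x20;    /* only now commit the live SP */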
2017-10-06  arm: Fix SMC reporting to EL2 when QEMU provides PSCI  (Jan Kiszka)

This properly forwards SMC events to EL2 when PSCI is provided by QEMU itself and, thus, ARM_FEATURE_EL3 is off.

Found and tested with the Jailhouse hypervisor. Solution based on suggestions by Peter Maydell.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 4f243068-aaea-776f-d18f-f9e05e7be9cd@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-06  s390x/tcg: initialize machine check queue  (Cornelia Huck)

Just as for external interrupts and I/O interrupts, we need to initialize mchk_index during cpu reset.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390/kvm: Support for get/set of extended TOD-Clock for guest  (Collin L. Walling)

Provides an interface for getting and setting the guest's extended TOD-Clock via a single ioctl to kvm. If the ioctl fails because it is not supported by kvm, then we fall back to the old style of retrieving the clock via two ioctls.

Signed-off-by: Collin L. Walling <walling@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[split failure change from epoch index change]
Message-Id: <20171004105751.24655-2-borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
[some cosmetic fixes]
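A sketch of the get side: the struct and attribute names are the real kernel ABI for the extended TOD clock, while the QEMU wrapper shape is assumed and error handling is trimmed.

    static int kvm_s390_get_clock_ext(uint8_t *tod_high, uint64_t *tod_low)
    {
        struct kvm_s390_vm_tod_clock gtod;
        struct kvm_device_attr attr = {
            .group = KVM_S390_VM_TOD,
            .attr  = KVM_S390_VM_TOD_EXT,
            .addr  = (uint64_t)&gtod,
        };
        int r = kvm_vm_ioctl(kvm_state, KVM_GET_DEVICE_ATTR, &attr);

        *tod_high = gtod.epoch_idx;
        *tod_low  = gtod.tod;
        return r;
    }

    /* caller: prefer the single ioctl, fall back to the two-ioctl path */
    uint8_t tod_high;
    uint64_t tod_low;
    int r = kvm_s390_get_clock_ext(&tod_high, &tod_low);
    if (r == -ENXIO) {
        r = kvm_s390_get_clock(&tod_high, &tod_low);   /* old style */
    }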
2017-10-06  s390x/tcg: make STFL store into the lowcore  (David Hildenbrand)

Using virtual memory access is wrong and will soon include low-address protection checks, which STFL must bypass. STFL is a privileged instruction, and using LowCore requires !CONFIG_USER_ONLY, so add the ifdef and move the declaration to the right place.

This was originally part of a bigger STFL(E) refactoring.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170927170027.8539-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x: introduce and use S390_MAX_CPUS  (David Hildenbrand)

Will be handy in the future.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  target/s390x: get rid of next_core_id  (David Hildenbrand)

core_id is not needed by linux-user, as the core_id (a.k.a. CPU address) is only accessible from kernel space. Therefore, drop next_core_id and make cpu_index get autoassigned again for linux-user.

While at it, shield core_id and cpuid completely from linux-user; cpuid can also only be queried from kernel space.

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x/cpumodel: fix max STFL(E) bit number  (David Hildenbrand)

Not that it would matter in the near future, but it is actually 2048 bytes, therefore 2048 * 8 = 16384 possible bits.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x: raise CPU hotplug irq after really hotplugged  (David Hildenbrand)

Let's move it into the machine, so we trigger the IRQ after setting ms->possible_cpus (which SCLP uses to construct the list of online CPUs).

This also fixes a problem reported by Thomas Huth, whereby qemu can be crashed using the none machine:

    qemu-s390x-softmmu -M none -monitor stdio
    -> device_add qemu-s390-cpu

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928134609.16985-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x/tcg: make idte/ipte use the new _real mmu  (David Hildenbrand)

We don't wrap addresses in the mmu for the _real case, therefore the behavior should be unchanged.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-7-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x/tcg: make testblock use the new _real mmu  (David Hildenbrand)

Low address protection checks will be moved into the mmu later.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-6-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x/tcg: make stora(g) use the new _real mmu  (David Hildenbrand)

As we properly handle the return address now, we can drop potential_page_fault().

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-5-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x/tcg: make lura(g) use the new _real mmu.  (David Hildenbrand)

It looks like lurag was loading only 32 bits rather than 64. As we properly handle the return address now, we can drop potential_page_fault().

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
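A sketch of what the fixed 64-bit load amounts to, written with the modern cpu_ldq_mmuidx_ra() accessor (an assumption: the original series used the helpers of its day, but the shape is the same):

    /* lura loads 32 bits; lurag must load all 64 bits via the real MMU */
    uint64_t HELPER(lurag)(CPUS390XState *env, uint64_t addr)
    {
        return cpu_ldq_mmuidx_ra(env, wrap_address(env, addr),
                                 MMU_REAL_IDX, GETPC());
    }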
2017-10-06  s390x/tcg: add MMU for real addresses  (David Hildenbrand)

This makes it easy to access real addresses (prefix) and in addition checks for valid memory addresses, a check that is missing when using e.g. stl_phys().

We can later reuse it to implement low address protection checks (then we might even decide to introduce yet another MMU for absolute addresses, just for handling storage keys and low address protection).

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-3-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
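A sketch of what such a real-address translation step amounts to (signature assumed; the essential point is that only prefixing is applied, no DAT):

    static int mmu_translate_real(CPUS390XState *env, target_ulong raddr,
                                  int rw, uint64_t *addr, int *flags)
    {
        *flags = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
        /* apply prefixing: swap page 0 with the prefix page */
        *addr = mmu_real2abs(env, raddr & TARGET_PAGE_MASK);
        /* low-address protection checks could be added here later */
        return 0;
    }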
2017-10-06  s390x/tcg: fix checking for invalid memory check  (David Hildenbrand)

It should have been a >=, but let's directly perform a proper access check to also be able to deal with hotplugged memory later.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170926183318.12995-2-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x/kvm: fix and cleanup storing CPU status  (David Hildenbrand)

env->psa is a 64bit value, while we copy 4 bytes into the save area, resulting always in 0 getting stored.

Let's try to reduce such errors by using a proper structure. While at it, use correct cpu->be conversion (and get_psw_mask()), as we will be reusing this code for TCG soon.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170922140338.6068-1-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
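The pattern of the bug and fix, sketched (the save-area layout is omitted; psa holds the 32-bit prefix in a 64-bit field, and the "always 0" comes from taking the first 4 bytes of a byte-swapped 64-bit value):

    uint8_t dst[4];   /* stand-in for the save area's prefix field */

    /* buggy pattern: 4 bytes taken from a byte-swapped 64-bit value */
    uint64_t bad = cpu_to_be64(env->psa);
    memcpy(dst, &bad, 4);              /* copies the top 32 bits, i.e. 0 */

    /* fixed pattern: convert as the 32-bit quantity it architecturally is */
    uint32_t prefix = cpu_to_be32(env->psa);
    memcpy(dst, &prefix, 4);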
2017-10-06  s390x: use generic cpu_model parsing  (Igor Mammedov)

Define default CPU type in generic way in machine class_init and let common machine code handle cpu_model parsing.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <1505998749-269631-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x/tcg: add basic MSA features  (David Hildenbrand)

The STFLE bits for the MSA (extension) facilities simply indicate that the respective instructions can be executed. The QUERY subfunction can then be used to identify which features exactly are available. Availability of subfunctions can also vary on real hardware.

For now, we simply implement a CPU model without any available subfunctions except QUERY (which is always around). As all MSA functions behave quite similarly, we can use one translation handler for now. Prepare the code for implementation of actual subfunctions.

At least MSA is helpful for now, as older Linux kernels require this facility when compiled for a z9 model. Allow the facilities to be enabled for the qemu cpu model.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
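For reference, a sketch of the QUERY availability block described above; s390 numbers bits MSB-first, so subfunction 0 (QUERY itself) is the top bit of byte 0:

    /* a QUERY-only result: 128-bit block with only bit 0 set */
    static const uint8_t msa_query_only[16] = { 0x80 };

    /* subfunction n is installed iff bit n of the block is set */
    static bool msa_subfunc_installed(const uint8_t *block, int n)
    {
        return block[n / 8] & (0x80 >> (n % 8));
    }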
2017-10-06  s390x/tcg: move wrap_address() to internal.h  (David Hildenbrand)

We want to use it in another file.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06  s390x/tcg: implement spm (SET PROGRAM MASK)  (David Hildenbrand)

This instruction was missing, and it is used inside Linux in the context of CPACF.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170920153016.3858-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-29  console: purge curses bits from console.h  (Gerd Hoffmann)

Handle the translation from vga chars to curses chars in curses_update() instead of console_write_ch(). Purge any curses support bits from the ui/console.h include file.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170927103811.19249-1-kraxel@redhat.com
2017-09-27  Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20170927a' into staging  (Peter Maydell)

Migration pull 2017-09-27

# gpg: Signature made Wed 27 Sep 2017 14:56:23 BST
# gpg:                using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7

* remotes/dgilbert/tags/pull-migration-20170927a:
  migration: Route more error paths
  migration: Route errors up through vmstate_save
  migration: wire vmstate_save_state errors up to vmstate_subsection_save
  migration: Check field save returns
  migration: check pre_save return in vmstate_save_state
  migration: pre_save return int
  migration: disable auto-converge during bulk block migration

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-09-27  Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.11-20170927' into staging  (Peter Maydell)

ppc patch queue 2017-09-27

Contains
 * a number of Mac machine type fixes
 * a number of embedded machine type fixes (preliminary to adding the Sam460ex board)
 * an important fix for handling of migration with KVM PR
 * assorted other minor fixes and cleanups

# gpg: Signature made Wed 27 Sep 2017 08:40:48 BST
# gpg:                using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-2.11-20170927: (26 commits)
  macio: use object link between MACIO_IDE and MAC_DBDMA object
  macio: pass channel into MACIOIDEState via qdev property
  mac_dbdma: remove DBDMA_init() function
  mac_dbdma: QOMify
  mac_dbdma: remove unused IO fields from DBDMAState
  spapr: fix the value of SDR1 in kvmppc_put_books_sregs()
  ppc/pnv: check for OPAL firmware file presence
  ppc: remove all unused CPU definitions
  ppc: remove unused CPU definitions
  spapr_pci: make index property mandatory
  macio: convert pmac_ide_ops from old_mmio
  ppc/pnv: Improve macro parenthesization
  spapr: introduce helpers to migrate HPT chunks and the end marker
  ppc/kvm: generalize the use of kvmppc_get_htab_fd()
  ppc/kvm: change kvmppc_get_htab_fd() to return -errno on error
  ppc: Fix OpenPIC model
  ppc/ide/macio: Add missing registers
  ppc/mac: More rework of the DBDMA emulation
  ppc/mac: Advertise a high clock frequency for NewWorld Macs
  ppc: QOMify g3beige machine
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-09-27  migration: pre_save return int  (Dr. David Alan Gilbert)

Modify the pre_save method on VMStateDescription to return an int rather than void so that it potentially can fail. Changed zillions of devices to make them return 0; the only case I've made return non-0 is hw/intc/s390_flic_kvm.c, which already had an error_report/return case.

Note: if you add an error exit in your pre_save you must emit an error_report to say why.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170925112917.21340-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
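A sketch of the new contract, using a hypothetical device (per the note above, a failing pre_save should error_report why and return non-zero):

    static int my_dev_pre_save(void *opaque)
    {
        MyDevState *s = opaque;                 /* hypothetical device */

        if (!s->state_is_consistent) {
            error_report("my-dev: refusing to save inconsistent state");
            return -EINVAL;                     /* now propagated to the caller */
        }
        return 0;                               /* success, as before */
    }

    static const VMStateDescription vmstate_my_dev = {
        .name = "my-dev",
        .pre_save = my_dev_pre_save,
        /* ... fields ... */
    };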
2017-09-27  s390x/cpumodel: remove ais from z14 default model -> also for 2.10.1  (Christian Borntraeger)

We disabled ais for 2.10, so let's also remove it from the z14 default model.

Fixes: 3f2d07b3b01e ("s390x/ais: for 2.10 stable: disable ais facility")
CC: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20170927072030.35737-2-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-27  spapr: fix the value of SDR1 in kvmppc_put_books_sregs()  (Greg Kurz)

When running with KVM PR, if a new HPT is allocated we need to inform KVM about the HPT address and size. This is currently done by hacking the value of SDR1 and pushing it to KVM in several places. Also, migration breaks the guest, since it is very unlikely the HPT has the same address in source and destination, but we push the incoming value of SDR1 to KVM anyway.

This patch introduces a new virtual hypervisor hook so that the spapr code can provide the correct value of SDR1 to be pushed to KVM each time kvmppc_put_books_sregs() is called. It allows us to get rid of all the hacking in the spapr/kvmppc code and it fixes migration of nested KVM PR.

Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27  ppc: remove all unused CPU definitions  (John Snow)

Remove *all* unused CPU definitions as indicated by compile-time `#if 0` constructs.

Signed-off-by: John Snow <jsnow@redhat.com>
[dwg: Removed some additional now-useless comments]
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27  ppc: remove unused CPU definitions  (John Snow)

Following commit aef77960, remove now-unused definitions from cpu-models.h.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27  ppc/kvm: generalize the use of kvmppc_get_htab_fd()  (Greg Kurz)

The use of KVM_PPC_GET_HTAB_FD is open-coded in kvmppc_read_hptes() and kvmppc_write_hpte(). This patch modifies kvmppc_get_htab_fd() so that it can be used everywhere we need to access the in-kernel htab:
 - add an index argument => only kvmppc_read_hptes() passes an actual index, all other users pass 0
 - add an errp argument to propagate error messages to the caller
   => spapr migration code prints the error
   => hpte helpers pass &error_abort to keep the current behavior of hw_error()

While here, this also fixes a bug in kvmppc_write_hpte() so that it opens the htab fd for writing instead of reading as it currently does. This never broke anything because we currently never call this code, as explained in the changelog of commit c1385933804bb:

    "This support updating htab managed by the hypervisor. Currently we don't have any user for this feature. This actually bring the store_hpte interface in-line with the load_hpte one. We may want to use this when we want to emulate henter hcall in qemu for HV kvm."

The above is still true today.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
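A sketch of the generalized helper; the shape is inferred from the description above (the kvm_get_htab_fd struct and KVM_PPC_GET_HTAB_FD ioctl are the real kernel interface, the cap_htab_fd flag is illustrative):

    int kvmppc_get_htab_fd(bool write, uint64_t index, Error **errp)
    {
        struct kvm_get_htab_fd s = {
            .flags = write ? KVM_GET_HTAB_WRITE : 0,
            .start_index = index,
        };
        int ret;

        if (!cap_htab_fd) {
            error_setg(errp, "KVM version doesn't support %s the HPT",
                       write ? "writing" : "reading");
            return -ENOTSUP;
        }
        ret = kvm_vm_ioctl(kvm_state, KVM_PPC_GET_HTAB_FD, &s);
        if (ret < 0) {
            error_setg(errp, "Unable to open fd for %s HPT: %s",
                       write ? "writing" : "reading", strerror(errno));
            return -errno;
        }
        return ret;
    }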
2017-09-27  ppc/kvm: change kvmppc_get_htab_fd() to return -errno on error  (Greg Kurz)

When kvmppc_get_htab_fd() fails, its return value is propagated up to qemu_savevm_state_iterate() or to qemu_savevm_state_complete_precopy(). All savevm handlers expect to receive a negative errno on error. Let's patch kvmppc_get_htab_fd() accordingly.

While here, let's change htab_load() in the spapr code to also propagate the error, since it doesn't make sense to abort() if we couldn't get the htab fd from KVM.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27  ppc: Add 460EX embedded CPU  (BALATON Zoltan)

Despite its name, it is a 440 core CPU.

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27  ppc/kvm: drop kvmppc_has_cap_htab_fd()  (Greg Kurz)

It never got used since its introduction (commit 7c43bca004af).

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-09-27  ppc/kvm: check some capabilities with kvm_vm_check_extension()  (Greg Kurz)

The following capabilities are VM specific:
 - KVM_CAP_PPC_SMT_POSSIBLE
 - KVM_CAP_PPC_HTAB_FD
 - KVM_CAP_PPC_ALLOC_HTAB

If both KVM HV and KVM PR are present, checking them always returns the HV value, even if we explicitly requested to use PR. This has no visible effect for KVM_CAP_PPC_ALLOC_HTAB, because we also try the KVM_PPC_ALLOCATE_HTAB ioctl, which is only supported by HV. As a consequence, the spapr code doesn't even check KVM_CAP_PPC_HTAB_FD. However, this will cause kvmppc_hint_smt_possible(), introduced by commit fa98fbfcdfcb9, to report several VSMT modes (eg, "Available VSMT modes: 8 4 2 1") whereas PR only supports mode 1.

This patch fixes all three anyway to use kvm_vm_check_extension(). It is okay since the VM is already created at the time kvm_arch_init() or kvmppc_reset_htab() is called.

Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
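The fix is essentially a one-word change per capability; sketched below with illustrative variable names:

    /* these answers differ between KVM HV and KVM PR, so ask the VM fd,
     * not the global /dev/kvm fd */
    cap_ppc_smt_possible = kvm_vm_check_extension(s, KVM_CAP_PPC_SMT_POSSIBLE);
    cap_htab_fd          = kvm_vm_check_extension(s, KVM_CAP_PPC_HTAB_FD);
    cap_ppc_alloc_htab   = kvm_vm_check_extension(s, KVM_CAP_PPC_ALLOC_HTAB);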
2017-09-26  target/xtensa: Use the pre-defined MEMTXATTRS_UNSPECIFIED macro  (Alistair Francis)

Instead of using the hardcoded (MemTxAttrs){0} for no memory attributes, let's use the already defined MEMTXATTRS_UNSPECIFIED macro instead. This is technically a change of behaviour, as MEMTXATTRS_UNSPECIFIED sets the unspecified field to 1, but it doesn't look like anything is checking this field.

Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
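The change itself is mechanical; a sketch, with an address-space accessor chosen purely for illustration:

    /* before: hand-rolled all-zeroes attributes */
    address_space_stl(as, addr, val, (MemTxAttrs){0}, NULL);

    /* after: the canonical constant, which also sets .unspecified = 1 */
    address_space_stl(as, addr, val, MEMTXATTRS_UNSPECIFIED, NULL);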
2017-09-23  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell)

* Speed up AddressSpaceDispatch creation (Alexey)
* Fix kvm.c assert (David)
* Memory fixes and further speedup (me)
* Persistent reservation manager infrastructure (me)
* virtio-serial: add enable_backend callback (Pavel)
* chardev GMainContext fixes (Peter)

# gpg: Signature made Fri 22 Sep 2017 20:07:33 BST
# gpg:                using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (32 commits)
  chardev: remove context in chr_update_read_handler
  chardev: use per-dev context for io_add_watch_poll
  chardev: add Chardev.gcontext field
  chardev: new qemu_chr_be_update_read_handlers()
  scsi: add persistent reservation manager using qemu-pr-helper
  scsi: add multipath support to qemu-pr-helper
  scsi: build qemu-pr-helper
  scsi, file-posix: add support for persistent reservation management
  memory: Share special empty FlatView
  memory: seek FlatView sharing candidates among children subregions
  memory: trace FlatView creation and destruction
  memory: Create FlatView directly
  memory: Get rid of address_space_init_shareable
  memory: Rework "info mtree" to print flat views and dispatch trees
  memory: Do not allocate FlatView in address_space_init
  memory: Share FlatView's and dispatch trees between address spaces
  memory: Move address_space_update_ioeventfds
  memory: Alloc dispatch tree where topology is generared
  memory: Store physical root MR in FlatView
  memory: Rename mem_begin/mem_commit/mem_add helpers
  ...

# Conflicts:
#	configure
2017-09-22  s390x/ais: for 2.10 stable: disable ais facility  (Christian Borntraeger)

The migration interface for ais was introduced with kernel 4.13, but the capability itself had been active since 4.12. As migration support is considered necessary, let's disable ais in the 2.10 stable version. A proper fix and re-enablement will be done for qemu 2.11.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20170921140834.14233-2-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-09-22  memory: Get rid of address_space_init_shareable  (Alexey Kardashevskiy)

Since FlatViews are shared now and ASes are not, this gets rid of address_space_init_shareable(). This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-17-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
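A sketch of what callers migrate to (the old factory name is from the commit; the surrounding per-CPU code is illustrative):

    /* before: shared-AS factory */
    cpu->as = address_space_init_shareable(cpu->memory, "cpu-memory");

    /* after: a plain per-CPU AS, cheap now that FlatViews are shared */
    AddressSpace *as = g_new0(AddressSpace, 1);
    address_space_init(as, cpu->memory, "cpu-memory");
    cpu->as = as;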