path: root/target
Age  Commit message  Author
2017-03-03  target/ppc: Move no-execute and guarded page checking into new function  (Suraj Jitindar Singh)
A pte entry has bit fields which can be used to make a page no-execute or guarded; if either of these bits is set, then an instruction access to this page will fail. Currently these bits are checked with the pp_prot function, however the ISA specifies that the access authority controlled by the key-pp value pair should only be checked on an instruction access after the no-execute and guard bits have already been verified to permit the access.

Move the no-execute and guard bit checking into a new separate function. Note that we can remove the check for the no-execute bit in the slb entry since this check was already performed above when we obtained the slb entry.

In the event that the no-execute or guard bits are set, an ISI should be generated with the SRR1_NOEXEC_GUARD (0x10000000) bit set in srr1. Add a define for this for clarity.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Move constants to cpu.h since they're not MMUv3 specific]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
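A minimal sketch of the kind of check described above. The PTE bit masks are placeholders (not QEMU's real names or bit positions); only the SRR1_NOEXEC_GUARD value comes from the message itself:

    /* Illustrative sketch only: PTE_R_NOEXEC and PTE_R_GUARDED are placeholder
     * masks for the pte's no-execute and guarded bits. */
    #include <stdbool.h>
    #include <stdint.h>

    #define SRR1_NOEXEC_GUARD   0x10000000
    #define PTE_R_NOEXEC        0x0000000000000004ULL   /* placeholder */
    #define PTE_R_GUARDED       0x0000000000000008ULL   /* placeholder */

    /* True if an instruction fetch from this page must fail. */
    static bool pte_noexec_or_guarded(uint64_t pte1)
    {
        return (pte1 & (PTE_R_NOEXEC | PTE_R_GUARDED)) != 0;
    }

    /* Caller, on an instruction access, before any key-pp authority check:
     *     if (pte_noexec_or_guarded(pte1)) {
     *         raise an ISI with SRR1_NOEXEC_GUARD set in srr1;
     *     }
     */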
2017-03-03  target/ppc: Add execute permission checking to access authority check  (Suraj Jitindar Singh)
Basic storage protection defines various access authority permissions based on a slb storage key and pte pp value pair. This access authority defines read, write and execute permissions however currently we only use this to control read and write permissions and ignore the execute control. Fix the code to allow execute permissions based on the key-pp value pair. Execute is allowed under the same conditions which enable reads. (i.e. read permission -> execute permission) Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03  target/ppc: Add Instruction Authority Mask Register Check  (Suraj Jitindar Singh)
The instruction authority mask register (IAMR) can be used to restrict permissions for instruction fetch accesses on a per key basis for each of 32 different key values. Access permissions are derived based on the specific key value stored in the relevant page table entry. The IAMR was introduced in, and is present in processors since, POWER8 (ISA v2.07). Thus introduce a function to check access permissions based on the pte key value and the contents of the IAMR when handling a page fault to ensure sufficient access permissions for an instruction fetch.

A hash pte contains a key value in bits 2:3|52:54 of the second doubleword of the pte; this key value gives an index into the IAMR, which contains 32 2-bit access masks. If the least significant bit of the 2-bit access mask corresponding to the given key value is set (IAMR[key] & 0x1 == 0x1) then the instruction fetch is not permitted and an ISI is generated accordingly. While we're here, add defines for the srr1 bits to be set for the ISI for clarity.

e.g.
  pte: dw0 [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX]
       dw1 [XX01XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX010XXXXXXXXX]
           (bits 2:3 = 01, bits 52:54 = 010)
       key = 01010 (0x0a)

  IAMR: [XXXXXXXXXXXXXXXXXXXX01XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX]
        (bits 20:21 hold the mask for key 0x0a)
       Access mask = 0b01

  Test access mask: 0b01 & 0x1 == 0x1
  The least significant bit of the access mask is set, thus the instruction fetch is not permitted. We should generate an instruction storage interrupt (ISI) with bit 42 of SRR1 set to indicate access precluded by virtual page class key protection.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
[dwg: Move new constants to cpu.h, since they're not MMUv3 specific]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
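A sketch of that check, using the worked example's bit numbering (ISA-style, bit 0 = most significant bit of the doubleword). The helper name and the extraction helper are illustrative, not QEMU's:

    /* Illustrative sketch only. */
    #include <stdbool.h>
    #include <stdint.h>

    /* Extract 'nbits' bits starting at ISA bit 'first' of a 64-bit word. */
    static inline unsigned ppc_bits(uint64_t dword, int first, int nbits)
    {
        return (unsigned)((dword >> (63 - (first + nbits - 1)))
                          & ((1u << nbits) - 1));
    }

    static bool iamr_forbids_ifetch(uint64_t pte1, uint64_t iamr)
    {
        /* key = pte1 bits 2:3 concatenated with bits 52:54 (5 bits) */
        unsigned key = (ppc_bits(pte1, 2, 2) << 3) | ppc_bits(pte1, 52, 3);
        /* the IAMR holds 32 consecutive 2-bit access masks, indexed by key */
        unsigned mask = ppc_bits(iamr, 2 * key, 2);
        /* LSB set => fetch not permitted => generate an ISI with the
         * "virtual page class key protection" bit set in SRR1 */
        return (mask & 0x1) != 0;
    }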
2017-03-03  target/ppc/POWER9: Add cpu_has_work function for POWER9  (Suraj Jitindar Singh)
The cpu_has_work function masks pending interrupts against the LPCR to determine whether there is work for the cpu. Add a function to do this for POWER9 and add it to the POWER9 cpu definition. This is similar to the POWER8 version, except using the LPCR bits as defined for POWER9.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
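A rough sketch of the shape such a hook takes; the per-interrupt LPCR wake-up test is left as a placeholder since the POWER9 bit names are not given above, and the sketch leans on QEMU's existing cpu headers:

    /* Sketch only: pending_interrupt_enabled_by_lpcr() is a placeholder for
     * the checks of each pending interrupt against its POWER9 LPCR
     * wake-up enable bit. */
    static bool cpu_has_work_POWER9(CPUState *cs)
    {
        PowerPCCPU *cpu = POWERPC_CPU(cs);
        CPUPPCState *env = &cpu->env;

        if (cs->halted) {
            if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
                return false;
            }
            /* While napping, only interrupts whose wake-up enable bit is
             * set in the LPCR count as work. */
            return pending_interrupt_enabled_by_lpcr(env, env->spr[SPR_LPCR]);
        } else {
            return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
        }
    }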
2017-03-03  target/ppc/POWER9: Add POWER9 mmu fault handler  (Suraj Jitindar Singh)
Add a new mmu fault handler for the POWER9 cpu and add it as the handler for the POWER9 cpu definition. This handler checks if the guest is radix or hash based on the value in the partition table entry and calls the correct fault handler accordingly. The hash fault handling code has also been updated to check if the partition is using segment tables. Currently only legacy hash (no segment tables) is supported. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
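The dispatch described above might look roughly like the following; the handler names and the "guest uses radix" test are illustrative, with the choice keyed off the partition table entry as described:

    /* Sketch only: function names are placeholders. */
    static int ppc64_v3_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                                         int rwx, int mmu_idx)
    {
        if (guest_uses_radix(cpu)) {   /* bit in the partition table entry */
            return radix64_handle_mmu_fault(cpu, eaddr, rwx, mmu_idx);
        } else {                       /* legacy hash, no segment tables */
            return hash64_handle_mmu_fault(cpu, eaddr, rwx, mmu_idx);
        }
    }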
2017-03-03  target/ppc: Don't gen an SDR1 on POWER9 and rework register creation  (Suraj Jitindar Singh)
POWER9 doesn't have a storage description register 1 (SDR1) which is used to store the base and size of the hash table. Thus we don't need to generate this register on the POWER9 cpu model. While we're here, the register generation code for 970, POWER5+, POWER<7/8/9> in general is a mess where we call a generic function from a model specific function which then attempts to call model specific functions, so rework this for readability. We update ppc_cpu_dump_state so that "info registers" will only display the value of sdr1 if the register has been generated. As mentioned above the register generation for the pcc->init_proc function for 970, POWER5+, POWER7, POWER8 and POWER9 has been reworked for improved clarity. Instead of calling init_proc_book3s_64 which then attempts to generate the correct registers through a mess of if statements, we remove this function and instead call the appropriate register generation functions directly. This follows the register generation model used for earlier cpu models (pre-970) whereby cpu specific registers are generated directly in the init_proc function and makes it easier to add/remove specific registers for new cpu models. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03  target/ppc: Add patb_entry to sPAPRMachineState  (Suraj Jitindar Singh)
ISA v3.00 adds the idea of a partition table which is used to store the address translation details for all partitions on the system. The partition table consists of double word entries indexed by partition id where the second double word contains the location of the process table in guest memory. The process table is registered by the guest via a h-call. We need somewhere to store the address of the process table so we add an entry to the sPAPRMachineState struct called patb_entry to represent the second doubleword of a single partition table entry corresponding to the current guest. We need to store this value so we know if the guest is using radix or hash translation and the location of the corresponding process table in guest memory. Since we only have a single guest per qemu instance, we only need one entry. Since the partition table is technically a hypervisor resource we require that access to it is abstracted by the virtual hypervisor through the get_patbe() call. Currently the value of the entry is never set (and thus defaults to 0 indicating hash), but it will be required to both implement POWER9 kvm support and tcg radix support. We also add this field to be migrated as part of the sPAPRMachineState as we will need it on the receiving side as the guest will never tell us this information again and we need it to perform translation. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
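One way such a field is typically carried in migration as part of the machine state, sketched with QEMU's vmstate idiom; the subsection name and the .needed gate are illustrative, only the field itself comes from the description above:

    /* Sketch only. */
    static bool spapr_patb_entry_needed(void *opaque)
    {
        sPAPRMachineState *spapr = opaque;

        /* Only send the entry if it has been set (0 means hash). */
        return !!spapr->patb_entry;
    }

    static const VMStateDescription vmstate_spapr_patb_entry = {
        .name = "spapr_patb_entry",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = spapr_patb_entry_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(patb_entry, sPAPRMachineState),
            VMSTATE_END_OF_LIST()
        },
    };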
2017-03-03  target/ppc/POWER9: Add POWERPC_MMU_V3 bit  (David Gibson)
For easier handling of future processors using the POWER9 or something close to it, add a new bit in the MMU model. This was originally from a revised version of 86cf1e9 "target/ppc/POWER9: Add ISAv3.00 MMU definition" but the older version of the patch was already merged. This makes the change on top of the original version. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03  exec, kvm, target-ppc: Move getrampagesize() to common code  (Alexey Kardashevskiy)
getrampagesize() returns the largest supported page size and is mainly used to know if huge pages are enabled. However, it is implemented in target-ppc/kvm.c and not available in TCG or other architectures.

This renames and moves gethugepagesize() to mmap-alloc.c, where an fd-based analog of it is already implemented. This renames and moves getrampagesize() to exec.c, as that seems to be the common place for helpers like this.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-03  target/ppc: Add POWER9/ISAv3.00 to compat_table  (Suraj Jitindar Singh)
compat_table contains the list of logical pvr compat modes which a cpu can operate in. It is a list of struct CompatInfo which contains the given pvr value for a compat mode, the pcr bits which should be set to operate in that compat mode, the pcr level which must be present in pcr_supported for a processor to support that compat mode and the max threads possible in that compat mode. Add an entry for the POWER9/ISAv3.00 logical pvr which represents a processor running with support for logical pvr 0x0f000005. A processor running in this mode should have PCR_COMPAT_3_00 set in the pcr (if available in pcr_mask) and should have PCR_COMPAT_3_00 in pcr_supported to indicate that it is capable of running in this compat mode. Also add PCR_COMPAT_3_00 to the bits which must be set for all previous compat modes. Since no processor models contain this bit yet in pcr_mask it will never be set, but this ensures we don't forget to in the future. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
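As a sketch, the new entry could look like the following. The CompatInfo layout, the stand-in macros, and the max_threads value are assumptions; only the logical PVR value 0x0f000005 and the PCR_COMPAT_3_00 name come from the message above:

    /* Sketch only: stand-in struct and values so the entry is self-contained;
     * the real definitions may differ. */
    #include <stdint.h>

    typedef struct CompatInfo {
        const char *name;
        uint32_t pvr;            /* logical PVR selecting the compat mode */
        uint64_t pcr;            /* PCR bits to set while in this mode */
        uint64_t pcr_level;      /* level that must appear in pcr_supported */
        int max_threads;
    } CompatInfo;

    #define CPU_POWERPC_LOGICAL_3_00   0x0f000005       /* from the message */
    #define PCR_COMPAT_3_00            (1ULL << 59)     /* placeholder value */

    static const CompatInfo compat_power9 = {
        .name = "power9",
        .pvr = CPU_POWERPC_LOGICAL_3_00,
        .pcr = PCR_COMPAT_3_00,        /* also added to earlier modes' .pcr */
        .pcr_level = PCR_COMPAT_3_00,
        .max_threads = 8,              /* assumption, not stated above */
    };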
2017-03-02  Merge remote-tracking branch 'remotes/rth/tags/pull-tgt-20170302' into staging  (Peter Maydell)
Queued sparc patch

# gpg: Signature made Wed 01 Mar 2017 19:53:21 GMT
# gpg:                using RSA key 0xAD1270CC4DD0279B
# gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
# gpg:                 aka "Richard Henderson <rth@redhat.com>"
# gpg:                 aka "Richard Henderson <rth@twiddle.net>"
# Primary key fingerprint: 9CB1 8DDA F8E8 49AD 2AFC 16A4 AD12 70CC 4DD0 279B

* remotes/rth/tags/pull-tgt-20170302:
  target/sparc: Restore ldstub of odd asis

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-03-02  Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170301' into staging  (Peter Maydell)
ppc patch queue for 2017-03-01

I was hoping to get this pull request squeezed in before the soft freeze, but I ran into some difficulties during testing. Everything here was at least posted before the soft freeze, so I'm hoping we can still merge it for 2.9.

The biggest things here are:
  * Cleanups to handling of hashed page tables, that will make adding support for the POWER9 MMU easier
  * Cleanups to the XICS interrupt controller that will make implementing the powernv machine easier
  * TCG implementation of extended overflow and carry handling for POWER9

It also includes:
  * Increasing the CPU limit for pseries to 1024 vCPUs
  * Generating proper OF node names in qemu (making hotplug and coldplug logic closer together)

# gpg: Signature made Wed 01 Mar 2017 04:43:06 GMT
# gpg:                using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-2.9-20170301: (50 commits)
  Add PowerPC 32-bit guest memory dump support
  ppc/xics: rename 'ICPState *' variables to 'icp'
  ppc/xics: move InterruptStatsProvider to the sPAPR machine
  ppc/xics: move ics-simple post_load under the machine
  ppc/xics: remove the XICSState classes
  ppc/xics: export the XICS init routines
  ppc/xics: move the ICP array under the sPAPR machine
  ppc/xics: register the reset handler of ICP objects
  ppc/xics: simplify spapr_dt_xics() interface
  ppc/xics: use the QOM interface to grab an ICP
  ppc/xics: move the cpu_setup() handler under the ICPState class
  ppc/xics: simplify the cpu_setup() handler
  ppc/xics: move kernel_xics_fd out of KVMXICSState
  ppc/xics: extend the QOM interface to handle ICPs
  ppc/xics: remove the XICS list of ICS
  ppc/xics: register the reset handler of ICS objects
  ppc/xics: remove xics_find_source()
  ppc/xics: use the QOM interface to resend irqs
  ppc/xics: use the QOM interface to get irqs
  ppc/xics: use the QOM interface under the sPAPR machine
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-03-02  Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging  (Peter Maydell)
x86 queue, 2017-02-27

"-cpu max" and query-cpu-model-expansion support for x86. This should be the last x86 pull request before 2.9 soft freeze.

# gpg: Signature made Mon 27 Feb 2017 16:24:15 GMT
# gpg:                using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/x86-pull-request:
  i386: Improve query-cpu-model-expansion full mode
  i386: Implement query-cpu-model-expansion QMP command
  i386: Define static "base" CPU model
  i386: Don't set CPUClass::cpu_def on "max" model
  i386: Make "max" model not use any host CPUID info on TCG
  i386: Create "max" CPU model
  qapi-schema: Comment about full expansion of non-migration-safe models
  i386: Reorganize and document CPUID initialization steps
  i386: Rename X86CPU::host_features to X86CPU::max_features
  i386: Add ordering field to CPUClass
  i386: Unset cannot_destroy_with_object_finalize_yet on "host" model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-03-02  target/sparc: Restore ldstub of odd asis  (Richard Henderson)
Fixes the booting of ss20 roms.

Cc: qemu-stable@nongnu.org
Reported-by: Michael Russo <mike@papersolve.com>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-03-01  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170228-1' into staging  (Peter Maydell)
target-arm queue:
  * raspi2: add gpio controller and sdhost controller, with the wiring so the guest can switch which controller the SD card is attached to (this is sufficient to get raspbian kernels to boot)
  * GICv3: support state save/restore from KVM
  * update Linux headers to 4.11
  * refactor and QOMify the ARMv7M container object

# gpg: Signature made Tue 28 Feb 2017 17:11:49 GMT
# gpg:                using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>"
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20170228-1: (21 commits)
  bcm2835: add sdhost and gpio controllers
  bcm2835_gpio: add bcm2835 gpio controller
  hw/sd: add card-reparenting function
  qdev: Have qdev_set_parent_bus() handle devices already on a bus
  hw/intc/arm_gicv3_kvm: Reset GICv3 cpu interface registers
  target-arm: Add GICv3CPUState in CPUARMState struct
  hw/intc/arm_gicv3_kvm: Implement get/put functions
  hw/intc/arm_gicv3_kvm: Add ICC_SRE_EL1 register to vmstate
  update Linux headers to 4.11
  update-linux-headers: update for 4.11
  stm32f205: Rename 'nvic' local to 'armv7m'
  stm32f205: Create armv7m object without using armv7m_init()
  armv7m: Split systick out from NVIC
  armv7m: Don't put core v7M devices under CONFIG_STELLARIS
  armv7m: Make bitband device take the address space to access
  armv7m: Make NVIC expose a memory region rather than mapping itself
  armv7m: Make ARMv7M object take memory region link
  armv7m: Use QOMified armv7m object in armv7m_init()
  armv7m: QOMify the armv7m container
  armv7m: Move NVICState struct definition into header
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-03-01  Add PowerPC 32-bit guest memory dump support  (Mike Nawrocki)
This patch extends support for the `dump-guest-memory` command to the 32-bit PowerPC architecture. It relies on the assumption that a 64-bit guest will not dump a 32-bit core file (and vice versa). [dwg: I suspect this patch won't cover all cases, in particular a 32-bit machine type on a 64-bit qemu build. However, it does strictly more than what we had before, so might as well apply as a starting point] Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: add mcrxrx instruction  (Nikunj A Dadhania)
mcrxrx: Move to CR from XER Extended

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: add ov32 flag in divide operations  (Nikunj A Dadhania)
Add helper_div_compute_ov() in the int_helper for updating the overflow flags.

For Divide Word: SO, OV, and OV32 reflect overflow of the 32-bit result.
For Divide DoubleWord: SO, OV, and OV32 reflect overflow of the 64-bit result.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
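A plain-C sketch of the flag update rule being described, independent of QEMU's helper plumbing (the struct and function names are illustrative):

    /* Sketch only. */
    #include <stdbool.h>
    #include <stdint.h>

    struct xer_flags {
        uint32_t so;    /* summary overflow, sticky */
        uint32_t ov;    /* overflow */
        uint32_t ov32;  /* 32-bit overflow (ISA v3.00) */
    };

    /* 'oe' is the instruction's OE bit; 'overflowed' is the divide-specific
     * condition (division by zero, or most-negative value / -1 for signed
     * divides). */
    static void div_update_overflow(struct xer_flags *f, bool oe, bool overflowed)
    {
        if (!oe) {
            return;
        }
        if (overflowed) {
            f->so = f->ov = f->ov32 = 1;   /* SO is sticky: only ever set */
        } else {
            f->ov = f->ov32 = 0;           /* SO left untouched */
        }
    }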
2017-03-01  target/ppc: add ov32 flag for multiply low insns  (Nikunj A Dadhania)
For Multiply Word: SO, OV, and OV32 reflect overflow of the 32-bit result.
For Multiply DoubleWord: SO, OV, and OV32 reflect overflow of the 64-bit result.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: use tcg ops for neg instruction  (Nikunj A Dadhania)
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: update overflow flags for add/sub  (Nikunj A Dadhania)
* SO and OV reflect overflow of the 64-bit result in 64-bit mode and overflow of the low-order 32-bit result in 32-bit mode
* OV32 reflects overflow of the low-order 32-bit result independent of the mode

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: update ca32 in arithmetic subtract  (Nikunj A Dadhania)
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: update ca32 in arithmetic add  (Nikunj A Dadhania)
Add a routine to compute ca32: gen_op_arith_compute_ca32. In 64-bit mode, use the new ca32 computation routine; in 32-bit mode, CA and CA32 have the same value.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
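The low-order carry can be derived without widening the arithmetic. A plain-C sketch of the identity involved (the in-tree routine is expressed as TCG ops):

    /* Carry out of the low 32 bits of a + b: the carry into bit 32 is
     * bit 32 of (a ^ b ^ result). */
    #include <stdint.h>

    static inline uint32_t addition_ca32(uint64_t a, uint64_t b, uint64_t result)
    {
        return (uint32_t)((a ^ b ^ result) >> 32) & 1;
    }

For a subtraction implemented as a + ~b + 1, the same identity applies with ~b in place of b.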
2017-03-01  target/ppc: support for 32-bit carry and overflow  (Nikunj A Dadhania)
POWER ISA 3.0 adds CA32 and OV32 status in 64-bit mode. Add the flags and corresponding defines. Moreover, CA32 is updated when CA is updated and OV32 is updated when OV is updated.

Arithmetic instructions:

* Addition and Subtraction:
  addic, addic., subfic, addc, subfc, adde, subfe, addme, subfme, addze, and subfze always update CA and CA32.
  => CA reflects the carry out of bit 0 in 64-bit mode and out of bit 32 in 32-bit mode.
  => CA32 reflects the carry out of bit 32 independent of the mode.
  => SO and OV reflect overflow of the 64-bit result in 64-bit mode and overflow of the low-order 32-bit result in 32-bit mode.
  => OV32 reflects overflow of the low-order 32-bit result independent of the mode.

* Multiply Low and Divide:
  For mulld, divd, divde, divdu and divdeu: SO, OV, and OV32 reflect overflow of the 64-bit result.
  For mullw, divw, divwe, divwu and divweu: SO, OV, and OV32 reflect overflow of the 32-bit result.

* Negate with OE=1 (nego):
  In 64-bit mode, if register RA contains 0x8000_0000_0000_0000, OV and OV32 are set to 1.
  In 32-bit mode, if register RA contains 0x8000_0000, OV and OV32 are set to 1.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: Correct SDR1 masking  (David Gibson)
SDR_64_HTABORG, which indicates the bits of the SDR1 register to use for the base of a 64-bit machine's hashed page table (HPT) isn't correct. It includes the top 46 bits of the register, but in fact the top 4 bits must be zero (according to the ISA v2.07). No actual implementation has supported close to 2^60 bytes of physical address space, so it's kind of irrelevant, but we might as well correct this. In addition, although we checked for bad size values in SDR1, we never reported an error if entirely invalid bits were set there. Add this check to ppc_store_sdr1(). Reported-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
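In terms of the mask itself, the description above amounts to something like the following; the values are derived from the text (top 46 bits, then clearing the top 4), not copied from the patch, and the _OLD/_NEW names are only for the sketch:

    /* Old mask: the top 46 bits of SDR1. */
    #define SDR_64_HTABORG_OLD  0xFFFFFFFFFFFC0000ULL
    /* Corrected mask: the top 4 bits must be zero per ISA v2.07. */
    #define SDR_64_HTABORG_NEW  0x0FFFFFFFFFFC0000ULL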
2017-03-01  target/ppc: Remove the function ppc_hash64_set_sdr1()  (Suraj Jitindar Singh)
The function ppc_hash64_set_sdr1 basically checked the htabsize and set an error if it was too big, otherwise it just stored the value in SPR_SDR1. Given that the only function which calls ppc_hash64_set_sdr1() is ppc_store_sdr1(), why not handle the checking in ppc_store_sdr1() to avoid the extra function call. Note that ppc_store_sdr1() already stores the value in SPR_SDR1 anyway, so we were doing it twice. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> [dwg: Remove unnecessary error temporary] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: Manage external HPT via virtual hypervisor  (David Gibson)
The pseries machine type implements the behaviour of a PAPR compliant hypervisor, without actually executing such a hypervisor on the virtual CPU. To do this we need some hooks in the CPU code to make hypervisor facilities get redirected to the machine instead of emulated internally. For hypercalls this is managed through the cpu->vhyp field, which points to a QOM interface with a method implementing the hypercall. For the hashed page table (HPT) - also a hypervisor resource - we use an older hack. CPUPPCState has an 'external_htab' field which when non-NULL indicates that the HPT is stored in qemu memory, rather than within the guest's address space. For consistency - and to make some future extensions easier - this merges the external HPT mechanism into the vhyp mechanism. Methods are added to vhyp for the basic operations the core hash MMU code needs: map_hptes() and unmap_hptes() for reading the HPT, store_hpte() for updating it and hpt_mask() to retrieve its size. To match this, the pseries machine now sets these vhyp fields in its existing vhyp class, rather than reaching into the cpu object to set the external_htab field. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
2017-03-01  target/ppc: Eliminate htab_base and htab_mask variables  (David Gibson)
CPUPPCState includes fields htab_base and htab_mask which store the base address (GPA) and size (as a mask) of the guest's hashed page table (HPT). These are set when the SDR1 register is updated.

Keeping these in sync with SDR1 is actually a little bit fiddly, and probably not useful for performance, since keeping them expands the size of CPUPPCState. It also makes some upcoming changes harder to implement. This patch removes these fields, in favour of calculating them directly from the SDR1 contents when necessary.

This does make a change to the behaviour of attempting to write a bad value (invalid HPT size) to SDR1 with an mtspr instruction. Previously, the bad value would be stored in SDR1 and could be retrieved with a later mfspr, but the HPT size as used by the softmmu would be clamped to the allowed values. Now, writing a bad value is treated as a no-op. An error message is printed in both new and old versions.

I'm not sure which behaviour, if either, matches real hardware. I don't think it matters that much, since it's pretty clear that if an OS writes a bad value to SDR1, it's not going to boot.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
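Computed on demand, the two values might be derived along these lines; this is a sketch based on the ISA layout of SDR1 (HTABORG for the base, HTABSIZE encoding a table of 2^(18 + HTABSIZE) bytes), not on the patch itself, and the SDR_64_* masks stand for the usual SDR1 field masks:

    /* Sketch only. */
    static hwaddr ppc_hash64_hpt_base(CPUPPCState *env)
    {
        return env->spr[SPR_SDR1] & SDR_64_HTABORG;
    }

    static hwaddr ppc_hash64_hpt_mask(CPUPPCState *env)
    {
        uint64_t htabsize = env->spr[SPR_SDR1] & SDR_64_HTABSIZE;

        /* The table is 2^(18 + HTABSIZE) bytes; the hash mask is expressed
         * in 128-byte (2^7) PTEG units. */
        return (1ULL << (htabsize + 18 - 7)) - 1;
    }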
2017-03-01  target/ppc: Cleanup HPTE accessors for 64-bit hash MMU  (David Gibson)
Accesses to the hashed page table (HPT) are complicated by the fact that the HPT could be in one of three places:
  1) Within guest memory - when we're emulating a full guest CPU at the hardware level (e.g. powernv, mac99, g3beige)
  2) Within qemu, but outside guest memory - when we're emulating user and supervisor instructions within TCG, but instead of emulating the CPU's hypervisor mode, we just emulate a hypervisor's behaviour (pseries in TCG or KVM-PR)
  3) Within the host kernel - a pseries machine using KVM-HV acceleration. Mostly accesses to the HPT are handled by KVM, but there are a few cases where qemu needs to access it via a special fd for the purpose.

In order to batch accesses to the fd in case (3), we use a somewhat awkward ppc_hash64_start_access() / ppc_hash64_stop_access() pair, which for case (3) reads / releases several HPTEs from the kernel as a batch (usually a whole PTEG). For cases (1) & (2) it just returns an address value. The actual HPTE load helpers then need to interpret the returned token differently in the 3 cases.

This patch keeps the same basic structure, but simplifies the details. First, start_access() / stop_access() are renamed to map_hptes() and unmap_hptes() to make their operation more obvious. Second, map_hptes() now always returns a qemu pointer, which can always be used in the same way by the load_hpte() helpers. In case (1) it comes from address_space_map(), in case (2) directly from qemu's HPT buffer, and in case (3) from a temporary buffer read from the KVM fd.

While we're at it, make things a bit more consistent in terms of types and variable names: avoid variables named 'index' (it shadows index(3) which can lead to confusing results), use 'hwaddr ptex' for HPTE indices and uint64_t for each of the HPTE words, and use ptex throughout the call stack instead of pte_offset in some places (we still need that at the bottom layer, but nowhere else).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: SDR1 is a hypervisor resource  (David Gibson)
At present the SDR1 register - the base of the system's hashed page table (HPT) - is represented as an SPR with supervisor read and write permission. However, on CPUs which have a hypervisor mode, the SDR1 is a hypervisor only resource. Change the permission checking on the SPR to reflect this. Now that this is done, we don't need to check for an external HPT executing mtsdr1: an external HPT only applies when we're emulating the behaviour of a hypervisor, rather than modelling the CPU's hypervisor mode internally, so if we're permitted to execute mtsdr1, we don't have an external HPT. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
2017-03-01  target/ppc: Merge cpu_ppc_set_vhyp() with cpu_ppc_set_papr()  (David Gibson)
cpu_ppc_set_papr() sets up various aspects of CPU state for use with PAPR paravirtualized guests. However, it doesn't set the virtual hypervisor, so callers must also call cpu_ppc_set_vhyp() so that PAPR hypercalls are handled properly. This is a bit silly, so fold setting the virtual hypervisor into cpu_ppc_set_papr(). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
2017-03-01  target/ppc: Fix KVM-HV HPTE accessors  (David Gibson)
When a 'pseries' guest is running with KVM-HV, the guest's hashed page table (HPT) is stored within the host kernel, so it is not directly accessible to qemu. Most of the time, qemu doesn't need to access it: we're using the hardware MMU, and KVM itself implements the guest hypercalls for manipulating the HPT.

However, qemu does need access to the in-KVM HPT to implement get_phys_page_debug() for the benefit of the gdbstub, and maybe for other debug operations. To allow this, 7c43bca "target-ppc: Fix page table lookup with kvm enabled" added kvmppc_hash64_read_pteg() to target/ppc/kvm.c to read in a batch of HPTEs from the KVM table. Unfortunately, there are a couple of problems with this:

First, the name of the function implies it always reads a whole PTEG from the HPT, but in fact in some cases it's used to grab individual HPTEs (which ends up pulling 8 HPTEs, not aligned to a PTEG, from the kernel).

Second, and more importantly, the code to read the HPTEs from KVM is simply wrong, in general. The data from the fd that KVM provides is designed mostly for compact migration rather than this sort of one-off access, and so needs some decoding for this purpose. The current code will work in some cases, but if there are invalid HPTEs then it will not get sane results.

This patch rewrites the HPTE reading function to have a simpler interface (just read n HPTEs into a caller-provided buffer), and to correctly decode the stream from the kernel. For consistency we also clean up the similar function for altering HPTEs within KVM (introduced in c138593 "target-ppc: Update ppc_hash64_store_hpte to support updating in-kernel htab").

Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: introduce helper_update_ov_legacy  (Nikunj A Dadhania)
Removes duplicate code and will be useful for consolidating flags.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: optimize gen_write_xer()  (Nikunj A Dadhania)
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-01  target/ppc: move cpu_[read, write]_xer to cpu.c  (Nikunj A Dadhania)
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-02-28  target-arm: Add GICv3CPUState in CPUARMState struct  (Vijaya Kumar K)
Add a gicv3state void pointer to the CPUARMState struct to store the GICv3CPUState. In use cases like CPU reset, we need to reset the GICv3CPUState of the CPU, and in such a scenario this pointer comes in handy.

Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1487850673-26455-5-git-send-email-vijay.kilari@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28  Merge remote-tracking branch 'remotes/mjt/tags/trivial-patches-fetch' into staging  (Peter Maydell)
trivial patches for 2017-02-28

# gpg: Signature made Tue 28 Feb 2017 06:43:55 GMT
# gpg:                using RSA key 0x701B4F6B1A693E59
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>"
# gpg:                 aka "Michael Tokarev <mjt@corpit.ru>"
# gpg:                 aka "Michael Tokarev <mjt@debian.org>"
# Primary key fingerprint: 6EE1 95D1 886E 8FFB 810D 4324 457C E0A0 8044 65C5
#      Subkey fingerprint: 7B73 BAD6 8BE7 A2C2 8931 4B22 701B 4F6B 1A69 3E59

* remotes/mjt/tags/trivial-patches-fetch:
  syscall: fixed mincore(2) not failing with ENOMEM
  hw/acpi/tco.c: fix tco timer stop
  lm32: milkymist-tmu2: fix a third integer overflow
  qemu-options.hx: add missing id=chr0 chardev argument in vhost-user example
  Update copyright year
  tests/prom-env: Enable the test for the sun4u machine, too
  cadence_gem: Remove unused parameter debug message
  register: fix incorrect read mask
  ide: remove undefined behavior in ide-test
  CODING_STYLE: Mention preferred comment form
  hw/core/register: Mark the device with cannot_instantiate_with_device_add_yet
  hw/core/or-irq: Mark the device with cannot_instantiate_with_device_add_yet
  softfloat: Use correct type in float64_to_uint64_round_to_zero()
  target/s390x: Fix typo

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170228' into staging  (Peter Maydell)
target-arm queue:
  * raspi2: implement RNG module
  * raspi2: implement new SD card controller (but don't wire it up)
  * sdhci: bugfixes for block transfers
  * virt: fix cpu object reference leak
  * Add missing fp_access_check() to aarch64 crypto instructions
  * cputlb: Don't assume do_unassigned_access() never returns
  * virt: Add a user option to disallow ITS instantiation
  * i.MX timers: fix reset handling
  * ARMv7M NVIC: rewrite to fix broken priority handling and masking
  * exynos: Fix proper mapping of CPUs by providing real cluster ID
  * exynos: Fix Linux kernel division by zero for PLLs

# gpg: Signature made Tue 28 Feb 2017 12:40:51 GMT
# gpg:                using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>"
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20170228: (27 commits)
  hw/arm/exynos: Fix proper mapping of CPUs by providing real cluster ID
  hw/arm/exynos: Fix Linux kernel division by zero for PLLs
  bcm2835_sdhost: add bcm2835 sdhost controller
  armv7m: Allow SHCSR writes to change pending and active bits
  armv7m: Raise correct kind of UsageFault for attempts to execute ARM code
  armv7m: Check exception return consistency
  armv7m: Extract "exception taken" code into functions
  armv7m: VECTCLRACTIVE and VECTRESET are UNPREDICTABLE
  armv7m: Simpler and faster exception start
  armv7m: Remove unused armv7m_nvic_acknowledge_irq() return value
  armv7m: Escalate exceptions to HardFault if necessary
  arm: gic: Remove references to NVIC
  armv7m: Fix condition check for taking exceptions
  armv7m: Rewrite NVIC to not use any GIC code
  armv7m: Implement reading and writing of PRIGROUP
  armv7m: Rename nvic_state to NVICState
  ARM i.MX timers: fix reset handling
  hw/arm/virt: Add a user option to disallow ITS instantiation
  cputlb: Don't assume do_unassigned_access() never returns
  Add missing fp_access_check() to aarch64 crypto instructions
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28  Merge remote-tracking branch 'remotes/rth/tags/pull-axp-20170228' into staging  (Peter Maydell)
Enable MTTCG for Alpha guest

# gpg: Signature made Tue 28 Feb 2017 00:43:17 GMT
# gpg:                using RSA key 0xAD1270CC4DD0279B
# gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
# gpg:                 aka "Richard Henderson <rth@redhat.com>"
# gpg:                 aka "Richard Henderson <rth@twiddle.net>"
# Primary key fingerprint: 9CB1 8DDA F8E8 49AD 2AFC 16A4 AD12 70CC 4DD0 279B

* remotes/rth/tags/pull-axp-20170228:
  target/alpha: Enable MTTCG by default

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28  armv7m: Raise correct kind of UsageFault for attempts to execute ARM code  (Peter Maydell)
M profile doesn't implement ARM, and the architecturally required behaviour for attempts to execute with the Thumb bit clear is to generate a UsageFault with the CFSR INVSTATE bit set. We were incorrectly implementing this as generating an UNDEFINSTR UsageFault; fix this. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28  armv7m: Check exception return consistency  (Peter Maydell)
Implement the exception return consistency checks described in the v7M pseudocode ExceptionReturn(). Inspired by a patch from Michael Davidsaver's series, but this is a reimplementation from scratch based on the ARM ARM pseudocode. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28  armv7m: Extract "exception taken" code into functions  (Peter Maydell)
Extract the code from the tail end of arm_v7m_do_interrupt() which enters the exception handler into a pair of utility functions v7m_exception_taken() and v7m_push_stack(), which correspond roughly to the pseudocode PushStack() and ExceptionTaken(). This also requires us to move the arm_v7m_load_vector() utility routine up so we can call it. Handling illegal exception returns has some cases where we want to take a UsageFault either on an existing stack frame or with a new stack frame but with a specific LR value, so we want to be able to call these without having to go via arm_v7m_cpu_do_interrupt(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28  armv7m: Simpler and faster exception start  (Michael Davidsaver)
All the places in armv7m_cpu_do_interrupt() which pend an exception in the NVIC are doing so for synchronous exceptions. We know that we will always take some exception in this case, so we can just acknowledge it immediately, rather than returning and then immediately being called again because the NVIC has raised its outbound IRQ line. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> [PMM: tweaked commit message; added DEBUG to the set of exceptions we handle immediately, since it is synchronous when it results from the BKPT instruction] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28  armv7m: Remove unused armv7m_nvic_acknowledge_irq() return value  (Peter Maydell)
Having armv7m_nvic_acknowledge_irq() return the new value of env->v7m.exception and its one caller assign the return value back to env->v7m.exception is pointless. Just make the return type void instead. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28  armv7m: Escalate exceptions to HardFault if necessary  (Michael Davidsaver)
The v7M exception architecture requires that if a synchronous exception cannot be taken immediately (because it is disabled or at too low a priority) then it should be escalated to HardFault (and the HardFault exception is then taken). Implement this escalation logic. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> [PMM: extracted from another patch] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
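A sketch of the escalation rule; the helper names are illustrative placeholders for NVIC state lookups, and only the rule itself is taken from the description above:

    /* Sketch only: exception_enabled(), exception_prio(),
     * current_execution_prio() and nvic_set_pending() are placeholders. */
    static void nvic_pend_maybe_escalate(NVICState *s, int irq)
    {
        /* A synchronous exception that is disabled, or whose priority is not
         * higher (numerically lower) than the current execution priority,
         * cannot be taken now: escalate it to HardFault instead. */
        if (!exception_enabled(s, irq)
            || exception_prio(s, irq) >= current_execution_prio(s)) {
            irq = ARMV7M_EXCP_HARD;
        }
        nvic_set_pending(s, irq);
    }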
2017-02-28  armv7m: Fix condition check for taking exceptions  (Peter Maydell)
The M profile condition for when we can take a pending exception or interrupt is not the same as that for A/R profile. The code originally copied from the A/R profile version of the cpu_exec_interrupt function only worked by chance for the very simple case of exceptions being masked by PRIMASK. Replace it with a call to a function in the NVIC code that correctly compares the priority of the pending exception against the current execution priority of the CPU. [Michael Davidsaver's patchset had a patch to do something similar but the implementation ended up being a rewrite.] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
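The comparison boils down to something like the following sketch, where lower numeric values mean higher priority and the helper names are illustrative:

    /* Sketch only. */
    static bool nvic_can_take_pending_exception(NVICState *s)
    {
        /* Take the pending exception only if its group priority is strictly
         * higher (numerically lower) than the CPU's current execution
         * priority, which accounts for PRIMASK, FAULTMASK, BASEPRI and any
         * currently active exceptions. */
        return nvic_pending_prio(s) < nvic_exec_prio(s);
    }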
2017-02-28  Add missing fp_access_check() to aarch64 crypto instructions  (Nick Reilly)
The aarch64 crypto instructions for AES and SHA are missing the check for whether the FPU is enabled.

Signed-off-by: Nick Reilly <nreilly@blackberry.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-28  target/s390x: Fix typo  (Stefan Weil)
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2017-02-28  target/alpha: Enable MTTCG by default  (Richard Henderson)
Alpha has a weak memory ordering and issues all of the required barriers.

Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-02-27  linux-user: Add signal handling support for x86_64  (Pranith Kumar)
Note that x86_64 has only _rt signal handlers. This implementation attempts to share code with the x86_32 implementation. CC: Laurent Vivier <laurent@vivier.eu> Signed-off-by: Allan Wirth <awirth@akamai.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <20170226165345.8757-1-bobby.prani@gmail.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>