path: root/target-ppc/mmu-hash64.c
2016-09-23  target-ppc: tlbie/tlbivax should have global effect  (Nikunj A Dadhania)
tlbie (BookS) and tlbivax (BookE) plus the H_CALLs (pseries) should have a global effect. Introduces the TLB_NEED_GLOBAL_FLUSH flag. During the lazy tlb flush, after taking care of pending local flushes, check whether a broadcast flush (at a context synchronizing event such as ptesync/tlbsync) is needed. Depending on the bits set in tlb_need_flush, the tlb is flushed on other cpus if needed and the flags are cleared. Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [dwg: Use 'true' instead of '1' for call to check_tlb_flush()] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-09-23  target-ppc: add TLB_NEED_LOCAL_FLUSH flag  (Nikunj A Dadhania)
Introduces a bit flag in CPUPPCState::tlb_need_flush: TLB_NEED_LOCAL_FLUSH (0x1) - flush the local tlb. This indicates a pending local tlb flush (isync instructions, interrupts, ...). Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
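For illustration, below is a minimal, self-contained C sketch of the lazy-flush scheme described by the two commits above. The flag names follow the commit messages, but the structure, the GLOBAL flag value and the flush actions are placeholders rather than the actual QEMU code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Placeholder flag values; the commit messages only fix LOCAL at 0x1. */
    #define TLB_NEED_LOCAL_FLUSH   0x1
    #define TLB_NEED_GLOBAL_FLUSH  0x2

    struct toy_env { unsigned tlb_need_flush; };

    /* Called at context-synchronizing points (isync, interrupts, rfi, and,
     * for the global case, ptesync/tlbsync). */
    static void check_tlb_flush(struct toy_env *env, bool global)
    {
        if (env->tlb_need_flush & TLB_NEED_LOCAL_FLUSH) {
            env->tlb_need_flush &= ~TLB_NEED_LOCAL_FLUSH;
            printf("flush local TLB\n");
        }
        if (global && (env->tlb_need_flush & TLB_NEED_GLOBAL_FLUSH)) {
            env->tlb_need_flush &= ~TLB_NEED_GLOBAL_FLUSH;
            printf("broadcast TLB flush to other CPUs\n");
        }
    }

    int main(void)
    {
        struct toy_env env = { TLB_NEED_LOCAL_FLUSH | TLB_NEED_GLOBAL_FLUSH };
        check_tlb_flush(&env, false);  /* e.g. isync: local flush only */
        check_tlb_flush(&env, true);   /* e.g. ptesync: broadcast as well */
        return 0;
    }

The point of the split is that a purely local event only pays for a local flush, while a ptesync/tlbsync additionally settles any pending broadcast invalidations.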
2016-09-07  ppc: Fix source NIP on SLB related interrupts  (Benjamin Herrenschmidt)
We need to pass it to the raise helper since we don't update it before the calls. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-07-18  target-ppc: fix left shift overflow in hpte_page_shift  (Paolo Bonzini)
ps->pte_enc is a 32-bit value, which is shifted left and then compared to a 64-bit value. It needs a cast before the shift. Reported by Coverity. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
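A small standalone example of the class of bug fixed here; the values are made up, only the shape of the expression matters:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t pte_enc = 0x10;   /* 32-bit encoding field (made-up value) */
        int shift = 30;            /* shift amount large enough to overflow */

        /* Buggy: the shift happens in 32-bit arithmetic, so the high bits
         * are lost before the result is widened for the 64-bit comparison. */
        uint64_t lost  = pte_enc << shift;
        /* Fixed: cast to 64 bits first, then shift. */
        uint64_t fixed = (uint64_t)pte_enc << shift;

        printf("without cast: 0x%llx, with cast: 0x%llx\n",
               (unsigned long long)lost, (unsigned long long)fixed);
        return 0;
    }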
2016-07-18  ppc/mmu-hash64: Remove duplicated #include statement  (Thomas Huth)
No need to include error-report.h twice here. Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-07-05  ppc/hash64: Fix support for LPCR:ISL  (Benjamin Herrenschmidt)
When LPCR:ISL is set, we need to ignore the segment page size and essentially treat all pages as coming from a 4K segment. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [dwg: Adjusted for differences in my version of the prereq patches] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-07-05  ppc/hash64: Add proper real mode translation support  (Benjamin Herrenschmidt)
This adds proper support for translating real mode addresses based on the combination of HV and LPCR bits. This handles the HRMOR offset for hypervisor real mode, and both RMA and VRMA modes for guest real mode. PAPR mode adjusts the offsets appropriately to match the RMA used in TCG, but we need to limit it to the maximum supported by the implementation (16G). This includes some fixes by Cédric Le Goater <clg@kaod.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [dwg: Adjusted for differences in my version of the prereq patches] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-07-05  target-ppc: Return page shift from PTEG search  (David Gibson)
ppc_hash64_pteg_search() now decodes a PTE's page size encoding, which it didn't previously do. This means we're now double decoding the page size because we check it in the fault path after ppc_hash64_htab_lookup() returns. To avoid this duplication, have ppc_hash64_pteg_search() and ppc_hash64_htab_lookup() return the page size from the PTE and use that in the callers instead of decoding again. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2016-07-05  target-ppc: Simplify HPTE matching  (David Gibson)
ppc_hash64_pteg_search() explicitly checks each HPTE's VALID and SECONDARY bits, then uses the HPTE64_V_COMPARE() macro to check the B field and AVPN. However, a small tweak to HPTE64_V_COMPARE() means we can check all of these bits at once with a suitable ptem value. So, consolidate all the comparisons for simplicity. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
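A rough sketch of the consolidated comparison; the constants and the mask are illustrative stand-ins, the real HPTE64_* definitions live in mmu-hash64.h:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative bit positions for word 0 of a hash PTE. */
    #define HPTE64_V_VALID      0x0000000000000001ULL
    #define HPTE64_V_SECONDARY  0x0000000000000002ULL

    /* Compare B, AVPN, SECONDARY and VALID in a single XOR by masking out
     * the bits we deliberately ignore (the mask value is illustrative). */
    #define HPTE64_V_COMPARE(pte0, ptem) \
        (!(((pte0) ^ (ptem)) & 0xffffffffffffff83ULL))

    bool hpte_matches(uint64_t pte0, uint64_t avpn_and_b, bool secondary)
    {
        /* Build the target value once; the B (segment size) field is assumed
         * to be part of avpn_and_b. One comparison per candidate HPTE. */
        uint64_t ptem = avpn_and_b | HPTE64_V_VALID
                      | (secondary ? HPTE64_V_SECONDARY : 0);
        return HPTE64_V_COMPARE(pte0, ptem);
    }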
2016-07-05  target-ppc: Correct page size decoding in ppc_hash64_pteg_search()  (David Gibson)
The architecture specifies that when searching a PTEG for PTEs, entries with a page size encoding that's not valid for the current segment should be ignored, continuing the search. The current implementation does this with ppc_hash64_pte_size_decode() which is a very incomplete implementation of this check. We already have code to do a full and correct page size decode in hpte_page_shift(). This patch moves hpte_page_shift() so it can be used in ppc_hash64_pteg_search() and adjusts the latter's parameters to include a full SLBE instead of just a segment page shift. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2016-07-05  ppc: simplify ppc_hash64_hpte_page_shift_noslb()  (Cédric Le Goater)
The segment page shift parameter is never used. Let's remove it. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-07-01  ppc: Fix 64K pages support in full emulation  (Benjamin Herrenschmidt)
We were always advertising only 4K & 16M. Additionally the code wasn't properly matching the page size with the PTE content, which meant we could potentially hit an incorrect PTE if the guest used multiple sizes. Finally, honor the CPU capabilities when decoding the size from the SLB so we don't try to use 64K pages on 970. This still doesn't add support for MPSS (Multiple Page Sizes per Segment). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [clg: fixed checkpatch.pl errors; commits 61a36c9b5a12 and 1114e712c998 reworked the hpte insertion/removal code in hw/ppc/spapr_hcall.c, so the hunks modifying those areas were removed. ] Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-07-01  ppc: Use a helper to filter writes to LPCR  (Benjamin Herrenschmidt)
This handles filtering bits based on what is implemented by a given architecture version. We also use it to copy to LPCR some of the relevant 970 HID4 bits. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [clg: fixed checkpatch.pl errors ] Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-06-23  ppc: Fix generation of ISI/DSI vs. HV mode  (Benjamin Herrenschmidt)
Under some circumstances, we need to direct ISI and DSI interrupts at the hypervisor, turning them into HISI/HDSI, and using different SPRs (HDSISR and HDAR) depending on the combination of MSR_DR and the corresponding VPM bits in LPCR. This moves part of the code into helpers that are fixed to select the right exception type and registers. On pre-P7 processors, LPCR is 0 which provides the old behaviour of directing the interrupts at the supervisor. Thanks to Andrei Warkentin for finding a bug when HV=1 Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [clg: Merged a fix on POWERPC_EXCP_HDSI fixing the condition on msr_hv, from Andrei Warkentin <andrey.warkentin@gmail.com> ] Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
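A simplified, self-contained sketch of the selection logic described above for the instruction-fetch case; the parameter names stand in for MSR_IR, MSR_HV and the LPCR VPM0/VPM1 bits, and the real helpers in mmu-hash64.c also pick HDSISR/HDAR for the data-side case:

    #include <stdbool.h>
    #include <stdio.h>

    enum excp { EXCP_ISI, EXCP_HISI };

    /* Pick the interrupt type for an instruction fetch fault: when the
     * relevant VPM bit directs such faults to the hypervisor and we are
     * not already in HV mode, raise HISI instead of ISI. */
    enum excp pick_isi(bool msr_ir, bool msr_hv, bool lpcr_vpm0, bool lpcr_vpm1)
    {
        bool vpm = msr_ir ? lpcr_vpm1 : lpcr_vpm0;
        return (vpm && !msr_hv) ? EXCP_HISI : EXCP_ISI;
    }

    int main(void)
    {
        /* Pre-POWER7 behaviour: LPCR is 0, so we always get a plain ISI. */
        printf("%d\n", pick_isi(true, false, false, false));  /* EXCP_ISI */
        /* VPM1 set, translation on, not in HV mode: goes to the hypervisor. */
        printf("%d\n", pick_isi(true, false, false, true));   /* EXCP_HISI */
        return 0;
    }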
2016-06-07  ppc: Add missing slbfee. instruction on ppc64 BookS processors  (Benjamin Herrenschmidt)
It is used to look up SLB entries by address; for some reason it was missing. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-05-30  ppc: Do some batching of TCG tlb flushes  (Benjamin Herrenschmidt)
On ppc64 especially, we flush the tlb on any slbie or tlbie instruction. However, those instructions often come in bursts of 3 or more (a context switch will, for example, favor a series of slbie's over an slbia if the SLB has fewer than a certain number of entries in it; tlbie's can happen in a series; and with PAPR, H_BULK_REMOVE can remove up to 4 entries at a time). Doing a tlb_flush() each time is a waste of time. We end up doing a memset of the whole TLB, reloading it for the next instruction, memset'ing again, etc... Those instructions don't have to take effect immediately. For slbie, they can wait for the next context synchronizing event. For tlbie, the next tlbsync. This implements batching by keeping a flag that indicates that we have a TLB in need of flushing. We check it on interrupts, rfi's, isync's and tlbsync and flush the TLB if needed. This reduces the number of tlb_flush() calls on a boot to the Ubuntu installer's first dialog screen from roughly 360K down to 36K. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [clg: added a 'CPUPPCState *' variable in h_remove() and h_bulk_remove() ] Signed-off-by: Cédric Le Goater <clg@kaod.org> [dwg: removed spurious whitespace change, use 0/1 not true/false consistently, since tlb_need_flush has int type] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
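In toy form, the deferral side of this batching might look like the fragment below; the real code sets CPUPPCState::tlb_need_flush and consumes it in the interrupt/rfi/isync/tlbsync paths, this is only an illustration of the idea:

    #include <stdbool.h>

    struct toy_env { bool tlb_need_flush; };

    /* Instead of flushing the TLB on every slbie/tlbie, just remember that a
     * flush is owed; the flag is consumed at the next synchronizing event
     * (interrupt, rfi, isync, tlbsync). */
    void toy_helper_slbie(struct toy_env *env)
    {
        /* ... invalidate the SLB entry itself here ... */
        env->tlb_need_flush = true;   /* defer the costly TLB flush */
    }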
2016-05-27  target-ppc: Correct KVM synchronization for ppc_hash64_set_external_hpt()  (David Gibson)
ppc_hash64_set_external_hpt() was added in e5c0d3c "target-ppc: Add helpers for updating a CPU's SDR1 and external HPT". This helper contains a cpu_synchronize_state() since it may need to push state back to KVM afterwards. This turns out to break things when it is used in the reset path, which is the only current user. It appears that kvm_vcpu_dirty is not being set early in the reset path, so the cpu_synchronize_state() is clobbering state set up by the early part of the cpu reset path with stale state from KVM. This may require some changes to the generic cpu reset path to fix properly, but as a short term fix we can just remove the cpu_synchronize_state() from ppc_hash64_set_external_hpt(), and require any non-reset path callers to do that manually. Reported-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2016-05-19  cpu: move exec-all.h inclusion out of cpu.h  (Paolo Bonzini)
exec-all.h contains TCG-specific definitions. It is not needed outside TCG-specific files such as translate.c, exec.c or *helper.c. One generic function had snuck into include/exec/exec-all.h; move it to include/qom/cpu.h. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-05-19  target-ppc: do not use target_ulong in cpu-qom.h  (Paolo Bonzini)
Bring the PowerPCCPUClass handle_mmu_fault method type into line with the one in CPUClass. Using vaddr also makes the cpu-qom.h file target independent. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-22  include/qemu/osdep.h: Don't include qapi/error.h  (Markus Armbruster)
Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the Error typedef. Since then, we've moved to include qemu/osdep.h everywhere. Its file comment explains: "To avoid getting into possible circular include dependencies, this file should not include any other QEMU headers, with the exceptions of config-host.h, compiler.h, os-posix.h and os-win32.h, all of which are doing a similar job to this file and are under similar constraints." qapi/error.h doesn't do a similar job, and it doesn't adhere to similar constraints: it includes qapi-types.h. That's in excess of 100KiB of crap most .c files don't actually need. Add the typedef to qemu/typedefs.h, and include that instead of qapi/error.h. Include qapi/error.h in .c files that need it and don't get it now. Include qapi-types.h in qom/object.h for uint16List. Update scripts/clean-includes accordingly. Update it further to match reality: replace config.h by config-target.h, add sysemu/os-posix.h, sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h comment quoted above similarly. This reduces the number of objects depending on qapi/error.h from "all of them" to less than a third. Unfortunately, the number depending on qapi-types.h shrinks only a little. More work is needed for that one. Signed-off-by: Markus Armbruster <armbru@redhat.com> [Fix compilation without the spice devel packages. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-16  target-ppc: Eliminate kvmppc_kern_htab global  (David Gibson)
fa48b43 "target-ppc: Remove hack for ppc_hash64_load_hpte*() with HV KVM" purports to remove a hack in the handling of hash page tables (HPTs) managed by KVM instead of qemu. However, it actually went in the wrong direction. That patch requires anything looking for an external HPT (that is one not managed by the guest itself) to check both env->external_htab (for a qemu managed HPT) and kvmppc_kern_htab (for a KVM managed HPT). That's a problem because kvmppc_kern_htab is local to mmu-hash64.c, but some places which need to check for an external HPT are outside that, such as kvm_arch_get_registers(). The latter was subtly broken by the earlier patch such that gdbstub can no longer access memory. Basically a KVM managed HPT is much more like a qemu managed HPT than it is like a guest managed HPT, so the original "hack" was actually on the right track. This partially reverts fa48b43, so we again mark a KVM managed external HPT by putting a special but non-NULL value in env->external_htab. It then goes further, using that marker to eliminate the kvmppc_kern_htab global entirely. The ppc_hash64_set_external_hpt() helper function is extended to set that marker if passed a NULL value (if you're setting an external HPT, but don't have an actual HPT to set, the assumption is that it must be a KVM managed HPT). This also has some flow-on changes to the HPT access helpers, required by the above changes. Reported-by: Greg Kurz <gkurz@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com> Tested-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
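The marker described above is essentially a sentinel pointer. A hedged sketch of the pattern follows; the constant name and value are illustrative, not necessarily what mmu-hash64.c actually defines:

    #include <stdbool.h>
    #include <stddef.h>

    /* Special, non-NULL marker meaning "there is an external HPT, but it
     * lives in the kernel (KVM), so qemu cannot dereference it directly". */
    #define KVM_MANAGED_HPT ((void *)-1)

    struct toy_env { void *external_htab; };

    bool has_external_hpt(const struct toy_env *env)
    {
        return env->external_htab != NULL;             /* qemu- or KVM-managed */
    }

    bool hpt_is_kvm_managed(const struct toy_env *env)
    {
        return env->external_htab == KVM_MANAGED_HPT;  /* access must go via KVM */
    }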
2016-03-16  target-ppc: Add helpers for updating a CPU's SDR1 and external HPT  (David Gibson)
When a Power cpu with 64-bit hash MMU has its hash page table (HPT) pointer updated by a write to the SDR1 register we need to update some derived variables. Likewise, when the cpu is configured for an external HPT (one not in the guest memory space) some derived variables need to be updated. Currently the logic for this is (partially) duplicated in ppc_store_sdr1() and in spapr_cpu_reset(). In future we're going to need it in some other places, so make some common helpers for this update. In addition the new ppc_hash64_set_external_hpt() helper also updates SDR1 in KVM - it's not updated by the normal runtime KVM <-> qemu CPU synchronization. In a sense this belongs logically in the ppc_hash64_set_sdr1() helper, but that is called from kvm_arch_get_registers() so can't itself call cpu_synchronize_state() without infinite recursion. In practice this doesn't matter because the only other caller is TCG specific. Currently there aren't situations where updating SDR1 at runtime in KVM matters, but there are going to be in future. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com> Reviewed-by: Thomas Huth <thuth@redhat.com>
2016-02-03  log: do not unnecessarily include qom/cpu.h  (Paolo Bonzini)
Split the bits that require it to exec/log.h. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-id: 1452174932-28657-8-git-send-email-den@openvz.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-01-30  target-ppc: Helper to determine page size information from hpte alone  (David Gibson)
h_enter() in the spapr code needs to know the page size of the HPTE it's about to insert. Unlike other paths that do this, it doesn't have access to the SLB, so at the moment it determines this with some open-coded tests which assume POWER7 or POWER8 page size encodings. To make this more flexible add ppc_hash64_hpte_page_shift_noslb() to determine both the "base" page size per segment, and the individual effective page size from an HPTE alone. This means that the spapr code should now be able to handle any page size listed in the env->sps table. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Alexander Graf <agraf@suse.de>
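To make the idea concrete, here is a deliberately simplified, self-contained version of such a decode loop. The table layout, field names and bit tests do not match the real env->sps and HPTE definitions; they only illustrate how the base and actual page shifts can be recovered from the HPTE words:

    #include <stdint.h>

    #define TOY_HPTE_V_LARGE 0x0000000000000004ULL  /* placeholder bit */

    struct toy_page_size { unsigned page_shift; unsigned pte_enc; };
    struct toy_seg_page_size {
        unsigned page_shift;              /* base (segment) page size */
        struct toy_page_size enc[4];      /* HPTE encodings valid for it */
    };

    /* Return the actual page shift encoded in an HPTE (pte0/pte1) and the
     * base shift of the segment size it belongs to; 0 if nothing matches. */
    unsigned toy_hpte_page_shift_noslb(const struct toy_seg_page_size *sps,
                                       int nsegs, uint64_t pte0, uint64_t pte1,
                                       unsigned *base)
    {
        if (!(pte0 & TOY_HPTE_V_LARGE)) {
            *base = 12;                   /* plain 4K page, no encoding */
            return 12;
        }
        for (int i = 0; i < nsegs; i++) {
            for (int j = 0; j < 4; j++) {
                unsigned shift = sps[i].enc[j].page_shift;
                if (shift <= 12) {
                    continue;             /* empty slot, or 4K handled above */
                }
                /* Simplified: the size encoding occupies the low bits of the
                 * RPN field, below the page offset for that size. */
                uint64_t mask = ((1ULL << shift) - 1) & ~0xfffULL;
                if ((pte1 & mask) == ((uint64_t)sps[i].enc[j].pte_enc << 12)) {
                    *base = sps[i].page_shift;
                    return shift;
                }
            }
        }
        return 0;
    }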
2016-01-30  target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs  (David Gibson)
When HPTEs are removed or modified by hypercalls on spapr, we need to invalidate the relevant pages in the qemu TLB. Currently we do that by doing some complicated calculations to work out the right encoding for the tlbie instruction, then passing that to ppc_tlb_invalidate_one()... which totally ignores the argument and flushes the whole tlb. Avoid that by adding a new flush-by-hpte helper in mmu-hash64.c. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30  target-ppc: Use actual page size encodings from HPTE  (David Gibson)
At present the 64-bit hash MMU code uses information from the SLB to determine the page size of a translation. We do need that information to correctly look up the hash table. However the MMU also allows a possibly larger page size to be encoded into the HPTE itself, which is used to populate the TLB. At present qemu doesn't check that, and so doesn't support the MPSS "Multiple Page Size per Segment" feature. This makes a start on allowing this, by adding an hpte_page_shift() function which looks up the page size of an HPTE. We use this to validate page size encodings on faults, and populate the qemu TLB with larger page sizes when appropriate. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30  target-ppc: Rework SLB page size lookup  (David Gibson)
Currently, the ppc_hash64_page_shift() function looks up a page size based on information in an SLB entry. It open codes the bit translation for existing CPUs; however, different CPU models can have different SLB encodings. We already store those in the 'sps' table in CPUPPCState, but we don't currently enforce that that actually matches the logic in ppc_hash64_page_shift. This patch reworks lookup of page size from SLB in several ways:
* ppc_store_slb() will now fail (triggering an illegal instruction exception) if given a bad SLB page size encoding
* On success ppc_store_slb() stores a pointer to the relevant entry in the page size table in the SLB entry. This is looked up directly from the published table of page size encodings, so it can't get out of sync.
* ppc_hash64_htab_lookup() and others now use this precached page size information rather than decoding the SLB values
* Now that callers have easy access to the page_shift, ppc_hash64_pte_raddr() amounts to just a deposit64(), so remove it and have the callers use deposit64() directly.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Alexander Graf <agraf@suse.de>
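As the last bullet notes, once the page shift is known the real address is just the PTE's real page number with the low page-offset bits taken from the effective address. A self-contained sketch with a local deposit64() equivalent (the RPN mask and all values are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Same semantics as qemu's deposit64(): replace 'len' bits of 'value'
     * starting at bit 'start' with the low bits of 'field'. */
    static uint64_t toy_deposit64(uint64_t value, int start, int len,
                                  uint64_t field)
    {
        uint64_t mask = (len == 64 ? ~0ULL : ((1ULL << len) - 1)) << start;
        return (value & ~mask) | ((field << start) & mask);
    }

    int main(void)
    {
        uint64_t pte1       = 0x0000000012340196ULL; /* made-up HPTE word 1 */
        uint64_t rpn_mask   = 0x3ffffffffffff000ULL; /* illustrative RPN mask */
        uint64_t eaddr      = 0xc000000001000abcULL; /* effective address */
        unsigned page_shift = 12;                    /* from the SLB/HPTE */

        uint64_t raddr = toy_deposit64(pte1 & rpn_mask, 0, page_shift, eaddr);
        printf("real address: 0x%016llx\n", (unsigned long long)raddr);
        return 0;
    }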
2016-01-30  target-ppc: Rework ppc_store_slb  (David Gibson)
ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs. Currently it takes two parameters, which contain values encoded as the register arguments to the slbmte instruction: one register contains the ESID portion of the SLBE and also the slot number, while the other contains the VSID portion of the SLBE. We're shortly going to want to do some SLB updates from other code where it is more convenient to supply the slot number and ESID separately, so rework this function and its callers to work this way. As a bonus, this slightly simplifies the emulation of segment registers when running a 32-bit OS on a 64-bit CPU. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Laurent Vivier <lvivier@redhat.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-30  target-ppc: Convert mmu-hash{32,64}.[ch] from CPUPPCState to PowerPCCPU  (David Gibson)
Like a lot of places, these files include a mixture of functions taking both the older CPUPPCState *env and newer PowerPCCPU *cpu. Move a step closer to cleaning this up by standardizing on PowerPCCPU, except for the helper_* functions which are called with the CPUPPCState * from tcg. Callers and some related functions are updated as well; the boundaries of what's changed here are a bit arbitrary. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: Alexander Graf <agraf@suse.de>
2016-01-29  ppc: Clean up includes  (Peter Maydell)
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1453832250-766-6-git-send-email-peter.maydell@linaro.org
2015-12-17  ppc: cleanup logging  (Paolo Bonzini)
Avoid "naked" qemu_log, bring documentation for DEBUG #defines up to date. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-09  target-ppc: Fix warnings from Sparse  (Stefan Weil)
Sparse report: target-ppc/mmu-hash64.c:353:9: warning: returning void-valued expression target-ppc/mmu-hash64.c:620:9: warning: returning void-valued expression Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Alexander Graf <agraf@suse.de>
2015-03-09  target-ppc: Use right page size with hash table lookup  (Aneesh Kumar K.V)
We look at the two sizes specified in the ISA (4K, 64K). If neither matches, we consider it a 16MB page. Without this patch we would fail to look up addresses above the 16MB range. Addresses below 16MB happened to work before because the kernel has a linear mapping and we always looked up the hash for 0xc000000000000000. The actual real address was then computed by combining the offset within the 16MB page with the real address found from that hash.
Without the fix:
(gdb) x/16x 0xc000000001000000
0xc000000001000000 <list_entries+453208>: Cannot access memory at address 0xc000000001000000
(gdb)
With the fix:
(gdb) x/16x 0xc000000001000000
0xc000000001000000 <list_entries+453208>: 0x00000000 0x00000000 0x00000000 0x00000000
0xc000000001000010 <list_entries+453224>: 0x00000000 0x00000000 0x00000000 0x00000000
0xc000000001000020 <list_entries+453240>: 0x00000000 0x00000000 0x00000000 0x00000000
0xc000000001000030 <list_entries+453256>: 0x00000000 0x00000000 0x00000000 0x00000000
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
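The reason the page size matters for the lookup itself is that the hash selecting the PTEG is computed from the page number, i.e. from the effective address shifted by the page size. A toy illustration using a simplified 256MB-segment formula and made-up values:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified primary-hash calculation: vsid XOR page number within the
     * segment, truncated by the hash table mask. */
    static uint64_t toy_hash(uint64_t vsid, uint64_t eaddr,
                             unsigned page_shift, uint64_t htab_mask)
    {
        uint64_t epn = eaddr & 0x0fffffffULL;   /* offset within 256MB segment */
        return (vsid ^ (epn >> page_shift)) & htab_mask;
    }

    int main(void)
    {
        uint64_t vsid = 0x123456, mask = 0xffff;
        uint64_t eaddr = 0xc000000001000000ULL; /* 16MB-aligned address */

        /* Hashing with the wrong (4K) shift lands on a different PTEG than
         * hashing with the 16MB shift the PTE was inserted with. */
        printf("4K shift:  %llx\n", (unsigned long long)toy_hash(vsid, eaddr, 12, mask));
        printf("16M shift: %llx\n", (unsigned long long)toy_hash(vsid, eaddr, 24, mask));
        return 0;
    }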
2014-12-16  qemu-log: add log category for MMU info  (Antony Pavlov)
Running barebox on qemu-system-mips* with '-d unimp' floods stderr with very many mips_cpu_handle_mmu_fault() messages:
mips_cpu_handle_mmu_fault address=b80003fd ret 0 physical 00000000180003fd prot 3
mips_cpu_handle_mmu_fault address=a0800884 ret 0 physical 0000000000800884 prot 3
mips_cpu_handle_mmu_fault pc a080cd80 ad b80003fd rw 0 mmu_idx 0
This makes it very difficult to find the LOG_UNIMP messages. The mips_cpu_handle_mmu_fault() messages appear when enabling ANY logging, which is not very handy. Adding a separate log category for *_cpu_handle_mmu_fault() logging fixes the problem. Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com> Acked-by: Alexander Graf <agraf@suse.de> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-id: 1418489298-1184-1-git-send-email-antonynpavlov@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-05-28  tcg: Invert the inclusion of helper.h  (Richard Henderson)
Rather than include helper.h with N values of GEN_HELPER, include a secondary file that sets up the macros to include helper.h. This minimizes the files that must be rebuilt when changing the macros for file N. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-13  cputlb: Change tlb_set_page() argument to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cputlb: Change tlb_flush() argument to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  target-ppc: Use PowerPCCPU in PowerPCCPUClass::handle_mmu_fault hook  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move exception_index field from CPU_COMMON to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  target-ppc: Clean up ENV_GET_CPU() usage  (Andreas Färber)
Commits fdfba1a298ae26dd44bcfdb0429314139a0bc55a, ab1da85791340e504d10487e1add81b9988afa98, f606604f1c10b60ef294f1b9b229426521a365e3 and 2c17449b3022ca9623c4a7e2a504a4150ac4ad30 added usages of ENV_GET_CPU() macro in target-specific code. Use ppc_env_get_cpu() instead. Cc: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Cc: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-05  target-ppc: Update ppc_hash64_store_hpte to support updating in-kernel htab  (Aneesh Kumar K.V)
This supports updating an htab managed by the hypervisor. Currently we don't have any user for this feature. This actually brings the store_hpte interface in line with the load_hpte one. We may want to use this when we want to emulate the H_ENTER hcall in qemu for HV KVM. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [ folded fix for the "warn_unused_result" build break in kvmppc_hash64_write_pte(), Greg Kurz <gkurz@linux.vnet.ibm.com> ] Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-03-05  target-ppc: Change the hpte store API  (Aneesh Kumar K.V)
For updating the in-kernel htab we need to provide both pte0 and pte1; hence, update the interface to take pte0 and pte1 together. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [ ldq_phys() API change, Greg Kurz <gkurz@linux.vnet.ibm.com> ] Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-03-05  target-ppc: Fix page table lookup with kvm enabled  (Aneesh Kumar K.V)
With kvm enabled, we store the hash page table information in the hypervisor. Use an ioctl to read the htab contents. Without this we get the error below when trying to read a guest address:
(gdb) x/10 do_fork
0xc000000000098660 <do_fork>: Cannot access memory at address 0xc000000000098660
(gdb)
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [ fixes for 32 bit build (casts!), ldq_phys() API change, Greg Kurz <gkurz@linux.vnet.ibm.com> ] Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2014-03-05  target-ppc: Fix htab_mask calculation  (Aneesh Kumar K.V)
Correctly update the htab_mask using the return value of the KVM_PPC_ALLOCATE_HTAB ioctl. Also, don't update sdr1 on GET_SREGS for HV: we check for an external htab and, if one is present, we don't need to update sdr1. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [ fixed pte group offset computation in ppc_hash64_htab_lookup() that caused TCG to fail, Greg Kurz <gkurz@linux.vnet.ibm.com> ] Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
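For reference, a worked example of the relationship assumed here, on my reading of the commit: the ioctl reports the HPT size as a power-of-two order in bytes, each PTEG is 128 bytes, and the mask covers the number of PTEGs minus one.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Example: the kernel allocates a 16MB hash page table. */
        unsigned shift = 24;                      /* order returned by the ioctl */
        uint64_t htab_size_bytes = 1ULL << shift;
        uint64_t n_ptegs = htab_size_bytes / 128; /* one PTEG = 8 HPTEs = 128 bytes */
        uint64_t htab_mask = n_ptegs - 1;         /* == (1 << (shift - 7)) - 1 */

        printf("size=%llu bytes, PTEGs=%llu, mask=0x%llx\n",
               (unsigned long long)htab_size_bytes,
               (unsigned long long)n_ptegs,
               (unsigned long long)htab_mask);
        return 0;
    }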
2014-03-05  mmu-hash64: fix Virtual Page Class Key Protection  (Cédric Le Goater)
commit f80872e21c07edd06eb343eeeefc8af404b518a6 (mmu-hash64: Implement Virtual Page Class Key Protection) added a new page protection mechanism based on page keys and the AMR register to control access. The AMR register allows or prohibits reads and/or writes on a page depending on the control bits associated with the key. A store or a load is only permitted if the associated bit is 0 (Power ISA), not 1 as the code currently assumes. This patch modifies ppc_hash64_amr_prot() to correct the protection check. This issue was unveiled by commit ccfb53ed6360cac0d5f6f7915ca9ae7eed866412 (target-ppc: fix Authority Mask Register init value) which changed the initialisation value of the AMR register to 0. Signed-off-by: Cédric Le Goater <clg@fr.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
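A self-contained sketch of the corrected check; the key extraction and protection constants are simplified, the real ppc_hash64_amr_prot() uses the proper HPTE key bits and SPR accessors:

    #include <stdint.h>

    #define TOY_PROT_READ  1
    #define TOY_PROT_WRITE 2

    /* For page class key 'key' (0-31), the AMR holds two control bits; a
     * load or store is permitted only when the corresponding bit is 0. */
    int toy_amr_prot(uint64_t amr, unsigned key)
    {
        int prot = TOY_PROT_READ | TOY_PROT_WRITE;
        unsigned bits = (amr >> (2 * (31 - key))) & 3;

        if (bits & 0x2) {          /* write prohibited */
            prot &= ~TOY_PROT_WRITE;
        }
        if (bits & 0x1) {          /* read prohibited */
            prot &= ~TOY_PROT_READ;
        }
        return prot;
    }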
2013-07-09  target-ppc: Change LOG_MMU_STATE() argument to CPUState  (Andreas Färber)
Choose CPUState rather than PowerPCCPU since doing a CPU() cast on the macro argument would hide type mismatches. Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09  log: Change log_cpu_state[_mask]() argument to CPUState  (Andreas Färber)
Since commit 878096eeb278a8ac1ccd6667af73e026f29b4cf5 (cpu: Turn cpu_dump_{state,statistics}() into CPUState hooks) CPUArchState is no longer needed. Add documentation and make the functions available through qemu/log.h outside NEED_CPU_H to allow use in qom/cpu.c. Moving them to qom/cpu.h was not yet possible due to convoluted include paths, so that some devices grow an implicit and unneeded dependency on qom/cpu.h for now. Acked-by: Michael Walle <michael@walle.cc> (for lm32) Reviewed-by: Richard Henderson <rth@twiddle.net> [AF: Simplified mb_cpu_do_interrupt() and do_interrupt_all() changes] Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28  kvm: Change cpu_synchronize_state() argument to CPUState  (Andreas Färber)
Change Monitor::mon_cpu to CPUState as well. Reviewed-by: liguang <lig.fnst@cn.fujitsu.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-22  mmu-hash64: Implement Virtual Page Class Key Protection  (David Gibson)
Version 2.06 of the Power architecture describes an additional page protection mechanism. Each virtual page has a "class" (0-31) recorded in the PTE. The AMR register contains bits which can prohibit reads and/or writes on a class by class basis. Interestingly, the AMR is userspace readable and writable; however, user mode writes are masked by the contents of the UAMOR, which is privileged. This patch implements this protection mechanism, along with the AMR and UAMOR SPRs. The architecture also specifies a hypervisor-privileged AMOR register which masks user and supervisor writes to the AMR and UAMOR. We leave this out for now, since we don't at present model hypervisor mode correctly in any case. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> [agraf: fix 32-bit hosts] Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Merge translate and fault handling functions  (David Gibson)
ppc_hash{32,64}_handle_mmu_fault() is now the only caller of ppc_hash{32,64}_translate(), so this patch combines them together. This means that instead of one returning a variety of non-obvious error codes which then get translated into the various mmu exception conditions, we can just generate the exceptions as we discover problems in the translation path. This also removes the last usage of mmu_ctx_hash{32,64}. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>