path: root/target-ppc
Age  Commit message  Author
2013-04-08  hw: move headers to include/  (Paolo Bonzini)
Many of these should be cleaned up with proper qdev-/QOM-ification. Right now there are many catch-all headers in include/hw/ARCH depending on cpu.h, and this makes it necessary to compile these files per-target. However, fixing this does not belong in these patches. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-03-22  target-ppc: Use NARROW_MODE macro for tlbie  (Richard Henderson)
Removing conditional compilation in the process. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Use NARROW_MODE macro for addresses  (Richard Henderson)
Removing conditional compilation in the process. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Use NARROW_MODE macro for comparisons  (Richard Henderson)
Removing conditional compilation in the process. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Use NARROW_MODE macro for branches  (Richard Henderson)
Removing conditional compilation in the process. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Fix add and subf carry generation in narrow mode  (Richard Henderson)
The set of computations used in b5a73f8d8a57e940f9bbeb399a9e47897522ee9a are only valid if the current word size == target_long size. This failed to take ppc64 in 32-bit (narrow) mode into account. Add a NARROW_MODE macro to avoid conditional compilation. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
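For reference, the macro itself is tiny. A minimal sketch of the idea (the exact definition in translate.c may differ slightly):

    /* Sketch of the NARROW_MODE idea; exact form in translate.c may differ */
    #if defined(TARGET_PPC64)
    /* A ppc64 CPU runs "narrow" when 64-bit mode (MSR[SF]) is off */
    #define NARROW_MODE(C)  (!(C)->sf_mode)
    #else
    /* Pure 32-bit targets are always narrow */
    #define NARROW_MODE(C)  true
    #endif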
2013-03-22  target-ppc: Use QOM method dispatch for MMU fault handling  (David Gibson)
After previous cleanups, the many scattered checks of env->mmu_model in the ppc MMU implementation have, at least for "classic" hash MMUs, been reduced (almost) to a single switch at the top of cpu_ppc_handle_mmu_fault(). An explicit switch is still a pretty ugly way of handling this though. Now that Andreas Färber's CPU QOM cleanups for ppc have gone in, it's quite straightforward to instead make the handle_mmu_fault function a QOM method on the CPU object. This patch implements such a scheme, initializing the method pointer at the same time as the mmu_model variable. We need to keep the latter around for now, because of the MMU types (BookE, 4xx, et al) which haven't been converted to the new scheme yet, and also for a few other uses. It would be good to clean those up eventually. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
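Schematically, the dispatch looks something like the sketch below; the field and type layout here is an assumption based on the description above, not verbatim QEMU code:

    /* Sketch - field layout assumed, not verbatim QEMU. A per-CPU-class
     * method pointer, installed at the same time as mmu_model. */
    typedef struct PowerPCCPUClass {
        /* ... other QOM class fields ... */
        int (*handle_mmu_fault)(CPUPPCState *env, target_ulong eaddr,
                                int rwx, int mmu_idx);
    } PowerPCCPUClass;

    /* The switch in cpu_ppc_handle_mmu_fault() then collapses into a
     * single indirect call: pcc->handle_mmu_fault(env, eaddr, rwx, mmu_idx) */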
2013-03-22  target-ppc: Move ppc tlb_fill implementation into mmu_helper.c  (David Gibson)
For softmmu builds the interface from the generic code to the target specific MMU implementation is through the tlb_fill() function. For ppc this is currently in mem_helper.c, whereas it would make more sense in mmu_helper.c. This patch moves it, which also allows cpu_ppc_handle_mmu_fault() to become a local function in mmu_helper.c Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Split user only code out of mmu_helper.c  (David Gibson)
mmu_helper.c is, for obvious reasons, almost entirely concerned with softmmu builds of qemu. However, it does contain one stub function which is used when CONFIG_USER_ONLY=y - the user only version of cpu_ppc_handle_mmu_fault, which always triggers an exception. The entire rest of the file is surrounded by #if !defined(CONFIG_USER_ONLY). We clean this up by moving the user only stub into its own new file, removing the ifdefs and building mmu_helper.c only when CONFIG_SOFTMMU is set. This also lets us remove the #define of cpu_handle_mmu_fault to cpu_ppc_handle_mmu_fault - that name is only used from generic code for user only - so we just name our split user version by the generic name. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash64: Implement Virtual Page Class Key Protection  (David Gibson)
Version 2.06 of the Power architecture describes an additional page protection mechanism. Each virtual page has a "class" (0-31) recorded in the PTE. The AMR register contains bits which can prohibit reads and/or writes on a class by class basis. Interestingly, the AMR is userspace readable and writable, however user mode writes are masked by the contents of the UAMOR which is privileged. This patch implements this protection mechanism, along with the AMR and UAMOR SPRs. The architecture also specifies a hypervisor-privileged AMOR register which masks user and supervisor writes to the AMR and UAMOR. We leave this out for now, since we don't at present model hypervisor mode correctly in any case. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> [agraf: fix 32-bit hosts] Signed-off-by: Alexander Graf <agraf@suse.de>
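As a rough illustration of the mechanism (the bit layout below is an assumption for the sketch - the ISA defines the real encoding, including which of the two per-class bits blocks reads versus writes):

    #include <stdint.h>

    #define PAGE_READ  0x1
    #define PAGE_WRITE 0x2

    /* Illustrative only: two AMR bits per class key (0-31); the exact bit
     * positions and read/write assignment are assumptions for this sketch. */
    static int amr_prot_for_class(uint64_t amr, unsigned key)
    {
        int prot = PAGE_READ | PAGE_WRITE;
        uint64_t bits = (amr >> (62 - 2 * key)) & 0x3;

        if (bits & 0x2) {
            prot &= ~PAGE_READ;    /* reads prohibited for this class */
        }
        if (bits & 0x1) {
            prot &= ~PAGE_WRITE;   /* writes prohibited for this class */
        }
        return prot;
    }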
2013-03-22  mmu-hash*: Merge translate and fault handling functions  (David Gibson)
ppc_hash{32,64}_handle_mmu_fault() is now the only caller of ppc_hash{32,64}_translate(), so this patch combines them together. This means that instead of one returning a variety of non-obvious error codes which then get translated into the various mmu exception conditions, we can just generate the exceptions as we discover problems in the translation path. This also removes the last usage of mmu_ctx_hash{32,64}. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Don't use full ppc_hash{32, 64}_translate() path for get_phys_page_debug()  (David Gibson)
Currently the hash mmu versions of get_phys_page_debug() use the same ppc_hash{32,64}_translate() function to do the translation logic as the normal mmu fault handler code. That sounds like a good idea, but has some complications. The debug path doesn't need, or even want, some parts of the full translation path, like permissions checking. Furthermore, the pte flags update included in the normal path means that the debug call is not quite side effect free. This patch, therefore, reimplements get_phys_page_debug as the minimal required subset of the full translation path. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Correctly mask RPN from hash PTE  (David Gibson)
BEHAVIOUR CHANGE At present we take the whole of word 1 of the hash PTE as the real page number used to calculate the translated address. This is incorrect, because it leaves the flags from the low bits of PTE word 1 in place in the rpn. We mostly get away with that because the value is later masked by TARGET_PAGE_MASK. More recent 64-bit CPUs also have a small number of flag bits (PP0 and KEY) in the top bits of PTE word 1. Any guest which used those bits would fail with the current code. This patch fixes the problem by correctly masking out the RPN field of PTE word 1. This is safe, even for older CPUs which didn't have PP0 and KEY, because although the RPN notionally extended to the very top of PTE word 1, none of those CPUs actually implemented that many real address bits. We add analogous masking to the 32-bit code, even though it also doesn't have the high flag bits, for consistency and clarity. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
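In outline the fix is a single mask applied to PTE word 1 before the page offset is merged in. A sketch, with a placeholder constant standing in for whatever the patch actually defines:

    #include <stdint.h>

    /* Assumed RPN field position - illustrative, not the patch's constant */
    #define HPTE64_R_RPN  0x0ffffffffffff000ULL

    static uint64_t hpte_real_addr(uint64_t pte1, uint64_t eaddr)
    {
        /* Keep only the real page number bits of word 1, dropping the low
         * flag bits and (on newer CPUs) the high PP0/KEY bits, then merge
         * in the in-page offset from the effective address. */
        return (pte1 & HPTE64_R_RPN) | (eaddr & 0xfffULL);
    }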
2013-03-22  mmu-hash*: Clean up real address calculation  (David Gibson)
More recent 64-bit hash MMUs support multiple page sizes, and PTEs for large pages only include the offset of the whole large page. But the qemu tlb only handles pages of the base size (4k) so we need to break up the large pages into 4k pieces for the qemu tlb. To do that we have a somewhat awkward piece of code that folds the address bits between the 4k size and the page size from the virtual address into the real address from the pte. This patch simplifies this by redefining the raddr output of ppc_hash64_translate() to be the full real address of the faulting address, rather than just the (4k) page offset. Computing that turns out to be simpler, and is fine for the caller, since it already masks with TARGET_PAGE_MASK before inserting into the qemu tlb. The multiple page size complication doesn't exist for 32-bit hash mmus, but we make an analogous cleanup there for consistency. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
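The simplified calculation amounts to splicing the in-page offset from the virtual address onto the page frame from the PTE, at whatever the page size is. A hedged sketch, assuming pgsize is a power of two:

    #include <stdint.h>

    /* Sketch: the PTE supplies the (possibly large) page frame; the
     * virtual address supplies the offset within that page. */
    static uint64_t full_real_addr(uint64_t rpn, uint64_t eaddr,
                                   uint64_t pgsize)
    {
        return (rpn & ~(pgsize - 1)) | (eaddr & (pgsize - 1));
    }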
2013-03-22  mmu-hash*: Clean up PTE flags update  (David Gibson)
Currently the ppc_hash{32,64}_pte_update_flags() helper functions update a PTE's referenced and changed bits as necessary to reflect the access. It is somewhat long winded, though. This patch open codes them in their (single) callers, in a simpler way. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash64: Factor SLB N bit into permissions bits  (David Gibson)
BEHAVIOUR CHANGE Currently, for 64-bit hash mmu, the execute protection bit placed into the qemu tlb is based only on the N (No execute) bit from the PTE. However, No Execute can also be set at the segment level. We do check this on execute faults, but this still means we could incorrectly allow execution of code from a No Execute segment, if a prior read or write fault caused the page to be loaded into the qemu tlb with PROT_EXEC set. To correct this, we (re-)check the segment level no execute permission when generating the protection bits for the qemu tlb. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
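Schematically (the flag parameters are placeholders for this sketch), the protection bits placed in the qemu tlb now honour no-execute from either level:

    #define PAGE_EXEC 0x4

    /* Sketch: execution is allowed only if neither the segment (SLB N bit)
     * nor the page (PTE N bit) forbids it. Parameter names are placeholders. */
    static int fold_noexec(int prot, int slb_n, int pte_n)
    {
        if (slb_n || pte_n) {
            prot &= ~PAGE_EXEC;
        }
        return prot;
    }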
2013-03-22  mmu-hash*: Clean up permission checking  (David Gibson)
Currently checking of PTE permission bits is split messily amongst ppc_hash{32,64}_pp_check(), ppc_hash{32,64}_check_prot() and their callers. This patch cleans this up to have the new function ppc_hash{32,64}_pte_prot() compute the page permissions from the SLBE (for 64-bit) or segment register (32-bit) and the pte. A greatly simplified version of the actual permissions check is then open coded in the callers. The 32-bit version of ppc_hash32_pte_prot() is implemented in terms of ppc_hash32_pp_prot(), a renamed and slightly cleaned up version of the old ppc_hash32_pp_check(), which is also used for checking BAT permissions on the 601. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash32: Remove nx from context structure  (David Gibson)
Previous cleanups have meant the nx field of the mmu_ctx_hash32 structure is now only used within ppc_hash32_translate(), and so it can be replaced by a local variable. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Don't update PTE flags when permission is denied  (David Gibson)
BEHAVIOUR CHANGE Currently if ppc_hash{32,64}_translate() finds a PTE matching the given virtual address, it will always update the PTE's R & C (Referenced and Changed) bits. This happens even if the PTE's permissions mean we are about to deny the translation. This is clearly a bug, although we get away with it because: a) It will only incorrectly set, never reset the bits, which should not cause guest correctness problems. b) Linux guests never use the R & C bits anyway. This patch fixes the behaviour, only updating R & C when access is granted by the PTE. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
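The corrected ordering is easiest to see as a sketch; bit positions and the permission encoding below are illustrative, not the real PTE layout:

    #include <stdint.h>

    #define PTE_R 0x100   /* Referenced - placeholder bit position */
    #define PTE_C 0x080   /* Changed    - placeholder bit position */

    /* need/prot are PAGE_READ/PAGE_WRITE style bitmasks (0x2 = write) */
    static int pte_access(uint64_t *pte1, int prot, int need)
    {
        if (need & ~prot) {
            return -1;           /* denied: leave R and C untouched */
        }
        *pte1 |= PTE_R;          /* granted: mark referenced ... */
        if (need & 0x2) {
            *pte1 |= PTE_C;      /* ... and changed on stores */
        }
        return 0;
    }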
2013-03-22  mmu-hash32: Don't look up page tables on BAT permission error  (David Gibson)
BEHAVIOUR CHANGE Currently, on any failure translating an address with BATs, we proceed to normal segment and page table translation. That's incorrect if the BAT error was due to permissions, rather than not finding a matching BAT. We've gotten away with it because a guest would not usually put translations for the same address in both BATs and page table. Nonetheless this patch corrects the logic, only doing page table lookup if no BAT is found. A matching BAT with bad permissions will now correctly trigger an exception. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash32: Cleanup BAT lookup  (David Gibson)
This patch makes a general cleanup of the ppc_hash32_get_bat() function, renaming it to ppc_hash32_bat_lookup(). In particular, the new function only looks for a matching BAT, with the permissions check from the old function moved to the caller. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash32: Clean up BAT matching logic  (David Gibson)
The code to search for a matching BAT for a virtual address is somewhat longwinded and awkward. In particular, it relies on separate size and validity information being returned from the hash32_bat_size() function (and 601 specific variant). We simplify this by having hash32_bat_size() return instead a mask of the virtual address bits to match, and 0 for invalid (since a BAT can never match the entire address space). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
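The mask-returning scheme makes the match test itself compact; roughly (register fields simplified for this sketch):

    #include <stdint.h>

    /* Sketch: mask comes from hash32_bat_size() - the effective-address
     * bits the BAT must match, or 0 for an invalid BAT. Returning 0 is
     * safe since no BAT can cover the entire address space. */
    static int bat_matches(uint32_t ea, uint32_t bepi, uint32_t mask)
    {
        return mask != 0 && (ea & mask) == (bepi & mask);
    }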
2013-03-22  mmu-hash32: Split BAT size logic from permissions logic  (David Gibson)
hash32_bat_size_prot() and its 601 variant, as the name suggests, return both a BAT's size - needed to search for a matching BAT - and its permissions, only relevant once a matching BAT has been located. There's no particular advantage to combining these, so we split these roles into separate functions for clarity. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash32: Remove odd pointer usage from BAT code  (David Gibson)
In the code for handling BATs, the hash32_bat_size_prot() and hash32_bat_601_size_prot() functions are passed the BAT contents by reference (pointer) for no clear reason, since they only need the values within. This patch removes this odd usage, and uses the resulting change to clean up the caller slightly. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Fold pte_check*() logic into caller  (David Gibson)
With previous cleanups made, the 32-bit and 64-bit pte_check*() functions are pretty trivial and only have one call site. This patch therefore clarifies the overall code flow by folding those functions into their call site. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash64: Clean up ppc_hash64_htab_lookup()  (David Gibson)
This patch makes a general cleanup of the address mangling logic in ppc_hash64_htab_lookup(). In particular it now avoids repeatedly switching on the segment size. The lack of SLB and multiple segment sizes on 32-bit means an analogous cleanup is not needed there. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Remove permission checking from find_pte{32, 64}()  (David Gibson)
find_pte{32,64}() are poorly named, since they both find a PTE and do permissions checking of it. This patch makes them only locate a matching PTE, moving the permission checking and other logic to the caller. We rename the resulting search functions ppc_hash{32,64}_htab_lookup(). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Make find_pte{32, 64} do more of the job of finding ptes  (David Gibson)
find_pte{32,64}() are not particularly well named. They only "find" a PTE within a given PTE group, and they also do permissions checking and other things. This patch makes them somewhat closer to matching the name, by folding the search of both primary and secondary hash bucket into it, along with the various address bit shuffling to determine the right hash buckets. In the 32-bit case we also remove the code for splitting large pages into 4k pieces for the qemu tlb, since no 32-bit hash MMUs support multiple page sizes. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Separate PTEG searching from permissions checking  (David Gibson)
find_pte{32,64}() do several things. First they search through a PTEG looking for a PTE matching our virtual address. Then they do permissions checking and other processing on that PTE. This patch separates the search by VA out from the rest. The search is combined with the pte{32,64}_match() functions into new ppc_hash{32,64}_pteg_search() functions. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Don't keep looking for PTEs after we find a match  (David Gibson)
BEHAVIOUR CHANGE The ppc hash mmu hashes each virtual address to a primary and secondary possible hash bucket (aka PTE group or PTEG) each with 8 PTEs. Then we need a linear search through the PTEs to find the correct one for the virtual address we're translating. It is a programming error for the guest to insert multiple PTEs mapping the same virtual address into a PTEG - in this case the ppc architecture says the MMU can either act as if just one was present, or give a machine check. Currently our code takes the first matching PTE in a PTEG if it finds a successful translation. But if a matching PTE is found, but permission bits don't allow the access, we keep looking through the PTEG, checking that any other matching PTEs contain an identical translation. That behaviour is perhaps not exactly wrong, but it's certainly not useful. This patch changes it to always just find the first matching PTE in a PTEG. In addition, if we get a permissions problem on the primary PTEG, we then search the secondary PTEG. This is incorrect - a permission denying PTE in the primary PTEG should not be overwritten by an access granting PTE in the secondary (although again, it would be a programming error for the guest to set up such a situation anyway). So additionally we update the code to only search the secondary PTEG if no matching PTE is found in the primary at all. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
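The corrected lookup order reduces to a short sketch; pteg_search() below stands in for the new ppc_hash{32,64}_pteg_search() helpers, and the sketch uses the fact that the secondary hash is the bitwise complement of the primary:

    /* Stand-in: returns a PTE offset, or -1 if nothing in the group matches */
    extern long pteg_search(unsigned long hash, unsigned long va);

    static long htab_lookup(unsigned long hash, unsigned long va)
    {
        long ptex = pteg_search(hash, va);     /* primary PTEG first */
        if (ptex == -1) {
            ptex = pteg_search(~hash, va);     /* secondary only if no match */
        }
        return ptex;
    }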
2013-03-22  mmu-hash*: Cleanup segment-level NX check  (David Gibson)
On the ppc hash mmus, no-execute can be set at the segment level (on more recent 64-bit hash mmus it can also be set at the page level). This patch separates out this check to make it clearer what is going on, and avoiding excessive indentation of the remaining translation code. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash32: Split direct store segment handling into a helper  (David Gibson)
This further separates the unusual case handling of direct store segments from the main translation path by moving its logic into a helper function, with some tiny cleanups along the way. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash32: Split out handling of direct store segments  (David Gibson)
At present a large chunk of ppc_hash32_translate() is taken up with an ugly if selecting between direct store segments (hardly ever used) and normal paged segments. This patch clarifies the flow of code by handling direct store segments immediately then returning, leaving the straight line code to describe the normal MMU path. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Combine ppc_hash{32, 64}_get_physical_address and get_segment{32, 64}()  (David Gibson)
After previous work, ppc_hash{32,64}_get_physical_address() are almost trivial wrappers around get_segment{32,64}() which does nearly all the work of translating an address according to the hash mmu model. Therefore combine the two functions into one, under the better name of ppc_hash{32,64}_translate(). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Remove eaddr field from mmu_ctx_hash{32, 64}  (David Gibson)
The eaddr field of mmu_ctx_hash{32,64} is effectively just used to pass the effective address from get_segment{32,64}() to find_pte{32,64}(). Just pass it as a normal parameter instead. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash64: Remove nx from mmu_ctx_hash64  (David Gibson)
The nx field in mmu_ctx_hash64 is used in two different functions. But it's used for slightly different things in each place, and the value is never propagated between them. In other words, it might as well be two local variables. This patch makes it so. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Reduce use of access_type  (David Gibson)
In ppc env->access_type is updated by e.g. integer load/stores with ACCESS_INT, floating point load/stores with ACCESS_FLOAT, and so forth. In hash mmu fault paths it can also be set to ACCESS_CODE for instruction fetch accesses. But the only place which uses anything more of the access_type than whether it is instruction fetch or data access is the direct store segment handling. Instruction versus data access can be more simply determined from the rw value passed down from the top. This changes the code to use rw in preference to checking access_type. For the 32-bit case there is a small amount of code (for direct store segments) that still needs the full access type. Instead of passing it all the way down the stack, we retrieve it from the env structure, which is where it came from anyway, before this patch. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  mmu-hash*: Add hash pte load/store helpers  (David Gibson)
On real hardware the ppc hash page table is stored in memory; accordingly our mmu emulation code can read a hash page table in guest memory. But, when paravirtualized under PAPR, the real hash page table is in host memory, accessible to the guest only via hypercalls. We model this by also allowing the MMU emulation code to access a specially allocated hash page table outside the guest's memory image. At present these two options are implemented with some ugly conditionals at each access point in the mmu emulation code. In the implementation of the PAPR hypercalls, we assume the external hash table. This patch cleans things up by adding helpers to load and store from the hash table for both 32-bit and 64-bit hash mmus. The 64-bit versions handle both the in-guest-memory and outside guest memory cases. The 32-bit versions only handle the in-guest-memory case since no 32-bit systems can have an external hash table at present. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
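The 64-bit load helpers take roughly this shape (a sketch; field and function names approximate QEMU's of this era, so treat them as assumptions):

    /* Sketch: one accessor hides whether the hash table is in guest
     * memory or in a host-side buffer set up for PAPR. */
    static uint64_t hpte64_load_word0(CPUPPCState *env, unsigned long offset)
    {
        if (env->external_htab) {
            /* PAPR: table allocated outside the guest's memory image */
            return ldq_p(env->external_htab + offset);
        }
        /* Bare metal: table lives in guest physical memory */
        return ldq_phys(env->htab_base + offset);
    }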
2013-03-22  mmu-hash*: Add header file for definitions  (David Gibson)
Currently cpu.h contains a number of definitions relating to the 64-bit hash MMU. Some are used in the MMU emulation code, but some are only used in the spapr MMU management hcall implementations. This patch moves these definitions (except for a few that are needed more widely) into mmu-hash64.h header, shared between the MMU emulation code and the spapr hcall code. The MMU emulation code is also updated to actually use a number of those definitions in place of hard coded constants. Similarly, we add new analogous definitions to mmu-hash32.h and use those in place of many hard-coded constants in mmu-hash32.c Signed-off-by: David Gibson <david@gibson.dropbear.id.au> [agraf: fix 32-bit hosts] Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: mmu_ctx_t should not be a global type  (David Gibson)
mmu_ctx_t is currently defined in cpu.h. However it is used for temporary information relating to mmu translation, and is only used in mmu_helper.c and (now) mmu-hash{32,64}.c. Furthermore it contains information which should be specific to particular MMU types. Therefore, move its definition to mmu_helper.c. mmu-hash{32,64}.c are converted to use new data types private to the relevant MMUs (identical to mmu_ctx_t for now, but that will change in future patches). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Disentangle BAT code for 32-bit hash MMUs  (David Gibson)
The functions for looking up BATs (Block Address Translation - essentially a level 0 TLB) are shared between the classic 32-bit hash MMUs and the 6xx style software loaded TLB implementations. This patch splits out a copy for the 32-bit hash MMUs, to facilitate cleaning it up. The remaining version is left, but cleaned up slightly to no longer deal with PowerPC 601 peculiarities (601 has a hash MMU). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Don't share get_pteg_offset() between 32 and 64-bit  (David Gibson)
The get_pteg_offset() helper function is currently shared between 32-bit and 64-bit hash mmus, taking a parameter for the hash pte size. In the 64-bit paths, it's only called in one place, and it's a trivial calculation. This patch, therefore, open codes it for 64-bit. The remaining version, which is used in two places is made 32-bit only and moved to mmu-hash32.c. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
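For the 64-bit path the open-coded form is only a line or two; approximately (assuming htab_mask masks the byte offset into the table - an assumption of this sketch):

    #define HASH_PTE_SIZE_64   16                       /* bytes per PTE */
    #define HASH_PTEG_SIZE_64  (8 * HASH_PTE_SIZE_64)   /* 8 PTEs per group */

    /* Approximate open-coded 64-bit calculation: */
    pte_offset = (hash * HASH_PTEG_SIZE_64) & env->htab_mask;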
2013-03-22  target-ppc: Disentangle hash mmu helper functions  (David Gibson)
The newly separated paths for hash mmus rely on several helper functions which are still shared with 32-bit hash mmus: pp_check(), check_prot() and pte_update_flags(). While these don't have ugly ifdefs on the mmu type, they're not very well thought out, so sharing them impedes cleaning up the hash mmu paths. For now, put near-duplicate versions into mmu-hash64.c and mmu-hash32.c, leaving the old version in mmu_helper.c for 6xx software loaded tlb implementations. The hash 32 and software loaded implementations are simplified slightly, using the fact that no 32-bit CPUs implement the 3rd page protection bit. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Disentangle hash mmu versions of cpu_get_phys_page_debug()  (David Gibson)
cpu_get_phys_page_debug() is a trivial wrapper around get_physical_address(). But even the signature of get_physical_address() has some things we'd like to clean up on a per-mmu basis, so this patch moves the test on mmu model out to cpu_get_phys_page_debug(), moving the version for 64-bit hash MMUs out to mmu-hash64.c and the version for 32-bit hash MMUs to mmu-hash32.c Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Disentangle hash mmu paths for cpu_ppc_handle_mmu_fault  (David Gibson)
cpu_ppc_handle_mmu_fault() calls get_physical_address() (whose behaviour depends on MMU type) then, if that fails, issues an appropriate exception - which again has a number of dependencies on MMU type. This patch starts converting cpu_ppc_handle_mmu_fault() to have a single switch on MMU type, calling MMU specific fault handler functions which deal with both translation and exception delivery appropriately for the MMU type. We convert 32-bit and 64-bit hash MMUs to this new model, but the existing code is left in place for other MMU types for now. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Disentangle get_physical_address() paths  (David Gibson)
Depending on the MSR state, for 64-bit hash MMUs, get_physical_address can either call check_physical (which has further tests for mmu type) or get_segment64. Similarly for 32-bit hash MMUs we can either call check_physical or get_bat() and get_segment32(). This patch splits off the whole get_physical_address() path for hash MMUs into 32-bit and 64-bit versions, handling real mode correctly for such MMUs without going to check_physical and rechecking the mmu type. Correspondingly, the hash MMU specific paths in check_physical() are removed. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Rework get_physical_address()  (David Gibson)
Currently get_physical_address() first checks to see if translation is enabled in the MSR, then in the translation on case switches on the mmu type. Except that for BookE MMUs, translation is always on, and so it has to switch in the "translation off" case as well and do the same thing as the translation on path for those MMUs. Plus, even translation off doesn't behave exactly the same on the various MMU types so there are further mmu type checks in the "translation off" path. As a first step to cleaning this up, this patch moves the switch on mmu type to the top level, then makes the translation on/off check just for those mmu types where it is meaningful. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Disentangle get_segment()  (David Gibson)
The poorly named get_segment() function handles most of the address translation logic for hash-based MMUs. It has many ugly conditionals on whether the MMU is 32-bit or 64-bit. This patch splits the function into 32 and 64-bit versions, using the switch on mmu_type that's already in the caller (get_physical_address()) to select the right one. Most of the original function remains in mmu_helper.c to support the 6xx software loaded TLB implementations (cleaning those up is a project for another day). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Disentangle find_pte()  (David Gibson)
32-bit and 64-bit hash MMU implementations currently share a find_pte function. This results in a whole bunch of ugly conditionals in the shared function, and not all that much actually shared code. This patch separates out the 32-bit and 64-bit versions, putting them in mmu-hash64.c and mmu-hash32.c, and removes the conditionals from both versions. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>
2013-03-22  target-ppc: Disentangle pte_check()  (David Gibson)
Currently support for both 32-bit and 64-bit hash MMUs share an implementation of pte_check. But there are enough differences that this means the shared function has several very ugly conditionals on "is_64b". This patch cleans things up by separating out the 64-bit version (putting it into mmu-hash64.c) and the 32-bit hash version (putting it in mmu-hash32.c). Another copy remains in mmu_helper.c, which is used for the 6xx software loaded TLB paths. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexander Graf <agraf@suse.de>