path: root/target/ppc
2017-07-21  Use qemu_tolower() and qemu_toupper(), not tolower() and toupper()  (Peter Maydell)
On NetBSD, where tolower() and toupper() are implemented using an array lookup, the compiler warns if you pass a plain 'char' to these functions: gdbstub.c:914:13: warning: array subscript has type 'char' This reflects the fact that toupper() and tolower() give undefined behaviour if they are passed a value that isn't a valid 'unsigned char' or EOF. We have qemu_tolower() and qemu_toupper() to avoid this problem; use them. (The use in scsi-generic.c does not trigger the warning because it passes a uint8_t; we switch it anyway, for consistency.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> for the s390 part. Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-id: 1500568290-7966-1-git-send-email-peter.maydell@linaro.org
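For illustration, a wrapper of the shape described here sidesteps the undefined behaviour by casting to unsigned char before calling the ctype function; the names my_tolower()/my_toupper() are stand-ins, not QEMU's actual helpers:

```c
/* Minimal sketch of the kind of wrapper described above; the real
 * qemu_tolower()/qemu_toupper() live in QEMU's headers and may differ in
 * detail. Casting to unsigned char first keeps the argument inside the
 * range tolower()/toupper() are defined for. */
#include <ctype.h>
#include <stdio.h>

static inline int my_tolower(char ch)
{
    return tolower((unsigned char)ch);  /* avoid UB for negative char values */
}

static inline int my_toupper(char ch)
{
    return toupper((unsigned char)ch);
}

int main(void)
{
    char c = '\xE9';                    /* negative on signed-char platforms */
    printf("%d %d\n", my_tolower(c), my_toupper('a'));
    return 0;
}
```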
2017-07-19  tcg: Pass generic CPUState to gen_intermediate_code()  (Lluís Vilanova)
Needed to implement a target-agnostic gen_intermediate_code() in the future. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Message-Id: <150002025498.22386.18051908483085660588.stgit@frigg.lan> Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-07-19  target/ppc: optimize various functions using extract op  (Philippe Mathieu-Daudé)
Done with the Coccinelle semantic patch scripts/coccinelle/tcg_gen_extract.cocci. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Acked-by: David Gibson <david@gibson.dropbear.id.au> Message-Id: <20170718045540.16322-6-f4bug@amsat.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-07-17  target/ppc: fix CPU hotplug when radix is enabled (TCG)  (Cédric Le Goater)
When a guest initializes radix mode, it issues an H_REGISTER_PROC_TBL hypercall to update the LPCR of all CPUs. Hot-plugged CPUs inherit the same setting under KVM but not under TCG. So, let's check for radix and update the default LPCR to keep new CPUs in sync. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-07-17  pseries: Allow HPT resizing with KVM  (David Gibson)
So far, qemu implements the PAPR Hash Page Table (HPT) resizing extension with TCG. The same implementation will work with KVM PR, but we don't currently allow that. For KVM HV we can only implement resizing with the assistance of the host kernel, which needs a new capability and ioctl()s. This patch adds support for testing the new KVM capability and implementing the resize in terms of KVM facilities when necessary. If we're running on a kernel which doesn't have the new capability flag at all, we fall back to testing for PR vs. HV KVM using the same hack that we already use in a number of places for older kernels. Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-07-17  pseries: Implement HPT resizing  (David Gibson)
This patch implements hypercalls allowing a PAPR guest to resize its own hash page table. This will eventually allow for more flexible memory hotplug. The implementation is partially asynchronous, handled in a special thread running the hpt_prepare_thread() function. The state of a pending resize is stored in SPAPR_MACHINE->pending_hpt. The H_RESIZE_HPT_PREPARE hypercall will kick off creation of a new HPT, or, if one is already in progress, monitor it for completion. If there is an existing HPT resize in progress that doesn't match the size specified in the call, it will cancel it, replacing it with a new one matching the given size. The H_RESIZE_HPT_COMMIT completes transition to a resized HPT, and can only be called successfully once H_RESIZE_HPT_PREPARE has successfully completed initialization of a new HPT. The guest must ensure that there are no concurrent accesses to the existing HPT while this is called (this effectively means stop_machine() for Linux guests). For now H_RESIZE_HPT_COMMIT goes through the whole old HPT, rehashing each HPTE into the new HPT. This can have quite high latency, but it seems to be of the order of typical migration downtime latencies for HPTs of size up to ~2GiB (which would be used in a 256GiB guest). In future we probably want to move more of the rehashing to the "prepare" phase, by having H_ENTER and other hcalls update both current and pending HPTs. That's a project for another day, but should be possible without any changes to the guest interface. Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
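As a rough sketch of the guest-visible protocol described above (prepare, poll, stop, commit); hcall(), stop_all_other_cpus() and the numeric constants are hypothetical stand-ins rather than the real PAPR interface:

```c
/* Guest-side sketch of the resize protocol described above. hcall(),
 * stop_all_other_cpus() and the numeric values are stand-ins; the real
 * hypercall numbers and return codes come from PAPR / the guest OS. */
#include <stdio.h>

#define H_SUCCESS              0
#define H_BUSY                 1     /* "try again later" for this sketch */
#define H_RESIZE_HPT_PREPARE   0x100 /* illustrative numbers only */
#define H_RESIZE_HPT_COMMIT    0x101

static long hcall(unsigned long opcode, unsigned long flags, unsigned long shift)
{
    /* Stand-in: a real guest would trap to the hypervisor here. */
    (void)opcode; (void)flags; (void)shift;
    return H_SUCCESS;
}

static void stop_all_other_cpus(void) { /* stand-in for stop_machine() */ }

static int resize_hpt(unsigned long new_shift)
{
    long rc;

    /* Kick off (or monitor) preparation of the new HPT; poll while busy. */
    do {
        rc = hcall(H_RESIZE_HPT_PREPARE, 0, new_shift);
    } while (rc == H_BUSY);
    if (rc != H_SUCCESS) {
        return -1;
    }

    /* No other CPU may touch the old HPT while the rehash happens. */
    stop_all_other_cpus();
    rc = hcall(H_RESIZE_HPT_COMMIT, 0, new_shift);
    return rc == H_SUCCESS ? 0 : -1;
}

int main(void)
{
    printf("resize_hpt(30) -> %d\n", resize_hpt(30));
    return 0;
}
```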
2017-07-17  pseries: Stubs for HPT resizing  (David Gibson)
This introduces stub implementations of the H_RESIZE_HPT_PREPARE and H_RESIZE_HPT_COMMIT hypercalls which we hope to add in a PAPR extension to allow run time resizing of a guest's hash page table. It also adds a new machine property for controlling whether this new facility is available. For now we only allow resizing with TCG, allowing it with KVM will require kernel changes as well. Finally, it adds a new string to the hypertas property in the device tree, advertising to the guest the availability of the HPT resizing hypercalls. This is a tentative suggested value, and would need to be standardized by PAPR before being merged. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: Laurent Vivier <lvivier@redhat.com>
2017-07-14  qdev: Add const qualifier to PropertyInfo definitions  (Fam Zheng)
The remaining non-const ones are in e1000e, which modifies the description at runtime. They can be addressed separately. Signed-off-by: Fam Zheng <famz@redhat.com> Message-Id: <20170714021509.23681-6-famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-11  ppc/kvm: have the "family" CPU alias to point to TYPE_HOST_POWERPC_CPU  (Greg Kurz)
When running KVM on POWER, we allow the user to pass "-cpu POWERx" instead of "-cpu host". This is achieved by patching the ppc_cpu_aliases[] array so that "POWERx" points to the CPU class with the same PVR as the host CPU. This causes CPUs to be instantiated from this CPU class instead of the TYPE_HOST_POWERPC_CPU class which is used with "-cpu host". These CPUs thus miss all the KVM specific tuning from kvmppc_host_cpu_class_init(). This currently causes QEMU with "-cpu POWER9" to fail when running KVM on a POWER9 DD1 host:
  qemu-system-ppc64: Register sync failed... If you're using kvm-hv.ko, only "-cpu host" is possible
  kvm_init_vcpu failed: Invalid argument
Let's have the "POWERx" alias point to TYPE_HOST_POWERPC_CPU directly, so that "-cpu POWERx" instantiates CPUs from the same class as "-cpu host". Signed-off-by: Greg Kurz <groug@kaod.org> Tested-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-07-11  target/ppc: Add debug function for radix mmu translation  (Suraj Jitindar Singh)
In target/ppc/mmu-hash64.c there already exists the function ppc_hash64_get_phys_page_debug() to get the physical (real) address for a given effective address in hash mode. Implement the function ppc_radix64_get_phys_page_debug() to allow a real address to be obtained for a given effective address in radix mode. This is used when a debugger is attached to qemu. Previously we just had a comment saying this is unimplemented which then fell through to the default case and caused an abort due to unrecognised mmu model as the default had no case for the V3 mmu, which was misleading at best. We reuse ppc_radix64_walk_tree() which is used by the radix fault handler since the process of walking the radix tree is identical. Reported-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-07-11  target/ppc: Refactor tcg radix mmu code  (Suraj Jitindar Singh)
The mmu-radix64.c file implements functions to enable the radix mmu emulation in tcg mode. There is a function ppc_radix64_walk_tree() which performs the radix tree walk and also implicitly checks the pte protection. Move the protection checking of the pte from the ppc_radix64_walk_tree() function into the caller. This means the ppc_radix64_walk_tree() function can be used without protection checking which is useful for debugging. ppc_radix64_walk_tree() no longer needs to take the rwx and prot variables. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-07-11  target-ppc: SPR_BOOKE_ESR not set on FP exceptions  (Aaron Larson)
Properly set the book E exception syndrome register when a floating point exception occurs. Currently on a book E processor, the POWERPC_EXCP_FP exception handler fails to set "env->spr[SPR_BOOKE_ESR] = ESR_FP;" as required by the book E specification. Signed-off-by: Aaron Larson <alarson@ddci.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-06-30  target/ppc: Proper cleanup when ppc_cpu_realizefn fails  (Bharata B Rao)
If ppc_cpu_realizefn() fails after cpu_exec_realizefn() has been called, we will have to undo whatever cpu_exec_realizefn() did by explicitly calling cpu_exec_unrealizefn(), which is currently missing. Failure to do this proper cleanup will result in a CPU that was never fully realized lingering on the cpus list, causing a SIGSEGV later (e.g. when running "info cpus"). Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-06-30  target/ppc: Fix return value in tcg radix mmu fault handler  (Suraj Jitindar Singh)
The mmu fault handler should return 0 if it was able to successfully handle the fault and a positive value otherwise. Currently the tcg radix mmu fault handler will return 1 after successfully handling a fault in virtual mode. This is incorrect so fix it so that it returns 0 in this case. The handler already correctly returns 0 when a fault was handled in real mode and 1 if an interrupt was generated. Fixes: d5fee0bbe68d ("target/ppc: Implement ISA V3.00 radix page fault handler") Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-06-30  target/ppc/excp_helper: Take BQL before calling cpu_interrupt()  (Thomas Huth)
Since the introduction of MTTCG, using the msgsnd instruction abort()s if it is called without holding the BQL. So let's protect that part of the code now with qemu_mutex_lock_iothread(). Buglink: https://bugs.launchpad.net/qemu/+bug/1694998 Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-06-30  ppc: Rework CPU compatibility testing across migration  (David Gibson)
Migrating between different CPU versions is a bit complicated for ppc. A long time ago, we ensured identical CPU versions at either end by checking the PVR had the same value. However, this breaks under KVM HV, because we always have to use the host's PVR - it's not virtualized. That would mean we couldn't migrate between hosts with different PVRs, even if the CPUs are close enough to compatible in practice (sometimes identical cores with different surrounding logic have different PVRs, so this happens in practice quite often). So, we removed the PVR check, but instead checked that several flags indicating supported instructions matched. This turns out to be a bad idea, because those instruction masks are not architected information, but essentially a TCG implementation detail. So changes to qemu internal CPU modelling can break migration - this happened between qemu-2.6 and qemu-2.7. That was addressed by 146c11f1 "target-ppc: Allow eventual removal of old migration mistakes". Now, verification of CPU compatibility across a migration basically doesn't happen. We simply ignore the PVR of the incoming migration, and hope the cpu on the destination is close enough to work. Now that we've cleaned up handling of processor compatibility modes for the pseries machine type, we can do better. For new machine types (pseries-2.10+) we allow migration if:
  * the source and destination PVRs are for the same type of CPU, as determined by the CPU class's pvr_match function, OR
  * the source was in a compatibility mode, and the destination CPU supports the same compatibility mode.
For older machine types we retain the existing behaviour - current CAS code will usually set a compat mode which would break backwards migration if we made them use the new behaviour. [Fixed from an earlier version by Greg Kurz]. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Tested-by: Andrea Bolognani <abologna@redhat.com>
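A condensed sketch of the acceptance rule described above; the types, pvr_match stand-in and example PVR values are illustrative rather than QEMU's actual migration code:

```c
/* Sketch of the migration acceptance rule described above, with stand-in
 * types; the real check uses the CPU class's pvr_match hook and the compat
 * mode negotiated via ibm,client-architecture-support. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t pvr;          /* PVR of the source vCPU */
    uint32_t compat_pvr;   /* logical PVR of the compat mode, 0 = raw mode */
} incoming_cpu_state;

/* Stand-ins for the destination CPU class hooks (example values only). */
static bool dest_pvr_match(uint32_t pvr)        { return pvr == 0x004e0000; }
static bool dest_supports_compat(uint32_t cpvr) { return cpvr != 0; }

static bool migration_ok(const incoming_cpu_state *in)
{
    if (in->compat_pvr) {
        /* Source ran in a compatibility mode: the destination only needs
         * to support that same mode. */
        return dest_supports_compat(in->compat_pvr);
    }
    /* Raw mode: source and destination must be the same type of CPU. */
    return dest_pvr_match(in->pvr);
}

int main(void)
{
    incoming_cpu_state raw  = { .pvr = 0x004e0000, .compat_pvr = 0 };
    incoming_cpu_state comp = { .pvr = 0x004d0000, .compat_pvr = 0x0f000004 };
    printf("%d %d\n", migration_ok(&raw), migration_ok(&comp));
    return 0;
}
```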
2017-06-30  pseries: Move CPU compatibility property to machine  (David Gibson)
Server class POWER CPUs have a "compat" property, which is used to set the backwards compatibility mode for the processor. However, this only makes sense for machine types which don't give the guest access to hypervisor privilege - otherwise the compatibility level is under the guest's control. To reflect this, this removes the CPU 'compat' property and instead creates a 'max-cpu-compat' property on the pseries machine. Strictly speaking this breaks compatibility, but AFAIK the 'compat' option was never (directly) used with -device or device_add. The option was used with -cpu. So, to maintain compatibility, this patch adds a hack to the cpu option parsing to strip out any compat options supplied with -cpu and set them on the machine property instead of the now deprecated cpu property. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: Greg Kurz <groug@kaod.org> Tested-by: Greg Kurz <groug@kaod.org> Tested-by: Andrea Bolognani <abologna@redhat.com>
2017-06-28  vmstate: error hint for failed equal checks  (Halil Pasic)
In some cases a failing VMSTATE_*_EQUAL does not mean we detected a bug, but it's actually the best we can do. Especially in these cases a verbose error message is required. Let's introduce infrastructure for specifying an error hint to be used if the equal check fails. Let's do this by adding a parameter to the _EQUAL macros called _err_hint. Also change all current users to pass NULL as the last parameter so nothing changes for them. Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com> Message-Id: <20170623144823.42936-1-pasic@linux.vnet.ibm.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2017-06-08  target/ppc: fix memory leak in kvmppc_is_mem_backend_page_size_ok()  (Greg Kurz)
The string returned by object_property_get_str() is dynamically allocated. Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-06-08  target/ppc: pass const string to kvmppc_is_mem_backend_page_size_ok()  (Greg Kurz)
This function has three implementations. Two are stubs that do nothing and the third one only passes the obj_path argument to: Object *object_resolve_path(const char *path, bool *ambiguous); Signed-off-by: Greg Kurz <groug@kaod.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-06-05  numa: move numa_node from CPUState into target specific classes  (Igor Mammedov)
Move vcpu's associated numa_node field out of generic CPUState into inherited classes that actually care about cpu<->numa mapping, i.e: ARMCPU, PowerPCCPU, X86CPU. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Message-Id: <1496161442-96665-6-git-send-email-imammedo@redhat.com> [ehabkost: s/CPU is belonging to/CPU belongs to/ on comments] Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-05-24  target/ppc: reset reservation in do_rfi()  (Nikunj A Dadhania)
For transitioning back to userspace after the interrupt. Suggested-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-05-11  target/ppc: Avoid printing wrong aliases in CPU help text  (Thomas Huth)
When running with KVM, we update the "family" CPU alias to point to the right host CPU type, so that it is, for example, possible to use "-cpu POWER8" on a POWER8NVL host. However, the function for printing the list of available CPU models is called earlier than the KVM setup code, so the output of "-cpu help" is wrong in that case. Since it would be somewhat ugly anyway to have different help texts depending on whether "-enable-kvm" has been specified or not, we should better always print the same text, so fix this issue by printing "alias for preferred XXX CPU" instead. Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-05-11  target/ppc: Allow workarounds for POWER9 DD1  (David Gibson)
POWER9 DD1 silicon has some bugs which mean it a) isn't really compliant with ISA v3.00 and b) requires a number of special workarounds in the kernel. At the moment, qemu isn't aware of DD1. For TCG we don't really want it to be (why bother emulating buggy silicon). But with KVM, the guest does need to be aware of DD1 so it can apply the necessary workarounds. Meanwhile, the feature negotiation between qemu and the guest strongly favours architected compatibility modes over "raw" CPU modes. In combination with the above, this means the guest sees architected POWER9 mode, and doesn't apply the DD1 workarounds. Well, unless it has yet another workaround to partially ignore what qemu tells it. This patch addresses this by disabling support for compatibility modes when using KVM on a POWER9 DD1 host. Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-05-11  target/ppc: Implement ISA V3.00 radix page fault handler  (Suraj Jitindar Singh)
ISA V3.00 introduced a new radix mmu model. Implement the page fault handler for this so we can run a tcg guest in radix mode and perform address translation correctly. In real mode (mmu turned off) addresses are masked to remove the top 4 bits and then are subject to partition scoped translation; since we only support pseries at this stage, it is only necessary to perform the masking and then we're done. In virtual mode (mmu turned on) address translation is performed as follows:
1. Use the quadrant to determine the fully qualified address. The fully qualified address is defined as the combination of the effective address, the effective logical partition id (LPID) and the effective process id (PID). Based on the quadrant (EA63:62) we set the pid and lpid like so:
   quadrant 0: lpid = LPIDR, pid = PIDR
   quadrant 1: HV only (not allowed in pseries)
   quadrant 2: HV only (not allowed in pseries)
   quadrant 3: lpid = LPIDR, pid = 0
   If we can't get the fully qualified address we raise a segment interrupt.
2. Find the guest radix tree. We ask the virtual hypervisor for the partition table, which was registered with H_REGISTER_PROC_TBL and points us to the process table in guest memory. We then index this table by pid to get the process table entry, which points us to the appropriate radix tree to translate the address. If the process table isn't big enough to contain an entry for the current pid then we raise a storage interrupt.
3. Walk the radix tree. Each level is a table of page directory entries indexed by some number of bits from the effective address, where the number of bits is determined by the table size. We continue to walk the tree (while entries are valid and the table is of minimum size) until we reach a table of page table entries, indicated by having the leaf bit set. The appropriate pte is then checked for sufficient access permissions, the reference and change bits are updated and the real address is calculated from the real page number bits of the pte and the low bits of the effective address. If we can't find an entry or can't access the entry because of permissions then we raise a storage interrupt.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> [dwg: Add missing parentheses to macro] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
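A standalone sketch of step 1 (quadrant handling) as described above; the struct and helper names are illustrative, not the QEMU implementation:

```c
/* Sketch of step 1 above: derive the effective (lpid, pid) pair from the
 * quadrant, i.e. the top two bits of the 64-bit effective address. The
 * struct and return convention are illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fq_addr {
    uint64_t ea;
    uint32_t lpid;
    uint32_t pid;
};

/* Returns false when the quadrant is hypervisor-only (1 or 2), which a
 * pseries guest must not use; the caller would raise a segment interrupt. */
static bool radix_fully_qualify(uint64_t ea, uint32_t lpidr, uint32_t pidr,
                                struct fq_addr *out)
{
    unsigned quadrant = ea >> 62;   /* EA bits 63:62 */

    out->ea = ea;
    out->lpid = lpidr;
    switch (quadrant) {
    case 0:                         /* user addresses */
        out->pid = pidr;
        return true;
    case 3:                         /* kernel addresses */
        out->pid = 0;
        return true;
    default:                        /* quadrants 1 and 2: HV only */
        return false;
    }
}

int main(void)
{
    struct fq_addr fq;
    bool ok = radix_fully_qualify(0xc000000000001000ULL, 1, 7, &fq);
    printf("ok=%d quadrant3 pid=%u\n", ok, fq.pid);
    return 0;
}
```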
2017-05-11  target/ppc: Change tlbie invalid fields for POWER9 support  (Suraj Jitindar Singh)
The tlbie[l] instructions are used to invalidate TLB entries used to cache address translations. In ISAv3.00 (POWER9) more fields were added to the tlbie[l] instructions which were previously invalid. We don't care about any of these new fields since we just invalidate the whole world anyway but we need to not cause an illegal instruction exception when the instructions are called. We also don't want to allow an older processor to have these fields set since that would be invalid. Add a new GEN_HANDLER for the ISAv3 instructions with the correct invalid mask. These will only be generated for a POWER9 processor for now based on the instruction flag. Also remove the PPC_MEM_TLBIE instruction flag from the POWER9 processor definition to ensure the old tlbie isn't generated. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-05-11  target/ppc: Update tlbie to check privilege level based on GTSE  (Suraj Jitindar Singh)
The Guest Translation Shootdown Enable (GTSE) bit in the Logical Partition Control Register (LPCR) can be set to enable a guest to use the tlbie instruction directly to invalidate translations. When the GTSE bit is set then the tlbie instruction is supervisor privileged, otherwise it is hypervisor privileged. Add a guest translation shootdown enable (gtse) field to the disassembly context and use this to check the correct privilege level at code generation time. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
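A tiny sketch of the privilege decision described above, with illustrative names; QEMU performs this check at code generation time rather than with a runtime helper like this:

```c
/* Sketch of the rule above: with GTSE set, tlbie is a supervisor
 * instruction, otherwise it is hypervisor-only. The enum and helper are
 * stand-ins, not QEMU's translate-time checks. */
#include <stdbool.h>
#include <stdio.h>

enum priv_level { PRIV_USER, PRIV_SUPERVISOR, PRIV_HYPERVISOR };

static bool tlbie_allowed(bool gtse, enum priv_level pl)
{
    enum priv_level needed = gtse ? PRIV_SUPERVISOR : PRIV_HYPERVISOR;
    return pl >= needed;
}

int main(void)
{
    printf("%d %d\n", tlbie_allowed(true, PRIV_SUPERVISOR),
                      tlbie_allowed(false, PRIV_SUPERVISOR));
    return 0;
}
```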
2017-05-11  target/ppc: do not reset reserve_addr in exec_enter  (Nikunj A Dadhania)
In the case when an atomic operation is not supported, exit_atomic is called and we stop the world and execute the atomic operation. This results in the following call chain: tcg_gen_atomic_cmpxchg_tl() -> gen_helper_exit_atomic() -> HELPER(exit_atomic) -> cpu_loop_exit_atomic() -> EXCP_ATOMIC -> qemu_tcg_cpu_thread_fn() => case EXCP_ATOMIC -> cpu_exec_step_atomic() -> cpu_step_atomic() -> cc->cpu_exec_enter() = ppc_cpu_exec_enter(), which sets env->reserve_addr = -1. But by the time it returns, the reservation is erased and the code fails; this continues forever and the lock is never taken. Instead, set this in powerpc_excp(). Now that ppc_cpu_exec_enter() doesn't have anything meaningful to do, let us get rid of the function. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-05-11  tcg: enable MTTCG by default for PPC64 on x86  (Nikunj A Dadhania)
This enables the multi-threaded system emulation by default for PPC64 guests using the x86_64 TCG back-end. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-05-11  target/ppc: Generate fence operations  (Nikunj A Dadhania)
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-05-11  target/ppc: Emulate LL/SC using cmpxchg helpers  (Nikunj A Dadhania)
Emulating LL/SC with cmpxchg is not correct, since it can suffer from the ABA problem. However, portable parallel code is written assuming only cmpxchg which means that in practice this is a viable alternative. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
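A standalone C11 illustration of the ABA limitation mentioned above: the store-conditional emulated with compare-and-swap succeeds even though the value changed and changed back, where a real reservation would have been lost:

```c
/* Illustration of the limitation described above: emulating a
 * load-reserve/store-conditional pair with compare-and-swap cannot detect
 * an A->B->A change, so a "store conditional" may succeed where real
 * hardware reservation tracking would have failed. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint32_t word;

/* "larx": remember the value we saw. */
static uint32_t load_reserve(uint32_t *reserved)
{
    *reserved = atomic_load(&word);
    return *reserved;
}

/* "stcx.": succeed only if the word still holds the reserved value --
 * this is where the ABA window lives. */
static int store_conditional(uint32_t reserved, uint32_t newval)
{
    return atomic_compare_exchange_strong(&word, &reserved, newval);
}

int main(void)
{
    uint32_t reserved;

    atomic_store(&word, 1);          /* A */
    load_reserve(&reserved);

    /* Another CPU intervenes: A -> B -> A. Real hardware would drop the
     * reservation; the cmpxchg emulation cannot tell. */
    atomic_store(&word, 2);          /* B */
    atomic_store(&word, 1);          /* back to A */

    printf("stcx. %s\n", store_conditional(reserved, 3) ?
           "succeeded (ABA undetected)" : "failed");
    return 0;
}
```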
2017-04-26  target/ppc: Style fixes  (David Gibson)
This makes a small step fixing one of many style problems that exist in the older ppc code. This removes spaces between function (or macro) name and the following '('. Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  e500,book3s: mfspr 259: Register mapped/aliased SPRG3 user read  (Bernhard Kaindl)
This patch registers mfspr 259 for Book3S and e500 family cores following this research: mfspr 259 provides read-only mapped user access to SPRG3 (SPR 275) according to:
  - PowerISA 2.02, Book III (documents implementation starting with POWER4+ @ p20)
  - IBM PowerPC 970MP RISC Microprocessor User's Manual v2.1, page 48
  - Amit Singh: "Mac OS X Internals: A Systems Approach" on 970 and 970FX cores: he demonstrates mfspr 259 reading TLS data from Mac OS X on G5 on page 588
  - NXP documents it in the Core Reference Manuals of: e500, e500mc and e5500
  - getcpu() of the 32 & 64-bit Book3S Linux vDSOs uses it to read the core number
mfspr 259 does not appear to be implemented in these cores according to:
  - 74xx series: MPC7410/MPC7400 and MPC7450 RISC Microprocessor Reference Manuals
  - 4xx series: PPC440 Processor User's Manual, Revision 1.09 by AMCC
  - 750 series: IBM PowerPC 750CL RISC Microprocessor User's Manual
  - e200 series: e200z4 Power Architecture Core Reference Manual
Implementation: gen_spr_usprg3() is called from init_proc_book3s_common() (covers the 970 and POWER cores) and init_proc_e500() (covers the e500 family) to register spr_read_ureg() in the same way that it already provides the mapped SPR access for SPR_USPRG4-7 in gen_spr_usprgh() for cores which have the same read-only mapped SPRG register access for SPRG4-7. Verified using Linux by pinning a thread to a core and checking sched_getcpu(), using qemu-system-ppc64 -M pseries -cpu POWER8 with MTTCG on an x86_64 host. Signed-off-by: Bernhard Kaindl <bernhard.kaindl@thalesgroup.com> Reviewed-by: Stefan Resch <stefan.resch@thalesgroup.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  target/ppc: Flush TLB on write to PIDR  (Suraj Jitindar Singh)
The PIDR (process id register) is used to store the id of the currently running process, which is used to select the process table entry used to perform address translation. This means that when we write to this register all the translations in the TLB become outdated as they are for a previously running process. Thus when this register is written to we need to invalidate the TLB entries to ensure stale entries aren't used to perform translation for the new process, which would result in at best segfaults or alternatively just random memory being accessed. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [dwg: Fixed compile error for 32-bit targets] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  target/ppc: Fix size of struct PPCElfPrstatus  (Anton Blanchard)
gdb refuses to parse QEMU memory dumps because struct PPCElfPrstatus is the wrong size. Fix it. Signed-off-by: Anton Blanchard <anton@samba.org> Fixes: e62fbc54d459 ("target-ppc: dump-guest-memory support") Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  ppc/xics: introduce an 'intc' backlink under PowerPCCPU  (Cédric Le Goater)
Today, the ICPState array of the sPAPR machine is indexed with 'cpu_index' of the CPUState. This numbering of CPUs is internal to QEMU and the guest only knows about what is exposed in the device tree, that is the 'cpu_dt_id'. This is why sPAPR uses the helper xics_get_cpu_index_by_dt_id() to do the mapping in a couple of places. To provide a more generic XICS layer, we need to abstract the IRQ 'server' number and remove any assumption made on its nature. It should not be used as a 'cpu_index' for lookups like xics_cpu_setup() and xics_cpu_destroy() do. To reach that goal, we choose to introduce a generic 'intc' backlink under PowerPCCPU, and let the machine core init routine do the ICPState lookup. The resulting object is passed on to xics_cpu_setup() which does the store under PowerPCCPU. The IRQ 'server' number in XICS is now generic. sPAPR uses 'cpu_dt_id' and PowerNV will use 'PIR' number. This also has the benefit of simplifying the sPAPR hcall routines which do not need to do any ICPState lookups anymore. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  target/ppc: Add ibm,processor-radix-AP-encodings for TCG  (Suraj Jitindar Singh)
The ibm,processor-radix-AP-encodings device tree property of the cpu node is used to specify the radix mode supported page sizes of the processor to the guest os. Contained in the top 3 bits of the msb is the actual page size (AP) encoding associated with the corresponding radix mode supported page size. Add this property for a TCG guest, note the TCG code is capable of translating any format so just add the 4 default page sizes. The ibm,processor-radix-AP-encodings device tree property is defined as one to n cells in ascending order of radix mode supported page sizes, encoded as BE ints (32bit on ppc) in the form 0bxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyy, where:
  - 0bxxx -> AP encoding
  - 0byyyyyyyyyyyyyyyyyyyyyyyyyyyyy -> supported page size encoded as a shift
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
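A small sketch of how such a cell could be packed from an (AP, shift) pair, following the bit layout given above; the specific AP values used for the four default page sizes are assumptions for illustration:

```c
/* Sketch of the cell layout described above: the AP encoding occupies the
 * top 3 bits and the page-size shift the remaining 29 bits of each 32-bit
 * big-endian cell. Treat the exact AP values below as illustrative. */
#include <stdint.h>
#include <stdio.h>

static uint32_t radix_ap_cell(uint32_t ap, uint32_t shift)
{
    return (ap << 29) | (shift & 0x1fffffff);
}

int main(void)
{
    struct { uint32_t ap, shift; } sizes[] = {
        { 0x0, 12 },   /* 4K  */
        { 0x5, 16 },   /* 64K */
        { 0x1, 21 },   /* 2M  */
        { 0x2, 30 },   /* 1G  */
    };
    for (unsigned i = 0; i < 4; i++) {
        printf("0x%08x\n", radix_ap_cell(sizes[i].ap, sizes[i].shift));
    }
    return 0;
}
```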
2017-04-26  target-ppc/kvm: Enable in-kernel TCE acceleration for multi-tce  (Alexey Kardashevskiy)
This enables in-kernel handling of H_PUT_TCE_INDIRECT and H_STUFF_TCE hypercalls. The host kernel support is there since v4.6, in particular d3695aa4f452 ("KVM: PPC: Add support for multiple-TCE hcalls"). H_PUT_TCE is already accelerated and does not need any special enablement. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  target/ppc: Implement H_REGISTER_PROCESS_TABLE H_CALL  (Suraj Jitindar Singh)
The H_REGISTER_PROCESS_TABLE H_CALL is used by a guest to indicate to the hypervisor where in memory its process table is and how translation should be performed using this process table. Provide the implementation of this H_CALL for a guest. We first check for invalid flags, then parse the flags to determine the operation, and then check the other parameters for valid values based on the operation (register new table/deregister table/maintain registration). The process table is then stored in the appropriate location and registered with the hypervisor (if running under KVM), and the LPCR_[UPRT/GTSE] bits are updated as required. Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> [dwg: Correct missing prototype and uninitialized variable] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  target-ppc: support KVM_CAP_PPC_MMU_RADIX, KVM_CAP_PPC_MMU_HASH_V3  (Sam Bobroff)
Query and cache the value of two new KVM capabilities that indicate KVM's support for new radix and hash modes of the MMU. Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  spapr: Add ibm,processor-radix-AP-encodings to the device tree  (Sam Bobroff)
Use the new ioctl, KVM_PPC_GET_RMMU_INFO, to fetch radix MMU information from KVM and present the page encodings in the device tree under ibm,processor-radix-AP-encodings. This provides page size information to the guest which is necessary for it to use radix mode. Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> [dwg: Compile fix for 32-bit targets, style nit fix] Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  target-ppc: kvm: make use of KVM_CREATE_SPAPR_TCE_64  (Alexey Kardashevskiy)
KVM_CAP_SPAPR_TCE capability allows creating TCE tables in KVM which allows having in-kernel acceleration for H_PUT_TCE_xxx hypercalls. However it only supports 32bit DMA windows at zero bus offset. There is a new KVM_CAP_SPAPR_TCE_64 capability which supports 64bit window size, variable page size and bus offset. This makes use of the new capability. The kernel headers are already updated as the kernel support went in to v4.6. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-04-26  target/ppc: Improve accuracy of guest HTM availability on P8s  (Sam Bobroff)
On Power8 hosts it is currently theoretically possible for QEMU/KVM-HV guests to receive an ibm,pa-features property indicating that HTM support is available when it is not. The situation would occur if the platform firmware of a Power8 host cleared the HTM bit of the ibm,pa-features property. QEMU would query KVM for the availability of HTM, which will return no support, but workaround code in kvm_arch_init_vcpu() would then re-enable it because KVM_HV is in use and the processor is P8. This patch adjusts the workaround in kvm_arch_init_vcpu() so that it does not enable HTM (in the above case) unless the host kernel indicates to the QEMU process, via the auxiliary vector, that userspace can use HTM (via the HWCAP2 bit PPC_FEATURE2_HTM). The reason to use the value from the auxiliary vector is that it is set based only on what the host kernel found in the ibm,pa-features HTM bit at boot time. Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
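A minimal sketch of the auxiliary-vector check described above, for a Linux host; the fallback definition of PPC_FEATURE2_HTM is an assumption for non-ppc build hosts:

```c
/* Sketch of the auxiliary-vector check described above. Only meaningful
 * on a Linux/ppc64 host; PPC_FEATURE2_HTM is the HWCAP2 bit the host
 * kernel sets when userspace may use HTM. */
#include <stdio.h>
#include <sys/auxv.h>

#ifndef PPC_FEATURE2_HTM
#define PPC_FEATURE2_HTM 0x40000000   /* assumed value; see the ppc hwcap headers */
#endif

int main(void)
{
    unsigned long hwcap2 = getauxval(AT_HWCAP2);
    printf("HTM usable by userspace: %s\n",
           (hwcap2 & PPC_FEATURE2_HTM) ? "yes" : "no");
    return 0;
}
```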
2017-04-21  ppc: remove cannot_destroy_with_object_finalize_yet  (Laurent Vivier)
This removes the assert(kvm_enabled()) from kvmppc_host_cpu_initfn(). This assert can never be triggered as the function is only registered when KVM is available (see also 4c315c2 "qdev: Protect device-list-properties against broken devices"). So we can remove the cannot_destroy_with_object_finalize_yet from kvmppc_host_cpu_class_init() without fear and beyond reproach (as has already been done for i386 with 771a13e "i386: Unset cannot_destroy_with_object_finalize_yet on "host" model" and e435601 "target-i386: Remove assert(kvm_enabled()) from host_x86_cpu_initfn()"). Signed-off-by: Laurent Vivier <lvivier@redhat.com> Message-Id: <20170414083717.13641-3-lvivier@redhat.com> Acked-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Markus Armbruster <armbru@redhat.com>
2017-03-14  target/ppc: fix cpu_ov setting for 32-bit  (Nikunj A Dadhania)
A bug was introduced in the following commit: dc0ad84 "target/ppc: update overflow flags for add/sub". For the 32-bit ppc target, extracting bit 63 for overflow is not correct. Made it dependent on TARGET_LONG_BITS. This had broken booting a MacOS 9.2.1 image. Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
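A standalone illustration of why the bit position must follow the target register width; TARGET_LONG_BITS is mimicked by a local macro here:

```c
/* Illustration of the fix described above: the sign/overflow bit to
 * inspect depends on the target register width, so a 32-bit ppc target
 * must look at bit 31, not bit 63. */
#include <stdint.h>
#include <stdio.h>

#define TARGET_LONG_BITS 32            /* pretend we build the 32-bit target */

static int sign_bit(uint64_t reg)
{
    return (reg >> (TARGET_LONG_BITS - 1)) & 1;
}

int main(void)
{
    uint64_t r = 0x80000000ULL;        /* negative as a 32-bit value */
    printf("bit63=%d bit31=%d\n", (int)((r >> 63) & 1), sign_bit(r));
    return 0;                          /* bit63=0 but bit31=1 */
}
```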
2017-03-14  target/ppc: Fix wrong number of UAMR register  (Thomas Huth)
The SPR UAMR has the number 13, and not 12. (Fortunately it seems like Linux is not using this register yet - only the privileged version with number 29 ... that's why nobody noticed this problem yet) Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-06  target/ppc: use helper for excp handling  (Nikunj A Dadhania)
Use the helper routine float[32,64]_maddsub_update_excp() in VSX_MADD macro. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-06  target/ppc: fmadd: add macro for updating flags  (Nikunj A Dadhania)
Adds the FPU_MADDSUB_UPDATE macro; this will be used for other routines handling float32/64. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-03-06  target/ppc: fmadd check for excp independently  (Nikunj A Dadhania)
The current order of checking does not conform to the spec (ISA 3.0: MultiplyAddDP, page 469). Change the order and make the checks independent of each other. For example: a = infinity, b = zero, c = SNaN; this should set both VXIMZ and VXSNAN. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
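A standalone sketch of the point above, with simplified flag names and SNaN detection standing in for QEMU's softfloat machinery; the two invalid-operation conditions are tested independently so the example sets both flags:

```c
/* Sketch of the ordering point above: the invalid-operation conditions
 * for a multiply-add are tested independently, so inf * 0 with an SNaN
 * addend sets both VXIMZ and VXSNAN. Flag names and the SNaN test are
 * simplified stand-ins. */
#include <math.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FLAG_VXIMZ  (1u << 0)
#define FLAG_VXSNAN (1u << 1)

static bool is_snan(double x)
{
    /* IEEE754 binary64: a NaN with the top mantissa bit clear is signalling. */
    uint64_t bits;
    memcpy(&bits, &x, sizeof(bits));
    return isnan(x) && !(bits & (1ULL << 51));
}

static unsigned madd_invalid_flags(double a, double b, double c)
{
    unsigned flags = 0;

    if ((isinf(a) && b == 0.0) || (a == 0.0 && isinf(b))) {
        flags |= FLAG_VXIMZ;              /* infinity times zero */
    }
    if (is_snan(a) || is_snan(b) || is_snan(c)) {
        flags |= FLAG_VXSNAN;             /* any signalling NaN operand */
    }
    return flags;                         /* checks are independent, not else-if */
}

int main(void)
{
    uint64_t snan_bits = 0x7ff0000000000001ULL;  /* a signalling NaN pattern */
    double snan;
    memcpy(&snan, &snan_bits, sizeof(snan));

    printf("flags = 0x%x\n", madd_invalid_flags(INFINITY, 0.0, snan));
    return 0;                             /* expect 0x3: VXIMZ | VXSNAN */
}
```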
2017-03-04  Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170303' into staging  (Peter Maydell)
ppc patch queue for 2017-03-03. This will probably be my last pull request before the hard freeze. It has some new work, but that has all been posted in draft before the soft freeze, so I think it's reasonable to include in qemu-2.9. This batch has:
  * A substantial amount of POWER9 work
    * Implements the legacy (hash) MMU for POWER9
    * Some more preliminaries for implementing the POWER9 radix MMU
    * POWER9 has_work
    * Basic POWER9 compatibility mode handling
    * Removal of some premature tests
  * Some cleanups and fixes to the existing MMU code to make the POWER9 work simpler
  * A bugfix for TCG multiply adds on power
  * Allow pseries guests to access PCIe extended config space
This also includes a code motion not strictly in ppc code - moving getrampagesize() from ppc code to exec.c. This will make some future VFIO improvements easier; Paolo said it was ok to merge via my tree.
# gpg: Signature made Fri 03 Mar 2017 03:20:36 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170303:
  target/ppc: rewrite f[n]m[add,sub] using float64_muladd
  spapr: Small cleanup of PPC MMU enums
  spapr_pci: Advertise access to PCIe extended config space
  target/ppc: Rework hash mmu page fault code and add defines for clarity
  target/ppc: Move no-execute and guarded page checking into new function
  target/ppc: Add execute permission checking to access authority check
  target/ppc: Add Instruction Authority Mask Register Check
  hw/ppc/spapr: Add POWER9 to pseries cpu models
  target/ppc/POWER9: Add cpu_has_work function for POWER9
  target/ppc/POWER9: Add POWER9 pa-features definition
  target/ppc/POWER9: Add POWER9 mmu fault handler
  target/ppc: Don't gen an SDR1 on POWER9 and rework register creation
  target/ppc: Add patb_entry to sPAPRMachineState
  target/ppc/POWER9: Add POWERPC_MMU_V3 bit
  powernv: Don't test POWER9 CPU yet
  exec, kvm, target-ppc: Move getrampagesize() to common code
  target/ppc: Add POWER9/ISAv3.00 to compat_table
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>