Age | Commit message | Author |
|
QEMU uses a fixed page size for the CPU TLB. If the guest uses large
pages then we effectively split these into multiple smaller pages, and
populate the corresponding TLB entries on demand.
When the guest invalidates the TLB by virtual address we must invalidate
all entries covered by the large page. However, the address used to
invalidate the entry may not be present in the QEMU TLB, so we do not
know which regions to clear.
Implementing a full variable-size TLB is hard and slow, so just keep a
simple address/mask pair to record which addresses may have been mapped by
large pages. If the guest invalidates this region then flush the
whole TLB.
Signed-off-by: Paul Brook <paul@codesourcery.com>
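A minimal sketch of the idea, with hypothetical field and function names rather
than the actual QEMU ones: keep one base/mask pair covering every large page
seen so far, widen it as more large pages are mapped, and flush the whole TLB
whenever an invalidation falls inside that range.
    #include <stdint.h>
    #include <stdbool.h>

    typedef uint64_t vaddr_t;            /* stand-in for target_ulong */

    /* Hypothetical tracker: one region known to contain large-page mappings. */
    typedef struct {
        vaddr_t lp_addr;                 /* common base of the region           */
        vaddr_t lp_mask;                 /* bits shared by all tracked mappings */
    } LargePageTracker;

    /* Widen the tracked region so it also covers a newly mapped large page. */
    static void track_large_page(LargePageTracker *t, vaddr_t vaddr, vaddr_t page_mask)
    {
        if (t->lp_mask == 0) {
            t->lp_addr = vaddr & page_mask;
            t->lp_mask = page_mask;
            return;
        }
        vaddr_t mask = t->lp_mask & page_mask;
        while ((t->lp_addr & mask) != (vaddr & mask)) {
            mask <<= 1;                  /* drop low bits until both fit under one mask */
        }
        t->lp_addr &= mask;
        t->lp_mask = mask;
    }

    /* Invalidate-by-address: if the address may hit a large page, flush everything. */
    static bool invalidate_needs_full_flush(const LargePageTracker *t, vaddr_t vaddr)
    {
        return t->lp_mask != 0 && (vaddr & t->lp_mask) == t->lp_addr;
    }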
|
|
Disable various target-specific code that is only relevant to system emulation.
Signed-off-by: Paul Brook <paul@codesourcery.com>
|
|
cpu_get_phys_page_debug makes no sense for userspace emulation, so remove it.
Signed-off-by: Paul Brook <paul@codesourcery.com>
|
|
Removes a set of ifdefs from exec.c.
Introduce TARGET_VIRT_ADDR_SPACE_BITS for all targets other
than Alpha. This will be used for page_find_alloc, which is
supposed to be using virtual addresses in the first place.
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
This grand cleanup drops all reset and vmsave/load related
synchronization points in favor of four(!) generic hooks:
- cpu_synchronize_all_states in qemu_savevm_state_complete
(initial sync from kernel before vmsave)
- cpu_synchronize_all_post_init in qemu_loadvm_state
(writeback after vmload)
- cpu_synchronize_all_post_init in main after machine init
- cpu_synchronize_all_post_reset in qemu_system_reset
(writeback after system reset)
These writeback points, plus the existing one of VCPU exec after
cpu_synchronize_state, map onto three levels of writeback:
- KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
- KVM_PUT_RESET_STATE (on synchronous system reset, all VCPUs stopped)
- KVM_PUT_FULL_STATE (on init or vmload, all VCPUs stopped as well)
This level is passed to the arch-specific VCPU state writing function
that will decide which concrete substates need to be written. That way,
no writer of load, save or reset functions that interact with in-kernel
KVM states will ever have to worry about synchronization again. That
also means that a lot of reasons for races, segfaults and deadlocks are
eliminated.
cpu_synchronize_state remains untouched, just as Anthony suggested. We
continue to need it before reading or writing of VCPU states that are
also tracked by in-kernel KVM subsystems.
Consequently, this patch removes many cpu_synchronize_state calls that
are now redundant, as well as the remaining explicit register syncs.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
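A sketch of how the level ends up at the arch-specific writer (hypothetical
type and function names, not the actual QEMU API):
    /* The three writeback levels named above; concrete values are illustrative. */
    enum kvm_put_level {
        KVM_PUT_RUNTIME_STATE,   /* during runtime, other VCPUs keep running    */
        KVM_PUT_RESET_STATE,     /* synchronous system reset, all VCPUs stopped */
        KVM_PUT_FULL_STATE       /* init or vmload, all VCPUs stopped as well   */
    };

    struct vcpu;                                          /* hypothetical handle */
    int arch_put_registers(struct vcpu *v, enum kvm_put_level level);

    /* Each generic hook only picks the level; the arch-specific writer decides
     * which concrete substates actually need to be written for that level. */
    static int sync_post_reset(struct vcpu *v)
    {
        return arch_put_registers(v, KVM_PUT_RESET_STATE);
    }

    static int sync_post_init(struct vcpu *v)
    {
        return arch_put_registers(v, KVM_PUT_FULL_STATE);
    }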
|
|
Invalid opcode messages can be perfectly normal, for example if this
code is never executed. Don't print an error message on the console,
but keep the message in the log for debugging purposes.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
This reverts commit 6454e7be1b2504533f7ffb190d54ebe2993cb434.
|
|
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
The shifts in the gen_evsplat* functions were expecting rA to be masked,
not extracted, and so used the wrong shift amounts to sign-extend or pad
with zeroes.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
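The usual pattern, sketched for an already-extracted 5-bit field (illustrative
names, not the actual gen_evsplat* code): shift the field up to the sign
position, then shift back down arithmetically to sign-extend or logically to
zero-pad. If the field were merely masked and still sitting at its original
bit offset, both shift amounts would have to account for that offset, which is
exactly the mixup that was fixed.
    #include <stdint.h>

    /* imm5 holds an already-extracted 5-bit field in bits [4:0]. */
    static int32_t sign_extend_imm5(uint32_t imm5)
    {
        return (int32_t)(imm5 << 27) >> 27;   /* arithmetic shift propagates the sign */
    }

    static uint32_t zero_extend_imm5(uint32_t imm5)
    {
        return (imm5 << 27) >> 27;            /* logical shift pads with zeroes */
    }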
|
|
The CRF_{CH,CL,CH_OR_CL,CH_AND_CL} constants were all off by one bit
position. Because of this, the SPE evcmp* family of instructions would
store values in the result condition register that were also off by one
bit position.
Fixed by using the CRF_{LT,GT,EQ,SO} constants for the shift amounts.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
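For reference, the fixed scheme boils down to using the CR-field bit positions
directly as shift amounts (a sketch; the constant values shown follow the
conventional PowerPC CR-field layout and are illustrative here):
    #include <stdint.h>

    /* Bit positions inside a 4-bit CR field (LT as the most significant bit). */
    #define CRF_LT 3
    #define CRF_GT 2
    #define CRF_EQ 1
    #define CRF_SO 0

    /* An off-by-one error in these shift amounts moves every flag into the
     * wrong bit of the result field, which is the bug described above. */
    static uint32_t crf_from_compare(int lt, int gt, int eq, int so)
    {
        return ((uint32_t)!!lt << CRF_LT) |
               ((uint32_t)!!gt << CRF_GT) |
               ((uint32_t)!!eq << CRF_EQ) |
               ((uint32_t)!!so << CRF_SO);
    }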
|
|
For some odd reason we sometimes hang inside KVM forever. I'd guess it's
a race condition where we actually have a level-triggered interrupt, but
the infrastructure can't expose that yet, so the guest ACKs it, goes to
sleep and never gets notified that there's still an interrupt pending.
As a quick workaround, let's just wake up every 500 ms. That way we can
ensure that we're always reinjecting interrupts in time.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
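A sketch of the workaround's shape, with hypothetical timer helpers standing in
for the actual QEMU timer API: an expiring timer kicks the VCPU out of KVM and
immediately rearms, so a lost level-triggered interrupt gets re-examined at
least every 500 ms.
    #define IDLE_WAKEUP_MS 500

    struct vcpu;
    struct wakeup_timer;

    /* Hypothetical helpers standing in for the real timer/kick primitives. */
    struct wakeup_timer *timer_new_ms(void (*cb)(void *opaque), void *opaque);
    void timer_rearm(struct wakeup_timer *t, int ms);
    void vcpu_kick(struct vcpu *v);           /* force an exit from KVM_RUN */

    static struct wakeup_timer *idle_timer;

    static void idle_timer_cb(void *opaque)
    {
        vcpu_kick(opaque);                    /* pending interrupts get re-evaluated */
        timer_rearm(idle_timer, IDLE_WAKEUP_MS);
    }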
|
|
We were masking 1TB SLB entries on the feature bit of 16 MB pages. Obviously
that breaks, so let's just ignore 1TB SLB entries for now and instead do
16MB pages correctly.
This fixes PPC64 Linux boot with -m above 256.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Our guest systems need to know by how much the timebase increases every second,
so there usually is a "timebase-frequency" property in the cpu leaf of the
device tree.
This property is missing in OpenBIOS.
With qemu, Linux's fallback timebase speed and qemu's internal timebase speed
match up. With KVM, that is no longer true. The guest is running at the same
timebase speed as the host.
This leads to massive timing problems. On my test machine, a "sleep 2" takes
about 14 seconds with KVM enabled.
This patch exports the timebase frequency to OpenBIOS, so it can then put it
into the device tree.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
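As a back-of-the-envelope check of the numbers above (a sketch, not code from
the patch): the guest waits for sleep_seconds * assumed_tb_freq ticks, but the
timebase only advances at the host's real rate, so wall-clock time stretches by
the ratio of the two frequencies; a 2 s sleep taking ~14 s implies a ratio of
roughly 7.
    /* Illustrative only: wall-clock time scales by assumed/actual frequency. */
    static double wall_seconds(double sleep_seconds,
                               double assumed_tb_freq, double actual_tb_freq)
    {
        return sleep_seconds * assumed_tb_freq / actual_tb_freq;
    }
    /* wall_seconds(2.0, 7.0, 1.0) == 14.0, matching the observation above. */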
|
|
The recent transition to always have the DCR helper functions take 32-bit
values broke the PPC64 target, as target_long became 64 bits there.
This patch changes the DCR helpers to take target_long arguments and casts the
values to 32 bits when needed.
Fixes PPC64 build with --enable-debug-tcg
Based on a patch from Alexander Graf <agraf@suse.de>
Reported-by: Stefan Weil <weil@mail.berlios.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
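A sketch of the resulting shape, with hypothetical helper names rather than the
actual QEMU ones: the helper keeps target_long-sized arguments so it matches
the target word size, and narrows to 32 bits at the point of use.
    #include <stdint.h>

    typedef int64_t target_long_sketch;       /* 64 bits on a ppc64 target */

    void dcr_device_write(uint32_t dcrn, uint32_t val);   /* the bus itself is 32-bit */

    /* target_long-sized arguments at the call interface, 32-bit at the device. */
    void helper_store_dcr_sketch(target_long_sketch dcrn, target_long_sketch val)
    {
        dcr_device_write((uint32_t)dcrn, (uint32_t)val);
    }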
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Raise the zone protection fault in ESR for TLB faults caused by
zone protection bits.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
|
|
The 40x MMU has 15 zones in the ZPR register.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
|
|
Bail out on 40x TLB entries with endianness swapping only if the entry
is valid.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
|
|
The ZSEL was incorrectly being decoded from TLBHI. Decode it from
TLBLO instead.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@gmail.com>
|
|
As far as I know, DCR is always 32 bits wide, so we should also use uint32_t to
pass it along the stack.
This fixes a warning when compiling qemu-system-ppc64 with KVM enabled, making
it compile without --disable-werror.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
Fix the alternate time base the same way as the default timebase. SPR_ATBL
should return a 64-bit value on 64-bit implementations.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
On PPC we have a 64-bit time base. Usually (PPC32) this is accessed using
two separate 32-bit SPR accesses to SPR_TBU and SPR_TBL.
On PPC64, though, the SPR_TBL register acts as a full 64-bit register, so we get
all 64 bits as the return value. If we only take the lower bits, fine. But Linux
wants to see all 64 bits or it breaks.
This patch makes PPC64 Linux work even after TB crossed the 32-bit boundary,
which usually happened a few seconds after bootup.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
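A sketch of the distinction, with stand-in names rather than the actual QEMU
SPR callbacks:
    #include <stdint.h>

    uint64_t cpu_get_tb_sketch(void);         /* stand-in for the timebase source */

    #if defined(TARGET_PPC64_SKETCH)
    /* 64-bit implementations: a TBL read returns the whole 64-bit timebase. */
    static uint64_t read_spr_tbl(void)
    {
        return cpu_get_tb_sketch();
    }
    #else
    /* 32-bit implementations: the timebase is read as two 32-bit halves. */
    static uint32_t read_spr_tbl(void)
    {
        return (uint32_t)cpu_get_tb_sketch();          /* low half  */
    }

    static uint32_t read_spr_tbu(void)
    {
        return (uint32_t)(cpu_get_tb_sketch() >> 32);  /* high half */
    }
    #endif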
|
|
My segment sync patch broke compilation on PPC32, because it was trying to
sync the SLB even though ppc32 CPUs don't have an SLB.
So let's only sync it when we're on a PPC64 one!
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
While x86 only needs to sync cr0-4 to know all about its MMU state and enable
qemu to resolve virtual to physical addresses, we need to sync all of the
segment registers on PPC to know which mapping we're in.
So let's grab the segment register contents to be able to use the "x" monitor
command and also enable the gdbstub to resolve virtual addresses.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
Will be required by succeeding changes.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
No need to alias the e300 core for each CPU package.
Differences between microcontrollers have to be implemented in a higher layer
than translate_init.c.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
Add CPU declarations of MPC8343, MPC8343E, MPC8347 and MPC8347E.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
Declare the HID2 register.
Use the high BATs for e300 (8 instead of 4).
Fix the index of the high BAT registers.
Before the fix, IBAT4-7 were overwriting IBAT0-3.
Signed-off-by: François Armand <francois.armand@os4i.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
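A sketch of the indexing bug described above, with an illustrative array layout
rather than the actual QEMU BAT structures: when the standard and high BATs
share one array, registering the high BATs without the +4 offset makes IBAT4-7
land on top of IBAT0-3.
    #include <stdint.h>

    #define NB_BATS 8                         /* e300 sketch: 4 standard + 4 high BATs */

    static uint64_t ibat[NB_BATS];

    /* n = 0..3 selects IBAT4..IBAT7; without the +4 offset (the bug described
     * above), the high BATs would silently overwrite IBAT0-3. */
    static void set_high_ibat(int n, uint64_t value)
    {
        ibat[4 + n] = value;
    }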
|
|
At the very least, a change like this requires discussion on the list.
The naming convention is goofy and it causes a massive merge problem. Something
like this _must_ be presented on the list first so people can provide input
and cope with it.
This reverts commit 99a0949b720a0936da2052cb9a46db04ffc6db29.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Some not-so-obvious bits, slirp and Xen were left alone for the time
being.
Signed-off-by: malc <av1474@comtv.ru>
|
|
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Remove a temp local variable and a jump by computing a mask with shifts.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
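The trick being referred to, sketched at the C level rather than as the actual
TCG ops: an arithmetic shift of the sign bit yields an all-ones or all-zeroes
mask, so a conditional select needs neither a temporary flag nor a jump.
    #include <stdint.h>

    /* Branch-free select: a if x is negative, b otherwise. */
    static int32_t select_if_negative(int32_t x, int32_t a, int32_t b)
    {
        int32_t mask = x >> 31;               /* all ones if x < 0, else all zeroes */
        return (a & mask) | (b & ~mask);
    }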
|
|
Problem: Our file sys-queue.h is a copy of the BSD file, but there are
some additions and it's not entirely compatible. Because of that, there have
been conflicts with system headers on BSD systems. Some hacks have been
introduced in the commits 15cc9235840a22c289edbe064a9b3c19c5f49896,
f40d753718c72693c5f520f0d9899f6e50395e94,
96555a96d724016e13190b28cffa3bc929ac60dc and
3990d09adf4463eca200ad964cc55643c33feb50 but the fixes were fragile.
Solution: Avoid the conflict entirely by renaming the functions and the
file. Revert the previous hacks.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
cpu_synchronize_state() is a little unreadable since the 'modified'
argument isn't self-explanatory. Simplify it by making it always
synchronize the kernel state into qemu, and automatically flush the
registers back to the kernel if they've been synchronized on this
exit.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
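A sketch of the simplified contract, with hypothetical per-VCPU fields and
helper names rather than the real QEMU ones: cpu_synchronize_state() only pulls
state from the kernel, and anything synchronized on this exit is flushed back
automatically before the next VCPU entry.
    #include <stdbool.h>

    struct vcpu_sketch {
        bool state_cached;        /* registers have been read from the kernel   */
        bool state_dirty;         /* must be written back before the next entry */
    };

    void kvm_get_registers(struct vcpu_sketch *v);
    void kvm_put_registers(struct vcpu_sketch *v);

    /* Always synchronizes kernel -> qemu; no 'modified' argument to misuse. */
    static void cpu_synchronize_state_sketch(struct vcpu_sketch *v)
    {
        if (!v->state_cached) {
            kvm_get_registers(v);
            v->state_cached = true;
            v->state_dirty = true;            /* flushed back automatically below */
        }
    }

    /* Called on the way back into the kernel (before the next VCPU run). */
    static void pre_vcpu_run_sketch(struct vcpu_sketch *v)
    {
        if (v->state_dirty) {
            kvm_put_registers(v);
            v->state_dirty = false;
        }
        v->state_cached = false;
    }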
|
|
handle_cpu_signal is very nearly copy-paste code for each target, with a
few minor variations. This patch sets up appropriate defaults for a
generic handle_cpu_signal and provides overrides for particular targets
that did things differently. Fixing things like the persistent "XXX:
use sigsetjmp" comments should now become somewhat easier.
Previous comments on this patch suggest that the "activate soft MMU for
this block" comments refer to defunct functionality. I have removed
such blocks for the appropriate targets in this patch.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
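A sketch of the default-plus-override shape described above, using illustrative
macro names rather than the ones actually introduced:
    /* Most targets need nothing beyond the common path; a target that does can
     * predefine this hook before the generic code is included. */
    #ifndef CPU_SIGNAL_EXTRA_CHECK
    #define CPU_SIGNAL_EXTRA_CHECK(pc, is_write) 0
    #endif

    static int handle_cpu_signal_sketch(unsigned long pc, unsigned long address,
                                        int is_write)
    {
        if (CPU_SIGNAL_EXTRA_CHECK(pc, is_write)) {
            return 1;                         /* handled by the target-specific hook */
        }
        /* ... common fault handling shared by all targets ... */
        (void)address;
        return 0;
    }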
|
|
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
We define inline as always_inline.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
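The mapping mentioned above usually looks like the following (a sketch; the
exact spelling in the tree's headers may differ):
    #ifdef __GNUC__
    #define always_inline __attribute__((always_inline)) __inline__
    #define inline always_inline
    #endif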
|
|
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: malc <av1474@comtv.ru>
|
|
We do this so we can check, on the corresponding stc{w,d}x., whether the
value has changed. It's a poor man's form of implementing atomic
operations and is valid only for NPTL usermode Linux emulation.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: malc <av1474@comtv.ru>
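A sketch of the scheme, with hypothetical per-CPU fields rather than the actual
QEMU ones: l{w,d}arx records the address and the value it loaded, and the
matching stc{w,d}x. only succeeds if that address still holds the same value.
This is weaker than a real reservation (a changed-and-restored value would slip
through), but it is good enough for NPTL usermode emulation.
    #include <stdint.h>

    struct ppc_cpu_sketch {
        uintptr_t reserve_addr;    /* address taken by the last l{w,d}arx */
        uint32_t  reserve_val;     /* value observed there at that time   */
    };

    static uint32_t do_lwarx(struct ppc_cpu_sketch *cpu, uint32_t *addr)
    {
        uint32_t val = *addr;
        cpu->reserve_addr = (uintptr_t)addr;
        cpu->reserve_val  = val;              /* remembered for the matching stwcx. */
        return val;
    }

    /* Returns 1 on success (caller sets CR0.EQ), 0 on failure. */
    static int do_stwcx(struct ppc_cpu_sketch *cpu, uint32_t *addr, uint32_t val)
    {
        int ok = cpu->reserve_addr == (uintptr_t)addr && *addr == cpu->reserve_val;
        if (ok) {
            *addr = val;                      /* poor man's compare-and-swap */
        }
        cpu->reserve_addr = (uintptr_t)-1;    /* reservation is consumed either way */
        return ok;
    }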
|