|
Split the bits that require it to exec/log.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-id: 1452174932-28657-8-git-send-email-den@openvz.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Here is the description of the mcrfs instruction from the PowerPC Architecture
Book, Version 2.02, Book I: PowerPC User Instruction Set Architecture
(http://www.ibm.com/developerworks/systems/library/es-archguide-v2.html), found
on page 120:
The contents of FPSCR field BFA are copied to Condition Register field BF.
All exception bits copied are set to 0 in the FPSCR. If the FX bit is
copied, it is set to 0 in the FPSCR.
Special Registers Altered:
CR field BF
FX OX (if BFA=0)
UX ZX XX VXSNAN (if BFA=1)
VXISI VXIDI VXZDZ VXIMZ (if BFA=2)
VXVC (if BFA=3)
VXSOFT VXSQRT VXCVI (if BFA=5)
However, currently every bit in FPSCR field BFA is set to 0, including ones not
on that list.
This can be seen in the following simple C program:
#include <fenv.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int ret;
    ret = fegetround();
    printf("Current rounding: %d\n", ret);
    ret = fesetround(FE_UPWARD);
    printf("Setting to FE_UPWARD (%d): %d\n", FE_UPWARD, ret);
    ret = fegetround();
    printf("Current rounding: %d\n", ret);
    ret = fegetround();
    printf("Current rounding: %d\n", ret);
    return 0;
}
which gave the output (before this commit):
Current rounding: 0
Setting to FE_UPWARD (2): 0
Current rounding: 2
Current rounding: 0
instead of (after this commit):
Current rounding: 0
Setting to FE_UPWARD (2): 0
Current rounding: 2
Current rounding: 2
The relevant disassembly is in fegetround(), which, on my system, is:
__GI___fegetround:
<+0>: mcrfs cr7, cr7
<+4>: mfcr r3
<+8>: clrldi r3, r3, 62
<+12>: blr
What happens is that the first time fegetround() is called, FPSCR field 7 is
retrieved. However, because of the bug in mcrfs, the entirety of field 7 is set
to 0, which includes the rounding mode.
There are other issues this will fix as well, such as condition flags not
persisting when they should after being read, and the case where a specific
field is read with some exception bits set while no others are set in the
entire register: the bits were cleared correctly, but FEX/VX were not updated
to 0 as they should have been.
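For illustration, a minimal sketch of the intended behaviour (this is not the
actual QEMU helper; the per-field masks follow the list quoted above):

#include <stdint.h>

/* Exception bits per FPSCR field, from the list above: field 0 has FX and
 * OX, fields 1 and 2 are all exception bits, field 3 has VXVC, field 5
 * has VXSOFT, VXSQRT and VXCVI; fields 4, 6 and 7 have none. */
static const uint32_t fpscr_exn_mask[8] = {
    0x9, 0xf, 0xf, 0x8, 0x0, 0x7, 0x0, 0x0
};

/* Copy FPSCR field bfa into CR field bf, then clear only that field's
 * exception bits, leaving e.g. the rounding mode (field 7) untouched. */
static void mcrfs_sketch(uint32_t *cr, uint32_t *fpscr, int bf, int bfa)
{
    int sh_a = 4 * (7 - bfa);
    int sh_f = 4 * (7 - bf);
    uint32_t field = (*fpscr >> sh_a) & 0xf;

    *cr = (*cr & ~(0xfu << sh_f)) | (field << sh_f);
    *fpscr &= ~(fpscr_exn_mask[bfa] << sh_a);
}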
Signed-off-by: James Clarke <jrtc27@jrtc27.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Signed-off-by: James Clarke <jrtc27@jrtc27.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Now that the TCG and spapr code has been extended to allow (semi-)
arbitrary page encodings in the CPU's 'sps' table, we can add the many
page sizes supported by real POWER7 and POWER8 hardware that we previously
didn't support in TCG.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
h_enter() in the spapr code needs to know the page size of the HPTE it's
about to insert. Unlike other paths that do this, it doesn't have access
to the SLB, so at the moment it determines this with some open-coded
tests which assume POWER7 or POWER8 page size encodings.
To make this more flexible, add ppc_hash64_hpte_page_shift_noslb() to
determine both the "base" page size per segment, and the individual
effective page size from an HPTE alone.
This means that the spapr code should now be able to handle any page size
listed in the env->sps table.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
When HPTEs are removed or modified by hypercalls on spapr, we need to
invalidate the relevant pages in the qemu TLB.
Currently we do that by doing some complicated calculations to work out the
right encoding for the tlbie instruction, then passing that to
ppc_tlb_invalidate_one()... which totally ignores the argument and flushes
the whole tlb.
Avoid that by adding a new flush-by-hpte helper in mmu-hash64.c.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
Currently both the tlbiva instruction (used on 44x chips) and the tlbie
instruction (used on hash MMU chips) are handled via
ppc_tlb_invalidate_one(). This is silly, because they're invoked from
different places, and do different things.
Clean this up by separating out the tlbiva instruction into its own
handling. In fact the implementation is only a stub anyway.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
ppc_tlb_invalidate_one() has a big switch handling many different MMU
types. However, most of those branches can never be reached:
It is called from 3 places: from remove_hpte() and h_protect() in
spapr_hcall.c (which always has a 64-bit hash MMU type), and from
helper_tlbie() in mmu_helper.c.
Calls to helper_tlbie() are generated from gen_tlbie, gen_tlbiel and
gen_tlbiva. The first two are only used with the PPC_MEM_TLBIE flag,
set only with 32-bit or 64-bit hash MMU models, and gen_tlbiva() is
used only on 440 and 460 models with the BookE mmu model.
This means the exhaustive list of MMU types which may call
ppc_tlb_invalidate_one() is: POWERPC_MMU_SOFT_6xx, POWERPC_MMU_601,
POWERPC_MMU_32B, POWERPC_MMU_SOFT_74xx, POWERPC_MMU_64B, POWERPC_MMU_2_03,
POWERPC_MMU_2_06, POWERPC_MMU_2_07 and POWERPC_MMU_BOOKE.
Clean up by removing logic for all other MMU types from
ppc_tlb_invalidate_one().
This means that ppc4xx_tlb_invalidate_virt() now has no callers, or rather,
it is now obvious that it has no callers. So, we remove that function as
well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
At present the 64-bit hash MMU code uses information from the SLB to
determine the page size of a translation. We do need that information to
correctly look up the hash table. However the MMU also allows a
possibly larger page size to be encoded into the HPTE itself, which is used
to populate the TLB. At present qemu doesn't check that, and so doesn't
support the MPSS "Multiple Page Size per Segment" feature.
This makes a start on allowing this by adding an hpte_page_shift()
function which looks up the page size of an HPTE. We use this to validate
page size encodings on faults, and to populate the qemu TLB with larger
page sizes when appropriate.
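A rough sketch of the lookup this implies (struct, field and macro names here
are assumptions for illustration; the LP-bit extraction is a placeholder):

#include <stdint.h>
#include <stddef.h>

/* Illustrative types only, not QEMU's actual definitions. */
struct page_enc_sketch  { unsigned page_shift; unsigned pte_enc; };
struct seg_sizes_sketch { unsigned page_shift; struct page_enc_sketch enc[8]; };

#define HPTE_V_LARGE_SKETCH 0x4ULL   /* assumed position of the L bit */

/* Placeholder: extract the LP bits of pte1 for a candidate page size. */
unsigned hpte_extract_penc_sketch(uint64_t pte1, unsigned page_shift);

/* Return the page shift encoded in the HPTE, or 0 if the encoding is not
 * valid for this segment's supported page sizes. */
static unsigned hpte_page_shift_sketch(const struct seg_sizes_sketch *sps,
                                       uint64_t pte0, uint64_t pte1)
{
    size_t i;

    if (!(pte0 & HPTE_V_LARGE_SKETCH)) {
        return (sps->page_shift == 12) ? 12 : 0;   /* plain 4 KiB page */
    }
    for (i = 0; i < 8 && sps->enc[i].page_shift; i++) {
        if (hpte_extract_penc_sketch(pte1, sps->enc[i].page_shift)
            == sps->enc[i].pte_enc) {
            return sps->enc[i].page_shift;
        }
    }
    return 0;
}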
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
Currently, the ppc_hash64_page_shift() function looks up a page size based
on information in an SLB entry. It open-codes the bit translation for
existing CPUs; however, different CPU models can have different SLB
encodings. We already store those in the 'sps' table in CPUPPCState, but
we don't currently enforce that that actually matches the logic in
ppc_hash64_page_shift.
This patch reworks lookup of page size from SLB in several ways:
* ppc_store_slb() will now fail (triggering an illegal instruction
exception) if given a bad SLB page size encoding
* On success ppc_store_slb() stores a pointer to the relevant entry in
the page size table in the SLB entry. This is looked up directly from
the published table of page size encodings, so can't get out of sync.
* ppc_hash64_htab_lookup() and others now use this precached page size
information rather than decoding the SLB values
* Now that callers have easy access to the page_shift,
ppc_hash64_pte_raddr() amounts to just a deposit64(), so remove it and
have the callers use deposit64() directly.
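For illustration, a minimal sketch of what that deposit64() call amounts to
(the helper is reimplemented here; the RPN mask name is assumed):

#include <stdint.h>

/* Same semantics as QEMU's deposit64(): replace bits [start, start+len)
 * of value with the low bits of field. */
static uint64_t deposit64_sketch(uint64_t value, int start, int len,
                                 uint64_t field)
{
    uint64_t mask = (len == 64 ? ~0ULL : ((1ULL << len) - 1)) << start;
    return (value & ~mask) | ((field << start) & mask);
}

/* With the page shift cached in the SLB entry, the real address is just
 * the HPTE's real page number with the in-page offset bits of eaddr
 * deposited in, roughly:
 *     raddr = deposit64(pte1 & RPN_MASK, 0, page_shift, eaddr);
 */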
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
Currently it takes two parameters, which contain values encoded as the
register arguments to the slbmte instruction, one register contains the
ESID portion of the SLBE and also the slot number, the other contains the
VSID portion of the SLBE.
We're shortly going to want to do some SLB updates from other code where
it is more convenient to supply the slot number and ESID separately, so
rework this function and its callers to work this way.
As a bonus, this slightly simplifies the emulation of segment registers
when running a 32-bit OS on a 64-bit CPU.
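A hedged sketch of the reworked calling convention (names and the exact bit
layout of the slbmte operands are assumptions based on the description):

#include <stdint.h>

/* The function would now take the slot number and ESID separately... */
int ppc_store_slb_sketch(void *cpu, uint64_t slot, uint64_t esid,
                         uint64_t vsid);

/* ...and the slbmte path would do the register decoding itself: rb holds
 * the ESID in its upper bits and the slot index in its low bits. */
static void slbmte_sketch(void *cpu, uint64_t rb, uint64_t rs)
{
    ppc_store_slb_sketch(cpu, rb & 0xfff, rb & ~0xfffULL, rs);
}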
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
Like a lot of places, these files contain a mixture of functions taking
both the older CPUPPCState *env and the newer PowerPCCPU *cpu. Move a step
closer to cleaning this up by standardizing on PowerPCCPU, except for the
helper_* functions which are called with the CPUPPCState * from tcg.
Callers and some related functions are updated as well; the boundaries of
what's changed here are a bit arbitrary.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
This stub function is in the !KVM ifdef in target-ppc/kvm_ppc.h. However
no such function exists on the KVM side, or is ever used.
I think this originally referenced a function which read host page size
information from /proc, for which we now use the KVM GET_SMMU_INFO extension
instead.
In any case, it has no function now, so remove it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
|
|
Add the XML and functions to get and set VSX registers.
Signed-off-by: Anton Blanchard <anton@samba.org>
(fixed little-endian guests)
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Let's reuse the ppc_maybe_bswap_register() helper, like we already do
with the general registers.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Altivec registers are 128-bit wide. They are stored in memory as two
64-bit values that must be byteswapped when the guest is little-endian.
Let's reuse the ppc_maybe_bswap_register() helper for this.
We also need to fix the ordering of the 64-bit elements according to
the target endianness, for both system and user mode.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
This helper will be used to support Altivec registers in little-endian guests.
This patch does not change functionality.
Note: I had to put the helper some lines away from the gdb_*_avr_reg()
routines to get a more readable patch.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Let's reuse the ppc_maybe_bswap_register() helper, like we already do
with the general registers.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
This helper will be used to support FP, Altivec and VSX registers when
the guest is little-endian.
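A plausible sketch of such a helper (the real implementation is not shown
here; the guest-endianness test and the exact byteswap calls are assumptions):

#include <stdint.h>
#include <string.h>

/* Byteswap a 4- or 8-byte register image in the gdb buffer when the guest
 * runs little-endian, so gdb always sees a consistent byte order. */
static void maybe_bswap_register_sketch(int guest_is_le,
                                        uint8_t *mem_buf, int len)
{
    if (!guest_is_le) {
        return;                      /* big-endian guest: nothing to do */
    }
    if (len == 4) {
        uint32_t v;
        memcpy(&v, mem_buf, 4);
        v = __builtin_bswap32(v);
        memcpy(mem_buf, &v, 4);
    } else if (len == 8) {
        uint64_t v;
        memcpy(&v, mem_buf, 8);
        v = __builtin_bswap64(v);
        memcpy(mem_buf, &v, 8);
    }
}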
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
On VSX capable CPUs, the 32 FP registers are mapped to the high bits
of the first 32 VSX registers. So if you have:
VSR31 = (uint128) 0x0102030405060708090a0b0c0d0e0f00
then
FPR31 = (uint64) 0x0102030405060708
The kernel stores the VSX registers in the fp_state struct following the
host endian element ordering.
On big-endian:
fp_state.fpr[31][0] = 0x0102030405060708
fp_state.fpr[31][1] = 0x090a0b0c0d0e0f00
On little-endian:
fp_state.fpr[31][0] = 0x090a0b0c0d0e0f00
fp_state.fpr[31][1] = 0x0102030405060708
The KVM_GET_ONE_REG and KVM_SET_ONE_REG ioctls preserve this ordering, but
QEMU treats it as big-endian and always copies element [0] to the
fpr[] array and element [1] to the vsr[] array. This does not work with
little-endian hosts, and you will get:
(qemu) p $f31
0x90a0b0c0d0e0f00
instead of:
(qemu) p $f31
0x102030405060708
This patch fixes the element ordering for little-endian hosts.
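To illustrate the fix (a simplified sketch; the tree uses a host-endianness
macro rather than a runtime flag, and does the copy in place):

#include <stdint.h>

/* The kernel hands each VSX register back as two 64-bit elements in host
 * order, so the index of the FPR half depends on host endianness. */
static uint64_t vsx_fpr_half_sketch(const uint64_t elem[2], int host_is_be)
{
    /* FPR half is element 0 on big-endian hosts, element 1 on little-endian
     * hosts; the other element is the low VSX doubleword. */
    return host_is_be ? elem[0] : elem[1];
}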
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Currently, ppc_set_compat() returns -1 for errors, and also (unconditionally)
reports an error message. The caller in h_client_architecture_support()
may then report it again using an outdated fprintf().
Clean this up by using the modern error reporting mechanisms. Also add
strerror(errno) to the error message.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
|
|
Otherwise some internal xer variables fail to get set post-migration.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
We never released anything older than POWER8 DD2.0 and POWER8E DD2.1,
so let's use these versions; otherwise some firmware or Linux code
might fail to use some HW features that were non-functional in earlier
internal-only spins of the chip.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-6-git-send-email-peter.maydell@linaro.org
|
|
This patch provides the name of the architecture in the target.xml
if available.
This allows the remote gdb to detect the target architecture on its
own, so there is no need to specify it manually (e.g. if gdb is
started without a binary) using "set arch *arch_name*".
The name of the architecture is provided by a callback that can
be implemented by all architectures. The arm implementation has
special handling for iwmmxt and returns arm otherwise. This can
be extended if necessary.
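A sketch of what the arm callback could look like (names here are
placeholders; only the behaviour follows the description above):

/* Placeholder feature test for the sketch. */
int cpu_has_iwmmxt_sketch(void *cpu);

/* Architecture string advertised in target.xml: "iwmmxt" when that feature
 * is present, plain "arm" otherwise. */
static const char *gdb_arch_name_sketch(void *cpu)
{
    return cpu_has_iwmmxt_sketch(cpu) ? "iwmmxt" : "arm";
}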
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[rework to use a callback]
Message-Id: <1449144881-130935-1-git-send-email-borntraeger@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
|
|
Only one of the three architectures implementing qmp-dump-guest-memory writes
qemu notes. And another architecture (arm/aarch64) is coming which
won't use them either. Make the common implementation truly common.
(No functional change.)
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1452542185-10914-3-git-send-email-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Extract code from the function kvmppc_read_int_cpu_dt() that actually
reads the file into a separate function, so it can be called from
other places.
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Avoid "naked" qemu_log, bring documentation for DEBUG #defines
up to date.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
In some cases, the same message is printed both on stderr and in the log.
Avoid duplicate output in the default case where stderr _is_ the log,
and standardize this to stderr+log where it used to use stdio+log.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Currently in TCG mode, updating the floating-point exception
summary bit (FPSCR_FX) in fpscr also sets
the upper 32 bits of fpscr to all 1s.
Modify the bit shift operation to use
1ULL instead.
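A standalone illustration of the underlying C pitfall (FPSCR_FX is bit 31, so
on typical compilers a plain int shift produces a negative value that
sign-extends when widened to 64 bits; this is a demonstration, not QEMU code):

#include <stdint.h>
#include <stdio.h>

#define FPSCR_FX 31    /* exception summary bit position */

int main(void)
{
    uint64_t fpscr = 0;

    fpscr |= 1 << FPSCR_FX;     /* int shift: sign-extends, upper 32 bits all 1s */
    printf("int shift:  0x%016llx\n", (unsigned long long)fpscr);

    fpscr = 0;
    fpscr |= 1ULL << FPSCR_FX;  /* 64-bit shift: only bit 31 set */
    printf("1ULL shift: 0x%016llx\n", (unsigned long long)fpscr);
    return 0;
}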
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Move the FPSCR bit update macros defined in dfp_helper
to cpu.h. This way, fpu_helper functions can also use them.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
At the moment get_monitor_def() returns only registers from the statically
defined monitor_defs array. However there are a lot of BOOK3S SPRs
which are not in that list and cannot be printed from the monitor.
This adds a new target platform hook - target_get_monitor_def().
The hook is called if a register was not found in the static
array returned by the target_monitor_defs() hook.
The hook is only defined for POWERPC; it returns registered
SPRs and fails on unregistered ones, providing the user with information
on what is actually supported on the running CPU. The register value is
returned as uint64_t as that is the biggest supported register size;
target_ulong cannot be used because of the stub - it lives in "common"
code and cannot include "cpu.h", etc.; this is also why the hook prototype
is redefined in the stub instead of being included from some header.
This replaces static descriptors for GPRs, FPRs, SRs with a helper which
looks for a value in a corresponding array in the CPUPPCState.
The immediate effect is that all 32 SRs can be printed now (instead of 16);
later this can be reused for VSX or TM registers.
This replaces callbacks for MSR and XER with static descriptors in
monitor_defs as they are stored in CPUPPCState.
While we are here, this adds "cr" as a synonym of "ccr".
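The shape of the hook implied by the description (a sketch; the first
parameter type and the return convention are assumptions):

#include <stdint.h>

/* Look up a register by name when it is not in the static monitor_defs
 * array; on success, store its value in *pval. */
int target_get_monitor_def(void *cpu, const char *name, uint64_t *pval);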
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
The lswx instruction checks whether the desired string actually fits
into all defined registers. Unfortunately it does the calculation wrong,
resulting in illegal instruction traps for loads that really should fit.
Fix it up, making Mac OS happier.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
According to the ISA, setting the Rc bit on mtspr is undefined behavior.
Real 750 hardware simply ignores the bit and doesn't touch cr0 though.
Unfortunately, Mac OS 9 relies on this fact and executes a few mtspr
instructions (to set XER for example) with Rc set.
So let's handle the bit the same way hardware does and ignore it.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
The !CONFIG_KVM implementation of kvmppc_reset_htab() returns -1
by default. Change this to return 0 so that we fall back to user space
HTAB allocation for emulated guests.
This fixes the make check failures for the emulated ppc64 target.
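A sketch of what the !CONFIG_KVM stub looks like after this change (the exact
signature is assumed):

/* Without KVM there is no kernel-allocated HTAB; returning 0 instead of -1
 * lets the caller fall back to userspace HTAB allocation. */
static inline int kvmppc_reset_htab(int shift_hint)
{
    return 0;
}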
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Commit aa4bb5875231 (ppc: Add mmu_model defines for arch 2.03 and 2.07)
removed the mmu_model definition POWERPC_MMU_2_06a which is needed by
PR KVM. Reintroduce it and also add POWERPC_MMU_2_07a.
This fixes a QEMU crash (qemu: fatal: Unknown MMU model) during boot
of a PR KVM guest.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
Fix the index used to read the IBATs' vector, which resulted in reading
IBAT0..3 instead of IBAT4..N.
The bug showed up when saving/restoring contexts including IBAT values.
Signed-off-by: Julio Guerra <julio@farjump.io>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
Some targets already had this within their logic, but make sure
it's present for all targets.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
LoPAPR defines a "ibm,pa-features" per-CPU device tree property which
describes extended features of the Processor Architecture.
This adds the property to the device tree. At the moment this is a
copy of what pHyp advertises, except for "I=1 (cache inhibited) Large Pages",
which is enabled for TCG and disabled when running under an HV KVM host
with a 4K system page size.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[aik: rebased, changed commit log, moved ci_large_pages initialization,
renamed pa_features arrays]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
This removes unused POWERPC_MMU_2_06a/POWERPC_MMU_2_06d.
This replaces POWERPC_MMU_64B with POWERPC_MMU_2_03 for POWER5+ to be
more explicit about the version of the PowerISA supported.
This defines POWERPC_MMU_2_07 and uses it for the POWER8 CPU family.
This will not have an immediate effect now but it will in the following
patch.
This should cause no behavioural change.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[aik: rebased, changed commit log]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
|
|
The vfio_accel parameter used when creating a new TCE table (guest IOMMU
context) has a confusing name. What it really means is whether we need the
TCE table created to be able to support VFIO devices.
VFIO is relevant because, when available, we use in-kernel acceleration of
the TCE table, but that may not work with VFIO devices: updates to
the table are handled in the kernel, bypass qemu and so don't hit qemu's
infrastructure for keeping the VFIO host IOMMU state in sync with the guest
IOMMU state.
Rename the parameter to "need_vfio" throughout. This is a cosmetic change,
with no impact on the logic.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
|
|
In-kernel ITS emulation on ARM64 will require supplying requester IDs.
These IDs can now be retrieved from the device pointer using the new
pci_requester_id() function.
This patch adds a pci_dev pointer to the KVM GSI routing functions and makes
the callers pass it.
The x86 architecture does not use requester IDs, but hw/i386/kvm/pci-assign.c
is also made to pass the PCI device pointer instead of NULL for consistency
with the rest of the code.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Message-Id: <ce081423ba2394a4efc30f30708fca07656bc500.1444916432.git.p.fedin@samsung.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Several devices don't survive object_unref(object_new(T)): they crash
or hang during cleanup, or they leave dangling pointers behind.
This breaks at least device-list-properties, because
qmp_device_list_properties() needs to create a device to find its
properties. Broken in commit f4eb32b "qmp: show QOM properties in
device-list-properties", v2.1. Example reproducer:
$ qemu-system-aarch64 -nodefaults -display none -machine none -S -qmp stdio
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 2}, "package": ""}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "device-list-properties", "arguments": { "typename": "pxa2xx-pcmcia" } }
qemu-system-aarch64: /home/armbru/work/qemu/memory.c:1307: memory_region_finalize: Assertion `((&mr->subregions)->tqh_first == ((void *)0))' failed.
Aborted (core dumped)
[Exit 134 (SIGABRT)]
Unfortunately, I can't fix the problems in these devices right now.
Instead, add DeviceClass member cannot_destroy_with_object_finalize_yet
to mark them:
* Hang during cleanup (didn't debug, so I can't say why):
"realview_pci", "versatile_pci".
* Dangling pointer in cpus: most CPUs, plus "allwinner-a10", "digic",
"fsl,imx25", "fsl,imx31", "xlnx,zynqmp", because they create such
CPUs
* Assert kvm_enabled(): "host-x86_64-cpu", host-i386-cpu",
"host-powerpc64-cpu", "host-embedded-powerpc-cpu",
"host-powerpc-cpu" (the powerpc ones can't currently reach the
assertion, because the CPUs are only registered when KVM is enabled,
but the assertion is arguably in the wrong place all the same)
Make qmp_device_list_properties() fail cleanly when the device is so
marked. This improves device-list-properties from "crashes, hangs or
leaves dangling pointers behind" to "fails". Not a complete fix, just
a better-than-nothing work-around. In the above reproducer,
device-list-properties now fails with "Can't list properties of device
'pxa2xx-pcmcia'".
This also protects -device FOO,help, which uses the same machinery
since commit ef52358 "qdev-monitor: include QOM properties in -device
FOO, help output", v2.2. Example reproducer:
$ qemu-system-aarch64 -machine none -device pxa2xx-pcmcia,help
Before:
qemu-system-aarch64: .../memory.c:1307: memory_region_finalize: Assertion `((&mr->subregions)->tqh_first == ((void *)0))' failed.
After:
Can't list properties of device 'pxa2xx-pcmcia'
Cc: "Andreas Färber" <afaerber@suse.de>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Anthony Green <green@moxielogic.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Blue Swirl <blauwirbel@gmail.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Jia Liu <proljc@gmail.com>
Cc: Leon Alrae <leon.alrae@imgtec.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Walle <michael@walle.cc>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: qemu-ppc@nongnu.org
Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1443689999-12182-10-git-send-email-armbru@redhat.com>
|
|
Compress lines and remove the variable.
Signed-off-by: Shraddha Barke <shraddha.6596@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments. Transition restore_state_to_opc to use
data from the latter.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Adjust all translators to respect it.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This symbol no longer exists.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
Reduce the boilerplate required for each target. At the same time,
move the test for breakpoint after calling tcg_gen_insn_start.
Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
This does tidy the icount test common to all targets.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|